[samigo-team] Samigo performance--limits and solutions

David Horwitz david.horwitz at uct.ac.za
Wed Apr 10 01:05:34 PDT 2013


Hi Jim,

Can you tell us more about your setup (version, database) & the assessment organisation? These all have an impact on performance and how it affects you.

Some observations from us. We run exams in the sizes you mention - our cluster is sized the way it is largely to accommodate this.

There are 2 distinct load issues we have seen:

- Starting a quiz is expensive, particularly if the questions are drawn from a pool. Rutgers identified an issue that's fixed in 2.9.0, and Sam's fixes should further improve matters here. We advise staff giving large exams like this to stagger the starts to help mitigate the load.
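For illustration, staggered starts just mean splitting the cohort into batches with offset start times so the expensive "start quiz" operation isn't hit by everyone at once. A minimal sketch (the function name, batch sizes, and intervals are all hypothetical, not anything in Samigo):

```python
# Hypothetical sketch: split a cohort into batches and offset each
# batch's start time so the expensive quiz-start operation (question
# selection from pools, etc.) is spread out instead of hitting at once.
def stagger_starts(n_students, batch_size, interval_min):
    """Map each student index to (batch number, start offset in minutes)."""
    return [(i // batch_size, (i // batch_size) * interval_min)
            for i in range(n_students)]

# 200 students, batches of 50, starting 5 minutes apart
slots = stagger_starts(200, 50, 5)
print(sorted({offset for _, offset in slots}))  # [0, 5, 10, 15]
```

The same scheduling can of course be done on paper by the exam proctors; the point is only that peak concurrent quiz-starts drop from 200 to 50.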

- Navigation performance degrades as the student moves from question to question: each "next" gets slower the further into the assessment the student is. I did some work on this under https://jira.sakaiproject.org/browse/SAM-1523 that I thought was in trunk, but I see it isn't.

D

On Tue, 2013-04-09 at 14:19 -0400, Bryan Holladay wrote:
https://jira.sakaiproject.org/browse/SAM-1695


Subtasks 2 & 3 are verified in production.  Subtask 1 is still waiting for David to reply about production statistics.


2 & 3 have shown roughly a 20-fold reduction in the number of DB calls
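For anyone unfamiliar with why DB-call counts dominate here: the classic culprit is an N+1 query pattern, where scoring or navigation issues one query per student (or per question) instead of one batched query. A minimal sketch with a fake counting database (all names here are illustrative, not Samigo code):

```python
# Hypothetical sketch of the N+1 query pattern versus a batched fetch.
# FakeDb just counts round trips to stand in for a real database;
# fetch_answers_for / fetch_all_answers are made-up names, not Samigo APIs.

class FakeDb:
    """Counts round trips to a pretend answer store."""
    def __init__(self, answers):
        self.answers = answers      # {student_id: [answers]}
        self.calls = 0

    def fetch_answers_for(self, student_id):
        self.calls += 1             # one round trip per student
        return self.answers[student_id]

    def fetch_all_answers(self, student_ids):
        self.calls += 1             # one round trip for the whole batch
        return {s: self.answers[s] for s in student_ids}

students = [f"s{i}" for i in range(200)]
data = {s: ["A", "B"] for s in students}

naive = FakeDb(data)
for s in students:                  # N+1 style: one query per student
    naive.fetch_answers_for(s)

batched = FakeDb(data)
batched.fetch_all_answers(students) # single batched query

print(naive.calls, batched.calls)   # 200 1
```

A 200-student exam room turns 200 round trips into 1, which is the kind of reduction being reported against SAM-1695.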


On Tue, Apr 9, 2013 at 2:07 PM, Jim Mezzanotte <jmezzanotte at rsmart.com> wrote:
Hi all,

We're encountering Samigo performance issues with large (800+) course
sites, and specifically in cases where groups of students (up to 200)
are taking an assessment simultaneously in one physical location with
the Respondus lockdown browser. There's not a way to decrease this
concurrent usage, so I wanted to get feedback from others on your
experience with Samigo performance limits, and any steps you've taken
to maximize performance.

I'm referring specifically to speed of navigation between
questions/parts for students, as well as speed of access for
instructors scoring an assessment. Another thing I'm trying to
determine is whether access time for concurrent usage changes
according to these parameters:

a) usage distributed across the tool in multiple sites
b) usage for multiple assessments in a single site
c) usage for a single assessment in a single site

Determining this poses a challenge with manual testing (I'm looking
into automated testing). But thanks in advance for sharing any
experiences/solutions/best practices.

Best,
Jim Mezzanotte
rSmart
_______________________________________________
samigo-team mailing list
samigo-team at collab.sakaiproject.org
http://collab.sakaiproject.org/mailman/listinfo/samigo-team







