[Contrib: Evaluation System] Sakai Evaluation system - how do you make sense of 'mean' in Evaluation results?

Daniel Merino daniel.merino at unavarra.es
Wed Feb 19 03:43:02 PST 2014


Hi Fawei,

any maintain/instructor user can see results if the survey is configured 
to allow it. The issue here is that, whether or not the survey is 
configured to allow this, these users always receive the mail that is 
sent when results are available. This happens at least in our 1.3.0.

If you mean sharing results outside the site, we have not received any 
request for that. I suppose that users generate the report as PDF or XLS 
and share it like any other file.

Best regards.

On 19/02/14 12:23, Fawei Geng wrote:
> Hi Daniel and Will,
>
> Thank you both for the swift responses. I'm glad to know that I was not the only one who found that the 'mean' needed improvement. Great to hear that you both have developed the tool further. I will talk to the team here and then get back to you off list.
>
> As I am replying to the list, I just wonder whether you (or anyone else) have done any work that enables people to share the survey results with their colleagues.
>
> Kindest regards
>
> Fawei
>
> -----Original Message-----
> From: evaluation-bounces at collab.sakaiproject.org [mailto:evaluation-bounces at collab.sakaiproject.org] On Behalf Of Daniel Merino
> Sent: 19 February 2014 08:23
> To: evaluation at collab.sakaiproject.org
> Subject: Re: [Contrib: Evaluation System] Sakai Evaluation system - how do you make sense of 'mean' in Evaluation results?
>
> Hi Fawei & Will,
>
> I also made a change for a local requirement about this. My patch detects whether the answer choices are numerical and, if they are, the weighted mean is applied over the numerical choices rather than over the answer indices, which made more sense to our users. I also added weighted means for blocks.
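>
> A minimal sketch of the idea, in case it helps to picture it (illustrative only, not the code in the patch; the class and method names are made up):
>
>      import java.util.List;
>
>      public class WeightedMeanSketch {
>
>          // Returns the label parsed as a number, or null if it is not numeric.
>          static Double numericOrNull(String label) {
>              try {
>                  return Double.valueOf(label.trim());
>              } catch (NumberFormatException e) {
>                  return null;
>              }
>          }
>
>          // choiceLabels: the scale option labels, in stored order.
>          // counts: how many responses each option received.
>          static double weightedMean(List<String> choiceLabels, List<Integer> counts) {
>              boolean allNumeric =
>                      choiceLabels.stream().allMatch(l -> numericOrNull(l) != null);
>              double sum = 0.0;
>              int total = 0;
>              for (int i = 0; i < choiceLabels.size(); i++) {
>                  // weight by the numeric label when the whole scale is numeric,
>                  // otherwise fall back to the stock 1-based option index
>                  double value = allNumeric ? numericOrNull(choiceLabels.get(i)) : (i + 1);
>                  sum += value * counts.get(i);
>                  total += counts.get(i);
>              }
>              return total == 0 ? 0.0 : sum / total;
>          }
>
>          public static void main(String[] args) {
>              // numeric scale "0, 5, 10": one answer of 5 and two of 10
>              // -> mean 8.33 over the labels, instead of 2.67 over the indices
>              System.out.println(weightedMean(List.of("0", "5", "10"), List.of(0, 1, 2)));
>          }
>      }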
>
> I packed these changes into a patch that also makes a number of improvements to how PDF reports are rendered. The work is at
> https://jira.sakaiproject.org/browse/EVALSYS-1100 , though it is probably a bit out of date, because I did not see much interest in it in the past and I did not upload subsequent changes.
>
> If interest in this patch has grown and somebody is willing to commit it, I could spend some time updating it for the trunk version.
>
> Hope it helps.
> Best regards.
>
El 18/02/14 19:10, Will Humphries wrote:
>> Hi Fawei,
>>
>> You're correct: the mean is calculated as a weighted average, with the
>> first scale option worth 1 point, the second worth 2, and so on. There is
>> no way to assign custom numeric values to the options in a scale. The
>> 'mean' can be meaningless or misleading; it depends on the scale used and
>> on who you ask.
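>>
>> For instance, assuming a 5-point scale that received 3, 5 and 2 responses
>> for its first three options and none for the last two, the reported mean
>> would be (1*3 + 2*5 + 3*2 + 4*0 + 5*0) / 10 = 1.9.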
>>
>> We had a local requirement at Tufts that, for some scales, the rightmost
>> scale option a student sees should be worth 1 point instead. To accomplish
>> this, we added a checkbox to the 'Create/Modify Scale' page that, when
>> selected, reverses the order in which the scale options are displayed
>> while an evaluation is being taken. The work is here:
>> https://jira.sakaiproject.org/browse/EVALSYS-1387 . I don't think it has
>> made it into community trunk.
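>>
>> Roughly, the behaviour is as in this small sketch (illustrative only, not
>> the actual EVALSYS-1387 code; the names are invented): the stored scale is
>> left untouched, so the first stored option still scores 1 point, and only
>> the rendering order is flipped.
>>
>>      import java.util.ArrayList;
>>      import java.util.Collections;
>>      import java.util.List;
>>
>>      public class ReverseScaleSketch {
>>
>>          // Returns the option labels in the order a student should see them.
>>          // 'reverseDisplay' stands in for the checkbox on the Create/Modify
>>          // Scale page; scoring still uses the original stored order.
>>          static List<String> displayOrder(List<String> options, boolean reverseDisplay) {
>>              List<String> shown = new ArrayList<>(options);
>>              if (reverseDisplay) {
>>                  Collections.reverse(shown);
>>              }
>>              return shown;
>>          }
>>
>>          public static void main(String[] args) {
>>              List<String> scale = List.of("Strongly agree", "Agree", "Neutral",
>>                                           "Disagree", "Strongly disagree");
>>              // with the box ticked, "Strongly agree" ends up rightmost,
>>              // which is the option worth 1 point
>>              System.out.println(displayOrder(scale, true));
>>          }
>>      }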
>>
>> -Will
>>
>> On 2/18/14 12:25 PM, Fawei Geng wrote:
>>> Dear all,
>>>
>>> When looking at the evaluation results via the web interface, each
>>> (rating scale and multiple answer) question has a 'mean'. Having had
>>> a closer look, I think that statistically the 'mean' is a bit
>>> meaningless, or even misleading.
>>>
>>> For example, I tested two 'rating scale' questions. One question uses a
>>> 5-point 'Strongly agree to strongly disagree' scale while the other uses
>>> a 5-point 'Strongly disagree to strongly agree' scale. For both questions
>>> I chose the same answer, 'strongly agree', but I got mean = 1.00 for the
>>> first question and mean = 5.00 for the second. This means that the system
>>> always gives the first option a value of '1' and the fifth option a value
>>> of '5', regardless of the direction of the scale.
>>>
>>> Does anyone have a better way of using the 'mean' that I am not aware of?
>>>
>>> Many thanks
>>>
>>> Fawei
>>>
>>> ------------------------------------------------------------
>>> Fawei Geng, FHEA CMALT MBCS
>>>
>>> Learning Technology Support Officer
>>> IT Services, University of Oxford
>>>
>>> 13 Banbury Road, Oxford OX2 6NN
>>>
>>> Blog: http://blogs.oucs.ox.ac.uk/fawei/
>>>
>>> Twitter: http://twitter.com/oxford4learning/
>>>
>>> ------------------------------------------------------------
>>>
>>>
>>>
> --
> Daniel Merino Echeverría
> daniel.merino at unavarra.es
> Gestor de teleformación - Centro Superior de Innovación Educativa.
> Tfno: 948-168489 - Universidad Pública de Navarra.
>

-- 
Daniel Merino Echeverría
daniel.merino at unavarra.es
Gestor de teleformación - Centro Superior de Innovación Educativa.
Tfno: 948-168489 - Universidad Pública de Navarra.
--
What we regard as justice is very often an injustice 
committed in our own favor. (Reveillere)

