[DG: Teaching & Learning] benchmarking question (was: Re: [Using Sakai] Thank you and Next Steps for those who attended the learning activities investigation kickoff )

kamann at stanford.edu
Wed Sep 30 16:42:36 PDT 2009


Hi Luke,
Thanks for your question--does this mean you were on the kickoff call on Tuesday, or were you not able to make it? Let me know so I can add you to the meeting minutes (http://confluence.sakaiproject.org/display/UX/Kickoff+Meeting). I hope you don't mind, but I'm going to cc the UX and pedagogy lists, since I think others will have similar questions.

Your hunch that your list might be too specific for the benchmarking matrix is right. I'll admit, we haven't fleshed out the benchmarking process nearly as much as the end-user interviews, but it's more about broad comparisons of products that campuses are using other than Sakai (or other than T&Q in Sakai); we want to understand at a high level what is and isn't successful about other products. Since every row would have to be filled out for every product, we can't get too specific. What we might do is include a section called "Security" with two rows: "Ability to control when and where students take a test" and "Ability to control what, when, and where students review after completing a test." Then people could fill that in with as much detail as is relevant. I'll add that now.

What I'm really curious about is what application Weber State is using to meet the needs you allude to, if not Sakai--do you have columns to contribute? Or are these completely unmet needs you are expressing?

I think what you are trying to say is that these capabilities are important unmet needs for certain people at your campus, not that you are using other products to meet them. I have two alternatives for capturing this, in addition to the suggestion above.

1) End User Interview: I strongly believe we could benefit from understanding instructors who need to conduct high-stakes testing. Stanford's original Tests & Quizzes contribution was around formative testing (meant to help students learn, low grade weight, therefore lots of feedback) rather than summative testing (high stakes, high security and control); that latter half was contributed by other schools.

It's that latter half, with all its settings, that we have heard people find daunting--they are never sure they've made the right selections, so only the most motivated instructors are willing to risk learning the settings, experimenting, and actually using the tool. Part of the solution might be better feedback to make it more obvious what they've selected, but as I was trying to say during the kickoff, just as important is understanding the context these requests are coming from. Who is asking for this? Is it multiple, distinct types of instructors, or are they very similar? What motivates each of them? What scenario is driving this?

Thus, interviews with instructors who do high-stakes testing are going to be key. What is your role at Weber State? I'm not sure whether you are coming to this initiative as an instructor who needs this type of high-stakes testing or as a developer/manager on campus to whom requests are coming. If the latter, are instructors or department heads coming directly to you, or is there some sort of customer-facing advocate on campus who serves as an intermediary? In other words, are you a potential interviewee or an interviewer (granted, one who's maybe never done it before)? If we don't get an interview from your campus that can contextualize these feature requests, hopefully there are campuses with similar high-stakes testing needs that can provide this kind of essential insight, or we can work something out.

2) Stakeholder Question: If you visit http://confluence.sakaiproject.org/display/UX/Stakeholder+Interviews, you'll see there are several places you might describe these features. However, each of these lists will ask you to put it in the context of an instructional challenge or a particular person, so I hope you can organize it along these lines. (If you can't, that points to the need for end-user interviews, but do your best and indicate where you are unsure.)

Thoughts for the Future of Learning Activities in Sakai: http://confluence.sakaiproject.org/display/UX/Thoughts+for+the+Future+of+Learning+Activities+in+Sakai

Tell us about the users who sparked new functionality (not by name):
http://confluence.sakaiproject.org/display/UX/Tell+us+about+the+users+who+sparked+new+functionality

Keli Amann
User Experience Specialist
Academic Computing, Stanford University

----- Original Message -----
From: "Luke Fernandez" <luke.fernandez at gmail.com>
To: "Keli Sato Amann" <kamann at stanford.edu>
Sent: Wednesday, September 30, 2009 10:04:22 AM GMT -08:00 US/Canada Pacific
Subject: benchmarking question (was: Re: [Using Sakai] Thank you and Next  Steps for those who attended the learning activities investigation kickoff )

Keli,

How much specificity is useful on the y-axis of the benchmarking grid?  I'm
tempted to add criteria like:

Can the instructor pick, from a list of testing centers, which
centers the test is available at?

or criteria like:

Allow students to review only those questions they missed.
Allow review at any computer, not just at the sites where the test was
available.
Allow review only after the test run is completed.
Allow review anytime after the student completes the test.

But I'm not sure these things are what you are looking for in benchmarking....

Cheers,

Luke

On Tue, Sep 29, 2009 at 10:25 PM,  <kamann at stanford.edu> wrote:
> Hello
> Thanks to all who attended our kickoff call. It was great to have so many people attend, though we are sorry if some had difficulty getting on the conference line--please let us know if you couldn't get in so we can adjust our plans for the next meeting.
>
> Meeting Notes
> (posted to http://confluence.sakaiproject.org/display/UX/Kickoff+Meeting)
> We'll be adding the names of the people we either saw in the Breeze or heard on the phone, but if we missed you or misspelled your name, please add your name to this page.
>
>    * For those of you who want to participate in the stakeholder discussion, visit http://confluence.sakaiproject.org/display/UX/Stakeholder+Interviews
>    * For those of you who wish to contribute to benchmarking of related products, please leave comments at http://confluence.sakaiproject.org/display/UX/Benchmarking
>    * For those of you who want to participate in end user interviews, read the next steps
>
> Next Steps
> For those of you who are interested in conducting end user interviews at your campus, there are a few to-dos:
> 1) Take a look at the two outstanding issues below that were raised during the meeting.
> 2) Visit the page Ethnographic Interviews - Interviewing and Observing Users and:
>
> a) Review the Contextual Inquiry guidelines. Make note of any questions you have
>
> b) Start thinking about people you might recruit and enter the types of people you think you might interview (do not contact them until you've considered the IRB issue below)
>
> 3) Attend upcoming meetings. We'll send out a reminder this Friday with meeting details. We'll be sure to send meeting details to the sakai-user, ux, and pedagogy lists, but we anticipate meetings every Tuesday at the same time for the next 5 weeks. A detailed calendar can be seen at http://confluence.sakaiproject.org/display/UX/Schedule+for+Investigation+Phase
>
> Outstanding Issues
> A few issues were raised during the meeting. We can discuss these by email and at the next meeting.
>
> 1) Institutional Review Board: Although this is research with human subjects, we believe it would be considered a quality improvement project, which many IRBs would consider exempt. Nonetheless, since every IRB is different, please contact your campus IRB if you have concerns as to whether they would consider this endeavor generalizable research with human subjects. Contact us if you have any questions; we will post a page describing this project, including the confidentiality measures and consent forms we plan on using, before the end of the week. In short, though, we do not plan on publishing or presenting this research; we will describe the personas based on user interviews as we describe the designs they inspire. We will credit all schools and people that participated, but will not make the individual interview notes public, except anonymously among interviewees.
>
> 2) Instructional Designers as interviewers and interviewees: Because instructional designers work directly with instructors, we think they'll do well at interviewing end users, even if the goal is finding longer-term design solutions rather than an immediate fix. However, we were reminded during the kickoff that we also want instructional designers as interviewees, since they are involved in the creation of learning activities. We'll be reaching out to the IDs on the call to discuss ways they can be both interviewers and interviewees (we may create a custom Contextual Inquiry guideline for IDs or enlist the IDs to help create it).
