[Portfolio] [DG: Teaching & Learning] [Management] A manifesto for Grading and Rating in Sakai

Wende Morgaine wendemm at gmail.com
Fri Oct 30 19:01:18 PDT 2009


Hi Sean,

Your comment about rubrics made me want to spread the word about a project
AAC&U <http://www.aacu.org/index.cfm> is just now wrapping up: VALUE
<http://www.aacu.org/value/index.cfm> (Valid Assessment of Learning in
Undergraduate Education).

We have spent the last 18 months working with over 90 faculty from across
the country to develop common rubrics
<http://www.aacu.org/value/rubrics/index.cfm> to be used for institutional
or program level assessment of the following areas:

Intellectual and Practical Skills

   - Inquiry and analysis
   - Critical thinking
   - Creative thinking
   - Written communication
   - Oral communication
   - Quantitative literacy
   - Information literacy
   - Teamwork
   - Problem solving
   - Reading

Personal and Social Responsibility

   - Civic knowledge and engagement—local and global
   - Intercultural knowledge and competence
   - Ethical reasoning
   - Foundations and skills for lifelong learning

Integrative Learning

   - Integrative Learning

The resulting 15 rubrics <http://www.aacu.org/value/rubrics/index.cfm> have
been tested and revised by over 100 campuses, some of which are Sakai and
OSP schools.

The rubrics <http://www.aacu.org/value/rubrics/index.cfm> are free and open
to the public for use as is or for use as catalysts for assessment
discussions and rubric creation on local campuses.  Please feel free to
include them in Sakai if your users have interest.  They have already been
integrated into other LMS and portfolio software such as LiveText, Epsilen,
and most recently, Waypoint Outcomes.

Let me know if I can provide additional information.  :)
--
Wende Bonner Morgaine
VALUE Initiative Manager, Association of American Colleges & Universities
Faculty, Portland State University
wendemm at gmail.com
(503) 577.7712 Cell


On Fri, Oct 30, 2009 at 6:50 PM, Sean Keesler <sean.keesler at threecanoes.com> wrote:
>
> Trying again...
>
> As I look over the "capabilities spreadsheet", I see a lot of focus on
> the interactions between instructors and students...
> "What do *I* need to do with *my* students?"
> One thing is to "grade" their work.
>
> I think that there is a piece that sits on top of the core capability
> to rate/grade that relates to "management of learning"...which I know
> isn't typically thought of as a key design focus of an LMS, but may be
> the concern of admins....
> "How do *I* (the program chair, department head, dean, provost) want
> to encourage/influence/support the teaching behavior of our faculty?"
> One thing is to encourage/require/foster the development/use of
> standards/rubrics in grading/rating.
>
> The idea of the application of "rubrics" (which arguably is the
> difference between an academic assessment system and any arbitrary
> rating system) is a feature that seems like it would fit into this
> latter category of possible LMS capabilities.
>
> There may be other capabilities that sit outside the
> "instructor-student" relationship. It may need to be another "tab" on
> that spreadsheet.
>
>
> Sean Keesler
> 130 Academy Street
> Manlius, New York 13104 USA
> 315-663-7756
> sean.keesler at threecanoes.com
>
>
>
> On Tue, Oct 20, 2009 at 10:01 AM, Sean Keesler
> <sean.keesler at threecanoes.com> wrote:
> > One of the things that is crucial to making meaning out of the
> > assessment process (and the grades/ratings that are the record of that
> > process) is a set of rubrics that documents HOW rating/grading should
> > be done or has been done.
> >
> > How you do or DO NOT manage and/or mandate the application of rubrics
> > to the assessment of student work is a local decision that may vary
> > within and amongst faculty members, departments, and entire colleges
> > within the university, but the capability to author, share, modify and
> > find rubrics suitable for any one application would seem to me to be
> > a missing piece of John's manifesto and one that would make the idea
> > of assessment a core piece of Sakai 3.
> >
> > It has a lot of impact on the deployment of ePortfolios where multiple
> > faculty (perhaps from different departments or colleges) could be
> > asked to blindly assess a collection of student work through their
> > lens of specialization. Providing guidance for these faculty on HOW to
> > grade a portfolio gives the entire process more validity.
> >
> > Rubrics are also a vehicle for a university to articulate how it
> > differentiates its standards for excellence from those of other colleges,
> > or for showing that program X complies with Association Y's expectations.
> >
> > A while ago I jotted down some different ways that rubrics might be
> > managed in an LMS.  I believe that issues like the Spellings
> > Commission Report and the No Child Left Behind fiasco (K-12) have
> > raised the profile of assessment in the US, so rubrics may be
> > receiving more attention here than elsewhere. It may be interesting
> > to see what patterns exist in the community around the application
> > and use of rubrics.
> >
> > 1. Managed assessments:
> > Some rubrics are rather specific to (and must be tied to) a particular
> > assessment item and must be approved by an "assessment coordinator"
> > for educational QA purposes as part of a larger assessment system
> > strategy. Changing the assessment/rubric in this case involves more
> > than just the teacher.
> >
> > 2. Generally reusable (but unchangeable) rubrics
> > Some rubrics may be general purpose rubrics that are NOT tied to an
> > assessment, but the dissemination of these approved rubrics may be a
> > strategy of an institution to push forward an agenda of best practice
> > for assessment by providing a handy reference library of general
> > purpose writing, mathematics and science rubrics (for example). While
> > the choice whether or not to use one of these "off the shelf" rubrics
> > (and which one) is left to the teacher, providing some information to
> > the teacher about the school's expectations of its students at
> > different stages (and perhaps suggesting an appropriate rubric for
> > this grade level/stage of development) would make this service more
> > valuable.
> >
> > 3. Reusable rubric templates:
> > Similar to the above, but the "off the shelf" rubrics in the library are
> > merely starting points. There is no priority placed on ensuring that
> > everyone is doing assessment the exact same way. When a teacher uses
> > one of these rubrics, they can easily edit the performance indicators
> > to suit their needs and create a new rubric, just for their new
> > assignment. (The RubiStar approach.)
> >
> > 4. Sharing of rubrics:
> > This is a bottom up approach to establishing "best practice". As the
> > teachers create their own rubrics against goals, they have the
> > opportunity to publish them as part of the "reusable" library so other
> > teachers can use/edit/republish them. (Someone?)
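> >
> > To make those four modes a little more concrete, here is a very rough sketch
> > of how they might show up in a rubric data model. Every name and field below
> > is invented for illustration - none of this is existing Sakai or OSP API.
> >
> >     import java.util.ArrayList;
> >     import java.util.List;
> >     import java.util.UUID;
> >
> >     // Hypothetical sketch only.
> >     enum RubricPolicy {
> >         MANAGED,            // 1. tied to one assessment; changes need coordinator approval
> >         REUSABLE_LOCKED,    // 2. general purpose, usable as-is but not editable
> >         REUSABLE_TEMPLATE,  // 3. "off the shelf" starting point, copied and edited per assignment
> >         SHARED_BY_AUTHOR    // 4. teacher-authored, published back into the shared library
> >     }
> >
> >     class Rubric {
> >         String id = UUID.randomUUID().toString();
> >         String title;
> >         String ownerId;     // author, or the approving assessment coordinator
> >         RubricPolicy policy;
> >         List<String> performanceIndicators = new ArrayList<String>();
> >
> >         // Modes 3 and 4: reuse means copying, so the shared rubric is never mutated.
> >         Rubric copyForAssignment(String teacherId) {
> >             Rubric copy = new Rubric();
> >             copy.title = this.title + " (adapted)";
> >             copy.ownerId = teacherId;
> >             copy.policy = RubricPolicy.REUSABLE_TEMPLATE;
> >             copy.performanceIndicators = new ArrayList<String>(this.performanceIndicators);
> >             return copy;
> >         }
> >     }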
> >
> >
> >
> > Sean Keesler
> > 130 Academy Street
> > Manlius, New York 13104 USA
> > 315-663-7756
> > sean.keesler at threecanoes.com
> >
> >
> >
> > On Fri, Oct 16, 2009 at 11:07 AM, Noah Botimer <botimer at umich.edu> wrote:
> >> Hello John,
> >> Thank you for the rather comprehensive narrative. I believe that these are
> >> important for the archives as we change our ideas and software over time.
> >> They leave a better historical record of our state of mind at any given
> >> point than a pile of JIRA tickets. Our successive approximation is better
> >> validated when we have a record of these richer "data" points.
> >> Now, more on task...
> >> This is a fair account from my perspective, and is especially important in
> >> that it carves out a first-class place for two things that have been
> >> historical weaknesses:
> >>  1. The ability to treat various artifacts individually and in collections,
> >> consistently, across types of "stuff" and activity (e.g., reflection vs.
> >> feedback vs. grading)
> >>  2. The ability to retrieve meaningful performance (or other) data in detail
> >> and aggregate, consistently, and without extensive one-off programming
> >>
> >> Interestingly enough, these two areas are what I've spent four years working
> >> on -- so I suppose it's not surprising that I call them out. I mention them
> >> as weaknesses from my experience. It has been difficult to combine
> >> assignment information with student-crafted presentation. It has been
> >> difficult to combine course-based (assignment, quiz, etc.) data and
> >> program-based activity (annual review, capstone, student teaching
> >> performance) and map them to curricular goals and reports...
> >> Please do not take my comments as complaints about where we are. What is
> >> more important is that I see this narrative as recognizing these activities
> >> not, as we have, as things that can be bolted on post-construction but,
> >> rather, as shaping the core provisions of a meaningful academic and
> >> collaborative platform. We are, as a community, much more aware of our
> >> successes and shortfalls. This, I feel, is very healthy and inspiring.
> >> I believe this discussion is going in the right direction and sincerely hope
> >> that we can find the energy to support it.
> >> Thanks,
> >> -Noah
> >> On Oct 16, 2009, at 6:02 AM, John Norman wrote:
> >>
> >> I have collected my thoughts around grading and rating in Sakai. I offer
> >> them now partly because I feel ready, partly because there are open
> >> questions about Gradebook in Sakai 3 and partly because we have just had a
> >> discussion in which I suggest it is hard to break things out of a coherent
> >> Sakai 3 project. If accepted as is, this represents a logical area of
> >> activity that can readily be envisioned as a standalone activity - maybe
> >> even a separate product.
> >> First of all I'd like to suggest that grading is a subset of a general
> >> rating and feedback activity. Many artifacts can be rated, from instructor
> >> performance during a course (course evaluation), through quality of a
> >> teaching asset or exercise (rating) to assessing the quality of a student
> >> portfolio (feedback) and assessing the performance of a student on an
> >> assignment or test (grading). The common pattern is: an artifact is produced
> >> by one individual (or group) and some value judgement is recorded by one or
> >> more other people.
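> >>
> >> Just to make that pattern concrete, a minimal sketch (every name below is
> >> invented for illustration; it is not an existing Sakai interface) might be
> >> no more than:
> >>
> >>     import java.util.Date;
> >>     import java.util.List;
> >>
> >>     // Illustrative only: an artifact produced by one person or group, and a
> >>     // value judgement recorded against it by somebody else.
> >>     class Artifact {
> >>         String id;
> >>         String producerId;   // the individual or group that produced the work
> >>         String description;  // may stand in for a non-electronic artifact
> >>     }
> >>
> >>     class Judgement {
> >>         String artifactId;
> >>         String judgeId;      // student, teacher, or peer
> >>         String rubricId;     // optional rubric the judgement was made against
> >>         double rawScore;
> >>         String commentary;
> >>         Date recordedAt;
> >>     }
> >>
> >>     interface RatingService {
> >>         void record(Judgement judgement);
> >>         List<Judgement> judgementsFor(String artifactId);
> >>     }
> >>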
> >> The process by which an artifact is judged can be simple or complex. Complex
> >> processes include multi-stage workflows where raw scores are obtained by one
> >> process and raw scores moderated to a final grade by another process. I see
> >> plagiarism detection as one particular wrinkle in such a workflow.
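> >>
> >> A toy sketch of such a two-stage workflow (names invented; the only point is
> >> that raw scoring and moderation are separate steps in the record):
> >>
> >>     import java.util.List;
> >>
> >>     // Hypothetical: raw scores come from one process, and a separate
> >>     // moderation step (second marker, exam board, plagiarism check, ...)
> >>     // produces the final grade that goes on the official record.
> >>     interface GradingWorkflow {
> >>         List<Double> collectRawScores(String artifactId);
> >>         double moderateToFinalGrade(String artifactId, List<Double> rawScores);
> >>     }
> >>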
> >> I suggest that (nearly) everything in Sakai should be ratable/gradable. I
> >> will refer to the ratable/gradable elements as "artifacts" to indicate that
> >> they may not be 'technical elements' but some aggregation of technical
> >> elements that makes sense for rating/grading purposes. Moreover, we should
> >> not forget that some of the artifacts that are rated/graded may not be
> >> electronic and the 'artifact' may be a proxy for some real world activity or
> >> output that cannot be captured electronically.
> >> The activity of rating/grading is essentially a human judgement. Tests and
> >> quizzes represent a subset of this situation where the human codifies their
> >> judgement into rules applied by the testing engine and the test engine
> >> automates the application of scores. The quiz with the student answers
> >> represents the artifact and the raw scores and/or processed grade represents
> >> the judgement. The people involved in rating/grading can be anyone:
> >> students, teachers, peers.
> >> The artifact to be rated or graded may not be stable over time, in which
> >> case a 'snapshot' of some kind is desirable for audit purposes. An example
> >> might be the state of my personal portfolio pages on the first day of May,
> >> when they are declared to be assessed. I may wish to continue maintaining
> >> the pages after the assessment, but their status at the time of assessing is
> >> worth recording. A different example might be my performance in a piece of
> >> drama. I have no idea how this would be recorded in the real world, but I
> >> imagine that the grader might write down some critique/commentary and then
> >> assign a grade. The critique/commentary would become the recorded artifact
> >> (in some places there might be a video recording but I don't assume that)
> >> and separately there would be a grade/score/rating. Teacher performance in
> >> class evaluated by students is not far from this model. The questions in the
> >> evaluation form might be considered the rubric for the teacher's performance.
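> >>
> >> Sketching the snapshot idea in the same spirit (made-up names, purely
> >> illustrative): the live pages keep changing, but the judgement points at an
> >> immutable copy taken when the work was declared ready for assessment.
> >>
> >>     import java.util.Date;
> >>
> >>     // Hypothetical: a frozen record of what the artifact looked like at the
> >>     // moment it was declared assessable (e.g. the first day of May).
> >>     class ArtifactSnapshot {
> >>         String artifactId;
> >>         String archivedContentRef;  // pointer to the frozen copy in storage
> >>         Date takenAt;
> >>     }
> >>
> >>     interface SnapshotService {
> >>         ArtifactSnapshot takeSnapshot(String artifactId);
> >>     }
> >>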
> >> In this world, we would want a flexible reporting platform that allows grade
> >> information (including an archive of artifact snapshots) to be collected and
> >> analysed (and sometimes further processed). I suggest we think of using
> >> something like BIRT to create this flexible reporting environment and then
> >> consider certain predefined views of the data and derived reports from the
> >> data as the essence of "GradeBook" functionality. i.e. "GradeBook" is a
> >> subset of functionality from a powerful reporting environment. Ultimately
> >> "the official record" will need to be updated.
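> >>
> >> As a purely hypothetical illustration (names invented, reusing the Judgement
> >> and ArtifactSnapshot types sketched above), the predefined gradebook view
> >> would just be one query over the same data that any ad hoc report or BIRT
> >> design could also see:
> >>
> >>     import java.util.List;
> >>     import java.util.Map;
> >>
> >>     // Sketch only: raw judgements and snapshots feed a general reporting
> >>     // service, and "GradeBook" is one predefined view over that data.
> >>     interface GradeReportingService {
> >>         List<Judgement> judgements(String courseId);        // raw data for reports
> >>         List<ArtifactSnapshot> snapshots(String courseId);  // audit archive
> >>         Map<String, Double> gradebookView(String courseId); // student id -> final grade
> >>     }
> >>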
> >> I think it is really important to anticipate that some of the artifacts to
> >> be graded may come from outside Sakai and Sakai needs to be able to accept
> >> artifacts for grading and also to accept graded artifacts for inclusion in
> >> reporting. I see two main implementation options for Sakai:
> >> 1. A Sakai service with published external entry points (Moodle/Mahara
> >> integration would be an example)
> >> 2. A new Sakai 'product' which would be an institutional grading/rating
> >> service that receives artifacts from a number of places (including the Sakai
> >> Course Management System) and manages the grading/rating workflow into a
> >> flexible reporting system that creates a complete record for an individual
> >> and allows this information to be displayed in a number of places (including
> >> Sakai CMS)
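> >>
> >> For option 1, the published entry points might be as small as something like
> >> this (signatures invented for illustration; not an existing Sakai or Moodle
> >> web service):
> >>
> >>     // Hypothetical external entry point: another system (Moodle, Mahara, a
> >>     // student information system, ...) submits artifacts for grading and
> >>     // later pushes back or retrieves the graded result.
> >>     interface ExternalGradingEntryPoint {
> >>         String submitArtifact(String externalSystemId, String studentId,
> >>                               byte[] content, String mimeType);
> >>         void acceptGradedArtifact(String artifactId, double grade, String commentary);
> >>     }
> >>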
> >> A strong attraction of the second model is that it fits with the idea that
> >> assessing performance is a core competence of the institution that preceded
> >> and will survive the CMS, but which is unlikely to be developed for us by
> >> the commercial world. It could also represent a shared service with a
> >> student information system.
> >> Having set out my manifesto, it is interesting to consider what the product
> >> council might do with it. From my personal perspective it would be great if
> >> we adopted it as the Sakai manifesto (following review/revision) and called
> >> for developments to align with it, but there is an open question regarding
> >> the value of 'adoption' of the manifesto if nobody is interested in
> >> developing products/code that address the manifesto.
> >> John
> >> PS I have forwarded this message that I saw as I came in this morning
> >> because in my mind it illustrates an early step in the direction of my
> >> manifesto, although I have taken it much further (perhaps unrecognisably).
> >> Begin forwarded message:
> >>
> >> From: David Horwitz <david.horwitz at uct.ac.za>
> >> Date: 16 October 2009 09:29:58 BST
> >> To: sakai-dev <sakai-dev at collab.sakaiproject.org>,
> >> production at collab.sakaiproject.org, announcements at collab.sakaiproject.org
> >> Subject: [Announcements] 2.7 Framework: commons and edu-services 1.0.0-beta01
> >> released
> >> Hi All,
> >>
> >> We're proud to announce the first of 2 framework releases in support of the
> >> upcoming 2.7 release. The creation of these bundles aims to rationalize our
> >> dependency tree and enable a more modular approach to Sakai releases.
> >>
> >> Commons 1.0.0-beta01
> >> The commons package contains common services depended on by a number of
> >> Sakai tools, but outside the scope of the Kernel. The services included are:
> >>
> >> SakaiPerson Service (profile data)
> >> Type Service
> >> privacy service
> >> archive service
> >> import service
> >>
> >> The project site can be viewed at:
> >> http://source.sakaiproject.org/release/common/1.0.0-beta01/
> >> (Note: experimental site, no Sakai skins etc.)
> >>
> >> Edu-Services 1.0.0-beta01
> >> Edu-services contains core shared services that support teaching and
> >> learning functionality in Sakai. It contains:
> >>
> >> Course management service
> >> Gradebook service
> >> Sections service
> >>
> >> The project site can be viewed at:
> >> http://source.sakaiproject.org/release/edu-services/1.0.0-beta01/
> >>
> >