[Portfolio] [DG: Teaching & Learning] [Management] A manifesto for Grading and Rating in Sakai

Sean Keesler sean.keesler at threecanoes.com
Tue Nov 3 07:43:10 PST 2009


I see your point, Bob.

When I was at Syracuse University, we provided a matrix that
articulated the faculty's conceptual framework around what they
thought was important for graduates of the teacher preparation
program. However, one of their goals was to encourage the students to
begin to create their own framework for how they thought about the
practice of teaching and to explain their own practice in their own
terms. Assessing whether or not they did that was recognized as a
difficult problem. You hope it happens without prompting, but once the
prompt is there, it seems less authentic when performed (but is it
really?). I am still working with them to develop their OSP
implementation to address these concerns.

Even so, their first stab at an agreed-upon framework for making
decisions was seen as helpful for teaching students earlier in the
program what a "conceptual framework" is. It also catalyzed a
discussion and eventual agreement about the program objectives amongst
the faculty.


Sean Keesler
130 Academy Street
Manlius, New York 13104 USA
315-663-7756
sean.keesler at threecanoes.com



On Tue, Nov 3, 2009 at 9:37 AM, Robert Squillace <rs84 at nyu.edu> wrote:
> Hey All,
>
> First, please - Bob!  I'm only "Dr Squillace" on my vita and in my signature line!
>
> Yes, I do think LMS-assisted, rubric-driven assessment of the kind you describe, where both students and faculty are aware of the prescribed learning goals that equate with success in the program, can have tremendous value - for a certain kind of program.
>
> Again taking an example like architecture, it could be a great benefit for students to know that both they and the program's success are ultimately going to be assessed on, e.g., the ability to predict how different building materials will react under different climatological conditions, the ability to design an airport terminal with all the required elements (that one got my sister-in-law--she forgot about the manager's office and wound up putting it in such a location that the airport manager would have had to climb over the baggage ramps to get in!), etc.  Students could both be accountable and hold their professors accountable for learning what they needed to learn.
>
> But in a humanities program/major, the goal is not so much students demonstrating, for instance, the ability to read Shakespeare and understand the changing cultural position of his work.  The mark of success for a humanities program is students developing the curiosity, e. g., to read Shakespeare and Shakespeare criticism on their own because they themselves want to understand his cultural position.  The goal is self-directed learning - what we've come in our program to call "promoting student agency."
>
> Assessing self-direction via rubrics transparent to students and faculty is problematic.  You can't use as a program rubric "student must demonstrate that he or she has seen and understood a Shakespeare play on his or her own," because then they aren't seeing it on their own.  This is a central dilemma for humanities education - you do need to articulate specific learning goals for basic program assessment purposes, but at the same time you need to keep in mind that these goals are all means to the greater end of promoting self-directed education.  Even when sharing a syllabus with students, instructors need to be very careful that they don't confuse the means (success at what's demanded on paper A and presentation B) for the end (being able to fend for themselves educationally).  It's an issue we face constantly - the "what do I need to do to get an A?" question.
>
> Anyway, I'm new here - I'm just sharing the view from my particular perspective!
>
> Best,
> Bob
>
> Dr. Robert L Squillace
> Assistant Dean for Academic Affairs
> Liberal Studies Program
> New York University
> 726 Broadway, 6th Floor
> New York, NY 10003-9580
> (212) 992-8735
> rs84 at nyu.edu
>
> ----- Original Message -----
> From: Sean Keesler <sean.keesler at threecanoes.com>
> Date: Tuesday, November 3, 2009 0:29 am
> Subject: Re: [DG: Teaching & Learning] [Portfolio] [Management] A manifesto for Grading and Rating in Sakai
> To: Robert Squillace <rs84 at nyu.edu>
> Cc: Noah Botimer <botimer at umich.edu>, management at collab.sakaiproject.org, portfolio at collab.sakaiproject.org, "pedagogy at collab.sakaiproject.org Learning" <pedagogy at collab.sakaiproject.org>
>
>> It sounds like Dr. Squillace agrees that the ability to create, share
>> and use rubrics in an LMS could be a good thing for some programs.
>>
>> I'm not sure I would agree that assessment of portfolios
>> (liberal arts or otherwise) means that there should be no rubrics
>> involved. If rubrics were used, it might make the assessment process
>> transparent to students and document consensus amongst the faculty
>> about what they will use as the metric for distinguishing one level of
>> "understanding" (in this case) from another. I think that is what the
>> VALUE project is all about.
>>
>> Sean
>>
>>
>>
>> On Mon, Nov 2, 2009 at 1:16 PM, Robert Squillace <rs84 at nyu.edu> wrote:
>> > Speaking as an administrator in a Liberal Arts program currently
>> engaged in a major assessment effort (and also sitting on the
>> University's assessment committee), I just wanted to note that I see a
>> major difference in using an LMS to help assess a pre-professional
>> curriculum as opposed to a liberal arts curriculum.  For
>> pre-professional programs, the assessment measures can be pretty
>> direct - if 90% of your students pass the brutal architecture
>> certifying exam on the first try, for instance, it's clear you're
>> doing a very good job.  Assessing the success of a liberal arts
>> education is harder.  An LMS can, of course, allow you to disseminate
>> rubrics more readily than you could without an LMS configured for that
>> purpose, and rubrics do allow a program to determine what students are
>> learning without quite so many variations in interpretation as you'd
>> find between the grades instructors give in their individual courses
>> based on their individual senses of what should count.
>> >
>> > But the central goal of liberal education is to develop
>> self-understanding (and understanding in general), which is extremely
>> hard to measure; it's the very thing that can't be reduced to rubrics,
>> which of necessity focus on achievements that different observers can
>> agree are demonstrably present in the piece(s) of work being assessed.
>>  You can assess whether students can tell the difference between poems
>> by John Donne and Rumi, but it's much harder to find a concrete
>> manifestation of whether you have given them the ability meaningfully
>> to encounter a poem on their own.  You can, of course, develop some
>> sort of program-wide exit assignment that, e. g., asks students to
>> analyze a poem they've never read before, but all that will tell you
>> is how successful they have been in adopting the rhetorical stances of
>> your program, not the extent to which your program has influenced the
>> way they think in situations when they know they are not being
>> directly observed and judged.
>> >
>> > I see great potential for using an LMS with a robust portfolio tool
>> to develop substantially different and possibly far superior means of
>> assessment for liberal arts programs.  Rubrics are applied to the end
>> product of learning on the assumption that the process of change that
>> led to those final products, the growth of the student's mind, is not
>> available itself for observation.  But with a strong portfolio tool,
>> one in which students do not merely stockpile the precise items the
>> program prescribes but control the type of items they save and the
>> manner in which they are presented, one might have a window into the
>> development of each student's mind.  In order to configure an
>> open-ended portfolio, students need to practice taxonomy, placing
>> items in categories that reflect the way they think about the
>> relations between them.  If such a portfolio is versioned and makes
>> space for student reflection, it becomes possible to see how a
>> student's way of thinking and modes of understanding grow (or fail
>> to) over time; if such a portfolio is allowed to
>> grow organically, the very sorts of items students choose to save and
>> comment upon, the ways they choose to represent themselves, can tell a
>> program a great deal about what is and is not having an influence on
>> students over time, what is and is not sticking.  In other words, a
>> program can look directly at how a student's work has changed over his
>> or her time at the University, see both what they produced and what
>> they thought about it at a metacognitive level, and get a sense of the
>> real difference what they teach is making.  I think we've always
>> wanted to see how students' ways of thinking change over time; it was
>> just never practical in a world of discrete courses in which the work
>> of one term was basically wiped clean to start the next.
>> >
>> > Anyway, Lucy Appert, Barbra Mack, and I, as well as many colleagues
>> at NYU, are working along these lines, and would be curious to see
>> what people think.
>> >
>> > Yours,
>> > Bob Squillace
>> >
>> > Dr. Robert L Squillace
>> > Assistant Dean for Academic Affairs
>> > Liberal Studies Program
>> > New York University
>> > 726 Broadway, 6th Floor
>> > New York, NY 10003-9580
>> > (212) 992-8735
>> > rs84 at nyu.edu
>> >
>> > ----- Original Message -----
>> > From: Sean Keesler <sean.keesler at threecanoes.com>
>> > Date: Friday, October 30, 2009 9:50 pm
>> > Subject: Re: [DG: Teaching & Learning] [Portfolio] [Management] A
>> manifesto for Grading and Rating in Sakai
>> > To: Noah Botimer <botimer at umich.edu>
>> > Cc: management at collab.sakaiproject.org,
>> portfolio at collab.sakaiproject.org, "pedagogy at collab.sakaiproject.org
>> Learning" <pedagogy at collab.sakaiproject.org>
>> >
>> >> Trying again...
>> >>
>> >> As I look over the "capabilities spreadsheet", I see a lot of focus on
>> >> the interactions between instructors and students...
>> >> "What do *I* need to do with *my* students?"
>> >> One thing is to "grade" their work.
>> >>
>> >> I think that there is a piece that sits on top of the core capability
>> >> to rate/grade that relates to "management of learning"...which I know
>> >> isn't typically thought of as a key design focus of an LMS, but may be
>> >> the concern of admins....
>> >> "How do *I* (the program chair, department head, dean, provost) want
>> >> to encourage/influence/support the teaching behavior of our faculty?"
>> >> One thing is to encourage/require/foster the development/use of
>> >> standards/rubrics in grading/rating.
>> >>
>> >> The application of "rubrics" (which is arguably the difference
>> >> between an academic assessment system and any arbitrary rating
>> >> system) seems like it would fit into this latter category of
>> >> possible LMS capabilities.
>> >>
>> >> There may be other capabilities that sit outside the
>> >> "instructor-student" relationship. It may need to be another "tab"
>> on
>> >> that spreadsheet.
>> >>
>> >>
>> >> Sean Keesler
>> >> 130 Academy Street
>> >> Manlius, New York 13104 USA
>> >> 315-663-7756
>> >> sean.keesler at threecanoes.com
>> >>
>> >>
>> >>
>> >> On Tue, Oct 20, 2009 at 10:01 AM, Sean Keesler
>> >> <sean.keesler at threecanoes.com> wrote:
>> >> > One of the things that is crucial to making meaning out of the
>> >> > assessment process (and the grades/ratings that are the record of that
>> >> > process) is a set of rubrics that documents HOW rating/grading should
>> >> > be done or has been done.
>> >> >
>> >> > How you do or DO NOT manage and/or mandate the application of rubrics
>> >> > to the assessment of student work is a local decision that may vary
>> >> > within and amongst faculty members, from departments to entire
>> >> > colleges within the university, but the capability to author, share,
>> >> > modify and find rubrics suitable for any one application would seem to
>> >> > me to be a missing piece of John's manifesto and one that would make
>> >> > the idea of assessment a core piece of Sakai 3.
>> >> >
>> >> > It has a lot of impact on the deployment of ePortfolios where multiple
>> >> > faculty (perhaps from different departments or colleges) could be
>> >> > asked to blindly assess a collection of student work through their
>> >> > lens of specialization. Providing guidance for these faculty on HOW to
>> >> > grade a portfolio gives the entire process more validity.
>> >> >
>> >> > Rubrics are also a vehicle for a university to articulate how it
>> >> > differentiates its standards for excellence from those of other
>> >> > colleges, or for showing that program X complies with Association Y's
>> >> > expectations.
>> >> >
>> >> > A while ago I jotted down some different ways that rubrics might be
>> >> > managed in an LMS.  I believe that issues like the Spellings
>> >> > Commission Report and the No Child Left Behind fiasco (K12) have
>> >> > raised the profile of rubrics in the US, and so they may be receiving
>> >> > more attention here than elsewhere. It may be interesting to see what
>> >> > patterns exist in the community around the application and use of
>> >> > rubrics.
>> >> >
>> >> > 1. Managed assessments:
>> >> > Some rubrics are rather specific to (and must be tied to) a particular
>> >> > assessment item and must be approved by an "assessment coordinator"
>> >> > for educational QA purposes as part of a larger assessment system
>> >> > strategy. Changing the assessment/rubric in this case involves more
>> >> > than just the teacher.
>> >> >
>> >> > 2. Generally reusable (but unchangeable) rubrics:
>> >> > Some rubrics may be general purpose rubrics that are NOT tied to an
>> >> > assessment, but the dissemination of these approved rubrics may be a
>> >> > strategy of an institution to push forward an agenda of best practice
>> >> > for assessment by providing a handy reference library of general
>> >> > purpose writing, mathematics and science rubrics (for example). While
>> >> > the choice whether or not to use one of these "off the shelf" rubrics
>> >> > (and which one) is left to the teacher, providing some information to
>> >> > the teacher about the school's expectations of its students at
>> >> > different stages (and perhaps suggesting an appropriate rubric for
>> >> > this grade level/stage of development) would make this service more
>> >> > valuable.
>> >> >
>> >> > 3. Reusable rubric templates:
>> >> > Similar to the above, but the rubrics in the "off the shelf" library
>> >> > are merely starting points. There is no mandate to ensure that
>> >> > everyone is doing assessment in exactly the same way. When a teacher
>> >> > uses one of these rubrics, they can easily edit the performance
>> >> > indicators to suit their needs and create a new rubric, just for their
>> >> > new assignment. (The RubiStar approach.)
>> >> >
>> >> > 4. Sharing of rubrics:
>> >> > This is a bottom-up approach to establishing "best practice". As the
>> >> > teachers create their own rubrics against goals, they have the
>> >> > opportunity to publish them as part of the "reusable" library so other
>> >> > teachers can use/edit/republish them. (Someone?)
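>> >> >
>> >> > To make those four modes concrete, here is a rough, purely
>> >> > illustrative Java sketch (all names invented, not Sakai APIs) of how
>> >> > a rubric service might tell them apart:
>> >> >
>> >> > import java.util.ArrayList;
>> >> > import java.util.List;
>> >> >
>> >> > enum RubricMode { MANAGED, REUSABLE_LOCKED, TEMPLATE, SHARED }
>> >> >
>> >> > public class Rubric {
>> >> >     private final RubricMode mode;
>> >> >     private final String ownerId;        // author, or approving coordinator
>> >> >     private final List<String> criteria; // the performance indicators
>> >> >
>> >> >     public Rubric(RubricMode mode, String ownerId, List<String> criteria) {
>> >> >         this.mode = mode;
>> >> >         this.ownerId = ownerId;
>> >> >         this.criteria = new ArrayList<String>(criteria);
>> >> >     }
>> >> >
>> >> >     // Modes 1 and 2 are locked down; modes 3 and 4 invite adaptation.
>> >> >     public boolean isEditableBy(String userId) {
>> >> >         switch (mode) {
>> >> >             case MANAGED:         return false; // changes go via coordinator
>> >> >             case REUSABLE_LOCKED: return false; // use off the shelf, as approved
>> >> >             default:              return userId.equals(ownerId);
>> >> >         }
>> >> >     }
>> >> >
>> >> >     // Modes 3 and 4: a teacher derives an editable copy for one
>> >> >     // assignment and may later publish it back to the shared library.
>> >> >     public Rubric copyFor(String teacherId) {
>> >> >         return new Rubric(RubricMode.SHARED, teacherId, criteria);
>> >> >     }
>> >> > }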
>> >> >
>> >> >
>> >> >
>> >> > Sean Keesler
>> >> > 130 Academy Street
>> >> > Manlius, New York 13104 USA
>> >> > 315-663-7756
>> >> > sean.keesler at threecanoes.com
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Oct 16, 2009 at 11:07 AM, Noah Botimer
>> <botimer at umich.edu> wrote:
>> >> >> Hello John,
>> >> >> Thank you for the rather comprehensive narrative. I believe that
>> >> >> these are important for the archives as we change our ideas and
>> >> >> software over time. They leave a better historical record of our
>> >> >> state of mind at any given point than a pile of JIRA tickets. Our
>> >> >> successive approximation is better validated when we have a record of
>> >> >> these richer "data" points.
>> >> >> Now, more on task...
>> >> >> This is a fair account from my perspective, and is especially
>> >> >> important in that it carves out a first-class place for two things
>> >> >> that have been historical weaknesses:
>> >> >>  1. The ability to treat various artifacts individually and in
>> >> >> collections, consistently, across types of "stuff" and activity
>> >> >> (e.g., reflection vs. feedback vs. grading)
>> >> >>  2. The ability to retrieve meaningful performance (or other) data in
>> >> >> detail and aggregate, consistently, and without extensive one-off
>> >> >> programming
>> >> >>
>> >> >> Interestingly enough, these two areas are what I've spent four years
>> >> >> working on -- so I suppose it's not surprising that I call them out.
>> >> >> I mention them as weaknesses from my experience. It has been
>> >> >> difficult to combine assignment information with student-crafted
>> >> >> presentation. It has been difficult to combine course-based
>> >> >> (assignment, quiz, etc.) data and program-based activity (annual
>> >> >> review, capstone, student teaching performance) and map them to
>> >> >> curricular goals and reports...
>> >> >> Please do not take my comments as complaints about where we are.
>> >> >> What is more important is that I see this narrative as recognizing
>> >> >> these activities not, as we have, as things that can be bolted on
>> >> >> post-construction but, rather, as shaping the core provisions of a
>> >> >> meaningful academic and collaborative platform. We are, as a
>> >> >> community, much more aware of our successes and shortfalls. This, I
>> >> >> feel, is very healthy and inspiring.
>> >> >> I believe this discussion is going in the right direction and
>> >> >> sincerely hope that we can find the energy to support it.
>> >> >> Thanks,
>> >> >> -Noah
>> >> >> On Oct 16, 2009, at 6:02 AM, John Norman wrote:
>> >> >>
>> >> >> I have collected my thoughts around grading and rating in Sakai. I
>> >> >> offer them now partly because I feel ready, partly because there are
>> >> >> open questions about Gradebook in Sakai 3 and partly because we have
>> >> >> just had a discussion in which I suggested it is hard to break things
>> >> >> out of a coherent Sakai 3 project. If accepted as is, this represents
>> >> >> a logical area of activity that can readily be envisioned as a
>> >> >> standalone activity - maybe even a separate product.
>> >> >> First of all I'd like to suggest that grading is a subset of a
>> >> >> general rating and feedback activity. Many artifacts can be rated,
>> >> >> from instructor performance during a course (course evaluation),
>> >> >> through quality of a teaching asset or exercise (rating), to
>> >> >> assessing the quality of a student portfolio (feedback) and assessing
>> >> >> the performance of a student on an assignment or test (grading). The
>> >> >> common pattern is: an artifact is produced by one individual (or
>> >> >> group) and some value judgement is recorded by one or more other
>> >> >> people.
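>> >> >>
>> >> >> (To make the shape of that pattern concrete - a rough sketch only,
>> >> >> with invented names rather than a proposal for the actual API:)
>> >> >>
>> >> >> import java.util.Date;
>> >> >>
>> >> >> // One possible shape for the common pattern: any artifact, any
>> >> >> // rater, one recorded value judgement.
>> >> >> public final class Judgement {
>> >> >>     private final String artifactId; // portfolio, quiz attempt,
>> >> >>                                      // course, or a proxy for
>> >> >>                                      // real-world work
>> >> >>     private final String raterId;    // student, teacher, or peer
>> >> >>     private final String value;      // grade, score, rating, or feedback
>> >> >>     private final Date recordedAt = new Date();
>> >> >>
>> >> >>     public Judgement(String artifactId, String raterId, String value) {
>> >> >>         this.artifactId = artifactId;
>> >> >>         this.raterId = raterId;
>> >> >>         this.value = value;
>> >> >>     }
>> >> >> }
>> >> >> // Course evaluation, portfolio feedback and assignment grading would
>> >> >> // all be instances of this one shape, differing only in who records
>> >> >> // what about whom.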
>> >> >> The process by which an artifact is judged can be simple or complex.
>> >> >> Complex processes include multi-stage workflows where raw scores are
>> >> >> obtained by one process and then moderated to a final grade by
>> >> >> another process. I see plagiarism detection as one particular wrinkle
>> >> >> in such a workflow.
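>> >> >>
>> >> >> (Sketching the two-stage case - the moderation rule here is a toy,
>> >> >> invented purely for illustration:)
>> >> >>
>> >> >> // Stage 1 produces a raw score; stage 2 moderates it to a final
>> >> >> // grade. A plagiarism check could slot in between the two stages.
>> >> >> public final class GradingWorkflow {
>> >> >>     public static String moderate(double rawScore, double cohortMean) {
>> >> >>         double moderated = rawScore - (cohortMean - 65.0); // toy rule
>> >> >>         return moderated >= 50.0 ? "pass" : "fail";
>> >> >>     }
>> >> >> }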
>> >> >> I suggest that (nearly) everything in Sakai should be
>> >> >> ratable/gradable. I will refer to the ratable/gradable elements as
>> >> >> "artifacts" to indicate that they may not be 'technical elements' but
>> >> >> some aggregation of technical elements that makes sense for
>> >> >> rating/grading purposes. Moreover, we should not forget that some of
>> >> >> the artifacts that are rated/graded may not be electronic, and the
>> >> >> 'artifact' may be a proxy for some real world activity or output that
>> >> >> cannot be captured electronically.
>> >> >> The activity of rating/grading is essentially a human judgement.
>> >> >> Tests and quizzes represent a subset of this situation where the
>> >> >> human codifies their judgement into rules applied by the testing
>> >> >> engine and the test engine automates the application of scores. The
>> >> >> quiz with the student answers represents the artifact, and the raw
>> >> >> scores and/or processed grade represent the judgement. The people
>> >> >> involved in rating/grading can be anyone: students, teachers, peers.
>> >> >> The artifact to be rated or graded may not be stable over time, in
>> >> >> which case a 'snapshot' of some kind is desirable for audit purposes.
>> >> >> An example might be the state of my personal portfolio pages on the
>> >> >> first day of May, when they are declared to be assessed. I may wish
>> >> >> to continue maintaining the pages after the assessment, but their
>> >> >> status at the time of assessing is worth recording. A different
>> >> >> example might be my performance in a piece of drama. I have no idea
>> >> >> how this would be recorded in the real world, but I imagine that the
>> >> >> grader might write down some critique/commentary and then assign a
>> >> >> grade. The critique/commentary would become the recorded artifact (in
>> >> >> some places there might be a video recording but I don't assume that)
>> >> >> and separately there would be a grade/score/rating. Teacher
>> >> >> performance in class evaluated by students is not far from this
>> >> >> model. The questions in the evaluation form might be considered the
>> >> >> rubric for the teacher's performance.
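>> >> >>
>> >> >> (A snapshot could be as simple as an immutable copy taken at the
>> >> >> moment of assessment - again a sketch with invented names:)
>> >> >>
>> >> >> import java.util.Date;
>> >> >>
>> >> >> // Freeze a mutable artifact (e.g. portfolio pages) at assessment
>> >> >> // time, so the audit record survives later edits.
>> >> >> public final class ArtifactSnapshot {
>> >> >>     private final String artifactId;
>> >> >>     private final byte[] frozenContent; // or the grader's written
>> >> >>                                         // critique/commentary, standing
>> >> >>                                         // in for uncapturable work
>> >> >>     private final Date takenAt = new Date();
>> >> >>
>> >> >>     public ArtifactSnapshot(String artifactId, byte[] content) {
>> >> >>         this.artifactId = artifactId;
>> >> >>         this.frozenContent = content.clone(); // defensive copy
>> >> >>     }
>> >> >> }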
>> >> >> In this world, we would want a flexible reporting platform that
>> >> >> allows grade information (including an archive of artifact snapshots)
>> >> >> to be collected and analysed (and sometimes further processed). I
>> >> >> suggest we think of using something like BIRT to create this flexible
>> >> >> reporting environment and then consider certain predefined views of
>> >> >> the data and derived reports from the data as the essence of
>> >> >> "GradeBook" functionality, i.e. "GradeBook" is a subset of
>> >> >> functionality from a powerful reporting environment. Ultimately "the
>> >> >> official record" will need to be updated.
>> >> >> I think it is really important to anticipate that some of the
>> >> >> artifacts to be graded may come from outside Sakai, and Sakai needs
>> >> >> to be able to accept artifacts for grading and also to accept graded
>> >> >> artifacts for inclusion in reporting. I see two main implementation
>> >> >> options for Sakai:
>> >> >> 1. A Sakai service with published external entry points
>> >> >> (Moodle/Mahara integration would be an example)
>> >> >> 2. A new Sakai 'product' which would be an institutional
>> >> >> grading/rating service that receives artifacts from a number of
>> >> >> places (including the Sakai Course Management System) and manages the
>> >> >> grading/rating workflow into a flexible reporting system that creates
>> >> >> a complete record for an individual and allows this information to be
>> >> >> displayed in a number of places (including Sakai CMS)
>> >> >> A strong attraction of the second model is that it fits with the
>> >> >> idea that assessing performance is a core competence of the
>> >> >> institution that preceded and will survive the CMS, but which is
>> >> >> unlikely to be developed for us by the commercial world. It could
>> >> >> also represent a shared service with a student information system.
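>> >> >>
>> >> >> (Option 1 sketched as published entry points - invented signatures,
>> >> >> just to show the two directions of traffic:)
>> >> >>
>> >> >> // Entry points another system (e.g. Moodle/Mahara) could call:
>> >> >> // artifacts flow in for grading, and externally graded artifacts
>> >> >> // flow in for inclusion in reporting.
>> >> >> public interface ExternalGradingEntryPoints {
>> >> >>     // Accept an external artifact for grading; returns its new id.
>> >> >>     String submitArtifact(String sourceSystem, byte[] artifactContent);
>> >> >>
>> >> >>     // Accept an artifact graded elsewhere, for reporting only.
>> >> >>     void acceptGradedArtifact(String sourceSystem, String artifactId,
>> >> >>             String grade);
>> >> >> }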
>> >> >> Having set out my manifesto, it is interesting to consider what the
>> >> >> product council might do with it. From my personal perspective it
>> >> >> would be great if we adopted it as the Sakai manifesto (following
>> >> >> review/revision) and called for developments to align with it, but
>> >> >> there is an open question regarding the value of 'adoption' of the
>> >> >> manifesto if nobody is interested in developing products/code that
>> >> >> address the manifesto.
>> >> >> John
>> >> >> PS I have forwarded this message that I saw as I came in this morning
>> >> >> because in my mind it illustrates an early step in the direction of
>> >> >> my manifesto, although I have taken it much further (perhaps
>> >> >> unrecognisably).
>> >> >> Begin forwarded message:
>> >> >>
>> >> >> From: David Horwitz <david.horwitz at uct.ac.za>
>> >> >> Date: 16 October 2009 09:29:58 BST
>> >> >> To: sakai-dev <sakai-dev at collab.sakaiproject.org>,
>> >> >> production at collab.sakaiproject.org, announcements at collab.sakaiproject.org
>> >> >> Subject: [Announcements] 2.7 Framework: commons and edu-services
>> >> >> 1.0.0-beta01 released
>> >> >> Hi All,
>> >> >>
>> >> >> We're proud to announce the first of 2 framework releases in support
>> >> >> of the upcoming 2.7 release. The creation of these bundles aims to
>> >> >> rationalize our dependency tree and enable a more modular approach to
>> >> >> Sakai releases.
>> >> >>
>> >> >> Commons 1.0.0-beta01
>> >> >> The commons package contains common services depended on by a number
>> >> >> of Sakai tools, but outside the scope of the Kernel. The services
>> >> >> included are:
>> >> >>
>> >> >> SakaiPerson Service (profile data)
>> >> >> Type Service
>> >> >> Privacy Service
>> >> >> Archive Service
>> >> >> Import Service
>> >> >>
>> >> >> The project site can be viewed at:
>> >> >> http://source.sakaiproject.org/release/common/1.0.0-beta01/
>> >> >> (Note: experimental site, no Sakai skins, etc.)
>> >> >>
>> >> >> Edu-Services 1.0.0-beta01
>> >> >> Edu-services contains core shared services that support teaching and
>> >> >> learning functionality in Sakai. It contains:
>> >> >>
>> >> >> Course management service
>> >> >> Gradebook service
>> >> >> Sections service
>> >> >>
>> >> >> The project site can be viewed at:
>> >> >> http://source.sakaiproject.org/release/edu-services/1.0.0-beta01/
>> >> >>
>> >> >>

