[sakai2-tcc] Decisions input needed! - from today's TCC-CLECC, Wednesday, 2013 March 6

Jean-Francois Leveque jean-francois.leveque at upmc.fr
Thu Mar 7 06:28:49 PST 2013


I agree we should reach all users, but we also need to find users who 
could later get resources involved. The first part is easy; the second is hard.

As long as we're looking for matching active lists on 
http://collab.sakaiproject.org/pipermail/, the pedagogy, production, and 
sakai-user lists, and maybe announcements too, are good places to reach 
users. I'm not sure about documentation and end-user-support.

I think we should ask the same questions for all tools in a global survey.

I don't think a question such as "What do you use it for (options)" is 
useful right now.

WDYT?

Cheers,
J-F

On 07/03/2013 15:09, Steve Swinsburg wrote:
> Why focus on the needs of teaching and learning for this one tool, when
> the C in CLE stands for collaborative? A lot of people don't use Sakai
> for teaching, they use it to support research and facilitate
> collaboration, something that a wiki excels at.
>
> A simple survey would do here:
> 1. Do you use the wiki (y/n)
> 2. What do you use it for (options)
> 3. If the wiki went away, how sad would you be (really sad, a bit sad,
> meh, I'd just use another tool, I hate the wiki)
>
> Cheers
> S
>
>
> Sent from my iPhone
>
> On 07/03/2013, at 8:02, Neal Caidin <nealcaidin at sakaifoundation.org> wrote:
>
>> See my comments in red below.
>>
>> - Neal
>>
>> On Mar 6, 2013, at 3:47 PM, Steve Swinsburg <steve.swinsburg at gmail.com> wrote:
>>
>>>>
>>>> 1) CLE 2.9.2
>>>>
>>>> 1a) DECISION - Schedule - see separate email sent to TCC from me,
>>>> to make a final decision on the communication of the 2.9.2 schedule.
>>>> Everyone on the call was supportive (+1). Making sure all on the TCC
>>>> have a chance to chime in by Friday.
>>>
>>> +1 for timeline
>>>
>>>>
>>>> 1b) DECISION - Release process - The general sense seemed to be to
>>>> try the way proposed by Matt Jones and Steve Swinsburg: use
>>>> revision numbers on the branch for testing iterations of the
>>>> release, and not take the extra time to package the beta and release
>>>> candidates with tags. We will still use the same system for tracking
>>>> Jiras, though. For example, we will use 2.9.2-rc01 in Jira for the
>>>> affected version; it will simply apply to what is on the QA server,
>>>> which we are labeling rc01, even though it is just a snapshot at a
>>>> particular revision. Does that make sense? Please raise any concerns
>>>> by Friday.
>>>>
>>>
>>> For this to work, the branch will need to bind to stable versions of
>>> artefacts and indies. At the moment everything is a snapshot; that
>>> needs to change.
>>> For Jira, I'd be inclined to forget rc01/rc02 and just use 2.9.2
>>> [Tentative]; then, based on the date an issue was fixed, we know what
>>> we have. It's the same either way: we link a date with a revision, or
>>> a revision with a label in Jira.
>>
>>>
>>>
>>>>
>>>> 3) Survey Monkey survey -
>>>> 3a) DECISION - Need data cleanup? We had 104 responses, but a dozen
>>>> institutions have more than one response. On the call, the general
>>>> feeling was that a cleanup is needed. It was pointed out that last
>>>> year's survey was not cleaned up, by decision of the TCC. About the
>>>> same percentage of cleanup was needed (10-11%). At least one factor
>>>> in last year's decision was that an institution or two might have
>>>> been answering based on OAE as the learning management system,
>>>> whereas this year that is not a factor. Should I proceed with
>>>> cleanup? If I don't hear any feedback by Friday, I'll proceed with
>>>> the cleanup.
>>>
>>> If it's only 10% then that's fine, but if you can easily identify the
>>> dupes, present a set of cleaned-up data.
>> It will take a little bit of work, because I'll need to contact the
>> institutions to find out which of the 2 or 3 entries I should keep and
>> which I should delete. But it shouldn't be too bad. I would guess it
>> will take about a week (duration) and an hour or two of work (sending
>> about a dozen emails, getting information back, updating survey monkey).
>>
>>>
>>>>
>>>> 3b) DECISION - Publish to community as soon as possible? From the
>>>> phone call, the consensus is to publish the results as soon as they
>>>> are available (after cleanup). I'm 100% in favor of this too. Any
>>>> concerns? Please speak by… you guessed it, Friday if you have a
>>>> concern about publishing the results to the community after cleanup.
>>>
>>> +1
>>>
>>>>
>>>> 5) HOT_TOPIC - rWiki - no decisions made, just general discussion
>>>> and suggestions. Neal volunteers to do outreach and see if we can
>>>> find pedagogical representatives for input. BOF at conference? There
>>>> was some discussion about kicking off a private TCC discussion
>>>> (copying CLECC on the thread). One option discussed is a TCC review
>>>> of rWiki. At least two TCC members think that if this happens it
>>>> should be limited to a one-month review, or it would not be a good
>>>> use of time. We also discussed having more public discussion, which
>>>> anyone can kick off at any time. If we want a BOF, we need to get a
>>>> proposal in by March 11. Nobody is assigned to do that at the moment.
>>>
>>> What would a pedagogical representative add to the discussion? I
>>> thought the problem was the lack of developer support?
>>> Will we have a pedagogical discussion around the outdated Syllabus tool?
>> The idea is to focus on the needs for teaching and learning. Perhaps
>> other tools could serve the same or similar purpose? Or perhaps the
>> priority of the needs is not critical to instructors' overall
>> pedagogy? I think I see your point about looking at the tool in
>> isolation. The feedback would not necessarily be all the things that
>> are wrong with the tool, but rather the use cases involved. Not
>> sure. But personally I don't see any harm in identifying pedagogical
>> resources and starting a discussion. Naturally, we have to be careful
>> about setting expectations. We wouldn't want a group of folks thinking
>> they were providing the CLE "development team" with a set of
>> requirements that would be addressed, since that would just make for
>> frustration all the way around.
>>
>>>
>>> Note that a review of tools is coming in the incubation process, but
>>> it will be a while off. The timeline is that the incubation process
>>> will be in draft form at the conference for discussion and
>>> refinement, then worked on a bit more until being ratified by the
>>> Board. People will be interviewed in the process, so please let me
>>> know if you are interested in participating.
>>>
>>> My concern is that you'll get skewed results if you just look at one
>>> tool in isolation, without a benchmark. I could find a million things
>>> wrong with any tool and paint a grim picture.
>>>
>>> Perhaps the CLE tool scorecard could be used in the interim? I am
>>> bringing that to the incubation process anyway as a starting point
>>> for a lightweight review.
>>>
>>> cheers,
>>> Steve

