
Summary

The idea behind this review process is to leverage the collective knowledge of the Sakai Teaching and Learning community by having them review Sakai tools and provide feedback, in the form of brief reports, on both a tool's capabilities with regard to supporting teaching and learning activities and its usability (the ease of use of its interface).  Those involved in this review process would adhere to the following guiding principles:

  • To the extent possible, recommendations for capability or user interface enhancements will be based on user research, feedback and testing.
  • Thoughts and ideas will be surfaced often and openly as a means of gathering broad input from the community.
  • Those leading technical development work, as well as the larger technical community within Sakai, will be engaged throughout the process as collaborative partners on the review.
  • Those involved in the review process will commit to helping seek the resources needed to develop any new capabilities or other enhancements, with the understanding that a request for improvements without additional resources represents an "unfunded mandate".

The goal will be to provide a feedback loop into the development process so that, as new work is planned, ideas from the larger teaching and learning community can be considered.  At the same time, we hope to use the final report as a means of engaging institutional decision makers and working to secure the resources needed to support the development of the recommended changes.  We would generate data and information for the report through a range of collaborative activities, including:

  • Teaching and Learning User Research - The goal of this work would be to identify new capabilities that, if introduced, could either allow the tool to be used for a broader set of "traditional" teaching and learning use cases OR support new and innovative approaches to teaching and learning.  One approach we may take is a series of online sharing exercises, recorded for future reference, in which people with more experience using the tool would share examples of their work, best practices, challenges, etc. with the larger group as a way of stimulating discussion of beneficial new capabilities.  We would also observe and research instructor and learner behavior to better understand their needs.
  • Lightweight Usability Testing - We are less sure how to proceed with this type of review, but we have discussed conducting lightweight usability studies on our campuses and then pooling the outcomes into a collective summary of usability issues.  If a designer were available to help create wireframes, the group could also provide feedback on such concept materials.  We should also consider reviewing accessibility and internationalization issues as part of this type of review.

Once the review process is complete, we envision bringing the reports we generate to others in the community, particularly the TCC, as a means of getting their input and help in identifying technical strategies and the time and effort these would require.  This will help us prioritize the plans while also giving us a sense of the resources needed not just to implement the enhancements but also to maintain them over time.  This information could then be used to help secure the resources needed for the work.

Phases of the Capability Review Process

  • Start-Up Phase - This would be an initial period when we would meet with those leading the development effort on a tool or capability to have an initial discussion about their thoughts and areas of interest in terms of getting feedback from the T&L group.  We would also use it as a time to communicate broadly about our plans to engage in a capability review, as a way of getting others involved.
  • Phase One: Preliminary Capability Enhancement Ideas - This would be an early-stage brainstorming activity spanning a relatively short period (perhaps 2-3 weeks) that would try to generate as many ideas as possible for enhancements to existing capabilities, entirely new capabilities, and usability (UI/UX) improvements.  We would encourage participation from as many people as possible, with the idea that the T&L group might discuss these as a means of developing initial thoughts on prioritization.
  • Phase One: Community Check-In - This would be a formal period when we pause in adding to our initial list and focus on getting preliminary feedback from those leading development, from the TCC/PMC, and from institutional decision makers.  In particular, we would be looking for technical folks to surface concerns, issues, or challenges that might help us decide which ideas to focus on.
  • Phase Two: Usability Testing and Capability Reviews - This would be a period when we engage in the types of work outlined above, which might include lightweight usability testing and identifying capability enhancements.  We would work to remain in direct contact with tool developers on questions that surface, but would also make sure the process was open to all and transparent.
  • Phase Two: Community Check-In - This would be similar to the Phase One check-in, but more specifically focused on assessing technical issues and scoping the time, effort, and resources needed to implement the proposed enhancements.
  • Phase Three: Draft Capability Review Report - We would develop and release a report that includes recommendations for future development work and the resources needed to carry it out.
  • Phase Three: Community Check-In - This would be a final check-in period during which people could question and comment on the draft report.
  • Phase Three: Release Final Report and Seek Resources - We would release the report broadly and work both at our local institutions and within the community to engage institutional decision makers and secure the resources needed to implement the enhancements.

Draft Report Outline

Executive Summary

Capability Overview - What the tool is intended to be used for and the basics of how it works

Capability Use Cases, Best Practices and Tips - This would be a series of short screen recordings demonstrating different use cases based on real-world application of the tool.  Workarounds that users are developing could be highlighted as a way of showing where new development effort might be useful.

Suggestions for Capability Enhancements - This would be a list, possibly also placed in JIRA, of enhancements that the T&L group has reached consensus on as being important for future development effort.  We might also prioritize these to indicate which are most important from our perspective.

Suggestions for UI Enhancements - We would like to produce wireframes that provide visual examples of recommended interface changes, based on direct feedback from users as well as lightweight usability studies that we would run on several campuses.

Technical Assessment and Resource Requirements - This is just a rough idea at this point, but it would be very useful if more technically savvy Sakai folks could review the report and provide an assessment of the best technical approaches for addressing the different suggestions and the types of resources (people's time) that would be needed.  This information would allow the T&L group to engage with their local institutional decision makers to help find the resources needed to support the development effort.


The current Sakai Teaching and Learning group has started to engage in this type of work, which is being documented on the Lessons Tool: Draft Capability Review Process wiki page.


15 Comments

  1. I have made some updates to the information above based on the Sakai Teaching and Learning Call today.

  2. I am curious if we can capture some of the best practices in the online help or in the tour for a new teacher, e.g., a different tour for teachers than for students, or a series of tours.

    1. I was thinking more of fairly lightweight documentation of best practices rather than something more formal, only because past efforts to formally document best practices have been rather time-consuming and have hit challenges with customizations that can make one institution's best practice impossible to implement at another (or, as has happened with user support materials efforts, different screen layouts, etc. can make it hard to formally document best practices in a way that can be shared across institutions).

      I'm not suggesting that an effort to formally document and share best practices would not be useful; I'm just sensing that this could be a major project in and of itself and am being careful not to try to do too much at once.

      Josh

      1. Consider grabbing the low-hanging fruit and then reflecting on the process.  Perhaps a few examples of best practices in the help, or at least a few links to already existing documentation.

  3. We could gather outstanding JIRAs for the tool/feature being reviewed before starting the review process.  By editing or combining existing JIRAs, adding new JIRAs, and resolving outdated or duplicate JIRAs, we might accidentally clean up JIRA a little bit as part of this process.  (A sketch of one way to pull such a starting list follows below.)

    1. I really like this idea – perhaps some sort of taxonomy could come out of this process so that common issues are presented in a more uniform manner, thereby making it easier to identify issues.
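
      As a concrete starting point for gathering those JIRAs, here is a minimal sketch using the Python "jira" client library.  To be clear, this is an illustrative guess rather than a tested recipe: the server URL, project key (SAK), and component name ("Lessons") would all need to be checked against the real tracker.

        # Rough sketch, assuming the "jira" package: pip install jira
        from jira import JIRA

        # Anonymous, read-only connection to the public Sakai tracker
        # (URL assumed; adjust to the actual JIRA instance).
        client = JIRA(server="https://jira.sakaiproject.org")

        # JQL for unresolved issues filed against one component; the
        # project key and component name are illustrative assumptions.
        jql = 'project = SAK AND component = "Lessons" AND resolution = Unresolved'

        # Print a flat worklist the group could triage together.
        for issue in client.search_issues(jql, maxResults=100):
            print(issue.key, "-", issue.fields.summary)

      Even a flat list like this would make duplicates and outdated requests easier to spot before the review starts.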

  4. Related to point #2 about usability, could the group also encourage some best practices (or at least effective testing procedures) in UI and UX testing for certain aspects of the system?  I am thinking of suggested frameworks rather than detailed documentation.

    1. This may be a suggestion where you (we) should just do it, on a small scale, and see if the reaction is positive and the suggestions are used.

  5. So many good ideas here; however, the scope seems to be expanding, and I am afraid we may become overwhelmed quite quickly.

    Perhaps we should start by identifying and agreeing on what our first (most pressing?) "small" goal is, then the second and third and so on.  That would help keep things manageable.

    I know it would help me, in terms of focus, to set up some small goals and then decide on the best way to go about getting each done and allocate a certain amount of time to each.

    For example, with our overall goal of providing a feedback loop into the development process, we could start with Lesson Builder as our test case (since there is interest and work already begun) and determine what our next steps should be.  Are we going to use the Design Lenses to review LB, and if so, exactly how do we go about doing that?  Or are we going to go through JIRA?  (At some point, we will have to compile a list of what is in JIRA on LB and set priorities.)  Or should we do both simultaneously (in small dedicated groups!) and bring the results of both processes together for the final report?

    It's possible that if we present the community with clear goals and suggested methods for accomplishing them, we may get wider participation.  People may feel nervous about getting involved because they have no idea what they are getting themselves into!  If we knew more precisely what we need and for what tasks, we might even be able to put out a call for help to just those groups with the necessary knowledge and skills.  A more detailed, well-thought-out list of wants/needs may also be more appealing to the institutional administrators who provide funding and resources.


    1. What you are writing matches my understanding of where we are: using Design Lenses to look at Lessons as the first case.  It might be worthwhile to have someone serve as a project manager or task coordinator so that we can make sure each task gets assigned and has some follow-up.  I don't know if I could take this role on myself, but I would be happy to work with or mentor someone.

      The amount of effort to work with JIRA in parallel should not be too great, and it will likely yield positive results: preventing duplicate work, documenting needs, and tracking status.  My 2 cents.

  6. At NYU we talked through the draft Teaching & Learning capability review process, and we're very excited to get involved. NYU just completed a transition from Blackboard to Sakai CLE 2.8. A key reason we moved from Blackboard to Sakai is the open source nature of Sakai and the ability to contribute to improving the tools over time.

    Members of our LMS service team will be chiming in with more detail in coming days, but I can give you some high-level feedback from our conversation:

    • The perspective of the teacher really needs to be baked into the tools.  Some consultations start with faculty saying straight out, "Who built this? Did they talk to a teacher?"
    • Getting feedback into the process and having it impact design and development is important.
    • A "Teaching and Learning Review" is a good way to solicit feedback.  We'd like to include faculty members, while being mindful of using their time effectively.
    • "Usability Review" also sounds like a good way forward. We've done some formal UX testing here at NYU and may be able to contribute test scripts.
    • Regarding "Best Practice Documentation," gathering links to existing documentation and videos is useful.  Trying to establish a single best practice caused some concern in our conversation, as our schools within NYU have divergent needs and teaching models.  In our experience, one size does not fit all.

  7. Hey all - Just wanted to add to what Max said re: formal UX testing.

    The NYU UX team (my colleague Mark Reilly and I) has conducted a series of formal UX sessions to better understand our users and what modifications we can easily make to help them.

    Following the theme of the Lessons Tool, I'd like to share our informal results from testing this tool - these notes represent about three users navigating through the same testing script, and you can already get a sense of what some of the major pain points are - https://docs.google.com/a/nyu.edu/document/d/1hX5yhqACJw60kIrxKEaS4KN5R-q0Vp2rs6AavulXllg/edit (the doc is also open for commenting).

    Additionally, here is a link to my presentation from the last Apereo Conference on the importance of running your own UX sessions, and how easy (and fun!) they can be - http://lanyrd.com/2013/apereo/schtyz/


  8. I think that having suggestions come from this group gives credibility that the requests have been vetted and prioritized.  I know there is always a resource issue in getting things done, but everyone's happier when effort is put into changes that have support in the community - especially those from the user perspective.

  9. Commenting on the 8/30 update.

    The capability review is really tool-centric.  In some ways this is practical, because this is how Sakai is designed, but I'm wondering if there should be a section on what is lacking in overall capability or in interoperability between the tools.

  10. Commenting on the 9/18 update.  Two things to consider.  In the Start-Up Phase, if what is being reviewed is a new idea or concept, there will not be anyone leading the development effort, because no effort exists yet.  The Start-Up Phase could work for enhancements to existing tools.

    The process seems a bit too linear to me, unless the phases can overlap.  The earlier the better with respect to finding resources.  If we cannot find resources while Phases One and Two are under way, that might be a statement of lack of interest or low prioritization.