
AGENDA FOR CALL:

  1. Discuss Josh's Instructional Visioning DRAFT Proposal.  Some questions to start answering might be:
    1. Does this process and workflow make sense? If so, then...
    2. What should the makeup of our local "end-user groups" be?  Should we focus on early adopters? Include traditional and non-technology instructors? Mix students and instructors, or keep them separate?
    3. How would we run/facilitate the end-user interview/brainstorming sessions?  What process/tools might we use to make this as efficient as possible (i.e. a low investment of time with high-quality results)?  What does the output from these sessions need to look like to be useful to other members of the community?  Where should the output be posted/stored?
    4. How does the final product of this effort get folded into other Sakai processes?  How does it fit into the Sakai 3 efforts? How might the Product Council use it?  How might developers use it? UX? Commercial affiliates?
  2. Discuss how we might engage with groups in different time zones.
  3. Brief note on the 2010 Teaching with Sakai Innovation Award program

NOTES/MINUTES FROM CALL:

  • Keli noted that the work Stanford is leading right now is broader than Test & Quizzes and looks at "learning activities" in general as a means to prevent "tool-centric" feedback.  Assessing students is part of the learning process and may take place across a range of tools.
  • Keli noted that there are other groups, such as the Library group, who are engaged in work that will involve end-user interviews and the gathering of input from users at a range of institutions.  
  • Lynn raised a question about our ability to recruit faculty willing to commit time on a regular basis to a visioning process.
    • We discussed potential incentives for faculty to participate, such as creating a formal faculty committee for the work that would count towards "tenure and promotion".
    • Keli mentioned that they asked faculty on the MSIS survey whether they would be willing to participate in interviews, as a means to identify willing participants.
    • There was also a question raised about the makeup of the end-user population.  Should it include early adopters? Traditional/typical instructors? Should it represent a range of student populations (adult learners, traditional students, etc.)?
  • This led to a general discussion of whether individual interviews were better than group sessions.
    • There was some benefit seen from group sessions because...
      • Participants tend to play off and build on each other's ideas/thoughts
      • It is an efficient way to identify consensus...if you conduct individual interviews you then need to check in with everyone to see whether they agree or disagree with each other on key points
    • There were also some clear disadvantages to using groups...
      • There is a major risk of "group think" happening
      • Some higher-level folks prefer individual attention and don't react well at times to participating with others
      • It was also noted that being able to observe users is very valuable, as it provides important information that may not come out of just talking
        • There was a note later in the discussion that knowing "why" a user is doing something is very important, sometimes even more important than what they are doing.
    • There was also some discussion of the Strategy Lab software that Thanos Partners uses for market research, as it might provide some tools for facilitating online brainstorming sessions
  • "Process objective should drive format" - This was noted strongly...we need to firm up what the end objective is and then come back to figure out the right group make up/format for collecting end-user feedback.
    • There was a note that having the ability to create "personas" would be one useful output from the process
    • One important realization was that this group might be uniquely positioned to identify intersection points between tools.  For example, by listening to users we may identify ways in which Resources and Test & Quizzes or Assignments and Test & Quizzes need to work together...such intersections may not be obvious to those developing these tools, particularly if they are separate development teams.
    • We may also find that there are some tools which are "infrastructure" tools (meaning, that they play a fundamental role for users) and others which are "middle" or "top" level tools which play much more specific roles within teaching and learning.
  • The "elephant in the room" question of who will use the output from this process and how do we make sure it is leveraged.  There was also a question as to Sakai 3 timeline and how this process will fit into that timeline.
  • The call ended with general consensus that the proposed Instructional Visioning process is a good draft and a solid start, but that some critical details still need to be worked out.