
Below is a summary of the QA 2.0.1 Post-Release Review conference call on 8/22/2005. Feel free to add additional items or modify any ITEM/ACTION shown below.

Attendees: Michael Beasley, Nadine Blackwood, Carol Dippel, Clay Fenlason, David Haines, Peter Knoop, Margaret Petit, Seth Theriault, Anthony Whyte.





ITEM 1: Improve the process to manage release documentation resulting from the QA Deployment (DP) test effort.

ACTION: Continue to use the QA space in Confluence for draft release documentation purposes. There is public access to view and comment in this space. Upon launch, completed documents will be posted to the release web site and the main Confluence space entitled Release Information. Clay and Dave will collaborate to improve the management of the release web site.



ITEM 2: The DP effort needs to begin earlier in the DEV cycle so the QA team can begin test and document creation earlier.

ACTION: The QA DP team should become involved in the Release Engineering process to enable an earlier work effort.


ITEM 3: Improve the project planning process to allow creation of standard test documentation such as schedules, test plans and test conditions. We need to move from an ad hoc to a more structured QA effort.

ACTION: Carol is working with Mike Elledge & Chuck Severance to ensure QA time in-between functional & maintenance releases, allowing for QA planning and analysis phases where test documentation is created. Michael Beasley, Nadine Blackwood, Margaret Petit & Carol Dippel are currently in the process of creating test documentation based on 2.0.1 OOTB functionality.


ITEM 4: There is a concern re: how 2.1.0 will be deployed both as a clean install and upgrade process. At present, we don't have the capability to test the upgrade process due to machine and human resource constraints.

ACTION: The Sakai Project should fund adequate QA resources.


ITEM 5: Improve Jira problem Descriptions to help DEV & QA teams more accurately and efficiently fix and regression test Jira items.

ACTION: Send communication to the Sakai Community re: the importance of writing clear problem Descriptions in Jira.


ITEM 6: Improve Jira resolution Comments to assist the QA team.

ACTION: Send communication to sakai-dev requesting Jira Comments for regression test guidance. With SVN, there is now a new Subversion Commits tab in Jira that provides improved information to guide the test effort.


ITEM 7: Early in the 2.0.1 release cycle, the QA team experienced significant delays resulting from the project's conversion from CVS to Subversion (SVN).

ACTION: Send communication to the Sakai Community re: SVN Process & Policies. Dave will prepare a document for review.


ITEM 8: The Jira Fix Version/s column is used before and during a release cycle to manage Jira items included in the release. Better management of this metadata is needed to help ensure inclusion of items earlier in the release cycle.

ACTION: Send communication to the Sakai Community re: the importance of managing the Fix Version/s column in Jira.
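As an illustration of what "managing the Fix Version/s column" means in practice, a saved Jira filter such as the following surfaces items still tagged for a release but not yet resolved. This uses JQL, which postdates the 2005 Jira deployment discussed here, and the project key SAK is an assumption for the example:

```
project = SAK AND fixVersion = "2.0.1" AND resolution = Unresolved
```

Running such a filter early and often in the cycle would flag items whose Fix Version/s metadata no longer matches reality.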


ITEM 9: New &/or significant changes late in the release cycle increase the probability of release delays due to regressions.

ACTION: The majority of significant changes planned for a given release should be included early in the release cycle (ex. ~m1 or ~m2).
Carol will work with Mike Elledge & Chuck to define QA Entrance Criteria.


ITEM 10: Samigo is the most complex Sakai application but is currently understaffed in both DEV & QA. Additional complex Sakai applications will continue to be created, and the Samigo experience may foreshadow future DEV/QA issues.

ACTION: Risk & complexity analysis should be conducted for each Sakai application to ensure adequate staffing for both DEV & QA. Complex functionality should be implemented early in the release cycle.


ITEM 11: How can we continue to recruit QA participants?

ACTION: Suggestions are requested from the Sakai Community.


ITEM 12: For the 1.5.0, 1.5.1, 2.0.0 & 2.0.1 releases, the QA team did not have time between releases for adequate planning and analysis for the next release. Test documentation was not created, resulting in an ad hoc test effort.

ACTION: Improve project planning to take into consideration QA schedule overlap between functional (ex. 2.0.0) & maintenance (ex. 2.0.1) releases. This will allow adequate time for QA planning and analysis for each release. Receiving requirements and specifications at least one month before the test cycle begins will give QA time to create test documentation. See Item #3.


ITEM 13: Should Sakai OOTB (aka Enterprise Edition) include changes to all components in a given release OR segregate components into their own release cycles? If segregated, how should the release process be redefined?

ACTION: Suggestions are requested from the Sakai Community.


ITEM 14: For 2.0.0 & 2.0.1, we leveraged global time differences to pass off critical QA work effort (ex. Samigo) from California (PDT, GMT-7) to Cape Town, South Africa (SAST, GMT+2).

ACTION: Hooray! Let's discuss ways to make more effective use of this time zone advantage.


ITEM 15: The release process should be made more transparent to the wider Sakai community. Sometimes decisions are made internally for good reasons, but we can forget to communicate their effect more broadly. EXAMPLE: A decision was made to release 2.0.0 with significant Samigo issues, with a plan to follow up immediately with a patch. However, the patch led to new regressions and got buried in the larger 2.0.1 release effort. It was an oversight that we forgot to post the patch on the 2.0.0 Release Notes web site once it was completed.

ACTION: The SEPP QA WG Membership is now open to encourage broader communication. Let's discuss other ways to improve communication.


ITEM 16: From Stephen Marquard: I was encouraged that by the time 2.0.1 was released, all the items on the filters had been reviewed, which was an improvement over 2.0.0 (if I remember correctly).

ACTION: Hooray! This was a significant improvement over 1.5.0, 1.5.1 & 2.0.0!


ITEM 17: From Stephen Marquard: I keep leaning towards wanting a more "management-ish" approach to QA, which I know is probably not totally appropriate for this sort of dynamic open source project. By that I mean knowing upfront the approximate size of the QA task, the number of people involved, and allocating timeframes/resources appropriately. Practically speaking it would be helpful to know who else is out there on the QA team (though I realise the extent of people's involvement is different). From the point of view of our team here, it would be useful to know approximate dates of the QA process and how to allocate time with concrete milestones, otherwise either the sense of urgency declines as time passes, or suddenly things are very urgent towards the end. In general, more transparency in the release engineering process would be helpful (I see this came up at the Michigan meeting - reported on Confluence somewhere).

ACTION: Improve project planning; same as Items 3 & 12.


ITEM 18: From Stephen Marquard: Not all items in JIRA can successfully be verified on stable (e.g. installation-related issues). REFERENCE: one Jira issue (link not preserved) is a case in point - it got inappropriately closed twice and reopened twice.

ACTION: Improve Jira issue descriptions to provide details about how items should be QA'ed.


ITEM 19: From Stephen Marquard: Testing issues across all 3 stable platforms is time-consuming (and sometimes slow for us - bandwidth issue). I closed some issues towards the very end solely on the basis of testing with our local installation of rc4+mysql, which isn't strictly following the process, but anything else would have taken much longer.

ACTION: Improve Jira issue descriptions to provide test advice to help determine whether a change needs to be verified against all 3 DBs (e.g. many Samigo issues), and which only need to be tested on one system (e.g. CSS issues).
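The triage rule above could even be made mechanical. The sketch below is purely illustrative (the component names and the choice of MySQL as the single-DB default are assumptions, not decisions from this meeting): it routes an issue to either one database or all three based on whether its component touches persistence.

```python
# Hypothetical sketch: decide which database platforms a Jira item
# must be verified on. All names here are illustrative assumptions.
ALL_DBS = ("mysql", "oracle", "hsqldb")  # the 3 stable platforms

# Components whose fixes typically touch persistence and therefore
# need verification on every supported database.
DB_SENSITIVE = {"samigo", "gradebook", "assignment"}

def required_dbs(component: str) -> tuple:
    """UI-only changes (e.g. CSS) need one DB; persistence changes need all."""
    if component.lower() in DB_SENSITIVE:
        return ALL_DBS
    return ("mysql",)  # assumed single-DB default for this sketch
```

For example, `required_dbs("samigo")` returns all three platforms, while `required_dbs("css")` returns only one, matching the guidance in the action above.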


ITEM 20: From Stephen Marquard: I would like to see "stricter" QA in some areas, i.e. QA not only limited to verifying what developers have fixed and that the system can be installed. For example, in the issue I logged about Samigo (link not preserved), there are layout issues arising from XHTML output which is basically broken (I attach the results of a W3C validation check on the default Samigo tool page). Now I realise this was reported very late and there was probably good reason not to get into fixing it for 2.0.1, but to my mind it shouldn't be possible for a release (even a minor point release) to have code which produces obviously broken output. Certain categories of problems like broken XHTML should automatically be regarded as blockers.

ACTION: Improve project planning; same as Items 3 & 12. We should also research automated tools for possible use against each release. If we agree to use automated tools regularly, we will need a volunteer for this effort (ex. a SEPP institution).



