2.6.0 Post-Mortem




Jira project oversight

The Foundation ceased to monitor incoming bug reports.

Megan May

Communication issues

Throughout the QA cycle the main communication method (i.e., collab) was crippled. In addition, a great deal of off-list discussion and decision making was going on.

Megan May

JIRA Workflow

The process used in JIRA is complex and not widely known. This contributes to issues being lost in the shuffle.

Megan May

QA leadership transition

Early in the QA cycle, leadership was transitioned. It's not clear what can be done about this.

Megan May


Lack of enforcement of the processes that have been outlined (i.e., sharing upcoming changes in the release with the community before the release).

Megan May

Project Support

A number of projects have gone without support as their teams have moved on to other initiatives. There are too many areas where issues crop up in the release without any clear line of responsibility. The new development process hints at resolving this.

Megan May

Testing Visibility

It was unclear (to me at least) what had and had not been tested early in the cycle.

Michael Korcuska

Testing Coordination between early adopters

I sense a lack of coordination/communication/division of labor between schools testing the x-branch and beta and rc tags. If true, we are losing opportunities to scale the QA process.

Anthony Whyte

Release artifact scheduling

Alpha, Beta, RC tags should be cut on a regular and predictable basis (e.g., every other Thursday). We have begun doing this as of the 2.6 rc stage but we should agree to make this a regular practice.

Anthony Whyte

QA server refresh

There is a noticeable lag between the announcement of an alpha/beta/rc tag release and the refreshing of some QA servers. This lag can stretch into several days, which shortens the testing window available to testers before the release of the next tag.

Anthony Whyte

Lack of Resources

QA/testing resources have dwindled or been reassigned during the release process; though understandable, this has hampered the QA effort.

Pete Peterson


Multiple requests for QA/testing results and status have largely gone unanswered.

Pete Peterson


Currently, when issues are reported and subsequently fixed, test cases are usually not included, and developers working on fixes as well as testers verifying solutions must create these tests ad hoc or wait on a request for test cases before they can continue. This hampers development and testing and slows the entire process down. In addition, test cases will become even more critical as we begin automating the QA process. One solution to this, which works well with suggestions made here, is to "REQUIRE" that test cases be included before any fix is moved into a branch. This provides developers and QA testers with a solid replication and verification script. These test cases can then be incorporated into regression tests for the tool. Note: rSmart is currently doing something very similar to this and has found it very effective in improving and streamlining the QA and issue resolution process.

Pete Peterson

Maintenance releases

We should reconsider our "rationed" maintenance release strategy. It adds an unnecessary level of complexity to release management, and the resulting artifacts tend to fall behind the maintenance branch due to delays in the release cycle. Instead, we should be strict up front, so to speak, about what is merged into the maintenance branch and only merge fixes when there is sign-off from QA. Our maintenance releases could then be generated rather quickly: we decide on a revision point that constitutes the next release (2.6.1), copy to a 2.6.1 release branch, clean up the poms (we should be able to leverage the Maven release plugin here), write up release notes and cut the release. In other words, no more alpha/beta/rc tags for maintenance releases (e.g., the Tomcat team has not put out a beta tag since 5.5.17). We perform no additional testing on the release branch other than to ensure the artifacts generated from it (demo, bin, src) start up in Tomcat. We then update the 2.6.x poms' version number to the next SNAPSHOT version (e.g., 2.6.2-SNAPSHOT) and recommence the cycle. But to operationalize this we need an increased commitment from the community to support active and ongoing QA of maintenance work.

Anthony Whyte
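As a concrete illustration, the streamlined flow described above might look like the following shell sketch. The repository URL, revision number, and the use of the Maven versions plugin are assumptions for illustration only, not the project's actual layout or tooling.

```shell
#!/bin/sh
# Hypothetical sketch of the streamlined maintenance-release flow.
# URL, revision, and versions below are illustrative, not real Sakai values.

REL=2.6.1
SVN=https://source.example.org/svn          # assumed repository base URL

# 1. Copy the agreed revision of the maintenance branch to a release branch
#    (shown for illustration; not executed here):
# svn copy -r 12345 "$SVN/sakai/branches/sakai-2.6.x" \
#                   "$SVN/sakai/branches/sakai-$REL"

# 2. Clean up the pom versions on the release branch; the Maven release
#    plugin (or versions plugin) can automate this:
# mvn versions:set -DnewVersion="$REL"

# 3. After cutting the release, move the 2.6.x branch to the next SNAPSHOT.
#    The next snapshot is the release version with its last digit bumped:
next_snapshot() {
    base=${1%.*}                            # e.g. 2.6
    patch=${1##*.}                          # e.g. 1
    echo "${base}.$((patch + 1))-SNAPSHOT"
}

next_snapshot "$REL"                        # prints 2.6.2-SNAPSHOT
```

The point of the sketch is that, once QA sign-off gates the maintenance branch itself, the release step reduces to a copy, a pom cleanup, and a version bump.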


As with Megan, I find the current Jira form overly complicated. IMO it is a topic that we should discuss in an effort to refine and simplify the process, with a target of providing a strawman proposal for the Boston conference. At a minimum, however, I'd like to see project teams empower themselves to use the fix version to target a fix for an upcoming maintenance release (e.g., 2.6.1, 2.6.2 or 2.6.3, etc.). The "contract" here is that the fix will be made and QA'd properly before it goes into the maintenance branch and tags are cut. This implies that there exists for project teams a well-understood general schedule regarding tagging and releasing, and that project teams include QA testers and/or work with Pete Peterson to secure testing resources to ensure that their work is ready for merging. If, for example, a non-security, non-blocker/critical fix targeted for an upcoming maintenance release is not ready, the release management working group could decide, after consulting with the project team, to reset the fix version, incrementing by one, if appropriate, to the next maintenance release. These decisions can be documented in Jira and communicated via the QA list or a new release management list.

Anthony Whyte


Create a release management email list. Release management is a critical component of the Sakai development process but lacks its own list to discuss, document and archive issues, procedures and proposals specific to release work. The QA and dev lists have served as fall back lists but neither is an adequate substitute for a specific subject matter list.

Anthony Whyte


It would be less work for schools or individuals running QA servers if the releases were packaged as virtual machine images. Hopefully easier access to servers would lower the barrier for people to help with QA.

David Haines


Current incentives to participate in the release process are ineffective. Release managers can offer no incentives to developers to have them maintain code and developers have little incentive to do the "last mile" changes and testing when that work doesn't have local consequences. Altruism is a wonderful but weak incentive.
To rationalize the incentives Sakai could do the following:

  • Cut back the core projects of Sakai to the bare minimum necessary to compile and run the smallest possible instance. The Sakai foundation will take the responsibility to ensure that this code is maintained and given QA.
  • Create clear technical quality standards for code and QA to assess whether or not projects can be considered to be "Sakai quality".
  • Evaluate non-core projects based on these standards.
  • Include non-core projects in a release only if they meet these objective standards. (No tools are grandfathered in.)
  • Create an installer so that projects that are not in a release are easy to add if some installation wishes to add them locally.
    This approach ensures that a) inclusion in a release is an assurance of tool quality and b) inclusion in a release is up to the developers. Exclusion from a release doesn't necessarily mean that a project is low quality, just that it hasn't shown that it is Sakai quality. The risk/benefit tradeoff of using such projects needs to be assessed based on local needs.
    This simplifies and regularizes the release process considerably. Anything in a release will be high quality. Any installation can easily be customized for local needs by adding tools not in the release. Anyone who wants a tool in the release need only look at the criteria and make sure that the project meets them. If no one is concerned enough to ensure the project meets the appropriate standards, the project simply doesn't belong in the release.

David Haines


  • Divided attention of a limited developer pool across two big branches of Sakai work, along with institutions' strategic choices between or across them. I think the community hasn't directly confronted (including adequately planning for) the interim resource implications. I think we need to have a serious talk about the future of 2.7 and 2.8, which I already hear people raise casually in conversation.
  • Issues not being discovered early enough, along with the need to establish a clear and tested threshold of completeness. To take the delayed-release issue (which I'm most familiar with, since Carl worked on it): it was undertaken as a patch for 2.5 in January a year ago, and it recently resurfaced as a blocker late in the 2.6 release process, some fifteen months later.
  • Release planning that fails to take resource availability into account.  I think we still tend to be presumptuous about setting target release dates by the academic calendar and counting backward, and taking the engagement of persons during the critical periods for granted.  Again, I think we need to have a serious community conversation about implications for 2.7 before too much is assumed, and that release would need to be managed with clearer expectations.
  • The ability of institutions to get their development resources back to work on other things after contributing code to a certain threshold of completeness. Establishing that completeness is only half the problem; we still need to be able to hand off maintenance at some point, and both coding standards and release management should facilitate this. I think the newly proposed development process attempts to tackle some of these issues. A "maintenance group" in particular could be key.

    Then there are more workmanlike considerations like better test coverage, code review and standards, and freeing up separable components to release on their own schedule rather than making broad assumptions that cut across all projects and teams.
Comments from Clay Fenlason per email 4/20/09 - Posted by Peter Peterson


Our current release practices make it difficult (if not impossible) for code to be exercised outside of the release cycle. For instance, an institution that wants features of tool x ahead of the release schedule generally forks the code by making an msub branch or vendor drop. While this gives the feature some coverage, it introduces a large number of factors that affect the feedback. We should seriously consider whether we can revisit our release practices to make it easier for early adopters to use code destined for future releases.
An example of this will be the 1.1.0 kernel release we're planning for after the 2.6.0 release as a "preview" release. This will allow the schools that want to test the new functionality (Terracotta clustering) to do so ahead of time, and in return we may get 7 months of the code being exercised before it goes into the 2.7 release.

David Horwitz


We have reached a juncture where we need to rethink the way we think of and handle Sakai releases. The problems I see are:

  • We lump everyone into the same risk profile. Different users have different tolerances for risk: some will run contrib tools early, others won't. At the moment it's very difficult for a user with a high tolerance for risk to run new code early. We're in essence breaking the open-source mantra of "release early, release often", and we lose the informal QA we'd get from these users. I think the new product development process addresses part of this, and we're going to have to think about it in the K2 world; let's start adopting practices that make our life easier.
  • We deal with too much code in the release. Sakai is huge and we don't differentiate between code that hasn't changed over several release cycles and code that is evolving.
  • We use Maven very badly. When I look at how easy it is to manage K1, versus what I go through to put our production build together and what I've seen Anthony go through to do the Sakai release, it's obvious we could be doing this much better.

While I don't think radical change is possible, I do have some practical suggestions:

  • We break the dependency of Sakai tools on master for building. This is a bizarre pattern that gives every tool a dependency on every other tool. Tools should inherit from the kernel, and where more complex dependencies are needed we can define and release poms to meet the needs of developers (e.g., Sakai-gradable-velocity-tool.pom).
  • We identify stable pieces of code that are not deployed to tomcat themselves but are used by other Sakai tools and cut binary releases of them. There are several of these, mostly libraries and poms in velocity and JSF but I'm sure there are others.
  • Learn from the K1 experience (this has worked well): is there other code we could do this with? I'd like to tentatively suggest 2 other bundles that provide stable, widely used functionality:
        1) Common services (the commons project: SakaiPerson & Type Service, Taggable Service, Privacy Service, Email Template Service)
        2) Common edu services (Course Management, Content Review, Gradebook Service?)
  • Identify very stable projects (such as the admin tools), tag them with their own version, and leave them alone between releases. For instance, it's been 8 months since a change to the Admin user tool, bar translation updates and housekeeping.
  • Identify projects with a clear development team and drive (I'm thinking here of OSP and T&Q) and engage those teams to see if they might benefit from their own release cycle/versions (much like the kernel has), as most of the issue verification I've seen on these 2 projects comes from within those teams. The Sakai QA process becomes more of an oversight and due diligence process in these cases; our role is to satisfy ourselves that the code we include is good quality, and to support those teams in achieving this.

Comments from David Horwitz per email 6/6/09 - Posted by Peter Peterson


I think David Horwitz already suggested this, if so +1, but each tool must begin performing their own releases and generating their own artifacts. The Sakai release process needs to become more of an assemblage of already released artifacts than where we are today. I have been using Redhat as an example for years in this regard - At each release cycle they are looking around to see which projects are mature enough, and high quality to include in a packaged release. IMHO, this is where we need to be. Thanks!

Lance Speelmon


When Sakai 2.6.0 is released, all existing QA servers should move to running a known revision of the 2.6.x maintenance branch.
These servers would be refreshed on a regular (weekly or bi-weekly) basis with another known revision (determined by the QA director) of the same branch.
When 2.7 code freeze happens and a 2.7.x branch becomes available, a majority of these QA servers gradually migrate to using this new branch. Eventually, when the alphas/betas/RCs become available, most of the QA servers will be running these tags in anticipation of a release.
At least two QA servers would continue to run the 2.6 maintenance branch during the "official" support period after the 2.7 release.
Lather, rinse, repeat.
I think this approach provides some advantages for the community:
1) Stability and predictability for the QA process. People no longer have to "ramp up" for a specific release. Time can be better spent on testing as issues are fixed in the maintenance branch. Server admins have a predictable schedule for new deployments. Managers may have more flexibility to allow people to work on QA that offers both local and community benefit.
2) Perimeter defense. With this approach, there are multiple layers of QA. When a fix or feature is first committed to a branch, it is picked up within 24 hours on Nightly2. Then, within a week or so, it would appear on the QA servers running the maintenance branch. Finally, it would appear in the various alpha/beta/RC tags later in the cycle.
3) Support. Since QA servers will be running a known revision of the current maintenance branch, important bug and security fixes can be targeted, tested, verified, and communicated more easily.

The QA servers that are "left behind" assure the necessary testing environment for these fixes against known revisions (we need some more QA servers, or existing ones with more capacity, to make this part work).

Comments from Seth Theriault per email 7/7/09 - Posted by Peter Peterson






  1. I think David Horwitz already suggested this, if so +1, but each tool must begin performing their own releases and generating their own artifacts. The Sakai release process needs to become more of an assemblage of already released artifacts than where we are today. I have been using Redhat as an example for years in this regard - At each release cycle they are looking around to see which projects are mature enough, and high quality to include in a packaged release. IMHO, this is where we need to be. Thanks!

  2. Rutgers has substantial staff available for QA. Our staff tried to help, but were unable to find a way to get plugged into the official QA process. We ended up running our own QA process. We did report results through Jira, but I think more formal participation in the QA process would have been helpful.

    Part of our problem was the changing expectations for the release. We can't afford to do a full QA cycle more than once. We did a full test of major functionality of every tool, followed by quick passes over the system whenever we did a major update. It's important to choose the version we do the full QA against as intelligently as possible.

    Even now, after all the postmortems, we are doing our own planning with little confidence in the information we're getting. We are currently running RC01+ in production. My staff are in the process of integrating diffs from RC01 to RC06. We will not have a chance to do this with any other release before September. We would have preferred to wait for the final release, but we currently have such low confidence in statements about the release that at some point we go with what's there.

    Once we've integrated RC06 we'll take a few specific patches, but that's it.

    I sense a disconnect between the way we have to operate and the way the release management group is working. The release management group is saying "we release no wine before its time." However, we're forced to operate with "we use the best code that is available by DATE" for a couple of dates: one targeted at May 20, which is the best time we have to make a version change, and another around August 15, which is our last chance for a major upgrade before Fall.

    While some of my staff are very nervous about using pre-release code, there was (and, after significant production experience, still is) no doubt in my mind that 2.6 RC01 was overall a better system than 2.5. There's a critical decision that I think the release management team failed to make: at what point is it clear that, with all the remaining issues, a new release is still better than the previous one? I believe once that point has been reached, milestone releases should be generated with the best available code to meet specific calendar targets.

  3. The next opportunity to get this right or have a disaster is 3.0. I'd like to see us start now to decide how that's going to work.

    From what I've seen of 3.0, my sense is that we could have a version with the new portal and content authoring in time for Rutgers to use in production in the fall of 2011. I'm willing to live with existing tools where 3.0 rewrites aren't available. I'd like to see that happen. But that would mean designing a process that mercilessly drops everything from the target that won't be ready, and that has its release management and QA act together. It also means starting to develop and test the whole tool set against K2 now. We can drop new features, but we can't afford to drop any significant tools because the developers haven't gotten around to making them work with K2.

    It's not clear to me that the Sakai community has the staffing to simultaneously do 2.7 and 3.0. If we're not going to have 3.0 slip forever, we need to concentrate on 3.0. I'd be happy to see 2.7 trimmed back to just that work that can be done without compromising 3.0.

    I believe we need to define milestones for conversion of tools to 3.0, and have programmers ready to come in when a tool's nominal maintainers aren't meeting the milestones. I'm willing to be part of the group. If someone who is not familiar with the tool has to do work on it, there is a higher than normal probability of missing something, so this decision has to happen early – not in the week before final release.