
WORK IN PROGRESS

Please join in to improve this page and its links. I've also started a list of open tasks relating to testing.

Overview of automated testing approaches

Any testing is better than no testing, but different approaches make sense for different use cases. There's no "one size fits all." Here are the flavors I know of:

A) Pure "unit tests"

Test-driven development relies on unit tests which target a single class or package, stubbing or mocking any external dependencies. Tightly focused unit tests are also needed for classes which perform complex calculations or parsing. In virtually all circumstances, such tests are lightweight enough to be run as automated regression tests on every build. There's nothing Sakai-specific about these: you can use whatever testing framework you like and tie into Maven's standard "test" phase or Eclipse's "JUnit Test" runner.

Supporting technology: JUnit or TestNG. EasyMock 2. Maven Surefire plugin. Eclipse "Run as JUnit Test". Sakai App Builder.
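To make this concrete, here is a sketch of the kind of tightly focused test described above, aimed at a small parsing class. The class and names are hypothetical, and the test is written framework-free so it stays self-contained; in a real project it would be a JUnit or TestNG test method run by Surefire or Eclipse.

```java
// Hypothetical example: a tightly focused unit test for a small parsing class.
// Written framework-free so it is self-contained; in practice this would be a
// JUnit test method run by Maven's Surefire plugin or Eclipse's JUnit runner.
public class GradeParserTest {

    /** Hypothetical class under test: parses "87/100" into a percentage. */
    static class GradeParser {
        double parse(String raw) {
            String[] parts = raw.split("/");
            return 100.0 * Double.parseDouble(parts[0]) / Double.parseDouble(parts[1]);
        }
    }

    public static void main(String[] args) {
        GradeParser parser = new GradeParser();
        // Ordinary case.
        assertEquals(87.0, parser.parse("87/100"));
        // Non-standard denominator.
        assertEquals(50.0, parser.parse("10/20"));
        System.out.println("GradeParserTest passed");
    }

    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

Because such a test has no external dependencies, it runs in milliseconds and is safe to include in every build.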

Examples:

B) Project-bounded service tests

These tests cover the project's expected operations as realistically as possible while avoiding noise from external projects. They typically don't stub or mock well-established standard services (particularly data access), and they typically cross package lines within the project itself.

During project development, you write tests that mimic what you expect a real application user or a real service client to do. After project delivery, your first step when dealing with any bug should be to write a test that will replicate the bug scenario.

This is probably the most important testing for most Java developers in Sakai. The vast majority of Sakai projects simply query and update a database, and so stubbing out the persistence layer and SQL handling (as in the theoretically purest form of "unit tests") would eliminate pretty much all your logic! On the other hand, you do want to stay focused on your own work and not be distracted by the blooming buzzing confusion of the full Sakai suite, and so you still use stubs, mocks, or standalone implementations for all services outside project boundaries. (I used to hand-craft stubs, but the newest version of EasyMock very easily stubs any service that doesn't require a real persistence layer.)

Pedantically speaking, these count as "integration tests" – but they're very selective about what gets integrated! Under most circumstances, they're played against an in-memory database and lightweight enough to be run as an automated regression test on every build. Again, there's nothing Sakai-specific about them.

Supporting technology: Spring's JUnit-based test classes. (If you use Spring MVC, you can even get good test support for your controller logic.) EasyMock 2. Maven Surefire plugin. Eclipse "Run as JUnit Test". Sakai App Builder.
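The hand-crafted stubs mentioned above can be sketched as follows. The service interface and all names here are made up for illustration; the point is that a stub gives canned answers for an out-of-project dependency while the project's own logic runs for real.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a hand-crafted stub for a service outside the
// project boundary. Interface and names are invented for illustration;
// a mocking library such as EasyMock can generate the same thing.
public class StubbedServiceTest {

    /** Stand-in for an external user directory service. */
    interface UserDirectoryService {
        String getDisplayName(String userId);
    }

    /** Hand-crafted stub: canned answers, no real persistence layer needed. */
    static class StubUserDirectoryService implements UserDirectoryService {
        private final Map<String, String> names = new HashMap<>();
        StubUserDirectoryService() {
            names.put("u1", "Ada Lovelace");
        }
        public String getDisplayName(String userId) {
            return names.getOrDefault(userId, "Unknown User");
        }
    }

    /** Project code under test, which depends on the external service. */
    static class GreetingService {
        private final UserDirectoryService directory;
        GreetingService(UserDirectoryService directory) { this.directory = directory; }
        String greet(String userId) {
            return "Hello, " + directory.getDisplayName(userId) + "!";
        }
    }

    public static void main(String[] args) {
        GreetingService svc = new GreetingService(new StubUserDirectoryService());
        if (!svc.greet("u1").equals("Hello, Ada Lovelace!")) {
            throw new AssertionError("unexpected greeting");
        }
        System.out.println("StubbedServiceTest passed");
    }
}
```

The project's own `GreetingService` logic is exercised for real; only the out-of-project dependency is faked.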

Examples:

C) Tests integrated with Sakai services

Class-bounded "unit tests" or project-bounded "integration tests" will use stubbed or mocked implementations of external Sakai services. But when working on Sakai core services themselves, on plug-in "provider" components, or on bugs in a complex call-stack, tests need to run in a more realistic Sakai environment. This is when service-level integration comes into play.

Sakai's "test-harness" package provides support for tests which integrate directly with deployed component services rather than going through a web server. (The underlying support can be used for purposes other than integration testing – notably loading large amounts of test data.) It works this magic by doing more-or-less what the component manager does when Tomcat starts up. As a result, your test code can use real Sakai component services as well as your project-specific code.

This environment doesn't perfectly emulate all the classloading quirks which might bite a web application or component deployed to a real Tomcat server, but it tests the logic effectively, and it's the approach I use most often when changing a core service. Although too heavyweight to run automatically on every build, it's still much faster than starting and shutting down Tomcat every time I change the code and much more direct than having to define, enable, and go through web services.

Note that any scripting language with easy access to Java classes (Groovy and Jython come to mind) should be able to use the test-harness support. I haven't tried this yet myself, but if it works it might considerably lower the development cost of integration tests and utility programs.

Supporting technology: test-harness's SakaiTestBase, SakaiDependencyInjectionTests, or ComponentContainerEmulator. Maven Surefire plugin. Eclipse "Run as JUnit Test".
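The shape of such a test can be suggested with a self-contained sketch. Since the real Sakai jars aren't assumed here, a toy map-based registry stands in for the component manager; in an actual test-harness test you would extend SakaiTestBase (or SakaiDependencyInjectionTests) and look up live, deployed services instead.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a toy registry stands in for Sakai's component
// manager so the example is self-contained. In a real test-harness test you
// would extend SakaiTestBase and retrieve live services from the booted
// component manager.
public class ComponentLookupSketch {

    /** Toy stand-in for the booted component manager. */
    static final Map<String, Object> registry = new HashMap<>();

    /** Simplified, hypothetical slice of a core service interface. */
    interface SiteService {
        boolean siteExists(String siteId);
    }

    public static void main(String[] args) {
        // In real code, booting the component environment happens once per
        // test class -- the expensive step that makes these tests too
        // heavyweight to run on every build.
        registry.put("org.sakaiproject.site.api.SiteService",
                (SiteService) siteId -> siteId.startsWith("site-"));

        // Look up the service by name, as a harness-based test would.
        SiteService sites =
                (SiteService) registry.get("org.sakaiproject.site.api.SiteService");
        if (!sites.siteExists("site-123") || sites.siteExists("bogus")) {
            throw new AssertionError("unexpected SiteService behavior");
        }
        System.out.println("ComponentLookupSketch passed");
    }
}
```

The payoff is that the test body works against the same service objects your code would see inside Tomcat, without a full server start/stop cycle.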

Examples:

D) Front-to-back web application functional testing

Automating this sort of test so that it can be run from Maven or Eclipse or as part of continuous integration involves deploying components and webapps to a real server, starting the web application, and then driving the running application with a browser-emulating test framework. This is typically what you'll find in sample code that uses Maven's "integration-test" phase.

For the full Sakai suite, or even for the Cafe subset of Sakai, this will naturally take a very long time. Many project teams may find it worth their while to enable a standalone version of the web application (as well as a Sakai-embedded one) to greatly reduce turnaround time for both automated tests and testing by human beings.

Supporting technology: HttpUnit, HtmlUnit, the Maven Selenium plugin, or The Grinder. Maven Cargo plugin. If a standalone version of the application is available, Jetty and its Maven plugin.
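For the standalone-application approach, a Maven 2-era Jetty plugin configuration might look roughly like this. Treat it as a hedged fragment: the coordinates, ports, and phases shown are plausible defaults that would need adjusting to your project.

```xml
<!-- Sketch of a Jetty plugin configuration for serving a standalone build
     of the webapp around the integration-test phase; adjust coordinates,
     version, and ports to your environment. -->
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    <!-- Let the build continue past "run" so the tests can execute. -->
    <daemon>true</daemon>
    <stopKey>stop</stopKey>
    <stopPort>9966</stopPort>
  </configuration>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>pre-integration-test</phase>
      <goals><goal>run</goal></goals>
    </execution>
    <execution>
      <id>stop-jetty</id>
      <phase>post-integration-test</phase>
      <goals><goal>stop</goal></goals>
    </execution>
  </executions>
</plugin>
```

With this in place, HttpUnit- or HtmlUnit-based tests bound to the integration-test phase can hit the locally served application without a full Tomcat deployment.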

Examples:

  • ???

E) Front-to-back web application performance and scalability testing

Emulating thousands of users coming from different locations requires special technology and setups. U. Michigan has recommended that we centralize on The Grinder, which we could also use for functional testing. See Community Performance Testing Resources for more.

F) Tests run inside Tomcat within a deployed component or webapp

This is the use case targeted by Aaron's and Steve's "test-runner". Such tests may be initiated automatically or via a web interface. Benefits include a fully realistic environment and (as with Selenium) the ability for QA staff and others to manually initiate scripted tests in a "live" environment from a browser.

Supporting technology: test-runner.

Examples:
