I plan to send some form of the following to the team.
Business stakeholders (sales, marketing, other execs) contribute requirements in the form of capabilities or features for the system. These requirements are described in the language of the business or the user, typically as use cases, and include acceptance criteria (e.g. “this feature is considered complete when you can do the following things….”). Product management helps describe these capabilities as discrete features, decomposed to the level that makes sense when each is evaluated individually (i.e. an individual feature adds incremental value to the business) but not aggregated unnecessarily (i.e. all separable capabilities have been separated). The product manager facilitates the prioritization of this feature backlog, and the backlog is reviewed frequently.
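As a concrete sketch of what a backlog entry might carry (the feature names, criteria, and priority scheme here are hypothetical, not a proposed schema), each feature pairs a business-language description with its acceptance criteria, and the backlog stays sorted by priority:

```python
# Hypothetical shape for a backlog entry: priority drives ordering,
# while the name and acceptance criteria are descriptive only.
from dataclasses import dataclass, field

@dataclass(order=True)
class Feature:
    priority: int  # lower number = higher priority
    name: str = field(compare=False)
    acceptance_criteria: list = field(compare=False, default_factory=list)

backlog = sorted([
    Feature(2, "CSV export", ["user can download orders as CSV"]),
    Feature(1, "keyword search", ["results returned in under 2 seconds"]),
])
# backlog is now in priority order: keyword search first, CSV export second
```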
At the outset of a sprint, the development team (all people who will do work to implement and test the product features) meets with the product manager and business stakeholders to review the feature backlog. Since the backlog is in priority order, the team selects features from the top down. Based on each feature description, the team makes a rough estimate of the effort required to implement it. Developers sign up for work on features and track the amount of time they have committed. When all of the developer time allocated for the sprint has been committed, feature selection for the sprint is complete.
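The top-down selection can be sketched as follows (feature names, estimates in hours, and the capacity number are all hypothetical; this is only the control flow, not a planning tool):

```python
# Walk the prioritized backlog top down, committing features until the
# next one no longer fits in the team's remaining sprint capacity.
def select_features(backlog, capacity_hours):
    """backlog: list of (feature_name, estimate_hours) in priority order."""
    selected = []
    remaining = capacity_hours
    for feature, estimate in backlog:
        if estimate > remaining:
            break  # stop rather than skip ahead, preserving priority order
        selected.append(feature)
        remaining -= estimate
    return selected, remaining
```

Stopping at the first feature that does not fit (rather than reaching past it for a smaller one) keeps the selection strictly in priority order, which is the point of reviewing the backlog top down.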
During feature selection, a certain level of rough task breakdown is done so the team knows how much time each person can commit. Immediately after feature selection, the team breaks down each feature into the engineering tasks that must be done to implement it. This requires some design and coordination, since tasks should be estimated at the granularity of hours (typically no more than 48 hours per task). The team may bring in product managers or other stakeholders to clarify requirements or weigh design alternatives. After the task breakdown session, developers should know enough to begin work.
Developers write code that implements functionality as defined by the sprint feature set. Unit tests (i.e. tests that exercise this code at the object or method level and are minimally dependent on other components of the system) are developed alongside the production code during the development phase of the sprint. QA engineers may collaborate with development engineers during this process, both to help write good tests and so that QA understands what unit tests exist. Developers also give QA engineers input on the test plan that will be executed after sprint development is complete. This collaboration is strongly encouraged.
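To make the object-level unit test idea concrete, here is a minimal example using Python's unittest module, which follows the same xUnit pattern as JUnit/NUnit (the Cart class is a hypothetical stand-in for a component under test):

```python
# A unit test that exercises one class in isolation, with no dependency
# on other components of the system.
import unittest

class Cart:
    """Hypothetical component under test."""
    def __init__(self):
        self.items = {}
    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty
    def total_quantity(self):
        return sum(self.items.values())

class CartTest(unittest.TestCase):
    def test_add_accumulates_quantity(self):
        cart = Cart()
        cart.add("SKU-1")
        cart.add("SKU-1", 2)
        self.assertEqual(cart.total_quantity(), 3)
```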
During the development of new functionality (or fixing bugs in existing functionality, or refactoring existing code to make the implementation simpler), the component(s) being modified are tested locally, or in some shared development environment.
Code that implements the new/changed functionality and the code that implements the unit tests are submitted together. Code is not submitted until a reasonable effort has been made to show that the change neither breaks the build nor breaks integration with other components in the system. Ideally, this is accomplished by running a suite of integration tests locally that checks that the contracts between components are still valid.
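The pre-submit gate amounts to a simple rule, sketched here with the suite names and the runner left as hypothetical placeholders:

```python
# Only allow a submit when every local suite passes. run_suite is
# whatever command or script executes the named suite and reports
# pass/fail; "unit" and "integration" are placeholder suite names.
def ready_to_submit(run_suite, suites=("unit", "integration")):
    """Return True only if every suite in `suites` passes locally."""
    return all(run_suite(name) for name in suites)
```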
Continuous Build and Test Process
A build and test process is continuously running on a dedicated build and test server. This process monitors the source code repository and determines when changes have been submitted. It then:
- Synchronizes a local filesystem with the target version (e.g. the head of the codeline trunk)
- Invokes makefiles/scripts that build each of the components
- If the builds succeed, the built system is packaged and deployed into the build and test environment
- Runs the automated tests against the system and reports test results
- If either the build or tests fail, the users who checked in changes since the last successful build and test are notified by email
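One cycle of this monitoring loop can be sketched as follows. The repository polling, build, deploy, test, and notification steps are all stubbed out as callables here (their real implementations depend on our tooling); the point is the control flow and who gets notified on failure:

```python
# One iteration of the continuous build and test loop described above.
def build_and_test_cycle(repo, build, deploy, run_tests, notify):
    change = repo.poll()                  # have changes been submitted?
    if change is None:
        return None                       # nothing new; wait and poll again
    repo.sync(change.version)             # sync filesystem to target version
    if not build():                       # invoke component makefiles/scripts
        notify(change.authors, "build failed")
        return False
    deploy()                              # package and deploy the build
    if not run_tests():                   # run the automated test suite
        notify(change.authors, "tests failed")
        return False
    return True                           # green build, ready for QA pickup
```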
To make automated test development as simple as possible, a test framework can be used. Tests implement an interface that is run by the framework. A mechanism exists to add tests to the suite that is automatically executed. It is sometimes desirable to partition the test suite so that some tests are run more frequently than others (e.g. a relatively small “quick check” suite that is run continuously during the day and a “full check” that is run overnight). Test frameworks like JUnit, CPPUnit, NUnit, and HTTPUnit work like this, and can be used in conjunction with automation software like CruiseControl or Anthill.
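The quick-check/full-check partitioning could be implemented with a small registry on top of any of these frameworks. This sketch uses a hypothetical decorator and tier names ("quick", "full") rather than any framework's actual API:

```python
# Tag each test with a tier; run the "quick" tier continuously during
# the day and the "full" tier overnight.
REGISTRY = {}

def suite_test(tier):
    """Decorator that registers a test function under the given tier."""
    def register(fn):
        REGISTRY.setdefault(tier, []).append(fn)
        return fn
    return register

def run_suite(tier):
    """Run every test in the tier; return True only if all pass."""
    passed = True
    for test in REGISTRY.get(tier, []):
        try:
            test()
        except AssertionError:
            passed = False
    return passed
```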
End of Sprint Process
At the end of a sprint, a build that has passed the automated build and test process is deployed into a QA/Staging environment. A branch is created in the code repository corresponding to the build. The QA team runs through the test plan developed for the sprint (which includes regression testing and new functionality testing). Performance benchmarks may be taken at this point and compared against previous builds.
When bugs are found during the QA process (typically while development on the next sprint is underway), the bugs are fixed in the sprint branch and integrated into the main trunk. Developers involved in the fix sync their local filesystem to the version of the code in the sprint branch and debug and test locally before submitting the change. The build and test process is run against the sprint branch. The resulting package is redeployed in the QA environment.
When a release candidate build has been qualified, the package representing that build is deployed into the production environment. If necessary, migration scripts are run against the production database.