In fact, the way you deal with errors matters most. You may know the saying: the later an error is discovered, the more it costs. At the early stage of a project, an error costs nearly nothing, because the project only exists on paper; when a defect is found during the development stage, you have to spend - or lose - time fixing it, which may even force you to change your design. And the worst case is when your customers find a defect.
Thus, any software project should seriously consider how it handles errors, because there will be some, whatever you do. Even if you have the best developers in the world, even if you overpay them... mistakes will happen, and the way you deal with them can make a huge difference.
The Impulse project, as I alluded to earlier, aims at finding a reliable workflow. And you might have already guessed it: we need to handle errors well. So I asked Rémi to set up some handy tools to deal with them easily. As a matter of fact, the workflow should not be hard to follow, because when it is hard, developers do not want to bother with it.
Here are the steps of our process, which aims at finding defects as early as possible.
Unit testing
Each unit of code (in C++, often a single class) should have associated tests, and of course, should pass them. Ideally, each test focuses on a specific point of the specs, the tests together cover all the specs, and they are written at the same time as the units themselves...
Well, all that is important, but forcing developers to follow this process is the best way to make them hate unit tests. Developers have to become aware of the benefits, and to know what is really worth testing. For instance, the global behavior is worth checking, but setters and getters may be simple enough not to need tests.
The tests are also the first real application of the unit, before it is integrated into the whole project. Therefore developers should enjoy writing them!

Each developer has to test their modules before continuing in the workflow. At the very least, they must verify that the code compiles, i.e. that it can be used without breaking development.
To make writing tests easier, a small library provides functions and macros to test or compare values. There are also automated tests, so that a whole set of tests can be launched with a single command line.
Team review
This is the big new step: I decided to impose team reviews of the code. I won't write pages about reviews, their benefits and so on; you can find hundreds of links on your favorite search engine, and I also give two links at the end of this section.
In brief, here is why I have imposed team reviews:
- We can fix errors early in the process.
- We get to know the modules that others wrote: everyone has a better global vision of the project.
- The best developers help the less experienced ones to improve.
Here is how the whole process should be:
- Each module has its own repository (a story about our Mercurial architecture may be coming). During development, developers may push into that repository.
- When a module is ready to be reviewed (and it passes all the unit tests), its code is frozen and a code review is created. The developer in charge of the module assigns another developer as the review moderator (who may or may not work on the module as well).
- The developer in charge introduces the module and the changes made since the last review, if any.
- Every developer who does not contribute to this module reads the code slowly and carefully :) They post comments through the review tool, and the moderator makes sure the discussions do not turn into trolling!
- Once every spotted problem is fixed, the moderator validates the review. The module's development continues in its repository. (No regular development is allowed in the testing repository, except for bugfixes.)
So we are going to use this workflow on the project. I am not saying it is the best one: the project will show what is good and what must be improved. Since we have never done code reviews before, the results are a total mystery.
If you want to know more about team reviews, I particularly liked this white paper: http://smartbear.com/docs/BestPracticesForPeerCodeReview.pdf (guess what: this company also sells a tool for code reviews! I am not advertising their tool, I am just saying that this document is easy to read and has nice graphs). If you want deeper information, it may be worth reading these articles: http://www.processimpact.com/pubs.shtml#pr.
Quality Assurance
The QA has its headquarters in the testing repository. Since all the code in that repository has passed unit tests and code review, the QA should not have to deal with deep code problems. Its two main focuses are:
- Integration issues. Indeed, each module was developed in its own space, and problems may appear when the modules are put together. (This should be rare, since the code architecture is well defined and the module repositories must regularly pull from the testing repository.)
- Game testing: rendering, control behaviors, typos...
When both the QA team and the project managers agree to release a version, the testing repository is pushed into the stable repository, and a tag is added. As an exception, critical bugfixes are allowed in the stable repository.
We hope that this workflow will allow us to handle defects effectively. It is easy both to understand and to apply. The Impulse project will show us what the good points are, and what must be improved. And of course, we will keep you posted about it.