I was chatting with a friend the other day about code reviews. His team was starting to adopt the code review practice and he had some questions about the practicalities. We have been practicing code reviews successfully within Nokia's eSWT team for the last 3 years. I tried to answer his questions as well as I could, and my feeling was that the answers were helpful to him. Perhaps our experience can be helpful to others too.
The mechanics of our code reviews are simple. We conduct team-wide code review meetings for all major new feature development. The definition of a major feature differs on every software project, but it is basically a single task that is large enough to require the eyeballs of the whole team. Each team usually settles on its own definition of "major feature" over time. A team code review starts about a week before the actual review meeting: every team member spends some time reviewing the code, and we collect the comments on a wiki page. At the meeting, the owner of the code goes over the comments and explains the interesting parts of the code. One might think the meeting is a bit redundant, since the code has already been reviewed and the comments collected. However, that is not the case, because the review meeting is about anything but review. It acts as a platform for discussing coding-related issues (algorithms, duplicate functionality, style, tools, utilities, etc.). The meeting also represents the team's approval of the developed code and marks the beginning of collective ownership of the reviewed code. In a sense, we dance the Haka and accept the new code.
Anything smaller than a "major feature" gets peer reviewed. We do not use any tools to assist peer reviews. I am a bit amazed that some actually rely on tools to make a team adopt code reviews. In my experience, tools do not help with the willingness to adopt a new practice, but this does not mean they are not helpful once you have the practice in place. The only tool we use for peer reviews is the code repository itself, and its mechanics are explained below.
Mastery of code reviews depends on human factors rather than technical ones. I guess it is common for teams to quit doing code reviews after trying them once or twice. I remember the very first code review meeting we had as well. First, we started to talk about the smallest formatting errors. Although it is nice to have a unified format for the code, discussing every bit of it at a review meeting is a waste that steals time from the real problems. Also, after a while, the owner of the reviewed code started to take it personally and began arguing with the reviewers. In the end, the meeting was a waste. As I was leaving the room I thought, "Is it worth having code reviews at the expense of the team?" However, we were persistent about keeping the code reviews, so we came up with the following principles to improve the practice a bit. Some of these, like the individual pre-meeting reviews, were put into practice right after the initial disaster; some we have added along the way.
- Do individual reviews and report your findings on the code before the review meeting.
- Only team members are allowed to comment at the meetings. (Chicken and Pig)
- Do not discuss the small formatting problems in the meetings.
- On peer reviews, add the reviewer to the commit comments; after all, the reviewer is just as responsible for that commit.
- Review comments are not personal: reviewers review the code, not the person, and comment on the code, not the person.
- No developer is spared from code reviews.
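To make the peer review principle above concrete, here is a minimal sketch of one way to record the reviewer in the commit itself. The post does not say which convention the team used; the "Reviewed-by:" trailer shown here is a common Git convention, and the file names, author, and reviewer are made up for illustration.

```shell
# Hypothetical example: recording the peer reviewer in the commit message
# with a "Reviewed-by:" trailer, so the reviewer's shared responsibility
# is part of the permanent history.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Dev"
git config user.email "dev@example.com"

echo "hello" > greeting.txt
git add greeting.txt
git commit -q -m "Add greeting file

Reviewed-by: Jane Doe <jane@example.com>"

# Anyone reading the log can later see who reviewed the change:
git log -1 --format=%B | grep "Reviewed-by"
```

The advantage of putting this in the commit message rather than a separate tool is that it travels with the repository and needs no extra infrastructure, which fits the "repository as the only tool" approach described above.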
This code review practice developed over time, and it will continue to evolve. It is not a silver bullet, and you will need to shape your own practice if you are serious about it.