Code review, user stories and sustainable software

The purpose of this short post is to identify the building blocks of an effective code review. The suggestions you will find here come out of our experience and are especially meaningful for our code review team.

We believe that the use of code review is one way to create sustainable software that’s open to extension but closed to modification. By linking each code review to a story, we also believe it’s possible to communicate the effort needed for each piece of the software to all the stakeholders of a project.

We strongly believe that the code review has to be informal: it is not a way to check whether a developer is good or bad at their job; rather, it is the way an organization can keep reasonable code quality in place and improve the knowledge of all of its team members.
In an ideal scenario, a code review is performed sitting together in front of a monitor: the author drives the review from the keyboard and mouse, opening various files, pointing out the changes since the previous release and explaining why things were done that way. This approach is great and can bring enormous benefits because it is simple to do, anyone can do it without any training, and it avoids the impersonal interaction of e-mail or instant messaging systems.
We have some concerns about this approach, however, because we have identified some recurring pitfalls:

•    The author often forgets to present something to the reviewer
•    The author can explain something verbally, but without a written output from the conversation the next person who looks at the code won’t have the benefit of the explanation
•    Some change requests made by the reviewer can be forgotten
•    There is no way to track the issues in order to build a checklist that all team members can follow as a “best practices” guide

In our opinion, the major drawback of this approach is that the review is sometimes postponed because two people have to arrange a meeting (not always easy), and there is no way to verify that the requested changes are actually made by the author.

After some failures in our code review practices, we outlined the following procedure.

First of all, we ask the whole team to prepare a code review for each piece of code they commit to our source control system, so that everyone is responsible for readability without any additional requests.
Each code review is connected to a user story, so every review is defined by a meaningful title and a note that points the reviewer to the requirements the author worked on. The great advantage of this approach is that the reviewer knows which feature has been developed and can therefore approach the inspection of that specific piece of code in the right way.
Every author has to attach a file to the code review (the format is up to the author) that summarizes the workflow followed to develop that piece of software. This way the reviewing team’s job is faster, and the author, by recapping what was done, will probably find and/or fix some issues before submitting the review.

The building blocks of a code review can be summarized as follows:

  • Title: Usually made up of an ID and a meaningful name
  • Description: A very short description of the feature developed
  • Stories involved: A list of backlog items involved in the review
  • Workflow: A file that summarizes the workflow of this part of the software
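These four building blocks fit naturally into a small record type. Here is a minimal sketch in Python; all field names, IDs and paths are illustrative assumptions of ours, not part of the original process:

```python
from dataclasses import dataclass, field

@dataclass
class CodeReview:
    """One code review, built from the four blocks above."""
    title: str                 # an ID plus a meaningful name
    description: str           # very short summary of the feature developed
    stories: list = field(default_factory=list)  # related backlog items
    workflow_file: str = ""    # path to the author's workflow summary

# Hypothetical example of a review record:
review = CodeReview(
    title="CR-142: invoice export",
    description="Adds CSV export for monthly invoices",
    stories=["STORY-88", "STORY-91"],
    workflow_file="docs/cr-142-workflow.md",
)
```

Keeping the record this small makes it easy to attach to whatever tool ends up hosting the review.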

OK, it sounds like a wonderful scenario, but how can a developer put a code review in place quickly and easily? How do we avoid postponed meetings and wasted time on both sides? And how do we keep track of the workflow of the review?

We tried several approaches and eventually identified an online tool that helps us with this process. With this tool, each developer can attach files either through a file upload or through the source control system: depending on the kind of feature being worked on, the developer can commit the code and then start the code review process, or upload the source code without using source control at all.
The advantage of reviewing code before it reaches the source control system is that, if there are dependencies between the stories the code relates to and other requirements, the team is not blocked and development can continue iteratively. Furthermore, if a feature is likely to change dramatically, keeping its code outside the source control system until it is fully reviewed and refined makes life easier for the other team members, who then have to change their code only once during the development cycle.
In both scenarios, the time the review team spends on a review doesn’t impact the overall speed of the team.
Another benefit of using an online tool is that we can keep track of the defects opened by the review team and periodically create checklists for all team members.
In this scenario, each developer can perform the following steps before submitting a code review:

1.    Identify the code needed for the review (usually no more than 600 lines of code)
2.    Iterate through the checklists, making an initial review to avoid common mistakes
3.    Make a recap of the workflow of this feature in order to double-check the logic and gradually produce the documentation of the software
4.    Choose a meaningful and appropriate title for the review
5.    Identify the stories to which the review is related
6.    Create the review

As for the reviewer, the job is not only to ask for changes but to describe each finding in enough detail that anyone can

•    understand the vulnerability
•    understand possible attack scenarios
•    know the key factors driving likelihood and impact

There is value in both assigning a qualitative value to each finding and further discussing why this value was assigned. Some possible risk ratings are

•    Critical
•    High
•    Moderate
•    Low

Justifying the assigned risk ratings is very important. This will allow stakeholders (especially non-technical ones) to gain a better understanding of the issue at hand. Two key points to identify are

•    Likelihood (ease of discovery and execution)
•    Business/Technical impact
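One way to combine these two points into one of the four ratings above is to score likelihood and impact numerically and multiply them, in the spirit of the OWASP risk-rating methodology. The 0–9 scale and the thresholds below are our own illustrative assumptions, not part of the original process:

```python
def rate_finding(likelihood: int, impact: int) -> str:
    """Map likelihood (0-9) x impact (0-9) to a qualitative rating.

    Thresholds are illustrative; tune them to your own organization.
    """
    severity = likelihood * impact  # ranges from 0 to 81
    if severity >= 54:
        return "Critical"
    if severity >= 27:
        return "High"
    if severity >= 9:
        return "Moderate"
    return "Low"

# An easy-to-exploit finding with severe business impact:
rate_finding(8, 8)  # -> "Critical"
# A hard-to-discover issue with minor impact:
rate_finding(2, 3)  # -> "Low"
```

Whatever the formula, writing the two scores down alongside the rating is what gives non-technical stakeholders the justification mentioned above.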

Each code reviewer also has to suggest remedies for a defect, so that the alternatives are clear enough for the author to provide (when requested) a reliable estimate of the effort required.

Another practice we follow is to give authors a reference that helps them understand how to avoid this kind of defect: we call this the “source”, and the value assigned to it can be

•    Check list
•    Systematic
•    Use case
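Putting the pieces of a finding together, a reviewer’s output might be recorded like this. This is a minimal sketch with field names of our own invention; only the rating and source vocabularies come from the process described above:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One defect raised by the review team."""
    description: str      # what the problem is
    attack_scenario: str  # how it could be triggered or exploited
    likelihood: str       # ease of discovery and execution
    impact: str           # business/technical impact
    rating: str           # "Critical" | "High" | "Moderate" | "Low"
    remedy: str           # suggested fix, clear enough to estimate effort
    source: str           # "Check list" | "Systematic" | "Use case"

# Hypothetical finding, for illustration only:
finding = Finding(
    description="SQL built by string concatenation in the invoice search",
    attack_scenario="A crafted search term injects arbitrary SQL",
    likelihood="High: the search box is public and easy to probe",
    impact="Critical: full read access to the invoices table",
    rating="Critical",
    remedy="Use parameterized queries via the data-access layer",
    source="Check list",
)
```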

This is usually very helpful in preventing the need to resubmit a code review with common mistakes.

In order to make code review a practice that helps our team grow, we periodically create reports that show not the number of defects or their authors (which is how formal reviews usually work), but the defects themselves, and we ask the team which ones should become part of our checklist.

This post describes only part of our process, but we believe that these building blocks can be the right way to start improving the quality of the software that an organization is working on.

This entry was posted in Agile, Best Practices.

5 Responses to Code review, user stories and sustainable software

  1. Good points here. I think that code review, formal or informal, is one of the best things an organization can do to improve its product quality, and it serves as a practical training vehicle for all participants.

    A bit of ancient history: when I was working on Multics in 1968 we instituted code review (we called it auditing); it was the developers who wanted to do this, with particular emphasis on maintainability, so that the next developer of a module would have an easier time. It was a required step for submitting code. We built a couple of tools: compare_ascii, which displayed the differences between two files and was useful for bug-fix submissions, and source_submission_test and object_submission_test, which looked for things we had noticed were problems.

    We continued to enhance the process through the life of the product (capped in 1986); it helped us qualify for a B2 Security rating.

  2. Informal or formal review should be chosen based on the code and its needs, focusing on the result rather than the process. The goals are
    – zero defect software.
    – team confidence that the software is zero defect.
    A single process won’t fit all code. A one-liner fix may be obvious and need only a glance from a colleague; a new paging algorithm may need proofs, extensive testing, special tools, debate. I like the documentation of workflow: it answers my “nasty question” of “what did you DO to make your code high quality?”

    (another Multician, a student of John’s)

  3. gnstudio says:

    Hi Tom,

    First of all, I believe the relationship between format and the code is not the right one. If you have to choose between formal and informal review, it is the organization in which you are doing (or planning to do) code review that drives your choice.

    Zero-defect code doesn’t mean high-quality code to me, because I believe the quality of the code also has to consider readability, performance, proper fragmentation, etc.

    I’m also aware that some reviews can lead developers to deeply change some algorithms, driving them back into the testing phase; for this reason we always ask experienced developers to drive the review.
    By “experienced developers” I don’t mean only code gurus, but people able to understand the effort that some changes may need, and able to ask for those changes only if they are really needed.

    In order to keep quality as high as possible we also do a lot of pseudo-code and architecture reviews; let me find a couple of hours today to write something about that.


  4. Pingback: Code Quality through reviews and pseudo code « Gnstudio’s Weblog

  5. Pingback: 2010 in review « Gnstudio’s Weblog
