Code reviews for fun and profit

On this page, I've laid out my thoughts on code reviews as of mid-2021. I've picked up this set of opinions from working at various companies, observing open source projects, discussing the topic with others online, and general personal experience. Like almost everything in the software world, these are opinions and not facts. While I may feel strongly about some of these principles, your experiences might differ, and I'm happy to hear feedback on places where you think my views are incorrect or should be fleshed out.

Goals of code reviews

To come to conclusions about how code reviews should work, we need to have a set of goals that we're trying to achieve from them. The main benefits I'm aware of are improving individuals' skills, ensuring sufficient codebase quality, and spreading organizational knowledge about the codebase. As we talk about what a good code review process looks like, we'll refer back to these three main goals to ensure the process is doing what we set out to achieve.

Improving individual skills

Everyone has had times when they designed something or wrote code that satisfied them, only to have someone else point out an obviously superior way. It's these experiences that are at the root of improving individual skills through code reviews — when you have someone other than the author look over some code, you're likely to find different approaches or patterns that can be applied. Over time, seeing these different perspectives and suggestions will lead an engineer to have a more robust system for writing code or designing systems.

Improvements here can manifest in different ways. The obvious example is having a reviewer give feedback to the code's author, helping the author learn and grow as an engineer. A less-discussed alternative is having the reviewer observe new patterns in the code, allowing them to indirectly learn from the author.

Ensuring sufficient codebase quality

Quality of a codebase is highly dependent on the organization. To some, it may mean that the code has a test or two, while to others, it means that the code follows a set of written best practices. Some may even believe code needs to adhere to strict standards to be high quality, though most organizations fall somewhere along this spectrum. One thing is certain — code should adhere to some minimum standard that makes it easier to work with, make changes to, and trust to reliably achieve its design goals.

Spreading organizational knowledge

The root of this goal comes from the morbidly named concept of the bus factor. In many projects, it's desirable for work to be able to continue even if a key developer were hit by a bus. More practically, when an engineer goes on vacation or takes a break from the project, it's important to have someone else who is able to work with the codebase and understand various decisions that were previously made.

While it is likely folly for any organization to ignore these benefits entirely, this may be the easiest goal to make trade-offs around. An organization with high turnover almost certainly needs to ensure this is a priority. On the other hand, an organization that pays highly, has interesting work, and treats employees well is less likely to suffer problems from a lower investment here. However, no company should skip all investment in this area, as there is some risk of a key engineer going on vacation before an important deadline, or even encountering the proverbial bus.

When not to use code reviews

Code reviews should be used when code is being written to achieve some pre-defined goal. When the goal hasn't been determined yet, such as when considering how to build a new system or figuring out how to best achieve a business objective, a code review probably isn't the right tool. Trying to manage these cases via code review is too late, akin to aiming a gun after pulling the trigger; some other review process to determine what to build would be more valuable than immediately writing code.

Logistics of code reviews

There's a long list of things to consider when determining what an ideal code review process would look like for your organization. Here are some considerations I think are particularly important when performing a review.

Cost of failure

If you're writing code that is security-critical or that lives rely on, your review process should look a lot different from that of someone building a system that's supported on a best-effort basis. A code review process that only allows correct code through can still end up with a codebase made out of spaghetti, but it probably won't kill people. If your organization doesn't have these constraints, bias towards a process that puts the focus on sharing knowledge and having a good codebase at the expense of finding all bugs.

Timeliness

Asking engineers to perform a code review can be disruptive, as it can be time-consuming and require significant focus. However, delaying code reviews can be just as disruptive to the person requesting the review. Beyond unblocking teammates, expeditiously performing code reviews ensures that other engineers gain context on important systems before they waste time going down the wrong path, and helps prioritize their growth as engineers.

If you aren't willing to perform reviews throughout the day in order to avoid interruptions, I recommend blocking off time at the start and end of your day for them. This ensures that requests for a review aren't outstanding for more than one business day, which generally feels like a decent balance.

Note: sometimes it's clear that a review will take more time than you have — in these cases it's important to let the requester know this to allow them to manage their time and expectations, or find another reviewer if necessary.

Review the code, not the author

It's fairly common to mention that code and code authorship are different things, but in a field where it's so easy to tie your identity and sense of self-worth into the code you write, I think it's worth stating again. When you're performing a code review, you should compartmentalize the person who wrote the code from the code itself. Remember, to err is human.

It's equally important to remember this as the author whose code is being reviewed. The reviewer is acting in good faith and attempting to realize the benefits of code reviews for both you and the organization, and it's important to be receptive to criticism rather than defensive.

Who should review

Sometimes it's obvious who the reviewer should be for a changeset, particularly in companies with strong code ownership. For companies with a weaker model of ownership where you don't have a good idea of who the owner might be, it's worth looking at the version control log to see who's recently touched the relevant files or systems (and you should get a better ownership model). When someone requests a review from you but you think there's a better reviewer, you should proactively pass the responsibility to them.

Who should be involved

You should have the concept of CC'ing someone on a review, in which followers are added to the review process with no expectation of them contributing to the review. This can give people additional visibility into changes in areas they may be interested in, and makes it easier to get targeted comments from them without adding much work on their part. CC'ing extra people on a review should be encouraged. In the worst case, they get a bit of extra inbox noise and never look at the changeset. In the best case, they glance at it and notice a major issue that no one else would've caught!

You should bias towards CC'ing other team members on reviews, particularly newer ones, in order to give them more context into the codebase and how your team operates. This can present valuable opportunities for them to grow by observing the existing interactions, and it's very likely that they'll also have valuable input. Additionally, newer team members can be exceptional at highlighting gaps in documentation that more tenured engineers might not notice.

Optimal changeset size

It's nice for changesets to encompass one conceptual change. For example, combining a refactor with functional changes is both harder to review and more risky, as bugs in the changed functionality become harder to fix without reverts that can affect other parts of the codebase. If you're asked to review a changeset that contains multiple conceptual changes like this, you should ask for them to be split apart. However, this shouldn't be a strict rule. Sometimes it's easy to make minor improvements to the codebase while an engineer is already doing something else, but they wouldn't make them if forced to split the improvements into a separate changeset. Like everything else, use your best judgement, and try to be accommodating when teammates make your reviews slightly harder in exchange for a better codebase.

Note that this advice doesn't cover anything about the size (number of lines, etc.) of a changeset. Large changesets can take longer to review, but may also be more effective to review in one pass than split across multiple passes. If you have a strong preference here, you should make it clear to people asking you for review.

When to review

The most common time to perform a code review is when a developer wants to merge a changeset into your codebase. A code review at this point allows you to ensure that the author is making a reasonable set of changes, is maintaining the quality of your codebase, and has the context they need to contribute effectively. However, this isn't always the most leveraged time to do a code review.

Draft reviews

It's sometimes useful to quickly run a set of changes past a teammate, and draft reviews give a method for doing this. Draft reviews are a lighter-weight process allowing people to weigh in on a set of prototype changes without the changes needing to be fully polished. This type of review allows engineers to get fast feedback in areas they may not be as confident in. For example, an engineer might write some MVP code to solve a problem, then request a draft review before writing tests, cleaning up the code, or handling edge cases.

Feature reviews

In some cases, a feature is being built out of various smaller changesets that are applied at different points in time, and it makes more sense to do lighter reviews on each changeset in exchange for doing a review of the full feature before it's shipped to users. Feature reviews like this involve looking at the union of all changesets involved in building the feature, and allow you to comprehensively review the impact and design of an entire feature.

You should strongly consider feature reviews when changesets can be shipped without significantly impacting existing functionality, but are gradually introducing a new major piece of functionality. As an example, the introduction of a new service may require the creation of various internal libraries that are hard to effectively review one-by-one, but present a clear picture when looked at as a whole.

Even if you aren't working on a group of linear changesets building up into a new feature, feature reviews can still be a useful tool in your arsenal for ensuring the codebase is progressing in the desired fashion and not losing sight of the forest for the trees. Performing them after major milestones can help you proactively address technical debt or determine when a set of design decisions was incorrect, allowing your team to avoid making those mistakes again in the future. You should consider doing a full feature review whenever a system is shipping, "finished", or entering maintenance mode.

How to review a changeset

When you're asked to review code, you'll almost always be given a diff containing the differences between the existing codebase and the proposed new codebase, along with some message justifying the changes. You may be tempted to immediately scroll through the diff, but that's almost certainly not the right approach! When you're reviewing code, it's important to remember that you're reviewing the resulting codebase, not just the diff. Small diffs can have huge consequences on a codebase, while massive diffs can have no noticeable effects, and the consequences of applying said diffs are all that matter from a review perspective. As an analogy, only reading diffs is like reflecting on every decision you make without ever introspecting on where your life is going.

Start by reading through the commit message to get a sense for why the change was written, what you should expect it to do, and how it was tested. This will inform how you find important pieces of the change, letting you quickly focus your effort where it's most valuable. If the justification for the change doesn't make sense, now is a good time to talk to the author. They may simply need to write a better message, which you should provide coaching on. However, it's possible that the change is wildly misguided — discovering this now lets you redirect the time you would've spent puzzling over a strange review into teaching a teammate more about the affected system.

Sometimes, a changeset can be inherently complicated. In cases like this, you should bias towards pulling in the author and having them walk you through the changeset to help you understand. This isn't a substitute for your review, but it can make it much easier for you to pick up the necessary context.

Skim the diff, using the previous context you've picked up from the commit message to find areas to focus on first. You'll probably want to focus on the skeleton of the changeset first. For example, a quick pass might tell you that a changeset modifies some build files, adds a couple tests, affects a few different systems you understand, and affects one you don't. At this point, you should have a fairly thorough understanding of what the changeset is trying to accomplish, as well as generally how it's trying to do that. The only thing left is to read through the code and write feedback for the author!

Code review tooling

The final step of reviewing a changeset is to actually review the code, and it's unfortunately where most tooling is suboptimal. This section discusses an ideal code reviewing setup, with the caveat that it's not always feasible to do something like this in practice, and that tooling to do so will likely need to be specifically maintained by your team.

A baseline

There are some obvious baseline requirements for a reviewing tool. A tool for reviewing code should have a way to view diffs and corresponding code, publish feedback and comments, and allow authors and reviewers to discuss with each other and view the history of changes to the changeset. Common code review software (GitHub, GitLab, Phabricator, Gerrit, etc.) handles this functionality well.

However, these requirements mainly cover the collaborative aspects of code reviewing, while leaving out the most important part — the reviewer's experience of understanding the code.

Navigating the codebase

As mentioned previously, the diff you've been handed is just a map to help you explore what the codebase will look like after this changeset is applied. To review with confidence, it's important to have some way to navigate the codebase with this diff applied. It's typically possible to do this via your version control system, though this generally involves the laborious process of fetching new changes and checking out the author's branch locally. Ideally, you'd have a more efficient system, where it's possible to instantly view the new codebase using your local IDE and tools. A decent approximation of this can be done by mounting other developers' checkouts locally using NFS or SSHFS, but you should be aiming for it to be as easy to work with other developers' code as with your own.

If something like this isn't feasible, your team may want to build a wrapper around your version control system to make this easier. As a last resort, your tooling for viewing diffs almost certainly supports expanding context, and you should use this liberally to see how the changeset would impact your codebase.
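To sketch the wrapper idea (a minimal example assuming git, with a hypothetical branch name): fetch the author's branch and expose it as a disposable worktree, so the post-change codebase can be opened in your usual IDE without disturbing your own checkout.

```python
import os
import subprocess
import tempfile

def checkout_for_review(branch: str) -> str:
    # Fetch the author's branch; FETCH_HEAD then points at their changes.
    subprocess.run(["git", "fetch", "origin", branch], check=True)
    # Create a throwaway worktree so your own checkout stays untouched.
    path = os.path.join(tempfile.gettempdir(), "review-" + branch.replace("/", "-"))
    subprocess.run(["git", "worktree", "add", path, "FETCH_HEAD"], check=True)
    return path

# Open the returned directory in your editor; clean up afterwards with
# `git worktree remove <path>`.
print(checkout_for_review("feature/new-cache"))
```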

Running the changeset

It can often be more efficient to just run the change and step through it, allowing you to easily observe the effects and how it fits into your codebase's execution patterns. For this to be practical, your team should be comfortable using a debugger to walk through your systems, or should have some system for retroactively examining code that ran (such as FunctionTrace or OpenTelemetry). Additionally, it needs to be fast to either build or use the author's build of the codebase, and you need a development environment that's representative of production.
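For Python codebases, the built-in debugger already gets you most of the way there. A minimal sketch, where `process_order` is a hypothetical function touched by the changeset under review:

```python
from orders import process_order  # hypothetical module from the changeset

# Drop into pdb just before exercising the new code path, then use
# `step`, `next`, and `p <expr>` to watch how the change actually behaves.
breakpoint()
process_order(order_id=1234, dry_run=True)
```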

While none of these tools are required for performing good code reviews, reviewing changesets without them will require more mental energy from engineers. If an optimal code review at your organization requires a manual process of navigating to a webpage, fetching updates to the codebase, checking out the author's branch, figuring out how to build the code, then figuring out how to get it to run, you should not expect any code reviews to be performed optimally.

Things not to look for

When you're doing a code review, there are some things that aren't your responsibility to look for. To an extent, this gives responsibility to authors rather than allowing them to use the reviewer as a safety net. Note that you should probably be clear about your expectations here, as a mismatch can lead to code that has never been considered in-depth and may not function at all.

Bugs and obviously broken code

The author of the changeset wouldn't be sending it to you for review if they didn't believe it worked, and you should put the responsibility of verifying this on them. This saves you time and allows you to focus on higher-level things, which is where your main strength as a reviewer should lie. This belief may be controversial, and it has a few major caveats.

Think about the context of the change. If it has implications for security or safety, you should be reviewing it closely with an eye for finding bugs, even if you trust that the author has already done this. People make mistakes, and having a thorough reviewer can help mitigate this.

Feel free to question error and edge case handling. While the author has an obvious responsibility for verifying that the happy path works, it can be much harder to handle exceptional cases. If they haven't written code handling this, be concerned! If they have, question whether the errors can ever actually occur, and whether it's been verified to work or is covered by tests.
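As a sketch of the kind of verification worth asking for (with hypothetical names — `parse_config` and `ConfigError` stand in for whatever the changeset introduces), exceptional paths deserve tests of their own:

```python
import pytest

from config import ConfigError, parse_config  # hypothetical changeset code

def test_rejects_empty_input():
    # If the author claims this error can occur, it should be exercised.
    with pytest.raises(ConfigError):
        parse_config("")

def test_surfaces_unknown_keys():
    # Edge cases are where bugs hide; verify them explicitly rather than
    # trusting that the happy path implies correct error handling.
    cfg = parse_config("unknown_key = 1")
    assert cfg.warnings
```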

Minor stylistic choices

All programmers have differences of opinion around how they'd like to style their code, and it's easy to clash when someone with the opposite opinion enters your codebase. It's simply not worth the effort or conflict to try to enforce your style manually. If it's something you feel strongly about, feel free to point out the benefits in your review, but don't expect the author to make changes based on them (I frequently extol the virtues of inlining single-use functions in my reviews).

If it really matters to you, consider adopting an opinionated code formatter and never thinking about stylistic choices again, or just fix the style yourself after they've merged. If you must point out stylistic differences anyway, you may want to present them using a framework like do, try, consider, where you clearly specify what actions should be taken as a result of your feedback.

Things to look for

Now that we've covered what shouldn't be handled in code reviews, there are various things you definitely should be looking for.

Edge cases that you're aware of

As someone being asked to review code, you likely have a good understanding of the broader system the changes are interacting with. Are there any edge cases that the author might not have known about that this code could interact with? Think of this as a time to help ensure the code is correct as well as pass on some organizational knowledge to a teammate.

Places where documentation can be improved

Code is typically read many more times than it is written, so investing in documentation will generally pay dividends by making it easier for future engineers (including you) to know why and how things work. I'm a big fan of inline code comments for this purpose, due to the outsized tendency for external documentation to quickly go stale. Similarly, it's hard to overstate the benefits of judicious asserts as living documentation of expectations.
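A small sketch of what that looks like in practice (an invented function, but the pattern is the point): the comment records the why, and the assert enforces the expectation forever.

```python
def apply_discount(price_cents: int, discount_pct: float) -> int:
    # Prices are stored in cents to avoid floating-point rounding issues;
    # an inline comment like this outlives any external wiki page.
    # The assert documents (and enforces) the function's expectations.
    assert 0 <= discount_pct <= 100, "discounts are percentages, not fractions"
    return round(price_cents * (1 - discount_pct / 100))
```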

When you review a change that's hard to understand, doesn't have an obvious purpose, or handles edge cases, consider asking the author to improve the documentation around the change. A pernicious trend here is to instead have a discussion in the comments of your review tool, which gives you and the author clarity about the reasons behind and mechanisms of the change, but leaves any future reader in the dark. Many of the questions you ask should really be answered in the form of permanent documentation via comments in the code, and you should push for this whenever possible. Similarly, much of what makes a good commit message is often more useful placed directly in the code it's changing.

Introduction of complicated or surprising behaviour

Code should be roughly as simple as possible while still achieving the goals it set out for. A change that has complicated or surprising behaviour increases organizational tech debt, making it harder for the code to be understood or changed in the future. It's worth looking at changes like this from a broader perspective. Are there other things that could be changed to achieve the same goals? Is this feature even worth the extra burden it'll create? As a last resort, can you ensure that the surprising behaviour is well documented?
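A toy example of the difference, using invented functions — both deduplicate a list, but one requires the reader to untangle index bookkeeping while the other reads as what it is:

```python
# Surprising: in-place mutation with fiddly index management, O(n^2).
def dedupe_complicated(items):
    seen = {}
    i = 0
    while i < len(items):
        if items[i] in seen:
            items.pop(i)
        else:
            seen[items[i]] = True
            i += 1
    return items

# Simple, pure, and order-preserving; behaves the way a reader expects.
def dedupe(items):
    return list(dict.fromkeys(items))
```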

I see this as one of the better ways to help engineers grow technically. It's common for newer engineers to design overcomplicated solutions as they work myopically towards a goal. Taking time to step back and look for other solutions that simplify some gnarly code can pay dividends in their careers as well as in your future maintenance efforts on the codebase.

Non-obvious alternative changes

As a follow-on from the previous point, it's worth thinking about alternatives even if the proposed change isn't an eldritch horror. If you're an owner or stakeholder in the affected region of code, you may find yourself with much more context on how the system works than the naive author of the change being reviewed. Proposing an alternative can lead to a system that's easier to understand and maintain, as well as an author who better understands the system going forward.

As a caveat, this is an area where it's easy to ask people to do a lot of extra work for what can be perceived as little gain. You should pick your battles wisely here, as doing this occasionally may boost the standards of your codebase and the abilities of other engineers, but too often may make people wary of coming to you with changes.

Security flaws

Certain changes deserve a more intensive review process. If the affected code is security-sensitive or safety-critical, you should have a strong understanding of the change and actively look for bugs or lurking flaws. Changes that can directly impact users like this are too important to be good growth areas for other engineers, and deserve the extra effort that's required from having two people look thoroughly at the actual implementation of the code.
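To make this concrete, here's the classic example of a lurking flaw that a close review should catch (a sketch using Python's sqlite3 module):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def find_user_unsafe(name: str):
    # Flaw: building SQL via string formatting means a crafted `name`
    # (e.g. "' OR '1'='1") injects arbitrary SQL.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized queries let the driver handle escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```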

If you do find problems here, your work isn't done yet! You should come up with a hypothesis around a root cause, pass on information about the root cause to your teammates, and then check if that cause is found elsewhere nearby. In an ideal world, you and the author would work together to ensure that a similar issue never occurs again in the codebase. In a more realistic world, the author should probably add tests to at least catch some bugs of the same class, and the other engineers around you should leave with a better understanding of risks they face.

Future maintainability issues

Even if a changeset has perfect functionality and uses what you as an expert would deem the optimal design, that may not be sufficient to ensure it can be maintained in the future. Along with documentation, tests (particularly in languages lacking a good type system) can be invaluable for ensuring code can still be easily changed. Similarly, a language with a strong type system won't save the day if the type system isn't being used effectively. You should ensure authors have mitigated these gaps, or have confidence that they won't cause problems.
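As a sketch of the difference (an invented example): a bag of strings type-checks everywhere and fails only at runtime, while a small dataclass documents the structure and lets tooling catch mistakes before they ship.

```python
from dataclasses import dataclass

def shipping_cost_weak(order: dict) -> float:
    # A typo in either key is invisible to a type checker.
    return order["weight_kg"] * order["rate_per_kg"]

@dataclass
class Order:
    weight_kg: float
    rate_per_kg: float

def shipping_cost(order: Order) -> float:
    # Typos and wrong shapes are now caught by mypy/pyright, and the
    # expected structure is documented for every future reader.
    return order.weight_kg * order.rate_per_kg
```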

Unnecessary inefficiencies

Ever wondered why small webapps can require hundreds of backend servers to run? It's probably because efficiency wasn't a key design constraint when the system was built, and no serious effort has been put into remedying this. When you're reviewing code, it's worth keeping an eye out for obvious inefficiencies and pointing them out. Computers like to run code quickly, and helping them do so often won't add unnecessary complexity to your codebase.
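A minimal sketch of the sort of inefficiency that's cheap to flag in review (invented data, but a very common shape):

```python
# Hypothetical: filtering events against a collection of banned user IDs.
banned_ids = list(range(10_000))
events = [{"user_id": i % 20_000} for i in range(5_000)]

# O(n * m): every `in` check scans the entire list.
slow = [e for e in events if e["user_id"] in banned_ids]

# O(n + m): one line of reviewer feedback, dramatically faster at scale.
banned = set(banned_ids)
fast = [e for e in events if e["user_id"] in banned]

assert slow == fast
```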

When feasible, I believe you have an ethical obligation to write efficient code, and you should be a proponent of this in code reviews. Unnecessary inefficiencies not only waste valuable resources (money, electricity, users' time, etc.) at scale, but also tarnish the craft of software engineering.

This is another area where other engineers can grow by following your example, though take care not to lead them into the quagmire of over-optimization, where they reach for esoteric data structures to save a few cycles at the cost of your sanity. Additionally, don't forget about variations of Amdahl's law: optimizing a small piece of the system will yield proportionally small overall improvements, but writing those improvements may not take a similarly small amount of time.
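For reference, the standard statement of Amdahl's law: if the code being optimized accounts for a fraction p of total runtime and is made s times faster, the overall speedup is

```latex
\text{speedup} = \frac{1}{(1 - p) + p / s}
```

So optimizing code responsible for 10% of runtime (p = 0.1) can never improve overall performance by more than about 11%, no matter how large s gets.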

External interfaces

Pay close attention to public and external interfaces, where mistakes can be costly to fix and may be permanent. While poor design decisions inside a service can often be fixed with a bottle of Mountain Dew and a night of refactoring, you may find that cleaning up an external API which thousands of users rely on takes months of additional work.
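One defensive pattern worth suggesting in these reviews (a sketch, not a universal rule): in Python, keyword-only parameters prevent callers from depending on argument order, which leaves room for the interface to grow without breaking anyone.

```python
def create_user(*, email: str, display_name: str, invite_quota: int = 5) -> dict:
    # The `*` forces every caller to name its arguments.
    return {"email": email, "display_name": display_name,
            "invite_quota": invite_quota}

# Existing call sites keep working when a defaulted parameter like
# `invite_quota` is added later.
create_user(email="ada@example.com", display_name="Ada")
```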

Insufficiently reviewed areas

Sometimes you may not be the best reviewer for a piece of code, and you should attempt to proactively identify this and suggest a better reviewer for that section. This can be more work than glossing over a section you don't understand, but in the long run should lead to a better codebase, and both you and the author having a fuller understanding of it.

Review outcomes

At the end of a code review, the outcome should be a set of feedback and either an approval or a rejection of the changeset. Some teams treat all feedback as blocking, preventing the changeset from moving forward until all of it is addressed. This is fundamentally incorrect.

Feedback should come in two forms: blocking feedback and improvements. Blocking feedback encompasses things like an incorrect system design, security flaws, or issues that will definitely cause problems down the line. You should make it clear to the author that this type of feedback is important and needs to be addressed before development can proceed.

Improvements should be pointed out to the author as your perspective, potentially with some measure of how much they matter to you, but left up to the author's best judgement on whether to implement. With the assumption of good faith on the author's part, you should be able to convince them to make improvements to their code before they continue, and may want to reconsider your feedback if they disagree. In many cases, it's not even necessary to re-review the code after improvements are made, and development can proceed without blocking under the assumption that they'll implement the good ideas you've presented.

You may notice authors not acting in good faith after you've suggested improvements, with them agreeing (or not disagreeing) that the improvements would be valuable but not implementing them. If this continues to happen, you should switch to treating feedback for that author as blocking until the two of you can come to an agreement.

Conclusion

Done well, code reviews can have significant benefits for your teammates and across your organization. Done poorly, they can impede development, lead to conflicts, and result in a codebase resembling a night at Olive Garden. Hopefully my thoughts here will help you be more intentional about the way you approach code reviews and the benefits you receive from them.
