I see human code reviews as one tool in the quality toolbox. To keep code reviews interesting and engaging, humans should be the last link in the chain and should get the most interesting problems. If a review is burdened with pointing out that an opened resource was never closed, or that a particular path through the code can never be taken, it becomes draining and boring. I also believe that code reviews need to scale to teams that are not co-located. That might mean an asynchronous process, such as a workflow system, or collaboration tools that support reviews over teleconference and screen sharing. A workflow system can block code from being promoted into the mainline build until one or more reviewers have accepted it.

To keep reviews interesting and challenging, I give the grunt work to the machines and run static analysis and profiling tools first. Before the humans get involved, the code must pass the suite of static analysis tests at the prescribed level. This weeds out the typical mistakes that are larger than what a compiler finds. Many analysis and profiling tools are available, both open source and commercial. Most of my development work is in server-side Java, and my analysis tools of choice are FindBugs, PMD, and the profiling tool in Rational Software Architect (RSA). FindBugs is a byte code analyzer, so it looks at what the Java compiler produces and is less concerned with the form of the source code. PMD analyzes source code directly. Both tools have configurable thresholds for problem severity, and both accept custom problem patterns. PMD ships with a large library of problem patterns, including checks for overly complex or overly long functions and methods. The RSA profiling tool measures timing only down to the method level of classes, but it can quickly help a developer focus on where the sluggish parts of a system are hiding, which is valuable information going into a review.
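As a minimal sketch of the kind of grunt work this offloads, the snippet below shows the unclosed-resource bug mentioned above in the form FindBugs reports it (its "may fail to close stream" pattern), next to a try-with-resources version that passes the analyzer cleanly. The class and method names here are illustrative, not from any particular codebase.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceLeakDemo {

    // Leaky: if readLine() throws, close() is never reached and the file
    // handle leaks. A static analyzer flags this without any human help.
    static String firstLineLeaky(Path file) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(file.toFile()));
        String line = reader.readLine();
        reader.close(); // skipped entirely if the read above throws
        return line;
    }

    // Clean: try-with-resources closes the reader on every exit path,
    // so the analyzer has nothing to report and reviewers never see it.
    static String firstLineSafe(Path file) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(file.toFile()))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello reviewer".getBytes());
        System.out.println(firstLineSafe(tmp)); // prints "hello reviewer"
        Files.delete(tmp);
    }
}
```

Catching this mechanically, before the meeting, is exactly what keeps the human review focused on design rather than bookkeeping.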
Once the code makes it through this array of automated tests, bring in the humans to look at it and give their input. In our case, I have found that this approach changes the review from a potentially adversarial situation into one with an educational tone. The review meeting, if it happens synchronously, is not overtaken by small problems and basic mistakes; it concerns itself with higher-level recommendations for improving the larger design.

FindBugs, University of Maryland, http://findbugs.sourceforge.net/
PMD, SourceForge, http://pmd.sourceforge.net/
Rational Software Architect for WebSphere Software, IBM, http://www-01.ibm.com/software/awdtools/swarchitect/websphere/