Categories: Code Development, General, Testing | Posted by nurih on 11/8/2008 3:32 PM

If you ask the average developer what might be done to improve code, they would probably come up with "use design patterns", "do code reviews", or even "write unit tests". While all of these are valid and useful, it is rare to hear "measure it". That's odd when you think about it, because most of us consider ourselves scientists of sorts. Some of us obtained a degree in computer science, and we view the coding practice as a deterministic endeavor. Why is it, then, that we don't measure our work using standard methodologies, objective tools, and evidence?

For one, some of us are blissfully unaware that such methods exist. Indeed, the science of measuring code quality has lived in university halls more than it has been practiced in the "real" world. Six Sigma and CMMI are probably the most familiar endeavors to prescribe some sort of measure-and-improve cycle for the coding practice, but both say precious little about measuring the code itself. Rather, they focus on the results of the software endeavor, not on the "internal quality" of the code.

Another reason for the low adoption of code quality measurement is a lack of tools. We have a wealth of guidance instruments, but fewer that focus on code quality. For example, FxCop and the addition of Code Analysis to VSTS have contributed hugely to code reviewing and to uniform coding across teams. But let's face it – with so much guidance, it's all too easy to either dismiss the whole process as "too picky" or focus too much on one aspect of coding style rather than on the underlying runtime binary. That is to say, what would be considered "good style" may well not yield a good runtime, and vice versa.

For a professional tool that enables you to view, understand, explore, analyze, and improve your code, look no further than NDepend (www.ndepend.com). The tool is quite extensive and robust, and has matured in its presentation, exploration, and integration capabilities, becoming a great value for those of us interested in digging deeper than the "my code seems to work" crowd.

The installation is fairly straightforward. You pretty much unpack the download and place your license file in the installation directory. Upon running the tool, you can choose to install integration with VS2005, VS2008, and Reflector (now a Red Gate property, btw).

Before using the tool for the first time, you can watch a few basic screencasts available from NDepend. The videos have no narration, so I found myself using the pause button when the text balloons flew by a bit too quickly, but that's no big deal in a 3-5 minute video. Once you get comfortable with the basics, you can reap the benefits almost immediately. Through a very relevant set of canned queries and screens, you can quickly get a feel for how your code measures up. A graphic "size-gram" presents methods, types, classes, namespaces, or assemblies in varying sizes according to measures like lines of code (LOC – either of the source itself or of the resultant IL), cyclomatic complexity, and other very useful views of code cohesiveness and complexity. This visual lets you quickly identify or drill into the "biggest offender".
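As a point of reference for what a metric like cyclomatic complexity actually counts: roughly, it is one plus the number of decision points in a piece of code. NDepend computes it over .NET IL; here is a minimal sketch of the same idea for Python source, purely as an illustration of the concept (this is not NDepend's algorithm, and the sample function is made up):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points.

    A simplified illustration of the metric, not NDepend's IL-based
    computation. Counts if/for/while branches plus extra boolean clauses.
    """
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b" adds one more path beyond the first clause
            decisions += len(node.values) - 1
    return 1 + decisions

src = """
def classify(x, y):
    if x > 0 and y > 0:
        return "both"
    for _ in range(3):
        x -= 1
    return "other"
"""
print(cyclomatic_complexity(src))  # one if + one for + one "and" -> 4
```

A straight-line method scores 1; every branch adds a path that, in principle, needs its own test case – which is why the metric is a decent proxy for how hard a method is to verify.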

Once you choose a target for exploration, the view in the assembly-method tree, the graphic size-gram, and the dependency matrix all work in tandem: you choose an element in one, and the focal point shifts or drills down in the other two. There is also a pane, acting like a context menu, which displays the metric numbers for the selected method, field, assembly, etc. This lets you get the summary very quickly at any point in your exploration.

When you use the dependency matrix, methods or types and their dependents are easily correlated. One measure of code quality is how tightly different types are coupled, or dependent on each other. The theory is that if a dependency tree is too deep or too vast, a change in one type will ripple through a lot of code, whereas a shallow or narrow dependency tree will be less dramatically affected by change. So it's a great thing to have a measure of the dependency relationships among your classes and assemblies. This measure tends to matter most in the maintenance phase, but of course it is just as useful during the initial prototype/refactor cycles before release.
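To make the ripple argument concrete, here is a small sketch (the type names and dependency data are invented for illustration) that computes the set of types transitively affected by a change, given a map from each type to its direct dependents; the larger this set, the costlier the change:

```python
def ripple(dependents, changed):
    """Return every type transitively affected by a change to `changed`.

    `dependents` maps a type name to the set of types that directly
    depend on it (i.e., would need re-checking if it changed).
    """
    affected, stack = set(), [changed]
    while stack:
        for d in dependents.get(stack.pop(), ()):
            if d not in affected:
                affected.add(d)
                stack.append(d)
    return affected

# Hypothetical dependency data:
deps = {
    "OrderBase": {"WebOrder", "PhoneOrder"},
    "WebOrder": {"CheckoutPage"},
    "Logger": set(),
}
print(sorted(ripple(deps, "OrderBase")))  # deep: three types affected
print(sorted(ripple(deps, "Logger")))     # shallow: nothing affected
```

The dependency matrix gives you exactly this kind of picture at a glance, without having to build the graph by hand.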

Another great feature is the dependency graph, which produces a visual map of the dependencies among the analyzed assemblies. I have found it very useful when "cold reading" legacy code I was charged with maintaining. Using the visualization, I could determine what's going on and understand how the pieces of code work together much more quickly than by painstakingly following along with bookmarks and "following the code" in a debugger.

As for the metrics themselves, you will probably develop your own policy regarding the measures and their relevance. For one, the numbers are great as a relative comparison of various pieces of code. You may find that some dependencies are "very deep" – which in theory is "bad" – but that the indication points to a base class you designed very well, one that serves as the base for everything. For an extreme example, most of us will agree that the "deep dependency" on System.String is well justified and doesn't merit change. It is important to understand and digest the metrics in context, and to draw the appropriate conclusions.

The tool is built on an underlying query technology called CQL. Once a project is analyzed, the database of findings is exposed through built-in queries. These queries can be modified, and new queries can be written to correlate the factors that matter to you. Quite honestly, I have not yet reached the point of needing customization; the existing presentations are very rich and useful out of the box. One instance where you might want a custom query is to exclude known "violations" by adding a where clause, thereby preventing code you have already analyzed and mitigated from appearing and skewing the view of the rest of your code.
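For a sense of the shape of such a customization (CQL itself reads along the lines of SELECT METHODS WHERE ... – see NDepend's documentation for the exact syntax), here is the exclusion idea sketched in Python over a made-up findings table:

```python
# Made-up analysis results; the method names and field names are
# illustrative only, not NDepend's schema.
findings = [
    {"method": "Order.Recalculate", "il_loc": 240, "complexity": 31},
    {"method": "Invoice.Render",    "il_loc": 310, "complexity": 12},
    {"method": "Util.Hash",         "il_loc": 180, "complexity": 25},
]

# Violations you have already reviewed and chosen to accept:
mitigated = {"Util.Hash"}

# The "where clause": over-complex methods, minus the known cases.
flagged = [f["method"] for f in findings
           if f["complexity"] > 20 and f["method"] not in mitigated]
print(flagged)  # only the unreviewed offender remains
```

Keeping the mitigated list in the query itself means the report stays actionable: everything it shows is something you have not yet dealt with.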

In summary, I found NDepend very useful in examining both legacy and new code. It gave me insights beyond empirical, style-oriented rules. A complexity measure or an IL LOC count is much more informative to me than a rule like "methods should not span more than two screens". Microsoft does include code metrics in VS 2010, and code analysis in VSTS or the testing editions. If those are not within your budget, you can have NDepend today and gain valuable insight right away. I would advise taking it slow in the beginning, because there is a slight learning curve to the tool's usage and navigation, and ascribing the right weight to the findings takes time. But once you get the hang of it, it becomes indispensable.