
Software Quality

Greg Finzer asked a question in a LinkedIn group: "What practices do you use to improve the quality of an application?". Greg shared his ideas on improving application quality here, focusing on developer practices that increase quality. I have my own ideas on what quality means and how to improve it. Here's my take on it.

How do you define quality?

Quality is hard to define, and I don't know of any satisfactory definition (much like architecture). In Zen and the Art of Motorcycle Maintenance, the narrator (Pirsig) spends a great deal of time trying to define quality, a pursuit that eventually drove him insane (and required shock therapy).

One way to define quality might be how close the item comes to an ideal, when viewed through different perspectives.

For example, for a software project, we can look at quality from the perspective of the end user - does the application work? How easy is it to use?

We can look at code quality - How complex is the code? How much of the complexity is inherent to the problem, and how much is accidental? Is the code well factored? Is the code layered?

We might look at quality through the eyes of a tester. How many bugs are being found? How often are bugs being found? How testable is the system?

We might look at quality through the eyes of a graphic designer. How well is the UI laid out? Is terminology appropriate and consistent?

There are many other ways of looking at quality. Many of these views are subjective. You can find quality issues by applying heuristics from each point of view. For example, developers might apply the SOLID principles to determine how clean the code is. Graphic designers can use design heuristics to spot issues with the design.
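To make the SOLID heuristic concrete, here's a small sketch of a Single Responsibility review in Python. The class and method names are my own invention for illustration, not from any real code base:

```python
# Before: a class with two reasons to change. It computes an invoice
# total (domain logic) and also persists it (infrastructure). A
# SOLID-minded reviewer would flag this mix of responsibilities.
class Invoice:
    def __init__(self, line_items):
        self.line_items = line_items  # list of (description, price) pairs

    def total(self):
        return sum(price for _, price in self.line_items)

    def save(self, path):  # persistence concern leaking into a domain class
        with open(path, "w") as f:
            f.write(str(self.total()))


# After: the domain class keeps a single responsibility, and
# persistence moves to a separate collaborator.
class InvoiceRepository:
    def save(self, invoice, path):
        with open(path, "w") as f:
            f.write(str(invoice.total()))
```

The point isn't the specific refactoring; it's that a shared heuristic gives reviewers a vocabulary for spotting and discussing these issues.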

Of course, there are some ways of quantifying quality. From a developer perspective, we might use static analysis tools to identify issues in our code bases. We might use automated testing tools to run regression tests that validate the application functions as expected. We might take performance measurements to verify latency and responsiveness are within tolerances.
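As a minimal sketch of the automated-regression-test idea, here's a tiny example in Python. The function under test is a made-up example, not from any particular project:

```python
def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)


# A tiny regression suite: assertions like these run on every build
# and catch behavioral changes before users do.
assert parse_version("2.7") == (2, 7)
assert parse_version("10.0") == (10, 0)
```

Checks like this are cheap to write and give an objective, repeatable signal, unlike the subjective views of quality above.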

Regardless of which view of quality you take or how you quantify it, the only way to improve "Quality" is to have passionate people devoted to improving it. You need to have quality as a core value.

I don't think any particular coding practice will lead to better quality. It takes skilled developers applying good design skills to deliver quality products.

If I had to pick, automated testing (any flavour, I don't believe in being dogmatic about any one technique) and good old design skills would be the practices to focus on.

Rich Hickey has a good talk called "Simple Made Easy" that I'd recommend.

Here are a few books I'd recommend.

"The Software Craftsman" by Sandro Mancuso

"The Pragmatic Programmer" by Dave Thomas and Andy Hunt

"Domain Driven Design" by Eric Evans

"Clean Code" by Robert C. Martin (aka Uncle Bob)

"Working Effectively with Legacy Code" by Michael Feathers

