Clarity

My last couple of code reviews have turned up some common code smells: function names that don't adequately reflect behavior, misplaced functions, and sub-par function decomposition.

I don't think I'm the only developer afflicted with bad naming, since naming things is one of the two hardest problems in computer science. However, the function placement and improper decomposition are a signal to me that I have a clarity problem.

I've been building software for almost 12 years, and I take my coding very seriously. I'm pretty sure I have OCD tendencies, especially when it comes to code. Ever have to shut off your car when you realize that the emblem on your key is facing the wrong way? I have :). 

My desk is littered with programming books like The Pragmatic Programmer, The Practice of Programming, Clean Code, Refactoring, and Domain-Driven Design. But despite my years of development and the many programming books I've read, I still find software development challenging.

Writing simple, clear code is hard. It takes a lot of practice to get good at it.

I think that in order to write clear code, you need to have clarity. You need to get your mind right.

In general, I code using a "make it work, make it right, make it fast" approach. When I'm starting a new feature, I start out by roughing in a working prototype to determine roughly what code needs to be written and where it needs to go. At this point, I'm not really concerned with naming things, since it's still a sketch that will change many times. I don't want to waste a lot of time coming up with good names during the first pass.

For me, the code is still too much in flux at this point to spend a lot of time on names, since functions are being added, renamed, and removed often. I give rough names, pretty much the first name that comes to mind, because it's most likely going to change anyway.

After getting the prototype feature-complete and all tests passing, I start refactoring: cleaning up the design, eliminating SOLID violations, removing duplicated code, etc. Once I'm satisfied with the overall look of the code, I get a code review.

My last couple of code reviews turned up the same problem: lack of clarity. Primarily, function names that don't do what you would expect from just reading the name. Ward Cunningham's definition of clean code, code that does pretty much what you expect it to, is always in the back of my mind. The problem seems to be that a function name meets this criterion for me, but not for someone else.

Having the clarity discussion during code reviews has resulted in better, intention-revealing names.
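
To make the idea concrete, here's a small, contrived Java sketch. The class, method, and field names (UserDirectory, getUser, findOrCreateUser) are hypothetical, invented for illustration, not code from the reviews above. Both methods do exactly the same thing; only the second name tells a caller that a write might happen.

import java.util.HashMap;
import java.util.Map;

// Hypothetical example types, invented purely for illustration.
class User {
    final String email;
    User(String email) { this.email = email; }
}

class UserDirectory {
    private final Map<String, User> byEmail = new HashMap<>();

    // Misleading: the name promises a read-only lookup, but the method
    // quietly creates a user when none exists. It "does what you expect"
    // only if you already know the implementation.
    User getUser(String email) {
        return byEmail.computeIfAbsent(email, User::new);
    }

    // Intention-revealing: the name states exactly what will happen,
    // so the code does pretty much what a reader expects it to.
    User findOrCreateUser(String email) {
        return byEmail.computeIfAbsent(email, User::new);
    }
}

The behavior is identical; the difference is whether the reader of the calling code is surprised later.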
