
Self Reflection of a Software Developer

The recent debate over #IsTDDDead is very interesting to me. Not because I believe TDD is actually dead, but because I think the claim goes against the core beliefs of some software developers, especially those who practice TDD regularly and for whom it has become integral to how they build software.

Now that the shock of the proclamation that TDD is dead has subsided a little bit, we can start examining the reasons prompting this statement, and maybe incorporate some of these thoughts back into our own views of software development.

But for that to happen, we need to reflect on our own view of the world, and be open to the possibility that some of our views might be wrong. Admitting that we're wrong can be hard, especially when it's our long-standing beliefs that are in question.


I approach software development assuming that there is a better way, but I haven't found it yet. As a result, I'm always in pursuit of a better way.

Who better to learn from than those with decades of experience who publish their tried-and-true methods? I try to read as many software development books as possible, my favorites being The Pragmatic Programmer, Clean Code, Extreme Programming Explained, Refactoring: Improving the Design of Existing Code, Domain-Driven Design, The Art of Unit Testing, Pragmatic Thinking & Learning, and The Lean Startup.

While I'm not really tied to any specific methodology, I do appreciate the benefits of TDD. Having a suite of tests that you rely on to make the ship-or-not decision is very compelling. Being able to refactor safely is also a nice benefit, especially if you take an incremental approach to evolving your architecture.
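As a rough sketch (not from the original post), here is the kind of small, behavior-focused test such a suite is built from. The PriceCalculator class and its applyDiscount method are hypothetical names used only for illustration, and the example assumes JUnit 5.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example: a tiny test written before the production code,
// in the usual red-green-refactor rhythm. Enough fast, green tests like
// this are what make the ship-or-refactor decision feel safe.
class PriceCalculatorTest {

    @Test
    void appliesPercentageDiscountToOrderTotal() {
        PriceCalculator calculator = new PriceCalculator();

        // 10% off a 200.00 order should come to 180.00.
        assertEquals(180.00, calculator.applyDiscount(200.00, 0.10), 0.001);
    }
}

// Minimal implementation the test drives out; it can now be reshaped
// freely as long as the suite stays green.
class PriceCalculator {
    double applyDiscount(double total, double rate) {
        return total * (1.0 - rate);
    }
}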

The good thing about always assuming there's something you're missing is that you never stop learning, and you stay open to other people's opinions and ways of doing things.
