Multi-Threaded NUnit Tests

Recently I needed to reproduce an Entity Framework deadlock issue. The test needed to run in NUnit and involved firing off two separate threads. The trouble is that an exception thrown on a worker thread doesn't propagate to the test thread, so the test completes without failing.

For example, here's a test that starts two threads: the first simply logs to the console, while the second throws an exception. I expected this test to fail. However, it actually passes.

readonly ThreadStart[] delegates = {
    () => {
        Console.WriteLine("Nothing to see here");
    }, () => {
        throw new InvalidOperationException("Blow up");
    }
};

[Test] 
public void SimpleMultiThreading() {
    var threads = delegates.Select(d => new Thread(d)).ToList();
    foreach (var t in threads) {
        t.Start();
    }

    foreach (var t in threads) {
        t.Join();
    }
}

Peter Provost posted an article that describes how to make this test fail. It works by wrapping the thread code in a CrossThreadTestRunner class, which catches any exception thrown inside the thread and rethrows it on the calling thread after Thread.Join returns. Peter also posted another article describing how to use the CrossThreadTestRunner class.

Here's a partial code listing of the CrossThreadTestRunner class (ThrowExceptionPreservingStack is shown in the full listing below):

public class CrossThreadTestRunner {
    private ThreadStart userDelegate;
    private Exception lastException;
    
    public CrossThreadTestRunner(ThreadStart userDelegate) {
        this.userDelegate = userDelegate;
    }
    
    public void Run() {
        Thread t = new Thread(new ThreadStart(MultiThreadedWorker));
        t.Start();
        t.Join();

        if (lastException != null) {
            ThrowExceptionPreservingStack(lastException);
        }
    }
    
    private void MultiThreadedWorker() {
        try {
            userDelegate.Invoke();
        }
        catch (Exception e) {
            lastException = e;
        }
    }
}

Peter's solution works if you are only using a single thread per test. To run multiple threads in a test, some tweaks are required, because the CrossThreadTestRunner.Run method starts the thread and then immediately calls Thread.Join to wait for it to complete. Since Thread.Join blocks the calling thread until the target thread exits, this prevents the threads from running in parallel.

The change to allow parallel thread execution is to split the Run method into a Start and a Join method, mirroring the corresponding methods on the Thread class. Here's the modified listing:

public class CrossThreadTestRunner {
    private ThreadStart userDelegate;
    private Exception lastException;
    private Thread thread;
    
    public CrossThreadTestRunner(ThreadStart userDelegate) {
        this.userDelegate = userDelegate;
        this.thread = new Thread(new ThreadStart(MultiThreadedWorker));
    }
    
    public void Start() {
        thread.Start();
    }
    
    public void Join() {
        thread.Join();

        if (lastException != null) {
            ThrowExceptionPreservingStack(lastException);
        }
    }
    
    private void MultiThreadedWorker() {
        try {
            userDelegate.Invoke();
        }
        catch (Exception e) {
            lastException = e;
        }
    }
}

Using the modified version of the CrossThreadTestRunner, it's now possible to make the original test fail:

readonly ThreadStart[] delegates = {
    () => {
        Console.WriteLine("Nothing to see here");
    }, () => {
        throw new InvalidOperationException("Blow up");
    }
};

[Test] 
public void BetterMultiThreading() {
    var threads = delegates.Select(d => new CrossThreadTestRunner(d)).ToList();
    foreach (var t in threads) {
        t.Start();
    }

    foreach (var t in threads) {
        t.Join();
    }
}

One final tweak turns this into a green test: add the ExpectedExceptionAttribute to the test method, indicating that the test is expected to throw the given exception.

The final test looks something like the following:

readonly ThreadStart[] delegates = {
    () => {
        Console.WriteLine("Nothing to see here");
    }, () => {
        throw new InvalidOperationException("Blow up");
    }
};

[Test]
[ExpectedException(typeof(InvalidOperationException))]
public void BetterMultiThreading() {
    var threads = delegates.Select(d => new CrossThreadTestRunner(d)).ToList();
    foreach (var t in threads) {
        t.Start();
    }

    foreach (var t in threads) {
        t.Join();
    }
}
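Note that ExpectedExceptionAttribute was removed in NUnit 3. If you're on NUnit 3, the same idea can be expressed with Assert.Throws wrapped around the Join loop. Here's a sketch (the test name is my own; it reuses the delegates field and runner class above):

```csharp
[Test]
public void BetterMultiThreadingNUnit3() {
    var runners = delegates.Select(d => new CrossThreadTestRunner(d)).ToList();
    foreach (var r in runners) {
        r.Start();
    }

    // Join rethrows the worker's exception on the test thread,
    // where Assert.Throws can observe it.
    Assert.Throws<InvalidOperationException>(() => {
        foreach (var r in runners) {
            r.Join();
        }
    });
}
```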

Here's the full listing of the modified version of the CrossThreadTestRunner class:

public class CrossThreadTestRunner {
    private Exception lastException;
    private readonly Thread thread;
    private readonly ThreadStart start;

    private const string RemoteStackTraceFieldName = "_remoteStackTraceString";
    private static readonly FieldInfo RemoteStackTraceField = typeof(Exception).GetField(RemoteStackTraceFieldName, BindingFlags.Instance | BindingFlags.NonPublic);

    public CrossThreadTestRunner(ThreadStart start) {
        this.start = start;
        this.thread = new Thread(Run);
        this.thread.SetApartmentState(ApartmentState.STA);
    }

    public void Start() {
        lastException = null;
        thread.Start();
    }

    public void Join() {
        thread.Join();

        if (lastException != null) {
            ThrowExceptionPreservingStack(lastException);
        }
    }

    private void Run() {
        try {
            start.Invoke();
        }
        catch (Exception e) {
            lastException = e;
        }
    }

    // Writes the current stack trace into the exception's remote stack
    // trace field before rethrowing, so the original trace isn't lost.
    [ReflectionPermission(SecurityAction.Demand)]
    private static void ThrowExceptionPreservingStack(Exception exception) {
        if (RemoteStackTraceField != null) {
            RemoteStackTraceField.SetValue(exception, exception.StackTrace + Environment.NewLine);
        }
        throw exception;
    }
}
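As an aside, the private _remoteStackTraceString reflection trick predates .NET 4.5. On .NET 4.5 and later, ExceptionDispatchInfo rethrows an exception while preserving its original stack trace, with no reflection required. A sketch of an alternative Join, assuming the same fields as the class above:

```csharp
using System.Runtime.ExceptionServices;

public void Join() {
    thread.Join();

    if (lastException != null) {
        // Capture/Throw rethrows the exception with its original
        // stack trace preserved, replacing the reflection hack.
        ExceptionDispatchInfo.Capture(lastException).Throw();
    }
}
```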

The code for this article is on GitHub.
