Notes on Getting Started with JMeter

If you're new to performance testing and just starting out with JMeter, like me, it can be a little daunting. There's a lot to learn, and no single thing you can read will make it all click; you just have to play with it for a while. Here are a few things that might help you get started.

Reference Material

Ophir Prusak from BlazeMeter has a couple of good intro videos on JMeter.
What I like about these videos is that they're clear, well explained with just the right amount of information, and aren't too technical. They're aimed at promoting the BlazeMeter product, but they focus mostly on JMeter itself and contain a lot of useful information.

WebPageTest.org is a handy open source performance tool to use in conjunction with JMeter. It can analyze page load speeds of applications available on the web.

Mobile Web Performance - Getting and Staying Fast

This is a nice introduction to website performance on mobile devices.

Basic Web Performance Testing With JMeter and Gatling

My original blog post on getting started with JMeter and Gatling. 

Notes

JMeter Tests Back-end Only

This might be obvious, but JMeter tests the back end of a website. It doesn't load HTML responses into a browser or run any JavaScript. If you want to measure the page load speeds users will actually notice, you need other tools. For example, use JMeter to generate load on your site, then use Selenium / WebDriver to load pages and measure what the user sees.

Users = Threads

This is basic JMeter, but worth noting for beginners. In JMeter, a user is represented as a thread. If you want to simulate more users, you add more threads to a thread group.

There are only a limited number of threads JMeter can handle, since each JMeter thread uses an OS thread. The maximum number of threads is machine dependent, but you can expect around 1K threads at best. Beyond that, you'd need a cluster of machines running JMeter.
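The users-as-threads model can be sketched in a few lines of Python. This is a toy illustration of the design, not JMeter code: each "user" runs in its own OS thread and fires requests in a loop, just like a JMeter Thread Group with a loop count.

```python
import threading

NUM_USERS = 5     # like the thread count in a JMeter Thread Group
ITERATIONS = 10   # like the Thread Group's loop count

counter_lock = threading.Lock()
requests_sent = 0

def user():
    """One simulated user: send ITERATIONS requests, one per loop."""
    global requests_sent
    for _ in range(ITERATIONS):
        # A real load generator would send an HTTP request here,
        # then sleep for the think time (JMeter's Constant Timer).
        with counter_lock:
            requests_sent += 1

threads = [threading.Thread(target=user) for _ in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(requests_sent)  # 5 users x 10 iterations = 50 requests
```

Because each simulated user is a real OS thread, this sketch also shows why the thread count tops out: every additional user costs a thread's worth of stack and scheduling overhead.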

Request Rates 

The measure of the load put on the web server is the number of requests sent to the server in a given period of time (e.g. requests per second). To increase the load, you can increase the number of users (threads), and / or decrease the amount of delay (aka think time) between requests.

To estimate the load put on the server, divide the number of threads by the think time. E.g. 100 users / 10 s = 10 requests per second. This is an upper bound, of course, since it doesn't factor in the time the server takes to return each response, but for a low-latency site the math is accurate enough.
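This back-of-the-envelope math can be captured in a small helper. The latency adjustment is my own addition to the formula above: each user completes one request every (think time + response time) seconds, so adding latency to the denominator tightens the upper bound.

```python
def estimated_rps(users, think_time_s, server_latency_s=0.0):
    """Estimate requests per second for a closed-loop load test.

    Each user sends one request, waits server_latency_s for the
    response, then pauses think_time_s before the next request.
    With zero latency this is the upper bound from the text.
    """
    return users / (think_time_s + server_latency_s)

print(estimated_rps(100, 10))        # 10.0 -- the upper bound from the text
print(estimated_rps(100, 10, 0.5))   # slightly lower once latency is factored in
```

Going the other direction is just as useful: to hit a target rate, threads = target RPS x (think time + latency).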

Tip: Use a Constant Timer in your test plan to simulate think time. Then you can tweak the think time and the number of threads to get the desired load you're after.

Tip: Use a Throughput Shaping Timer (from the Standard Plugins) to level out your request rate. As the number of threads increases, you may see peaks and valleys in the hit rate. The Throughput Shaping Timer can ensure that the hit rate doesn't exceed the configured value. You might want to do this if you're measuring CPU usage on the app server, and CPU is sensitive to the hit rate.
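The "leveling" a Throughput Shaping Timer does boils down to rate limiting: a request may not fire until its time slot comes up. Here's a minimal single-threaded sketch of that idea in Python; the class name and structure are my own, not anything from the plugin.

```python
import time

class RateLimiter:
    """Cap calls to at most max_per_second by spacing them evenly."""

    def __init__(self, max_per_second):
        self.interval = 1.0 / max_per_second
        self.next_slot = time.monotonic()

    def acquire(self):
        """Block until the next time slot, then claim it."""
        now = time.monotonic()
        wait = self.next_slot - now
        if wait > 0:
            time.sleep(wait)
        self.next_slot = max(self.next_slot, now) + self.interval

limiter = RateLimiter(max_per_second=50)
start = time.monotonic()
for _ in range(100):
    limiter.acquire()  # each simulated request waits for its slot
elapsed = time.monotonic() - start
# 100 requests capped at 50/s should take roughly 2 seconds,
# regardless of how fast the loop could otherwise run.
```

That "regardless of how fast the loop could run" property is the point: with a shaped rate, adding threads changes concurrency but not the hit rate, which keeps CPU measurements on the app server comparable between runs.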

Standard Plugins

The Standard JMeter Plugins are pretty handy to have in your toolbox. I especially like the Ultimate Thread Group and PerfMon plugins, as well as the extra graphing plugins (Response Times Over Time, Active Threads Over Time, and Hits Per Second).
