
My Response to "Avoid Lazy Loading in ASP.NET"

Shawn Wildermuth wrote a post, Avoid Lazy Loading in ASP.NET, in which he argues that web applications should not use the lazy loading features in ORMs. His case against lazy loading rests on potential problems, including increased page latency and extra load on the database server. He would rather sidestep those pitfalls by not using lazy loading at all.

For reference, here's the documentation on lazy loading in EF Core. 
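To ground the discussion, here's roughly what opting into lazy loading looks like in EF Core 2.1 with the Microsoft.EntityFrameworkCore.Proxies package. This is just a minimal sketch; the Blog and Post entities (and the connection string) are made up purely for illustration.

    using System.Collections.Generic;
    using Microsoft.EntityFrameworkCore;

    public class Blog
    {
        public int Id { get; set; }
        // Navigation properties must be virtual so the generated proxy can
        // intercept access and load the related rows on first use.
        public virtual ICollection<Post> Posts { get; set; }
    }

    public class Post
    {
        public int Id { get; set; }
        public virtual Blog Blog { get; set; }
    }

    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options)
            => options
                .UseLazyLoadingProxies()                      // opt in to lazy loading
                .UseSqlServer("your-connection-string-here"); // provider choice is just an example
    }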

I agree with his position that lazy loading can lead to performance issues, but I disagree with the assertion that lazy loading should be avoided.

My main issue is the advice to avoid the lazy loading feature outright in web applications. I am not a fan of blacklisting a technology because of potential issues.

Instead, I'd rather see an explanation of the potential pitfalls, the conditions under which they occur, and how those problems manifest in web applications.

I wrote up my comments on his post, but they got marked as spam. Below is the comment I left on Shawn's post. I'm also working on a small project using EF Core 2.1 lazy loading to do a performance analysis.



I agree that lazy loading can have a performance impact, but that shouldn't prevent someone from using it.

If we follow this argument, I could turn it around and advise folks to avoid ORMs because they limit scalability. Or I could argue that at high loads, the use of garbage collected languages will limit scalability. But those kinds of statements can't be made generally. You can't possibly know the requirements for every project, or what the bottlenecks might be.

You also didn't include any empirical evidence that illustrates the performance issues.

I've built and scaled web applications that use lazy loading, and lazy loading was rarely an issue. A properly tuned connection pool helps ensure that the web application doesn't make too many simultaneous queries to the DB server. When lazy loading does cause performance problems, load on the database server can be a factor, but you're more likely to see a spike in network traffic, CPU, and memory first, due to the number of queries being executed and the conversion of the result sets into objects. Latency can also increase from the overhead of the additional queries, though if there are only a small number of queries, this shouldn't add much. All of these things differ from project to project, so it's hard to say whether the overhead will be a problem. It also depends on how much traffic the website is getting, and, if you're at sufficient scale, on your monthly hosting bill.
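To make that overhead concrete, here's a hedged sketch, reusing the hypothetical Blog and Post entities from the earlier snippet, of the classic N+1 query pattern lazy loading can produce, alongside the eager loading alternative you'd reach for if profiling shows the extra round trips matter.

    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    public static class LazyVsEagerDemo
    {
        public static void Run(BloggingContext context)
        {
            // With lazy loading, touching the navigation property issues one
            // extra query per blog -- the classic N+1 pattern.
            var blogs = context.Blogs.ToList();      // 1 query
            foreach (var blog in blogs)
            {
                var postCount = blog.Posts.Count;    // +1 query per blog
            }

            // If profiling shows those round trips matter, eager loading
            // fetches the related rows up front in a single query instead.
            var blogsWithPosts = context.Blogs
                .Include(b => b.Posts)
                .ToList();
        }
    }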

Another thing not mentioned in this post is monitoring of web applications. APMs such as New Relic make spotting these kinds of issues fairly easy. If you are having performance issues, you can slice and dice the web requests, looking for the offenders, and optimize the code as necessary. Any design decision can impact performance, not just lazy loading.
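Even without a full APM, EF Core can log the SQL it generates, which makes a burst of repeated queries easy to spot during development. Here's a minimal sketch; note it assumes a newer EF Core version (5.0 or later) where LogTo is available, whereas on EF Core 2.1 you'd wire up a logger factory instead.

    using System;
    using Microsoft.EntityFrameworkCore;

    public class MonitoredBloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options)
            => options
                .UseLazyLoadingProxies()
                .UseSqlServer("your-connection-string-here")
                // Write every generated SQL statement to the console; a run of
                // near-identical SELECTs is the telltale sign of an N+1.
                .LogTo(Console.WriteLine);
    }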

I'm also curious who the intended audience of this blog post is. I assume you're trying to keep future developers from falling into traps with lazy loading in web applications. But think about the person on the other end reading this. Should they rewrite an application that uses lazy loading in light of this article, with no performance indicators that lazy loading is causing issues? I don't think applications should be rewritten because lazy loading might become an issue in the future. Or think about a senior dev who's considering lazy loading in their design. Should they rework the design because of the potential for issues down the road?

I appreciate that you have a disdain for the use of lazy loading in web applications, and the awareness this post brings to the potential pitfalls. But I don't think we should outright ban the use of lazy loading for every web app ever.

My advice on any technology choice is to monitor performance regularly, and optimize as necessary. If the technology choice makes sense in the design and performs well enough, use it.
