In a previous post I described the sync-over-async problem and the issues you’ll encounter when blocking threads in ASP.NET web applications. Applications that suffer from sync-over-async issues can behave erratically; the most common symptoms are increased latency caused by thread-pool starvation and deadlocks that lead to service outages. Many of these problems only present themselves when an application receives significant bursts of traffic, which makes them hard to debug. In my experience, there is a lot of confusion regarding async/await in ASP.NET web applications, especially around this topic. In an effort to provide evidence for why sync-over-async code should be avoided, I’ve performance tested 10 scenarios using an ASP.NET web application to demonstrate the effect this pattern has on the scalability of your application.
Performance Testing Set-Up
To performance test the web application and the async scenarios, I used Crank, the benchmarking tool the .NET team uses to run benchmarks for the TechEmpower Web Framework Benchmarks (and others). Crank configures and runs the application under test, and also provides the tools necessary to generate varying amounts of traffic. Internally, Crank can use the CLI tool Bombardier to generate HTTP requests for load testing. Bombardier is my go-to load testing tool for web applications.
All the performance tests were executed on a desktop computer with an AMD Ryzen 9 5950X processor (16 cores / 32 threads) and 64 GB of RAM. At the time of writing, this CPU cost ~$800.
The Test Application
The test application contains a controller and action methods that demonstrate different sync-over-async scenarios. Each scenario consists of an ASP.NET action method that accepts a request, performs an async operation by calling the DoAsyncOperation method, and returns a string value as a response. The DoAsyncOperation method is shown below.
private async Task<string> DoAsyncOperation(int delay)
{
await Task.Delay(delay);
return "value";
}
The DoAsyncOperation method is simple. It accepts a delay as an argument and awaits Task.Delay to asynchronously wait, which lets us simulate async operations of varying lengths. DoAsyncOperation itself is an example of a well-written async method. However, in the scenarios below it will be executed in problematic ways to determine the impact on performance.
Testing Sync-Over-Async Scenarios
For each async scenario, we execute requests over a period of 30 seconds and collect a number of different metrics during that window. These metrics provide insight into the application’s performance and the state of the thread-pool during the tests. Each scenario is tested starting from 8 concurrent connections and increasing up to 2048 concurrent connections. The number of concurrent connections controls how many HTTP requests will be in flight at the same time.
Important Metrics Collected
During the performance tests, we are going to collect the following metrics:
Max Thread-Pool Count
The maximum number of threads in the thread-pool during the performance test.
Max Thread-Pool Queue Length
The maximum number of items queued to the thread-pool waiting to be processed during the performance test. High numbers here indicate that work is being queued because there are no threads currently available.
Requests
The number of requests sent to the endpoint during the testing time period.
Bad Responses
The number of requests which return bad responses (errors + time-outs) during the testing time period.
Mean Latency
The mean latency for all requests.
90th Latency
The 90th percentile latency for all requests.
Requests Per Second (RPS) Mean
The mean number of requests per second.
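If you want to observe similar numbers in your own application, the thread-pool metrics above can be approximated in-process with the System.Threading.ThreadPool class. This is a minimal sketch, not Crank’s actual collection mechanism; ThreadPool.ThreadCount and ThreadPool.PendingWorkItemCount are available on .NET Core 3.0 and later.

```csharp
using System;
using System.Threading;

public static class ThreadPoolSampler
{
    // Returns the current number of thread-pool threads and the number of
    // work items queued but not yet started. Available since .NET Core 3.0.
    public static (int Threads, long Queued) Sample() =>
        (ThreadPool.ThreadCount, ThreadPool.PendingWorkItemCount);

    public static void Main()
    {
        var (threads, queued) = Sample();
        Console.WriteLine($"Thread-pool threads: {threads}, queued work items: {queued}");
    }
}
```

Sampling these two properties on a timer during a load test gives you a rough view of the thread-pool growth and queuing behavior reported in the tables below.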
Async Scenario #1 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioOne()
{
var value = Task.Run(() => DoAsyncOperation(50)).Result;
return Ok(value);
}
This method is problematic for a few reasons, mainly because it uses the Result property on the Task returned from Task.Run. This blocks the request thread while DoAsyncOperation executes on a second thread-pool thread. In this scenario, a single thread is blocked. The results of the performance test are shown below:
Connections | Max Thread-Pool Count | Max Thread-Pool Queue Length | Requests | Bad Responses | Mean Latency | 90th Latency | RPS Mean |
---|---|---|---|---|---|---|---|
2048 | 397 | 7,210 | 15,913 | 15,913 | 4,112ms | 4,696ms | 455 |
1024 | 408 | 7,706 | 13,312 | 13,312 | 2,421ms | 2,630ms | 404 |
512 | 392 | 1,533 | 6,656 | 6,656 | 2,339ms | 2,358ms | 202 |
256 | 402 | 3,619 | 1,428 | 1,428 | 5,547ms | 15,051ms | 38 |
128 | 208 | 128 | 60,774 | 0 | 63ms | 64ms | 2025 |
64 | 148 | 1 | 30,464 | 0 | 63ms | 64ms | 1014 |
32 | 116 | 1 | 15,200 | 0 | 63ms | 64ms | 516 |
16 | 76 | 6 | 7,632 | 0 | 62ms | 64ms | 254 |
8 | 42 | 3 | 3,816 | 0 | 62ms | 64ms | 127 |
In the results above for scenario #1, things work reasonably well up to 128 concurrent connections. At 256 concurrent connections and above, every request receives a bad response due to API timeouts. This behavior is fairly consistent across scenarios 1–9. The remaining scenarios are shown below with a link to the full data table outlining every scenario. You can see the entire table here.
Async Scenario #2 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioTwo()
{
var value = Task.Run(() => DoAsyncOperation(50)).GetAwaiter().GetResult();
return Ok(value);
}
Similar to scenario #1, this method is problematic because it calls GetAwaiter().GetResult() on the Task returned from Task.Run. This blocks a thread while DoAsyncOperation executes on a second thread-pool thread. The difference between the two examples is how exceptions are propagated. In this scenario, a single thread is blocked.
Async Scenario #3 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioThree()
{
var value = Task.Run(() => DoAsyncOperation(50).Result).Result;
return Ok(value);
}
Scenario #3 creates further issues by blocking both the thread that enters the action method and the thread-pool thread Task.Run shifts the work to. This happens because we’re using Result on the Task that DoAsyncOperation returns as well as on the Task that Task.Run returns. In this scenario, two threads are blocked.
Async Scenario #4 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioFour()
{
var value = Task.Run(() => DoAsyncOperation(50).GetAwaiter().GetResult()).GetAwaiter().GetResult();
return Ok(value);
}
Like scenario #3, this example blocks two threads: the thread that enters to perform the work, and the thread-pool thread that Task.Run shifts the work to. The difference here is that GetAwaiter().GetResult() is used instead of Task.Result. If the Task fails, GetResult() throws the exception directly, while Task.Result throws an AggregateException containing the actual exception.
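The difference in exception propagation is easy to see in a small console sketch; the Fail method here is a hypothetical stand-in for any faulting async operation.

```csharp
using System;
using System.Threading.Tasks;

public static class ExceptionPropagationDemo
{
    // A stand-in async method that always faults.
    public static async Task<string> Fail()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    public static void Main()
    {
        var task = Fail();

        try { _ = task.Result; }
        catch (AggregateException ex)
        {
            // Task.Result wraps the failure in an AggregateException.
            Console.WriteLine(ex.InnerException?.GetType().Name); // InvalidOperationException
        }

        try { _ = task.GetAwaiter().GetResult(); }
        catch (InvalidOperationException ex)
        {
            // GetAwaiter().GetResult() rethrows the original exception directly.
            Console.WriteLine(ex.GetType().Name); // InvalidOperationException
        }
    }
}
```

This is why GetAwaiter().GetResult() is often preferred over Result when blocking is unavoidable: the catch blocks can target the real exception type. Neither form avoids the blocking itself.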
Async Scenario #5 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioFive()
{
var value = DoAsyncOperation(50).Result;
return Ok(value);
}
In scenario #5 we’ve improved things slightly by getting rid of the unnecessary Task.Run, but we’re still blocking the thread that performs the work because we’re using Task.Result. In this scenario, a single thread is blocked.
Async Scenario #6 (Suffers from sync-over-async issues)
[HttpGet]
public IActionResult ExecuteScenarioSix()
{
var value = DoAsyncOperation(50).GetAwaiter().GetResult();
return Ok(value);
}
Again, this is similar to scenario #5: we’re no longer using Task.Run, but we’re still blocking the thread that enters because we’re calling GetAwaiter().GetResult(). In this scenario, a single thread is blocked.
Async Scenario #7 (Suffers from sync-over-async issues)
[HttpGet]
public string ExecuteScenarioSeven()
{
var task = DoAsyncOperation(50);
task.Wait();
return task.GetAwaiter().GetResult();
}
This scenario blocks a single thread when Task.Wait() is called. By the time GetAwaiter().GetResult() runs, the Task has already completed, so it simply returns the result without blocking again.
Async Scenario #8 (Suffers from sync-over-async issues)
[HttpGet]
public async Task<IActionResult> ExecuteScenarioEight()
{
var value = await Task.Run(() => DoAsyncOperation(50).Result);
return Ok(value);
}
In this scenario, Task.Run returns a task and we’re correctly awaiting it. However, we’re using Task.Result on the asynchronous operation passed to Task.Run, which blocks the thread-pool thread that Task.Run shifts the work to. Additionally, Task.Run is unnecessary here and adds overhead. In this scenario, a single thread is blocked.
Async Scenario #9 (Suffers from sync-over-async issues)
[HttpGet]
public async Task<IActionResult> ExecuteScenarioNine()
{
var value = await Task.Run(() => DoAsyncOperation(50).GetAwaiter().GetResult());
return Ok(value);
}
Similar to scenario #8, we correctly await Task.Run, but we block the thread-pool thread that Task.Run shifts the work to because we’re calling GetAwaiter().GetResult(). In this scenario, a single thread is blocked.
Async Scenario #10 (How to correctly write an async method.)
[HttpGet]
public async Task<IActionResult> ExecuteScenarioTen()
{
var value = await DoAsyncOperation(50);
return Ok(value);
}
Finally, we come to our last scenario. This is the correct way to execute an asynchronous method without blocking any threads. In this scenario, no threads are blocked; the thread that initiates the asynchronous operation returns to the thread-pool to service other requests while our method awaits. Once the awaited operation completes, a thread is chosen from the thread-pool to finish the work.
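You can see that the awaiting thread is actually released by logging the managed thread id on either side of the await. This is a small console sketch (DoAsyncOperation is reproduced from earlier); ASP.NET Core has no SynchronizationContext, so the continuation typically resumes on whichever thread-pool thread is available, not necessarily the original one.

```csharp
using System;
using System.Threading.Tasks;

public static class AwaitDemo
{
    public static async Task<string> DoAsyncOperation(int delay)
    {
        Console.WriteLine($"Before await: thread {Environment.CurrentManagedThreadId}");
        await Task.Delay(delay);
        // The continuation may run on a different thread-pool thread,
        // because no thread sat blocked waiting for the delay to finish.
        Console.WriteLine($"After await: thread {Environment.CurrentManagedThreadId}");
        return "value";
    }

    public static async Task Main()
    {
        var value = await DoAsyncOperation(50);
        Console.WriteLine(value); // value
    }
}
```

Run it a few times: the thread ids before and after the await will often differ, which is exactly the behavior that lets a small pool of threads serve thousands of concurrent requests.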
Full Results
The full data table, which includes all scenarios, can be seen here. Each column represents one of the async scenarios outlined above. Recall that scenarios 1–9 all suffer from the sync-over-async problem in one form or another, while scenario #10 does not and is an example of a good async endpoint. The leftmost column shows the number of concurrent connections used for each test. As the number of concurrent connections grows, you can see the effects on the thread-pool count and thread-pool queue. For the purposes of the performance test, if any request receives a bad response, the scenario is marked as a failure and colored red; if all requests receive successful responses, the scenario is marked green. All scenarios up to and including 128 concurrent connections return successful responses, but even there you can start to see differences in the number of work items queued and waiting for a thread-pool thread.
The important thing to keep in mind for each scenario is that if it is marked red, that represents your API failing in production.