Common async pitfalls—part one

Nov 17, 2020 · 5 minute read
The .NET Framework provides a great asynchronous programming model that enables high-performance code with an easy-to-understand syntax. However, this can give developers a false sense of security, because the language and runtime aren't without pitfalls. Ideally, static analysers like the `Microsoft.VisualStudio.Threading.Analyzers` Roslyn analysers would catch all these issues at build time. While they do help catch a lot of mistakes, they can't catch everything, so it's important to understand the problems and how to avoid them.
Here’s a collection of some of the most common pitfalls I’ve come across—either myself, colleagues and friends, or examples in documentation—and how to avoid them.
Avoid blocking calls

The main benefit of asynchronous programming is that the thread pool can be smaller than in a synchronous application while performing the same amount of work. However, once a piece of code begins to block threads, the resulting thread pool starvation can be ugly.
If I run a small test, which makes 5000 concurrent HTTP requests to a local server, there are dramatically different results depending on how many blocking calls are used.
*% Blocking* shows the percentage of calls that use `Task.Result`, which blocks the thread. All other requests use `await`.

| % Blocking | Threads | Total Duration | Avg. Duration |
| ---------- | ------- | -------------- | ------------- |
The increased total duration when using blocking calls is due to the thread pool growth, which happens slowly. You can always tune the thread pool settings to achieve better performance, but it will never match the performance you can achieve with non-blocking calls.
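As a sketch of the tuning mentioned above, you can raise the thread pool's minimum thread count so the pool injects threads immediately under burst load instead of ramping up slowly. The value 256 below is an arbitrary example, not a recommendation:

```csharp
using System.Threading;

class PoolTuning
{
    // Raise the pool's minimum worker-thread count while preserving the
    // current completion-port minimum. Returns false if the request is
    // rejected (e.g. it exceeds the configured maximum).
    public static bool RaiseMinimum(int workerThreads)
    {
        ThreadPool.GetMinThreads(out _, out int completionPortThreads);
        return ThreadPool.SetMinThreads(workerThreads, completionPortThreads);
    }
}
```

Even tuned this way, a blocking workload still burns a whole thread per in-flight request, which non-blocking code doesn't.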
Like all other blocking calls, methods on `System.IO.Stream` should be swapped for their async equivalents: `ReadAsync`, `WriteAsync`, `CopyToAsync`, `FlushAsync`, etc. Also, after writing to a stream, you should call the `FlushAsync` method before disposing the stream. If not, the `Dispose` method may perform some blocking calls.
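A minimal sketch of the flush-before-dispose guidance (the helper name is my own):

```csharp
using System.IO;
using System.Threading.Tasks;

class StreamWriting
{
    // Write and then flush asynchronously, so that Dispose has no
    // buffered data left to write with blocking calls.
    public static async Task WriteAllAsync(Stream stream, byte[] payload)
    {
        await stream.WriteAsync(payload, 0, payload.Length);
        await stream.FlushAsync();
    }
}
```

On runtimes where `Stream` implements `IAsyncDisposable` (.NET Core 3.0 and later), `await using` additionally lets disposal itself run asynchronously.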
Propagate cancellation tokens

You should always propagate cancellation tokens to the next caller in the chain. This is called a cooperative cancellation model. If not, you can end up with methods that run longer than expected, or even worse, never complete.
To indicate to the caller that cancellation is supported, the final parameter in the method signature should be a `CancellationToken`.
If you need to put a timeout on an inner method call, you can link one cancellation token to another. For example, suppose you make a service-to-service call and want to enforce a timeout while still respecting the external cancellation.
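A sketch of linking a timeout token to the caller's token; the five-second timeout and the `MakeServiceCallAsync` method are assumptions for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Downstream
{
    public static async Task<string> CallWithTimeoutAsync(CancellationToken externalToken)
    {
        // Cancelling either the external token or the timeout cancels
        // the linked token that we pass downstream.
        using var linkedCts = CancellationTokenSource.CreateLinkedTokenSource(externalToken);
        linkedCts.CancelAfter(TimeSpan.FromSeconds(5));
        return await MakeServiceCallAsync(linkedCts.Token);
    }

    // Hypothetical downstream call that honours cancellation.
    static async Task<string> MakeServiceCallAsync(CancellationToken token)
    {
        await Task.Delay(100, token);
        return "ok";
    }
}
```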
Cancelling uncancellable operations
Sometimes you may find the need to call an API which does not accept a cancellation token, but your API receives a token and is expected to respect cancellation. In this case the typical pattern involves managing two tasks and effectively abandoning the un-cancellable operation after the token signals.
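One common sketch of this two-task pattern (the extension-method name is my own, not a BCL API):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class TaskExtensions
{
    // Completes when `task` completes, or throws OperationCanceledException
    // when `token` signals, whichever happens first. Note the underlying
    // operation keeps running; it is merely abandoned.
    public static async Task<T> WithCancellation<T>(this Task<T> task, CancellationToken token)
    {
        var tcs = new TaskCompletionSource<bool>();
        using (token.Register(() => tcs.TrySetResult(true)))
        {
            if (await Task.WhenAny(task, tcs.Task) != task)
                throw new OperationCanceledException(token);
        }
        return await task;
    }
}
```

Because the un-cancellable work continues in the background, make sure it can't corrupt shared state when it eventually finishes.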
Async constructors

Occasionally, you may find yourself wanting to perform asynchronous work during initialization of a class instance. Unfortunately, there is no way to make constructors async.
There are a couple of different ways to solve this. Here’s a pattern I like:
- A public static creator method, which publicly replaces the constructor
- A private async member method, which does the work the constructor used to do
- A private constructor, so callers can’t directly instantiate the class by mistake
With this pattern in place, we can instantiate the class by calling `var foo = await Foo.CreateAsync(1, 2);`.
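The three pieces above fit together as follows; the `Foo` class, its fields, and the work in `InitializeAsync` are placeholders for illustration:

```csharp
using System.Threading.Tasks;

public class Foo
{
    private readonly int _a;
    private readonly int _b;
    private int _computed;

    // Private constructor, so callers can't instantiate directly by mistake.
    private Foo(int a, int b)
    {
        _a = a;
        _b = b;
    }

    // Does the asynchronous work the constructor used to do.
    private async Task InitializeAsync()
    {
        await Task.Yield(); // stands in for real async work (I/O, etc.)
        _computed = _a + _b;
    }

    // Public static creator method that replaces the constructor.
    public static async Task<Foo> CreateAsync(int a, int b)
    {
        var foo = new Foo(a, b);
        await foo.InitializeAsync();
        return foo;
    }

    public int Computed => _computed;
}
```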
In cases where the class is part of an inheritance hierarchy, the constructor can be made `protected` and `InitializeAsync` can be made `protected virtual`, so it can be overridden and called from derived classes. Each derived class will then need its own static creator method.
Avoid premature optimization
It might be very tempting to try to perform parallel work by not immediately awaiting tasks. In some cases, you can make significant performance improvements. However, if not used with care you can end up in debugging hell involving socket or port exhaustion, or database connection pool saturation.
Using async everywhere generally pays off without having to make any individual piece of code faster via parallelization. When threads aren’t blocking you can achieve higher performance with the same amount of CPU.
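For illustration, the tempting pattern looks like this; `fetch` stands in for a real network or database call, and the helper name is my own:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Parallelized
{
    // Start every operation without awaiting, then await them all together.
    // With a large input this is exactly the pattern that can exhaust
    // sockets or saturate a connection pool, so bound the concurrency
    // before using it in anger.
    public static Task<string[]> FetchAllAsync(string[] keys, Func<string, Task<string>> fetch)
    {
        var tasks = keys.Select(fetch).ToArray();
        return Task.WhenAll(tasks);
    }
}
```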
Avoid Task.Factory.StartNew, and use Task.Run only when needed
Even in the cases where not immediately awaiting is safe, you should avoid `Task.Factory.StartNew`, and only use `Task.Run` when you need to run some CPU-bound code asynchronously.
The main way `Task.Factory.StartNew` is dangerous is that it can look like tasks are awaited when they aren't. If you async-ify code passed to it, be careful: changing the delegate to one that returns a `Task` means `Task.Factory.StartNew` will now return `Task<Task>`. Awaiting only the outer task will only wait until the actual task starts, not finishes.
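A sketch of the trap; the gate task stands in for real async work that hasn't completed yet:

```csharp
using System.Threading.Tasks;

class StartNewTrap
{
    static readonly TaskCompletionSource<bool> Gate = new TaskCompletionSource<bool>();
    public static bool Finished;

    public static async Task<bool> DemonstrateAsync()
    {
        // The async delegate returns Task, so StartNew returns Task<Task>.
        Task<Task> outer = Task.Factory.StartNew(async () =>
        {
            await Gate.Task;   // inner work that never completes here
            Finished = true;
        });

        await outer;           // completes once the delegate has *started*
        return Finished;       // still false: the inner task hasn't finished
    }
}
```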
Normally, when you know the delegates are not CPU-bound, what you want is to just invoke and await the delegates themselves. This is almost always the right thing to do. However, if you are certain the delegates are CPU-bound, and you want to offload the work to the thread pool, you can use `Task.Run`; it's designed to support async delegates. I'd still recommend reading Task.Run Etiquette and Proper Usage for a more thorough explanation.
If, for some extremely unlikely reason, you really do need to use `Task.Factory.StartNew`, you can use `await await` to convert a `Task<Task>` into a `Task` that represents the actual work. I'd recommend reading Task.Run vs Task.Factory.StartNew for a deeper dive into the topic.
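A sketch of the double await; `Unwrap` is the BCL's equivalent for the same conversion:

```csharp
using System.Threading.Tasks;

class StartNewFix
{
    public static async Task<int> DoubleAwaitAsync()
    {
        Task<Task<int>> nested = Task.Factory.StartNew(async () =>
        {
            await Task.Yield();
            return 21 * 2;
        });

        // The first await unwraps the outer Task; the second awaits the
        // actual work. `await nested.Unwrap()` is equivalent.
        return await await nested;
    }
}
```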
Be careful with the null-conditional operator

Using the null-conditional operator with awaitables can be dangerous: awaiting `null` throws a `NullReferenceException`. Instead, you must do a manual null check first. A null-conditional await is currently under consideration for future versions of C#, but until then you're stuck with manually checking.
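A sketch of the manual check; `Widget` and `SaveAsync` are hypothetical names for illustration:

```csharp
using System.Threading.Tasks;

class Widget
{
    public Task SaveAsync() => Task.CompletedTask;
}

class NullCheckExample
{
    // Returns true if a save actually ran.
    public static async Task<bool> SaveIfPresentAsync(Widget widget)
    {
        // Dangerous: `await widget?.SaveAsync();` compiles, but when
        // `widget` is null the expression evaluates to a null Task and
        // awaiting it throws NullReferenceException at runtime.

        // Safe: check first, then await.
        if (widget != null)
        {
            await widget.SaveAsync();
            return true;
        }
        return false;
    }
}
```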