The most dangerous code in our codebase came from Stack Overflow.

Not malicious—just a well-intentioned answer to “how do I call async from sync?” that someone copy-pasted in 2016. Eight years and hundreds of call sites later, it was silently degrading our application’s performance during every scaling event—20 minutes of latency spikes and timeouts that we blamed on “infrastructure.”

If you’ve read my post about hunting an 8-year-old stack overflow bug, you know we spent a week cleaning up “unrelated” issues before finding the real culprit. This post is about those issues—specifically, the async anti-pattern that was arguably worse than the bug itself.

The Irony: We Thought We Were Safe

Our team had a rule: no .Result or .Wait() in code review. We’d read the articles. We knew about deadlocks. We felt good about our async hygiene.

What we didn’t realize was that we had something far more dangerous hiding in plain sight—a “helper” class that did exactly what .Result does, just with extra steps.

The Stack Overflow Solution That Wasn’t

Somewhere around 2016, we discovered Entity Framework 6’s async methods like ToListAsync(). Modern! Efficient! The future of .NET!

There was just one problem: our application was built on ASPX Web Forms, where the page lifecycle is synchronous by default. You can’t await in a synchronous method without making the entire call stack async.

So someone found this Stack Overflow answer and created a helper:

// From Stack Overflow - DO NOT USE
public static class AsyncHelper
{
    public static TResult RunSync<TResult>(Func<Task<TResult>> func)
        => Task.Run(func).GetAwaiter().GetResult();
}
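
Why did it “work” when a bare .Result would have deadlocked? Because Task.Run starts the async method on a thread pool thread that has no SynchronizationContext to capture. A minimal contrast, with illustrative types rather than our real code:

using System.Collections.Generic;
using System.Threading.Tasks;

public class WhyItSeemedFine
{
    static async Task<List<int>> GetUsersAsync()
    {
        await Task.Delay(100); // stands in for the EF6 query
        return new List<int>();
    }

    public static List<int> DeadlocksUnderAspNet()
    {
        // On a classic ASP.NET request thread, the await's continuation is
        // posted back to the request's SynchronizationContext: the very
        // context this call is blocking. Classic deadlock.
        return GetUsersAsync().Result;
    }

    public static List<int> WorksButStarves()
    {
        // Task.Run starts the async method on a pool thread with no
        // SynchronizationContext, so continuations stay on the pool and
        // nothing deadlocks. The caller still blocks, though, and the
        // async work consumes extra pool threads: two-plus threads per call.
        return Task.Run(() => GetUsersAsync()).GetAwaiter().GetResult();
    }
}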

It compiled. It worked. It let us use ToListAsync() without rewriting our entire application. Win, right?

Why It Spread Everywhere

The pattern was seductive. Any time someone wanted to use an async EF6 method in a synchronous context, they’d reach for AsyncHelper:

public List<User> GetActiveUsers()
{
    return AsyncHelper.RunSync(() =>
        _context.Users.Where(u => u.Active).ToListAsync());
}

By 2024, we had hundreds of call sites. It was in our data access layer, our business logic, our page code-behind files. It was everywhere.

And here’s the thing: it mostly worked. Not perfectly—but well enough that we never prioritized fixing it. Everyone on the team knew it was “not ideal,” but no one understood HOW bad it actually was.

What Thread Pool Starvation Actually Looks Like

Here’s what AsyncHelper.RunSync actually does:

  1. Task.Run() queues the work to the ThreadPool, taking a pool thread
  2. Inside the task, ToListAsync() sends the query to SQL Server
  3. While waiting on I/O, that pool thread is released (this part is fine)
  4. When SQL responds, another pool thread picks up the continuation
  5. The entire time, .GetAwaiter().GetResult() blocks the original calling thread until everything completes

So for every call, you have:

  • The original request thread: BLOCKED, sitting idle, reserving ~1 MB of stack space
  • The ThreadPool: Churning threads to handle the async work

Under load, this compounds:

Request 1: Thread blocks waiting on AsyncHelper
Request 2: Thread blocks waiting on AsyncHelper
Request 3: Thread blocks waiting on AsyncHelper
...

ThreadPool sees: "All my threads are busy!"
ThreadPool does: Slowly adds threads (~1-2 per second)

Result after minutes of load:
ThreadPool: 200+ threads, most BLOCKED (not doing useful work)
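
You can watch this happen in a console sketch (names and numbers are illustrative; exact timings depend on core count and runtime version):

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class StarvationDemo
{
    // The same bridge pattern as AsyncHelper.RunSync.
    static TResult RunSync<TResult>(Func<Task<TResult>> func)
        => Task.Run(func).GetAwaiter().GetResult();

    static async Task<int> FakeQueryAsync()
    {
        await Task.Delay(500); // stands in for the SQL round trip
        return 42;
    }

    static int _completed;

    static void Main()
    {
        var sw = Stopwatch.StartNew();

        // 100 concurrent "requests", each blocking a pool thread in RunSync.
        var requests = Enumerable.Range(0, 100)
            .Select(_ => Task.Run(() =>
            {
                RunSync(FakeQueryAsync);
                Interlocked.Increment(ref _completed);
            }))
            .ToArray();

        // Fully async, 100 overlapping 500ms delays would finish in ~0.5s.
        // Blocked, completions trickle in at the pool's thread injection rate.
        while (!Task.WaitAll(requests, 1000))
            Console.WriteLine($"{sw.Elapsed.TotalSeconds:F0}s: {Volatile.Read(ref _completed)}/100 done");

        Console.WriteLine($"Total: {sw.Elapsed.TotalSeconds:F1}s");
    }
}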

The overhead isn’t the async state machine (that’s actually lightweight). It’s:

  • Blocked threads reserving ~1 MB of stack each
  • Context switching between hundreds of threads
  • CPU cache thrashing as the OS juggles all these threads

Context switching alone can add 10-20% CPU overhead. Which brings us to the real problem.

The 14-20 Minute Scaling Window From Hell

Our scaling was CPU-based. When CPU crossed a threshold, new IIS servers would spin up. Simple enough.

But here’s what actually happened during a scaling event:

┌─────────────────────────────────────────────────────────────────┐
│  SCALING TRIGGERED (CPU threshold hit)                          │
│                                                                 │
│  ┌─────────────── 14-20 MINUTE WARMUP WINDOW ───────────────┐  │
│  │                                                           │  │
│  │  PHASE 1: Server Starting                                 │  │
│  │  - New IIS server process starts                          │  │
│  │  - Worker thread pool initializing to MIN count           │  │
│  │    (Threads created slowly: ~1-2 per second)              │  │
│  │                                                           │  │
│  │  PHASE 2: "Healthy" But Not Ready                         │  │
│  │  - Server added to load balancer (IIS reports healthy)    │  │
│  │  - Thread pool still growing toward min                   │  │
│  │  - Redis connections being established proactively        │  │
│  │                                                           │  │
│  │  PHASE 3: First Requests Arrive                           │  │
│  │  - JIT compilation on first hits (no pre-JIT)             │  │
│  │  - Limited threads available (still warming)              │  │
│  │  - AsyncHelper blocks what threads exist                  │  │
│  │  - Redis proactive connections competing for threads      │  │
│  │                                                           │  │
│  │  THE COLLISION: Requests queue, timeouts, user errors     │  │
│  │                                                           │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Server finally able to absorb load normally                    │
└─────────────────────────────────────────────────────────────────┘

Throughout that 14-20 minute window, the old servers—the ones that triggered scaling in the first place—were still at high CPU. Still had AsyncHelper blocking threads. Now competing with the new servers for Redis connections.

AsyncHelper didn’t cause the scaling. But it turned every scaling event into a 20-minute window of degraded performance—slow page loads, timeouts, and frustrated users.

The servers were “healthy” according to the load balancer. But they were drowning before they could swim.
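
One structural fix here has nothing to do with AsyncHelper: make the health check tell the truth. A sketch of a warmup-aware endpoint (the handler name, route wiring, and warmup steps are all illustrative):

using System.Web;

// Wired to /health in web.config. Reports 503 until Application_Start's
// warmup work (JIT the hot paths, establish Redis/SQL connections) has
// finished, so the load balancer doesn't route traffic to a cold server.
public class HealthHandler : IHttpHandler
{
    public static volatile bool WarmupComplete; // set by Global.asax after warmup

    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = WarmupComplete ? 200 : 503;
    }
}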

The Evidence

We found this through Debug Diagnostic dumps during incidents. The pattern was unmistakable once we knew what to look for:

  • Dozens to hundreds of threads parked in async wait states, doing no useful work
  • IIS worker pool contention clearly visible
  • Sentry lit up with async task failures during scaling events

The failures weren’t constant—they only appeared during scale-up or scale-down. Our users (educators) would see errors right after a scaling event. The chaos of those events made it easy to blame “infrastructure” rather than our own code.

Why We Didn’t Catch It Sooner

  • Only visible under peak load. Normal traffic? Fine. Scale event during back-to-school rush? Chaos.
  • Scaling events are inherently chaotic. Easy to blame the scaling itself.
  • We had banned .Result! We felt safe. AsyncHelper slipped through because it looked different.
  • The pattern was everywhere. When hundreds of call sites use something, it feels “normal.”
  • “Everyone knows it’s bad” isn’t action. We knew it was suboptimal. We didn’t know it was catastrophic.

The Phased Removal

You can’t fix hundreds of call sites overnight. We removed AsyncHelper in phases, driven by incidents:

  • January: First major incident. Cleaned up the hottest paths.
  • June: Second incident. Deeper removal from critical flows.
  • December: Final push, combined with the stack overflow fix from my other post.

Each removal followed this pattern:

// BEFORE: AsyncHelper bridge
public List<User> GetActiveUsers()
{
    return AsyncHelper.RunSync(() =>
        _context.Users.Where(u => u.Active).ToListAsync());
}

// OPTION 1: Async all the way (if you can change callers)
public async Task<List<User>> GetActiveUsersAsync()
{
    return await _context.Users.Where(u => u.Active).ToListAsync();
}

// OPTION 2: Just use sync EF6 (if you can't go async)
public List<User> GetActiveUsers()
{
    return _context.Users.Where(u => u.Active).ToList();
}
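
There’s also a third option worth knowing about: since .NET 4.5, Web Forms pages can host genuinely async work via RegisterAsyncTask, as long as the page directive sets Async="true". A sketch (the grid and context names are illustrative):

// OPTION 3: Real async inside Web Forms (requires <%@ Page Async="true" %>)
protected void Page_Load(object sender, EventArgs e)
{
    RegisterAsyncTask(new PageAsyncTask(async () =>
    {
        // Awaited within the page lifecycle, without blocking the request thread.
        var users = await _context.Users.Where(u => u.Active).ToListAsync();
        UsersGrid.DataSource = users;
        UsersGrid.DataBind();
    }));
}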

Here’s the uncomfortable truth about legacy apps: sometimes sync .ToList() is safer than fake async with .ToListAsync().

If you can’t go async all the way to the HTTP handler, don’t pretend. The “bridge” pattern causes more problems than it solves.

How to Find This in Your Codebase

Search for these patterns:

Task.Run.*GetAwaiter.*GetResult
AsyncHelper
RunSync

Check your error tracking (Sentry, Application Insights, etc.) for:

  • Async-related exceptions
  • Timeouts during scaling events specifically
  • Patterns that only appear under peak load

Run Debug Diagnostic during load tests and look for thread pool contention.
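
You can also watch for contention in-process. A minimal sampling monitor, as a sketch (how you surface the log line is up to you):

using System;
using System.Threading;

public static class ThreadPoolMonitor
{
    // Call once at startup; keep the returned Timer alive for the app's lifetime.
    public static Timer Start(Action<string> log) =>
        new Timer(state =>
        {
            ThreadPool.GetAvailableThreads(out int available, out _);
            ThreadPool.GetMaxThreads(out int max, out _);
            // Worker usage pinned near max while requests queue up is the
            // starvation signature.
            log($"ThreadPool workers in use: {max - available} of {max}");
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));
}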

Lessons Learned

  1. Stack Overflow answers have shelf lives. What worked in 2012 may be an anti-pattern in 2024. That answer was written before async/await patterns were well understood.

  2. Banning .Result isn’t enough. AsyncHelper does the same thing with extra steps. Review for the pattern, not just the syntax.

  3. Scaling events are your canary. If things break specifically during scale-up or scale-down, suspect thread pool issues.

  4. Partial async is worse than no async. The bridge pattern creates problems that pure sync code never would.

  5. “Everyone knows it’s bad” isn’t action. Until we quantified the impact—20 minutes of degraded performance during every scaling event—we couldn’t get prioritization to fix it.

  6. “Healthy” doesn’t mean ready. Load balancer health checks don’t account for thread pool warmup, connection pool establishment, or JIT compilation. (One partial mitigation is sketched after this list.)
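
On that last point: raising the thread pool minimum at startup lets a new server create threads on demand instead of throttling injection while it warms up. It shrinks the blast radius; it does not excuse the blocking. A sketch with illustrative numbers (measure before picking yours):

using System;
using System.Threading;

public static class ThreadPoolWarmup
{
    // Call from Global.asax Application_Start. Below the configured minimum,
    // the pool creates threads immediately on demand instead of throttling
    // injection to ~1-2 per second. It does not pre-create threads.
    public static void RaiseMinimums(int minWorkerThreads = 200)
    {
        ThreadPool.GetMinThreads(out int workers, out int io);
        ThreadPool.SetMinThreads(Math.Max(workers, minWorkerThreads), io);
    }
}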

The Elephant in the Room: ASPX Itself

Let’s be honest: ASPX Web Forms is legacy technology. It was designed in the early 2000s, before async/await existed, before cloud scaling was a concern, before anyone knew what “thread pool starvation” meant in a web context.

The real solution wasn’t AsyncHelper. It wasn’t even removing AsyncHelper. The real solution was modernizing the stack—moving to ASP.NET MVC or ASP.NET Core, where async is a first-class citizen from the HTTP handler down.

But that requires investment. Time. Budget. Buy-in from leadership who see “it works” and don’t understand why you’d rewrite something that works.

So we lived with ASPX. And when EF6 introduced ToListAsync(), someone reached for AsyncHelper because the alternative was a multi-quarter migration project that nobody wanted to fund.

This is the reality of legacy systems: you make tradeoffs. Sometimes those tradeoffs blow up years later during a scaling event.

What We’d Do Differently

If I could go back to 2016, I’d say: don’t use ToListAsync() if you can’t go async all the way. The synchronous ToList() is fine. It’s honest about what it’s doing. The performance difference is negligible compared to the disaster that AsyncHelper creates under load.

Or better yet: invest in making the critical paths properly async from the HTTP handler down. It’s more work upfront, but it’s work that doesn’t explode during your highest-traffic moments.


This was the noise we had to clean up before finding the real bug. Next up: “Why I Deleted BuildContainsExpression”—the story of how a “clever” LINQ helper caused stack overflows after 8 years.

Have your own async horror stories? I’d love to hear them—reach out.