Race Conditions: When Timing Is Everything
6 min read
March 8, 2026

👋 Introduction
Hey everyone!
After four weeks on WiFi, we’re back to web security.
Race conditions show up constantly in bug bounty programs. The problem has always been reliable exploitation: send two requests 50ms apart and the server processes them sequentially. No race. James Kettle’s 2023 research at PortSwigger changed that by showing how HTTP/2 multiplexing lets you pack multiple requests into a single TCP packet, eliminating network jitter entirely.
If you caught Issue 11 on Web3 withdrawals, you saw race conditions mentioned in the context of double-spending. That was the surface. This week we go deep: TOCTOU mechanics, the single-packet attack, limit overrun, and multi-endpoint races.
Let’s break some state machines 👇
⏱ The Core Problem: TOCTOU
Time-Of-Check to Time-Of-Use (TOCTOU) is the root cause of most web race conditions.
The application checks a condition at one point in time. Then it acts on the result at a later point. Between check and action, the state can change. An attacker who lands requests inside that window can make the application act on a condition that is no longer true.
Classic example: gift card redemption.
1. App checks if gift card has been used → status: unused
2. [race window opens]
3. App marks gift card as used
4. App credits balance
Send 20 requests simultaneously. All 20 arrive at step 1. All 20 see status: unused. All 20 proceed through steps 3 and 4. One gift card becomes 20 credits.
The race window is the gap between check and action. Narrow windows (microseconds, single database transaction split into two queries) are harder to hit but still exploitable with the right tooling. Your job is to identify the window and fill it.
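The check-then-act gap is easy to reproduce in-process. A minimal sketch (plain Python threads standing in for concurrent requests, a dict standing in for the gift card database — all names here are hypothetical):

```python
import threading
import time

# Hypothetical in-memory store standing in for the database.
card = {"status": "unused"}
balance = {"credits": 0}
go = threading.Barrier(20)  # release all 20 "requests" at once

def redeem():
    go.wait()
    if card["status"] == "unused":   # time-of-check
        time.sleep(0.01)             # simulated DB round-trip: the race window
        card["status"] = "used"      # time-of-use
        balance["credits"] += 1      # credit applied

threads = [threading.Thread(target=redeem) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["credits"])  # far more than the 1 credit the check was meant to allow
```

Every thread passes the check before any thread has marked the card used, so the per-card limit never holds. A real application has the same shape whenever the check and the write are separate queries.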
🔍 Finding the Race Window
Three signals point to exploitable race conditions.
Limit enforcement on countable actions. Gift cards, discount codes, referral bonuses, free trial activations, vote counts. Any operation the application limits per user or per resource. If the limit is checked before the action is recorded, it’s potentially raceable.
Suspicious error timing. Errors like “insufficient funds” appearing after a transaction completes suggest the balance check and the deduction are separate operations. That gap is your window.
State transitions on single-use resources. Password reset tokens, email verification codes, one-time links. Any resource that transitions from valid to used in two separate steps is a candidate.
How to test: Send the same state-changing request twice simultaneously. Different response codes or bodies between two identical requests indicate state-dependent behavior worth investigating.
⚡ The Single-Packet Attack
The problem with race conditions has always been network jitter. Requests sent milliseconds apart arrive milliseconds apart. The server processes them sequentially. No race.
James Kettle’s 2023 paper “Smashing the state machine” introduced the single-packet attack. HTTP/2 (RFC 9113) multiplexes multiple requests over a single TCP connection. A client can send multiple complete HTTP/2 requests in one TCP packet. The server receives all of them before processing any.
Network jitter eliminated. Completely.
For HTTP/1.1 targets, the technique is last-byte synchronization. Send all requests with everything except the final byte. Hold the last bytes. Flush them all simultaneously. Requests complete at near-identical times without HTTP/2 support.
Burp Suite’s Turbo Intruder handles both automatically.
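To see what last-byte sync does at the wire level, here is a local sketch: the loopback server below stands in for a real HTTP/1.1 target (port, path, and response are made up). Each client sends its request minus the final byte, waits at a barrier, then flushes the held byte so both requests complete near-simultaneously at the server:

```python
import socket
import threading
import time

arrival_times = []
lock = threading.Lock()
ready = threading.Event()
flush = threading.Barrier(2)
addr = {}

def server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(2)
    addr["port"] = srv.getsockname()[1]
    ready.set()
    conns = [srv.accept()[0] for _ in range(2)]

    def handle(conn):
        data = b""
        while not data.endswith(b"\r\n\r\n"):  # incomplete until the last byte lands
            data += conn.recv(1024)
        with lock:
            arrival_times.append(time.monotonic())
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
        conn.close()

    workers = [threading.Thread(target=handle, args=(c,)) for c in conns]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    srv.close()

def client():
    req = b"POST /api/promo/redeem HTTP/1.1\r\nHost: target\r\nContent-Length: 0\r\n\r\n"
    s = socket.create_connection(("127.0.0.1", addr["port"]))
    s.sendall(req[:-1])  # everything except the final byte
    flush.wait()         # synchronize with the other client
    s.sendall(req[-1:])  # release the held byte
    s.recv(1024)
    s.close()

srv_thread = threading.Thread(target=server)
srv_thread.start()
ready.wait()
clients = [threading.Thread(target=client) for _ in range(2)]
for c in clients:
    c.start()
for c in clients:
    c.join()
srv_thread.join()

spread_ms = (max(arrival_times) - min(arrival_times)) * 1000
print(f"requests completed within {spread_ms:.3f} ms of each other")
```

The server cannot start processing either request until its final byte arrives, so the completion spread collapses to the gap between the two flushes rather than the network round-trip. Turbo Intruder's lastByte=True does this for you.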
💀 Attack 1: Limit Overrun
Limit overrun is the most common race condition in bug bounty. You exceed a per-user limit by sending requests faster than the application can enforce it.
Scenario: A platform allows one redemption per promo code. Endpoint: POST /api/promo/redeem.
Step 1: Confirm the endpoint works. Redeem once manually, verify the discount applies.
Step 2: Reset state (fresh account or fresh code) and launch the race.
Turbo Intruder script for single-packet attack (HTTP/2 targets):
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=1,
                           engine=Engine.BURP2)
    for i in range(20):
        engine.queue(target.req, gate='race1')
    engine.openGate('race1')

def handleResponse(req, interesting):
    table.add(req)
The gate parameter holds all 20 requests until openGate fires. All 20 release in a single HTTP/2 packet. The server receives them simultaneously before processing any of them.
What success looks like:
- Multiple 200 OK responses where you expected only one
- Different response bodies across simultaneous requests (state inconsistency)
- One success followed by “already redeemed” errors, but the credit appears multiple times
🔀 Attack 2: Multi-Endpoint Race
Harder to find, higher impact. Two different endpoints share state. The check happens on one. The action happens on another.
Scenario: An e-commerce checkout flow.
POST /api/cart/checkout → validates cart, processes payment
POST /api/cart/add-item → adds an item to the cart
After initiating checkout (state: under review), add an item to the cart before the checkout finalizes. If the checkout reads cart contents once and the add-item endpoint doesn’t validate checkout state, you get an item added post-validation.
T=0ms POST /api/cart/checkout (payment submitted, state: processing)
T=0ms POST /api/cart/add-item (item added simultaneously)
Some applications finalize the order and update the cart in separate database transactions. Land your request in that gap and the item ships without payment.
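A toy model of that checkout race (hypothetical endpoints, no real framework — checkout() snapshots the cart, “processes payment” on the snapshot, then ships the live cart, while add_item() never checks whether a checkout is in flight):

```python
import threading
import time

cart = ["widget"]          # shared state, no locking
shipped, paid_for = [], []

def checkout():
    snapshot = list(cart)          # validation reads the cart once
    paid_for.extend(snapshot)      # charge for the snapshot only
    time.sleep(0.05)               # race window: payment processing
    shipped.extend(cart)           # finalize against the live cart

def add_item(name):
    time.sleep(0.01)               # lands inside the window
    cart.append(name)              # no check of checkout state

t1 = threading.Thread(target=checkout)
t2 = threading.Thread(target=add_item, args=("free-gadget",))
t1.start(); t2.start()
t1.join(); t2.join()

print("paid for:", paid_for)       # ['widget']
print("shipped:", shipped)         # ['widget', 'free-gadget']
```

The item added during the payment step ships without ever being paid for — the same outcome you are hunting for when two real endpoints share a cart without locking it.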
How to map this: Look for shared identifiers across endpoints (cart ID, session, order ID). Any endpoint that reads a shared resource without locking it is a candidate for this attack. Checkout flows, balance transfers, inventory reservations, and multi-step form submissions are all worth testing.
🛠 Turbo Intruder
Turbo Intruder is the standard tool for web race conditions. Install it from the BApp Store.
For HTTP/1.1 targets (last-byte sync):
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=20,
                           engine=Engine.THREADED,
                           lastByte=True)
    for i in range(20):
        engine.queue(target.req, gate='race1')
    engine.openGate('race1')

def handleResponse(req, interesting):
    table.add(req)
lastByte=True holds the final byte of each request in the send buffer. openGate flushes them simultaneously. Less precise than the HTTP/2 single-packet approach, but effective when the server doesn’t support HTTP/2.
Choosing the right engine:
- HTTP/2 target: Engine.BURP2 with concurrentConnections=1 and a gate
- HTTP/1.1 target: Engine.THREADED with lastByte=True and a gate
- When unsure: try BURP2 first, fall back to THREADED
🎯 Key Takeaways
Race conditions are consistently underreported because reliable exploitation used to require perfect network conditions. The single-packet attack removes that constraint. If the target speaks HTTP/2, you can send 20 simultaneous requests in one TCP packet and the network is no longer a factor.
Limit overrun is where you start on any new target. Any endpoint that enforces a per-user limit by checking a database field before writing the result is worth testing. Gift cards, promo codes, referral bonuses, vote buttons, free trial activations.
Multi-endpoint races require mapping the application’s state machine. Find endpoints that share state without locking. The check happens on one endpoint. The action happens on another. Checkout flows and balance transfers are the highest-value targets.
Turbo Intruder handles both attack patterns. Gate mechanism for simultaneous release. Engine.BURP2 for HTTP/2. Engine.THREADED with lastByte=True for HTTP/1.1.
Practice:
- PortSwigger Academy: Race Conditions - four labs covering limit overrun, multi-endpoint, single-endpoint partial construction, and connection warming
- James Kettle’s full paper: “Smashing the state machine” - required reading before your next engagement
Thanks for reading, and happy hunting!
— Ruben
💬 Comments Available
Drop your thoughts in the comments below! Found a bug or have feedback? Let me know.