What Happens in the Split Second After Your VPN Disconnects
A layer-by-layer breakdown of what happens when a VPN drops — tunnel teardown, routing table rewrites, TCP connection death, DNS leaks, and why QUIC survives it all.
What Actually Happens in That Split Second?
You're on a VPN, everything works. You disconnect — or your Wi-Fi blips for 200ms — and suddenly connections drop, pages hang, and your SSH session freezes. To your app it looks like “the internet went down.” But the reality is far more layered. In that split second, your operating system is tearing down a virtual network interface, rewriting routing tables, switching DNS resolvers, and every in-flight TCP connection is caught in the crossfire.
Quick Primer: How a VPN Connection Works
Before we break the disconnect, we need to understand what a VPN sets up when it connects:
1. Tunnel Interface
The VPN client creates a virtual network interface (utun0 on macOS, tun0 on Linux, a TAP/TUN adapter on Windows). All your traffic flows through this virtual “pipe” instead of going directly to your physical NIC.
2. Routing Table Override
The VPN pushes new routes that override your default gateway. A typical full-tunnel VPN adds 0.0.0.0/1 and 128.0.0.0/1 pointing at the tunnel — this captures all traffic without replacing your original default route.
3. DNS Resolver Swap
The VPN client reconfigures your DNS to use the VPN provider's resolver (e.g. 10.8.0.1) instead of your ISP's or your local router's. This prevents DNS queries from leaking outside the tunnel.
4. Source IP Changes
Your applications now bind to the tunnel's IP (e.g. 10.8.0.42). Every TCP connection, every UDP datagram — they all have this as the source address. Remote servers see the VPN exit node's public IP.
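The source-IP change in step 4 is easy to observe yourself. A minimal Python sketch: connecting a UDP socket sends no packets, it only runs a kernel route lookup, so `getsockname()` reveals which local address the OS would use as the source. (The function name `local_source_ip` is ours, not a standard API.)

```python
import socket

def local_source_ip(remote_host: str, remote_port: int = 53) -> str:
    """Ask the kernel which local IP it would use to reach remote_host.

    connect() on a UDP socket transmits nothing; it only performs the
    route lookup, so this shows the tunnel IP (e.g. 10.8.0.42) while a
    full-tunnel VPN is up, and the physical NIC's IP after it drops.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((remote_host, remote_port))
        return s.getsockname()[0]
    finally:
        s.close()

# With a public address (e.g. "8.8.8.8") this prints the tunnel IP while
# the VPN is connected and the LAN IP afterwards; loopback is used here
# so the example runs without network access.
print(local_source_ip("127.0.0.1"))  # 127.0.0.1
```

Run it before and after connecting the VPN and you can watch the source address flip between the tunnel and the physical NIC.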
The Disconnect Sequence — Layer by Layer
When the VPN disconnects (whether intentional or from a network blip), here's the cascade that unfolds in roughly 50–500ms:
Layer 1: The Tunnel Interface Goes Down
The VPN client tears down the virtual interface (utun0 / tun0). This is an OS-level event — the kernel marks the interface as DOWN. You can watch this happen in real time:
# Before disconnect
$ ifconfig utun0
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1400
inet 10.8.0.42 --> 10.8.0.42 netmask 0xff000000
# After disconnect
$ ifconfig utun0
ifconfig: interface utun0 does not exist
The interface doesn't just go “down” — it's completely removed. Any socket bound to that interface's IP address is now bound to a nonexistent address.
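You can detect the same disappearance programmatically. A small sketch using the standard library's `socket.if_nameindex()` (available on Unix), which queries the kernel's interface table; the prefixes checked here are common conventions (`utun` on macOS, `tun` on Linux, `wg` for WireGuard), not an exhaustive list:

```python
import socket

def tunnel_interfaces() -> list:
    """Return any tunnel-style interfaces the kernel currently knows about.

    socket.if_nameindex() enumerates live interfaces, so a torn-down VPN
    tunnel simply stops appearing here -- the same disappearance that
    ifconfig reports above.
    """
    return [name for _, name in socket.if_nameindex()
            if name.startswith(("utun", "tun", "wg"))]

print(tunnel_interfaces())  # e.g. ['utun0'] while connected, [] after
```

Polling this (or subscribing to OS network-change notifications) is how reconnect-aware apps learn about the drop long before any TCP timeout fires.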
Layer 2: The Routing Table Rewrite
This is where the real chaos starts. When the VPN was connected, your routing table looked something like this:
Destination        Gateway       Interface
0.0.0.0/1          10.8.0.1      utun0    ← VPN captures all traffic
128.0.0.0/1        10.8.0.1      utun0    ← VPN captures all traffic
default            192.168.1.1   en0      ← original (lower priority)
vpn-server-ip/32   192.168.1.1   en0      ← VPN's own traffic to server
The 0.0.0.0/1 + 128.0.0.0/1 trick is clever — these two routes together cover the entire IPv4 address space and are more specific than the default route, so they win. But they don't replace the default, so when the VPN removes them:
Destination   Gateway       Interface
default       192.168.1.1   en0      ← back to physical NIC
The OS falls back to the original default route. But this takes a few milliseconds — and during that window, packets in the kernel's transmit queue that were addressed to the tunnel have no valid route and are silently dropped.
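The longest-prefix-match logic behind the `/1` trick can be verified with the standard library's `ipaddress` module: every IPv4 address falls inside exactly one of the two `/1` networks, and a `/1` is more specific than the `/0` default, so it wins the route lookup without the default ever being touched.

```python
import ipaddress

# The two /1 routes a full-tunnel VPN pushes, plus the untouched default.
vpn_routes = [ipaddress.ip_network("0.0.0.0/1"),
              ipaddress.ip_network("128.0.0.0/1")]
default = ipaddress.ip_network("0.0.0.0/0")

addr = ipaddress.ip_address("93.184.216.34")  # any public IPv4 address

# Longest-prefix match: the address falls in exactly one of the two /1s,
# and prefixlen 1 beats the default's prefixlen 0 -- so the tunnel wins
# while both routes exist, and the default takes over when they vanish.
matching = [net for net in vpn_routes if addr in net]
print(matching[0], ">", default)  # 0.0.0.0/1 > 0.0.0.0/0
```

Remove the two `/1` entries from the list and the only remaining match is the default route — exactly what the kernel does when the VPN client withdraws its routes.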
Layer 3: TCP Connections Die
This is the most visible symptom. Here's why every existing TCP connection breaks:
A TCP connection is identified by its 4-tuple: (src_ip, src_port, dst_ip, dst_port). When the VPN disconnects, your source IP changes from 10.8.0.42 (tunnel) to 192.168.1.100 (physical NIC). The 4-tuple no longer matches, so the remote server has no idea who you are.

What happens to existing connections depends on timing:
| Scenario | What happens |
|---|---|
| App sends data immediately after disconnect | Packet goes out with new source IP → remote server sends RST (unknown connection) → app gets ECONNRESET |
| App is idle (e.g. SSH session) | Connection appears "frozen" — no packets flowing, no error yet. Hangs until TCP keepalive fires (default: 2+ hours on most OSes) or app timeout kicks in |
| App retransmits in-flight data | Socket is still bound to the old tunnel IP → retransmit has no valid route → ENETUNREACH or silent drop → connection dies after retries exhausted |
| WebSocket / gRPC stream | Underlying TCP dies → framework fires onclose/onError → app must reconnect |
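The "frozen for 2+ hours" row is fixable from userspace by shrinking the TCP keepalive timers on your own sockets. A hedged sketch — the socket option names vary by OS (`TCP_KEEPIDLE` on Linux, `TCP_KEEPALIVE` on macOS), so the code probes for whichever constants exist:

```python
import socket

def enable_fast_keepalive(sock, idle=30, interval=10, count=3):
    """Shrink TCP keepalive from the multi-hour OS default to ~1 minute.

    After a VPN drop the kernel still believes an idle connection is
    fine; keepalive probes are what finally surface the dead peer.
    With these values a dead connection errors out after roughly
    idle + interval * count seconds instead of 2+ hours.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):        # Linux: seconds before 1st probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    elif hasattr(socket, "TCP_KEEPALIVE"):     # macOS spelling of the same knob
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPALIVE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):       # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):         # probes before giving up
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_fast_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
```

This is what SSH's `ServerAliveInterval` does at the application layer; the socket options do it one layer down, for any TCP connection you own.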
Layer 4: The DNS Leak Window
While the VPN was active, DNS queries went to the VPN provider's resolver (typically a private IP like 10.8.0.1 reachable only through the tunnel). The moment the tunnel goes down:
- The VPN client restores the original DNS config (your router at 192.168.1.1, or your ISP's resolver)
- Any cached DNS entries from the VPN resolver are still in the OS cache — but new queries go to the unencrypted resolver
- For a brief window, your DNS queries are sent in plaintext over your ISP's network — this is the classic DNS leak
# During VPN
$ scutil --dns | grep "nameserver"
nameserver[0] : 10.8.0.1

# After disconnect
$ scutil --dns | grep "nameserver"
nameserver[0] : 192.168.1.1
nameserver[1] : 8.8.8.8
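On Linux the equivalent check is `/etc/resolv.conf`. A small sketch that parses resolv.conf-style text so you can diff snapshots taken before and after a disconnect (the `nameservers` helper is ours, and real setups using systemd-resolved may point this file at a local stub like 127.0.0.53):

```python
def nameservers(resolv_conf_text: str) -> list:
    """Extract nameserver addresses from resolv.conf-style text.

    Comparing a snapshot taken while the tunnel is up against one taken
    after the drop shows the resolver swap: 10.8.0.1 during, the
    router/ISP resolver afterwards.
    """
    return [line.split()[1]
            for line in resolv_conf_text.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1]

during_vpn = "nameserver 10.8.0.1\n"
after_drop = "nameserver 192.168.1.1\nnameserver 8.8.8.8\n"
print(nameservers(during_vpn), nameservers(after_drop))
# ['10.8.0.1'] ['192.168.1.1', '8.8.8.8']
```

Any query resolved through the second list travels in plaintext over your ISP's network — the leak window described above.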
Layer 5: What the Kill Switch Does
A VPN kill switch prevents traffic from leaking during the disconnect window. It works by adding firewall rules (not just routes) that block all traffic not going through the tunnel:
# Allow traffic to VPN server itself (to maintain the tunnel)
iptables -A OUTPUT -d <vpn-server-ip> -j ACCEPT
# Allow traffic through the tunnel interface
iptables -A OUTPUT -o tun0 -j ACCEPT
# Block everything else
iptables -A OUTPUT -j DROP
When the tunnel goes down, the -o tun0 rule matches nothing (interface gone), and the DROP rule catches everything. No traffic leaks — but also no internet until the VPN reconnects or the kill switch is disabled.
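The first-match semantics of that rule chain can be modeled in a few lines. This is a toy simulation, not real packet filtering; `VPN_SERVER` is a placeholder address, and `out_iface` stands for the interface the route lookup selected (or `None` once the tunnel route is gone):

```python
from typing import Optional

VPN_SERVER = "203.0.113.7"  # placeholder for the VPN server's public IP

def verdict(dst_ip: str, out_iface: Optional[str]) -> str:
    """Evaluate the three kill-switch rules in order, first match wins.

    Mirrors iptables' first-match behavior: once tun0 is gone, ordinary
    traffic matches neither ACCEPT rule and falls through to DROP.
    """
    rules = [
        (lambda d, o: d == VPN_SERVER, "ACCEPT"),  # keep control channel alive
        (lambda d, o: o == "tun0",     "ACCEPT"),  # traffic inside the tunnel
        (lambda d, o: True,            "DROP"),    # everything else
    ]
    for match, action in rules:
        if match(dst_ip, out_iface):
            return action

print(verdict("1.2.3.4", "tun0"))  # ACCEPT -- tunnel up, traffic flows
print(verdict("1.2.3.4", "en0"))   # DROP   -- tunnel gone, nothing leaks
```

Note the ordering matters: the VPN-server ACCEPT must come first, or the client could never re-establish the tunnel through its own kill switch.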
The Flicker: Disconnect + Reconnect in Under a Second
The trickiest scenario is when the VPN doesn't fully disconnect — your Wi-Fi blips for 200ms, the VPN control channel detects the loss, tears down the tunnel, and immediately tries to reconnect. During this rapid cycle:
- Tunnel down — routes removed, source IP changes
- Tunnel back up — new routes pushed, source IP changes again (possibly a different tunnel IP)
- All old TCP connections are dead regardless — the 4-tuple changed twice. Even connections that survived the 200ms gap are now using the wrong source IP.
This is why apps with aggressive reconnect logic (Slack, VS Code Remote, cloud IDEs) detect network changes at the OS level and proactively reconnect instead of waiting for TCP to time out.
What Survives and What Doesn't
| Protocol / App | Survives? | Why |
|---|---|---|
| QUIC / HTTP/3 | Yes | Connection ID is not tied to IP — designed for network migration |
| MPTCP | Yes | Can shift traffic between network paths seamlessly |
| TCP (standard) | No | 4-tuple breaks when source IP changes |
| WebSocket | No | Built on TCP — dies when the underlying connection resets |
| SSH | No* | Dies, but mosh (mobile shell) survives by using UDP + its own session layer |
| DNS-over-HTTPS | Partial | New DoH connection needed, but no plaintext leak if configured |
This is exactly why QUIC was designed the way it is. Google built it specifically because mobile users constantly switch between Wi-Fi and cellular — the exact same source-IP-change problem as a VPN disconnect. QUIC identifies connections by a Connection ID in the packet header, not by the IP 4-tuple, so it survives network transitions seamlessly.
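The difference in demultiplexing strategy can be shown with two toy lookup tables. This is an illustration of the concept only — the keys are simplified stand-ins, not the QUIC wire format:

```python
# Server-side connection tables under the two strategies.
tcp_conns = {}   # keyed by the (src_ip, src_port, dst_ip, dst_port) 4-tuple
quic_conns = {}  # keyed by an IP-independent Connection ID

tcp_conns[("10.8.0.42", 51000, "93.184.216.34", 443)] = "session-A"
quic_conns[b"\x7f\x11\xca\xfe"] = "session-A"

# The VPN drops: the client's source address changes.
new_tuple = ("192.168.1.100", 51000, "93.184.216.34", 443)

# TCP: the server looks up the new 4-tuple, finds nothing, and answers
# the unknown packet with a RST.
print(tcp_conns.get(new_tuple))             # None

# QUIC: the packet still carries the same Connection ID, so the server
# finds the session and simply updates the peer's address.
print(quic_conns.get(b"\x7f\x11\xca\xfe"))  # session-A
```

The real protocol adds path validation and encryption around this, but the core idea is exactly this key choice: identify the session by something the network path cannot change.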
What This Means for Backend Developers
If your users are on VPNs (corporate environments, remote workers, privacy-conscious users), you should expect:
- Sudden connection resets mid-request — your server will see connection reset by peer or a half-open TCP socket. Don't log these as errors — they're normal.
- Client IP changes between requests — don't use IP-based session pinning or rate limiting as the sole identifier. Use tokens or session cookies instead.
- Retry storms after VPN reconnects — when the VPN comes back, every client-side connection retries simultaneously. Design your backend for these burst patterns.
- WebSocket reconnection floods — hundreds of clients reconnecting in the same second. Use exponential backoff with jitter on the client, and connection rate limiting on the server.
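The "exponential backoff with jitter" advice above can be sketched in a few lines — this is the "full jitter" variant (delay drawn uniformly from zero up to the exponential cap), one common choice among several:

```python
import random

def backoff_with_jitter(attempt: int, base: float = 0.5,
                        cap: float = 30.0) -> float:
    """Full-jitter exponential backoff for reconnect attempt N.

    Drawing uniformly from [0, min(cap, base * 2**attempt)] spreads a
    thundering herd of reconnecting clients across the window, so a VPN
    flap doesn't turn into a synchronized retry storm on your backend.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

delays = [round(backoff_with_jitter(n), 2) for n in range(6)]
print(delays)  # random each run, e.g. growing spread up to 16s then 30s cap
```

Without the jitter, every client that detected the same network-change event would retry on the same exponential schedule — synchronized bursts, just spaced further apart.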
TL;DR
- VPN disconnect = tunnel interface removed + routing table rewritten + DNS resolver swapped — all within milliseconds
- Every existing TCP connection breaks because the source IP changes (4-tuple mismatch)
- DNS queries briefly leak to your ISP resolver in plaintext (unless kill switch is active)
- The “frozen SSH session” happens because the connection is dead but TCP doesn't know yet — it waits for keepalive timeout
- QUIC/HTTP3 survives because it uses Connection IDs instead of IP 4-tuples
- Kill switches use firewall rules (not just routes) to prevent any traffic leaking during the transition