3 Networking Tricks to Level-Up Your System Design Interview
Client-Side Load Balancing, Server-Sent Events, and Circuit Breakers
In system design interviews, many candidates focus on database choices and scaling strategies, but networking knowledge can significantly set you apart from other candidates.
In this edition, we’re sharing three powerful networking tricks that will elevate your system design solutions and impress your interviewers.
1. Client-Side Load Balancing: The Performance Booster
Most system design solutions incorporate load balancers to distribute traffic across multiple servers. However, many candidates default to dedicated load balancers without considering client-side alternatives. Especially in infra-style interviews, client-side load balancing can be key to high-performance systems.
How It Works
With client-side load balancing, the client itself decides which server to talk to rather than relying on a middleman. Here's the basic flow:
Clients (infrequently) request available server information from a service registry
Clients maintain a local list of available servers
Clients directly connect to servers using simple algorithms (round-robin, random, etc.)
The list is periodically refreshed to account for server changes or updates are pushed to the client
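The flow above can be sketched in a few lines of Python. This is a minimal illustration, not a production library: the `fetch_servers` callable stands in for whatever service-registry lookup your system uses, and the class name and refresh interval are my own choices.

```python
import random
import threading


class ClientSideLoadBalancer:
    """Keeps a local list of servers fetched from a registry and picks
    one per request, with no load balancer in the request path."""

    def __init__(self, fetch_servers):
        # fetch_servers() queries the service registry for the current
        # server list; it is called infrequently, not on every request.
        self._fetch_servers = fetch_servers
        self._lock = threading.Lock()
        self._servers = list(fetch_servers())
        self._index = 0

    def refresh(self):
        # Called periodically (or on a registry push) so the local list
        # tracks servers joining and leaving.
        with self._lock:
            self._servers = list(self._fetch_servers())
            self._index = 0

    def pick_round_robin(self):
        # Simple round-robin across the locally cached list.
        with self._lock:
            server = self._servers[self._index % len(self._servers)]
            self._index += 1
            return server

    def pick_random(self):
        with self._lock:
            return random.choice(self._servers)
```

Each request then goes straight to `pick_round_robin()`'s choice; only `refresh()` ever touches the registry.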
The Benefits
Where it applies, client-side load balancing is superior to dedicated load balancers:
Reduced Latency: Eliminates an extra network hop for each request that would normally go through a dedicated load balancer device or server
Improved Throughput: Direct connections mean higher throughput potential, since the load balancer can’t become a bottleneck
Better Scaling: The load-balancing work is distributed across clients and the server updates are often less frequent than the requests themselves
Simplified Architecture: Fewer infrastructure components to manage
When to Use It
Client-side load balancing works best when:
You control the client implementation
You have a relatively small number of backend servers
You need maximum performance
Your services are within the same network
Two great examples of this in the real world are DNS and Redis’ Cluster client.
In interviews, client-side load balancing works particularly well for microservice architectures where a constrained number of services frequently communicate with each other. Client-side load balancing can also apply to externally-facing services, but clients will need to tolerate some delay in seeing server changes (like DNS!).
If it seems like a client-side solution might work, it’s worth talking through the options with your interviewer. Even if you end up needing a dedicated load balancer, it shows breadth that many candidates don’t have.
2. Server-Sent Events (SSE) for Push Notifications
When junior candidates need to push notifications to clients, they often reach for one of two solutions:
Apple’s APNs (Apple Push Notification service)/Google’s FCM (Firebase Cloud Messaging), or
WebSockets.
But these have some distinct downsides. APNs and FCM don’t provide any latency guarantees and are low throughput. They’re best for notifications that will literally pop up on the user’s phone (“Your Uber Arrived”) rather than frequent pushes like your inbound driver’s location.
And WebSockets involve extra networking hoops to jump through and usually new infra, since they don’t communicate over the typical HTTP channels that are common for many services.
Instead, Server-Sent Events (SSE) offer a simpler, more efficient solution for many push scenarios.
How It Works
SSE is built on top of HTTP and allows servers to push data to clients over a single, long-lived connection. Whereas a vanilla HTTP response is often consumed as one big blob, an SSE response is chunked into individual messages.
Here’s how it works:
1. Client establishes an SSE connection by initiating an HTTP request (often using the simple `EventSource` API)
2. Server keeps the connection open and sends messages as part of the response as needed
3. Each message is processed by the client as it arrives
4. If the connection drops (or the server needs to close it), the client automatically reconnects
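On the wire, each SSE message is just a few `field: value` text lines (per the SSE spec: `event:`, `id:`, `data:`) terminated by a blank line. As a rough sketch of the server side, a helper like this could serialize messages (the `format_sse` name is mine, not a standard API):

```python
def format_sse(data, event=None, event_id=None):
    """Serialize one message in the SSE wire format: optional 'event:'
    and 'id:' fields, one or more 'data:' lines, then a blank line."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")
    if event_id is not None:
        lines.append(f"id: {event_id}")
    # Multi-line payloads become multiple data: lines.
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    # The blank line (double newline) terminates the message.
    return "\n".join(lines) + "\n\n"
```

So `format_sse("Your Uber Arrived", event="notification")` produces `event: notification` and `data: Your Uber Arrived` lines, and the server just keeps writing such messages into the open response body.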
Unlike WebSockets, SSE is a one-way communication channel (server to client only), but this constraint becomes an advantage for many real-time notification scenarios.
The Benefits
Simpler Implementation: SSE is built on standard HTTP with minimal overhead. With a few exceptions, if HTTP works, SSE is likely to work already.
Native Browser Support: No additional libraries needed for most clients
Works with Existing Infrastructure: Compatible with proxies, load balancers, and other HTTP infrastructure
Less Stateful: SSE connections are typically kept open for only 30 seconds to a minute before being dropped, at which point the client reconnects automatically. This makes scaling easy! Servers simply drop connections during scale-down events or deployments.
When to Use It
SSE is ideal for:
Real-time dashboards and monitoring
News feeds and social media updates
Notification systems
Stock tickers and live sports updates
AI systems that stream tokens or responses
A particularly good example is streaming AI-generated text responses where you want users to see tokens as they're generated, rather than waiting for the entire response.
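To make the token-streaming case concrete, here's a rough sketch of how a client could reassemble messages from the chunked stream, similar in spirit to what the browser's `EventSource` does internally (the `parse_sse_stream` helper name is mine; the `event:`/`data:` field names come from the SSE spec):

```python
def parse_sse_stream(chunks):
    """Incrementally parse raw SSE text chunks into (event, data)
    pairs. Chunks may split messages at arbitrary byte boundaries,
    so we buffer until we see the blank-line terminator."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # A blank line (double newline) marks the end of a message.
        while "\n\n" in buffer:
            raw, buffer = buffer.split("\n\n", 1)
            event, data_lines = "message", []
            for line in raw.split("\n"):
                if line.startswith("event:"):
                    event = line[len("event:"):].strip()
                elif line.startswith("data:"):
                    data_lines.append(line[len("data:"):].strip())
            yield event, "\n".join(data_lines)
```

For AI responses, each `data:` payload would carry one token (or a small batch of tokens), and the UI appends them as they arrive.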
3. Circuit Breakers for Resilient Communication
System designs are often most interesting in how they fail, and senior candidates are frequently asked questions like “what happens when this service goes down?” Sometimes the answer is simple: “we fail and retry until it boots back up” — but occasionally that answer introduces new problems for the system!
If your database has gone down cold and you need to boot it back up one instance at a time, a firehose of retries from angry users can keep that first instance from ever starting. You can’t get the first instance up, so you have no hope of getting the whole database back online. You’re stuck!
Enter circuit breakers.
How It Works
Circuit breakers protect your system when network calls to dependencies fail repeatedly:
The circuit breaker monitors for failures when calling external services
When failures exceed a threshold, the circuit "trips" to an open state
While open, requests immediately fail without attempting the actual call
After a timeout period, the circuit transitions to a "half-open" state
A test request determines whether to close the circuit or keep it open
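The state machine above fits in a few dozen lines of Python. This is a minimal illustration rather than a production implementation (real libraries like resilience4j or Polya-style wrappers add sliding windows, metrics, and concurrency controls); the class name, thresholds, and injectable clock are my own choices:

```python
import time


class CircuitBreaker:
    """Tiny circuit breaker: CLOSED -> OPEN after repeated failures,
    OPEN -> HALF_OPEN after a cooldown, then one test call decides."""

    def __init__(self, failure_threshold=3, reset_timeout_s=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout_s = reset_timeout_s
        self.clock = clock  # injectable so tests can fake time
        self.state = "CLOSED"
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.reset_timeout_s:
                self.state = "HALF_OPEN"  # allow a single probe request
            else:
                # Fail fast without attempting the actual call.
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self._on_failure()
            raise
        self._on_success()
        return result

    def _on_failure(self):
        self.failures += 1
        # A failed probe, or too many failures, (re)opens the circuit.
        if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
            self.state = "OPEN"
            self.opened_at = self.clock()

    def _on_success(self):
        self.state = "CLOSED"
        self.failures = 0
```

Callers wrap each dependency call in `breaker.call(...)`; once the breaker trips, the struggling service stops receiving traffic until a probe succeeds.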
This pattern, inspired by electrical circuit breakers, prevents cascading failures across distributed systems and gives failing services time to recover.
The Benefits
Circuit breakers are so sexy to any engineer who has spent time on call.
Fail Fast: Quickly reject requests to failing services instead of waiting for timeouts
Reduce Load: Prevent overwhelming already struggling services with more requests
Self-Healing: Automatically test recovery without full traffic load
Improved User Experience: Provide fast fallbacks instead of hanging UI
System Stability: Prevent failures in one service from affecting the entire system
When to Use It
Use circuit breakers when an interviewer is deep-diving on reliability, failure modes, or disaster recovery. Being able to mention circuit breakers and apply them in useful places is a great way to show off knowledge that otherwise is won at 3:00am battling a hardware failure when the system Just. Won’t. Come. Back. Up.
Some example sites to apply them:
External API calls to third-party services
Database connections and queries
Service-to-service communication in microservices
Resource-intensive operations that might time out
Any network call that could fail or become slow
Conclusion
These three networking tricks—client-side load balancing, Server-Sent Events for push notifications, and circuit breakers for resilient communication—will set you apart from many candidates in system design interviews. By understanding when and how to apply them, you'll demonstrate a deeper understanding of distributed systems and network communication.
To learn more about them, our recently published Networking Essentials guide walks you through the important networking concepts you’ll need in a system design interview. The Realtime Updates pattern guide is a great resource for learning more about tradeoffs and options for systems that involve pushes, notifications, and real-time updates.
Remember that in system design interviews, it's not just about what components you ultimately choose, but also how you explain the tradeoffs and considerations that led to your decisions. Having more tools in your toolbox allows you to show off both breadth and depth that are necessary to land that new gig!
Changelog
People are constantly asking us what’s new with Hello Interview, so we’re going to keep a changelog here to keep you up-to-date. Since our last update:
Platform Updates
Performance updates! (Site feel snappier?)
New Content
We’ve got more coming down the pipe that we’re excited to share in our next update!