How Tech Coding Assessments Are Splintering in 2025
After years of stability, tech coding interviews are undergoing their most significant transformation in over a decade.
While system design and behavioral interviews have remained relatively steady and predictable across the industry, we're witnessing a growing chasm between Big Tech's traditional LeetCode-style interviews and a new wave of practical, project-based coding assessments pioneered by smaller companies.
The Entrenched Big Tech Approach
FAANG companies remain stubbornly committed to their existing formats, making only minor adjustments. The inertia is enormous: these companies have built entire recruiting machines around their current processes, backed by years of calibration data, and they're reluctant to make dramatic changes without compelling evidence that alternatives would work better at scale.
The truth is that Big Tech interviews have always been a game—but, importantly, one where the rules are well known and public. The surprise in 2025 isn't that this game exists; it's that the rules have remained so static while the industry around them has transformed completely.
Most FAANG engineers readily acknowledge this disconnect. Few of them have implemented a merge sort from scratch since their own interviews, yet these algorithmic challenges serve a deeper purpose: LeetCode interviews screen for intelligence, coding ability, and the tenacity to study and put in the necessary practice. Engineering leadership can cite numerous cases where they've had to terminate employees who simply couldn't code. The process may feel like hazing, but it's a form of assessment that works just well enough at identifying these core competencies to justify its continued existence.
These companies have calculated that willingness to grind through arbitrary algorithmic challenges correlates just enough with the on-the-job characteristics they value: persistence, the ability to learn quickly, and comfort with abstract problem-solving. The filter is high precision but low recall, and that's a tradeoff they're happy to make. As long as those who pass are high quality, they'll live with the false negatives: the great engineers screened out by an artificial process. At their scale, ensuring consistently high-quality hires trumps capturing every qualified candidate.
The Startup Revolution
While Big Tech doubles down on tradition, a quiet revolution is happening elsewhere. Respected mid-sized companies including Stripe, Coinbase, OpenAI, and Airbnb have moved toward more realistic, open-ended coding challenges that better reflect actual work.
Rather than solving LeetCode-style puzzles, candidates tackle problems like designing a query engine, implementing a key-value store, or building an in-memory database that supports transactions. These aren't questions you'll find on LeetCode; they're bite-sized coding tests that directly mirror the engineering challenges you'd face on the job.
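To make that concrete, here's a minimal sketch of what one such exercise might ask for: an in-memory key-value store with nested transactions. The class name and API below are illustrative assumptions, not any specific company's actual prompt.

```python
class TransactionalKVStore:
    """Tiny in-memory key-value store with nested begin/commit/rollback."""

    def __init__(self):
        self._committed = {}   # base state
        self._tx_stack = []    # one overlay dict per open transaction

    def _current(self):
        # Writes go to the innermost open transaction, or the base state.
        return self._tx_stack[-1] if self._tx_stack else self._committed

    def set(self, key, value):
        self._current()[key] = value

    def get(self, key, default=None):
        # Reads check open transactions newest-first, then the base state.
        for layer in reversed(self._tx_stack):
            if key in layer:
                return layer[key]
        return self._committed.get(key, default)

    def begin(self):
        self._tx_stack.append({})

    def commit(self):
        if not self._tx_stack:
            raise RuntimeError("no open transaction")
        changes = self._tx_stack.pop()
        # Fold changes into the enclosing transaction (or the base state).
        self._current().update(changes)

    def rollback(self):
        if not self._tx_stack:
            raise RuntimeError("no open transaction")
        self._tx_stack.pop()


store = TransactionalKVStore()
store.set("a", 1)
store.begin()
store.set("a", 2)
print(store.get("a"))   # 2 (inside the transaction)
store.rollback()
print(store.get("a"))   # 1 (rollback discarded the change)
```

The overlay-per-transaction design keeps rollback constant-time at the cost of slightly slower reads under deep nesting; in an interview of this style, the follow-up discussion tends to probe exactly those kinds of tradeoffs.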
Early-stage startups have pushed even further, often replacing traditional coding exercises with take-home projects that explicitly allow the use of AI tools. Thought leaders across the industry have been vocal advocates for this shift, and early adopters report that it gives a better read on candidates' real-world problem-solving abilities.
Why this shift matters: This isn't just a minor variation in interview style—it's a fundamentally different philosophy of evaluation. The traditional process tests your ability to memorize and regurgitate algorithmic patterns under pressure. The new approach tests your ability to build working software with the actual tools you'd use on the job. One process is a game; the other is a simulation.
The AI Cheating Problem
The elephant in the room pushing this divergence? AI-driven interview fraud.
The AI cheating problem is more prevalent than most companies want to admit. A good friend of mine who conducts interviews at Amazon says that 4 of the last 7 junior-to-mid-level candidates he interviewed were provably cheating with AI tools. A founder of a small startup corroborates this: it's rampant. And it isn't just happening at big companies.
This creates an impossible situation for companies using traditional formats:
1. They can increase monitoring, creating a hostile, high-pressure environment that drives away top candidates.
2. They can make problems increasingly obscure and difficult, further divorcing the interview from actual work.
3. They can embrace the reality that AI is now part of an engineer's toolkit.
Big Tech is largely pursuing options 1 and 2, while startups are embracing option 3. This explains why we're seeing FAANG companies return to mandatory onsite interviews while simultaneously increasing problem difficulty (and expect more to follow); both moves are attempts to combat AI-assisted cheating.
Meanwhile, forward-thinking companies are explicitly incorporating AI into their assessment process. The emerging philosophy is straightforward: If engineers are going to use ChatGPT on the job anyway—and let's be honest, everyone does—then companies should evaluate how effectively candidates use it, not pretend it doesn't exist.
The Inevitable Convergence
This divergence seems unsustainable in the age of increasingly capable AI tools. As models become better at solving the exact algorithmic puzzles used in traditional interviews, the signal value of these assessments will inevitably diminish.
No engineer in the future will need to manually code algorithms like parenthesis balancing or binary tree traversals—they'll prompt an AI to generate that code. The companies pioneering more realistic, project-based assessments are adapting to the reality of how engineering work will actually be done moving forward.
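For a sense of scale, here is the entire parenthesis-balancing problem solved with the standard stack-based check; it's exactly the kind of snippet a current model produces on the first prompt.

```python
def is_balanced(s: str) -> bool:
    """Return True if every bracket in s is closed in the right order."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack


print(is_balanced("([]{})"))  # True
print(is_balanced("([)]"))    # False
```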
Will FAANG eventually shift their approach? It's hard to say. These companies have a massive investment in their current processes, and the institutional inertia is strong. But as AI continues to get better at solving algorithm puzzles, they'll face increasing pressure to evolve or watch the signal from these assessments erode entirely.
What This Means For Your Interview Prep
If you're prepping for interviews in 2025, this splintering creates both challenges and opportunities:
Know which game you're playing. Research your target companies' specific interview formats; don't assume all tech interviews follow the same pattern anymore. We maintain a growing list of over 2,000 real interview questions from recent interviews across the industry that's worth consulting.
Consider the startup advantage. If you're stronger at building real systems than solving abstract puzzles, smaller companies' interview formats may give you a better chance to showcase your actual engineering abilities. Don't just target the biggest names.
Prepare for multiple formats. Until the industry settles on a new standard, the safest approach is developing both traditional algorithm skills and practical project implementation abilities.
Embrace AI as a tool, not a crutch. The companies leading this evolution aren't anti-AI—they're looking for engineers who can effectively leverage these tools while understanding the underlying principles. Practice using AI assistants as part of your workflow, not as a replacement for core understanding.
The Future Is Already Here, Just Unevenly Distributed
What's most fascinating about this moment is watching the future of tech interviewing emerge in real time. For decades, interview practices pioneered by Google and other tech giants trickled down to smaller companies eager to emulate their success. Now, innovation in technical evaluation is bubbling up from smaller, more agile organizations, with Big Tech watching carefully from the sidelines.
The traditional algorithm-focused interview isn't disappearing overnight—it's too deeply entrenched in hiring infrastructure at major companies. But the writing is on the wall. As AI continues to reshape how actual engineering work happens, evaluations that better reflect this new reality will inevitably gain traction.
Changelog
Since our last update:
Platform Updates
Problem Breakdowns Overview page: See all our breakdowns of common system design interview questions in one place. Read the breakdown or try the problem yourself with our Guided Practice.
New Content
We’ve got more coming down the pipe that we’re excited to share in our next update!
Is there a good way to prepare for the more open-ended coding/design hybrid questions asked by companies like Stripe?