I find this article more harmful than useful. Developers read articles like this and start to think that optimization is unnecessary: they can write very straightforward code and the hardware will do the magic.
In reality, we have Jira, where a single page may download 27(!) megabytes of JS. Is that because the developers assumed everyone already has a 10 Gbps network?
We have the Facebook iOS app, which takes 1.2 GB of the phone's storage (350 MB for the app + 850 MB of caches). Is that because storage is cheap and fast?
I think a developer should always care about performance and try to consume fewer resources, because in the end these terabytes of memory, hundreds of CPU cores, and gigabits of throughput cost money. If developers think about the software and not only the hardware, we will live in a different world, where services and apps are fast and small and do not require 48 CPU cores to do their job.
There's nothing here that says "don't care about performance". The article is saying "the parameters you've been designing under are no longer current". Horizontal scaling isn't inherently less wasteful than vertical scaling (in many cases it's actually more).
If you read modern hardware numbers and take away that you should write garbage code, that's ... not our recommendation.
It is all true, but how does it map to costs? The operational costs of such solutions might be rather high for many businesses, imo.
True! Cost is definitely a realistic constraint. In most FAANG interviews, at least, it's good to acknowledge it, but it's not usually a meaningful constraint.
"App servers: up to 20k concurrent connections" - isn't that too low? 10M connections on a single machine were possible back in 2015: https://migratorydata.com/blog/migratorydata-solved-the-c10m-problem/
Nice, I've updated it to 100k+. Millions might be achievable, but only under fairly contrived conditions (most notably the light message volume in that article). If you have any references to production systems with this many concurrent connections, I'd be super interested to see them!
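For a sense of why six-figure connection counts are plausible on one box, here's a minimal sketch (mine, not from either article) of a one-goroutine-per-connection echo server in Go. The port and the per-connection cost figures in the comments are illustrative assumptions, not measurements:

```go
// Minimal sketch of a high-connection-count TCP echo server in Go.
// Assumption (not from the thread): an idle connection costs roughly
// a goroutine stack (a few KB) plus kernel socket buffers, so
// hundreds of thousands of mostly-idle connections can fit in a few
// GB of RAM. You'd also need to raise the file-descriptor limit
// (ulimit -n) before expecting six-figure connection counts.
package main

import (
	"bufio"
	"log"
	"net"
)

func handle(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		// Block until the client sends a line; a connection parked
		// here consumes almost no CPU.
		line, err := r.ReadBytes('\n')
		if err != nil {
			return
		}
		// Echo the line back to the client.
		if _, err := conn.Write(line); err != nil {
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		// One goroutine per connection; the Go runtime multiplexes
		// goroutines onto epoll/kqueue under the hood, so raw
		// connection count is rarely the first bottleneck.
		go handle(conn)
	}
}
```

The point of the sketch: what you actually hit first in production is memory, file-descriptor limits, and message throughput, not the connection count itself, which is why idle-heavy benchmarks like the c10m one reach numbers that busy production systems don't.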