We see stories of hyper-growth startups using bleeding-edge infrastructure all the time, so I decided to share my insights on what you actually need to get to 100,000 users (and beyond) on your app without worrying about downtime. You don’t need 50 different services to keep your app up; a handful are enough to get started and reach a good number of users.
Bear in mind that past a certain point, somewhere north of 20 million users, it makes more economic sense to have control over your own infrastructure, and keeping everything running sometimes does require over 50 services (encompassing logging, monitoring, backend microservices, micro-frontends, storage solutions, and whatnot). But most startups I’ve worked with and built from the ground up serve a good number of users without downtime, with minimal configuration and a bill that would barely be noticeable on the credit card statement.
With a plethora of resources and platforms at your disposal, building an app that can scale as much as required is pretty easy today. Services like AWS Fargate and Auto Scaling, Google Cloud’s App Engine, Firebase, and a host of auto-scaling offerings from other cloud platforms share one selling point: you focus on building and growing your app, and leave the hosting and deployment to them.
Let’s break down a few simple things you can do in various aspects of your app.
The above techniques not only work for your first 100,000 users but should also carry you to millions if done right, provided you don’t mind a large bill once your app gets that big. At a scale larger than that, though, it just makes more sense to have a dedicated infrastructure layer that you provision and manage yourself. Dropbox is a good example of this: they used AWS S3 for cloud file storage, but at the scale they’re at today, they are moving parts of their infrastructure in-house.
When starting a project that you want a lot of people to use, build it from the ground up with that assumption. A few loose ends due to time constraints make sense, but a few good decisions at the beginning can make a large difference later, sparing you bottlenecks that take weeks to resolve because of technical debt or because scaling up was never considered.
Choices like going with a NoSQL database when you don’t have much relational data, configuring auto-scaling groups that start with a single server and scale up as needed, or, better yet, ditching all that and letting a Platform- or Backend-as-a-Service provider do it for you are some of the best options you have. This does open you up to downtime when major providers like AWS, GCP, or Azure go down, but they offer SLAs of 99.99% and higher, so such outages are very rare.
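To make the "start with one server and scale up" idea concrete, here is a minimal sketch of the kind of target-tracking rule an auto-scaling group applies. The function name, the 60% CPU target, and the min/max bounds are my own hypothetical choices, not any provider's actual API:

```python
# Hypothetical sketch of an auto-scaling group's target-tracking rule:
# start at one server, add capacity as average load per server rises.
import math


def desired_capacity(current_servers: int, avg_cpu_pct: float,
                     target_cpu_pct: float = 60.0,
                     min_servers: int = 1, max_servers: int = 10) -> int:
    """Size the fleet so average CPU lands near the target,
    clamped to the group's configured min/max bounds."""
    if avg_cpu_pct <= 0:
        return min_servers
    needed = math.ceil(current_servers * avg_cpu_pct / target_cpu_pct)
    return max(min_servers, min(max_servers, needed))


# One server coasting at 30% CPU: stay at one.
print(desired_capacity(1, 30.0))   # 1
# One server pinned at 95% CPU: scale out to two.
print(desired_capacity(1, 95.0))   # 2
# Four servers averaging 90% CPU: scale out to six.
print(desired_capacity(4, 90.0))   # 6
```

The point is that the rule is symmetric: when traffic drops, the same formula shrinks the fleet back toward one server, which is why this setup costs almost nothing while you are small.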
The more the components of your application depend on each other, the more difficult they are going to be to scale. This isn’t as important on the frontend as it is on the backend, although it still matters on the frontend once your application grows larger, at which point we have the micro-frontend or proxy deployment patterns at our disposal. The last thing you want is to get past 10,000 users and see constant drops in the requests being sent to your backend or database because the backend can’t handle the load.
As highlighted above, to achieve scalability you need to keep things decoupled: have each backend component be responsible for doing one thing, and one thing alone, so that each of those functions can be deployed separately and scaled horizontally as requests come in. There will be parts of your backend that receive far more requests than others, and scaling the entire backend because of one hot path (i.e., scaling vertically) wouldn’t make sense.
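The horizontal-scaling payoff of that decoupling can be sketched in a few lines: each service is sized by its own traffic, so only the hot one gets extra replicas. The service names, traffic numbers, and per-replica capacity below are all invented for illustration:

```python
# Hypothetical sketch: with decoupled services, each one is scaled
# independently; only the overloaded service gets more replicas.
import math

CAPACITY_PER_REPLICA = 500  # assumed requests/sec one replica can handle


def replicas_needed(requests_per_sec: dict) -> dict:
    """Size each service by its own traffic, never below one replica."""
    return {service: max(1, math.ceil(rps / CAPACITY_PER_REPLICA))
            for service, rps in requests_per_sec.items()}


traffic = {"auth": 120, "feed": 2400, "billing": 40}
print(replicas_needed(traffic))
# {'auth': 1, 'feed': 5, 'billing': 1}
```

Had these three functions lived in one monolith, the feed traffic alone would force you to scale (and pay for) the whole thing five times over.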
The tech stack I usually go with for most of the projects that I start is the following: