I've been building web applications with JavaScript for more than 10 years now, and at some point you realise that writing the code is only half the battle. The other half is making sure the thing doesn't fall over when actual people start using it.

At work we deal with applications that need to handle traffic spikes — not millions of users, but enough that a badly configured server will embarrass you on a Friday afternoon. So over the years I've ended up working quite a lot with cloud services, mostly AWS, and here is what I think works and what doesn't when you combine JavaScript with the cloud.

The JavaScript side

I'm not going to sit here and tell you JavaScript is the best language in the world, but it is the one I keep coming back to. The fact that you can use it on the frontend with React or Vue, and then write your backend with Node.js, means you don't have to context-switch between languages all the time. That alone saves a lot of mental energy.

The ecosystem is massive though, and that's both a blessing and a curse. Need a library for anything? There are 47 of them on npm. Choosing one is the hard part. I tend to stick with the boring choices — React for the frontend, Express or Fastify for the backend — because I've learned the hard way that fancy libraries tend to break at the worst possible moment.

The cloud side

We use AWS at work, so most of my experience is there. I've tried Azure and GCP as well, and honestly they all do the same stuff, just with different names and slightly worse documentation.

The things I use the most:

  • EC2 for when you need an actual server and you want control over it. Yes, it's old school, but sometimes you just need to install stuff and configure it your way.
  • S3 for storing files. It's cheap, it works, and I've never had a problem with it.
  • Lambda for small jobs that don't need a server running 24/7. I use it mostly for image processing and data transformations.
  • RDS when you need a proper database and you don't want to manage it yourself. PostgreSQL on RDS has saved me more weekends than I care to admit.

The auto-scaling stuff is great in theory, but in practice I've found that it's not magic. You still need to set your thresholds properly, and if your application has memory leaks (it probably does), adding more instances is just going to make your AWS bill bigger without actually fixing anything.
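
Before scaling out, it's worth checking whether the process is actually leaking. Here's a minimal sketch using nothing but Node built-ins; the interval and threshold are illustrative numbers, and `startHeapWatcher` is just a name I made up:

```javascript
// Periodically sample heap usage so a slow leak shows up in the logs
// long before the instance falls over. Threshold and interval are illustrative.
function startHeapWatcher({ intervalMs = 60_000, warnBytes = 512 * 1024 * 1024 } = {}) {
  const samples = [];
  const timer = setInterval(() => {
    const { heapUsed } = process.memoryUsage();
    samples.push(heapUsed);
    if (heapUsed > warnBytes) {
      console.warn(`heapUsed ${(heapUsed / 1024 / 1024).toFixed(1)} MB over threshold`);
    }
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for the watcher
  return { samples, stop: () => clearInterval(timer) };
}
```

If the samples trend steadily upward under constant load, you have a leak, and auto-scaling will only spread it across more instances.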

Things I've learned the hard way

1. Caching matters more than you think. We had an application that was hammering the database on every request. Added Redis for caching and the response time went from 800ms to 50ms. I should have done that from the start.
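
What we used is plain cache-aside: check Redis, fall back to the database, write the result back with a TTL. A sketch, with names of my own choosing; `cache` can be anything with async `get`/`set` (the node-redis v4 client has this shape, including the `{ EX }` option for TTLs):

```javascript
// Cache-aside: try the cache, fall back to the real lookup, store the
// result with a TTL. `cache` just needs async get/set, e.g. a redis client.
async function cached(cache, key, ttlSeconds, loadFn) {
  const hit = await cache.get(key);
  if (hit !== null && hit !== undefined) return JSON.parse(hit);

  const value = await loadFn(); // the expensive DB query
  await cache.set(key, JSON.stringify(value), { EX: ttlSeconds });
  return value;
}
```

Usage is one line: `await cached(redis, 'user:' + id, 300, () => db.getUser(id))`. The TTL is the knob to think about; too long and you serve stale data, too short and you're back to hammering the database.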

2. CDN is not optional. If you have users in different countries, CloudFront (or whatever CDN you prefer) makes a huge difference. Static assets should be served from edge locations, not from your server in us-east-1.
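
The server's half of that bargain is mostly headers: CloudFront only caches what your responses say is cacheable. A sketch of the rule of thumb I follow, with example paths; fingerprinted assets get cached forever, HTML never does:

```javascript
// CDN-friendly headers: fingerprinted static assets are immutable,
// HTML must always be revalidated. Paths and TTLs here are examples.
function cacheControlFor(path) {
  if (/\.(js|css|png|jpg|svg|woff2)$/.test(path)) {
    return 'public, max-age=31536000, immutable'; // one year; safe if filenames are hashed
  }
  if (path.endsWith('.html') || path === '/') {
    return 'no-cache'; // always revalidate with the origin
  }
  return 'public, max-age=300'; // modest default for everything else
}
```

Set this on the response (or as the `CacheControl` metadata when you upload to S3) and let CloudFront handle the rest.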

3. Serverless is not the answer to everything. I went through a phase where I tried to put everything in Lambda functions. Bad idea. Cold starts will kill your response time, and debugging serverless is a nightmare. Use it for the right things — event-driven tasks, small API endpoints — not for your entire application.

4. Monitor everything from day one. I know, nobody wants to set up monitoring when they're rushing to ship features. But finding out your application has been running at 90% CPU for three days because of a customer complaint is not a good feeling. CloudWatch is basic but it works. Set up alarms early.
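
An alarm is mostly just a declaration of thresholds. Here's a sketch of the parameters for a basic CPU alarm; these are the fields you'd hand to `PutMetricAlarmCommand` from `@aws-sdk/client-cloudwatch`, with example names and numbers:

```javascript
// Alarm definition: CPU above 80% for three consecutive 5-minute periods.
// Alarm name, topic ARN and thresholds are examples, not recommendations.
function cpuAlarmParams(instanceId) {
  return {
    AlarmName: `high-cpu-${instanceId}`,
    Namespace: 'AWS/EC2',
    MetricName: 'CPUUtilization',
    Dimensions: [{ Name: 'InstanceId', Value: instanceId }],
    Statistic: 'Average',
    Period: 300,          // seconds per datapoint
    EvaluationPeriods: 3, // how many periods must breach before alarming
    Threshold: 80,
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: ['arn:aws:sns:eu-west-1:123456789012:ops-alerts'], // example SNS topic
  };
}
```

From there it's one `client.send(new PutMetricAlarmCommand(cpuAlarmParams(id)))` call, or the same values clicked together in the console.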

So what actually works?

For me, the setup that works best is a Node.js backend on EC2 (or ECS if you're feeling fancy), PostgreSQL on RDS, Redis for caching, S3 for file storage, and CloudFront in front of everything. It's not the most exciting architecture, but it works, and when something breaks at 2am I can actually figure out what's going on.

JavaScript and cloud services are a solid combination, but the technology is only part of it. The rest is about making choices that you can actually maintain and debug when things go wrong. And things will go wrong, trust me.