0
votes

I have deployed a MEAN stack app (not built on any MEAN boilerplate framework) on Google Compute Engine. Currently it runs on a single n1-standard-1 virtual machine (1 vCPU + 3.75 GB memory).

Inside of it, I'm running 4 docker containers. One each for:

  • reverse proxy - listens on ports 80 (http) & 443 (https), then proxies requests to the locally running Node instance (port 3000)
  • web application (nodejs + expressjs)
  • mongodb
  • redis
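For reference, a single-VM setup like the one above can be reproduced with plain `docker run` commands roughly as follows. This is only a sketch: the network, volume, image, and environment-variable names are illustrative assumptions, not taken from the original post.

```shell
# Create a user-defined network so containers can reach each other by name
docker network create mean-net

# MongoDB and Redis, with named volumes so data survives container restarts
docker run -d --name mongo --network mean-net -v mongo-data:/data/db mongo
docker run -d --name redis --network mean-net -v redis-data:/data redis

# The Node/Express app, reaching mongo and redis by container name
# (my-mean-app and both environment variables are hypothetical)
docker run -d --name app --network mean-net \
  -e MONGO_URL=mongodb://mongo:27017/mydb \
  -e REDIS_HOST=redis \
  my-mean-app

# Nginx as the reverse proxy - the only container publishing host ports
docker run -d --name proxy --network mean-net \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  nginx
```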

I was using this as my dev environment, but now the site is in production and it's performing pretty slowly.

How do I refactor this to a better scalable architecture?

My solutions:

  • deploy at least 4 VMs with the above config, each running a single Docker container: reverse proxy (1 VM), mongodb (1 VM), redis (1 VM), app (multiple VMs). Flexible, but a bit costly?
  • deploy a single bigger VM (more CPU & RAM). A cheaper alternative to the above?
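In gcloud terms the two options look something like the following sketch (instance names, counts, and the zone are placeholders):

```shell
# Option 1: split the services across several small VMs
gcloud compute instances create proxy-1 db-1 cache-1 app-1 app-2 \
    --machine-type=n1-standard-1 --zone=us-central1-b

# Option 2: scale the existing VM vertically
# (the instance must be stopped before its machine type can change)
gcloud compute instances stop mean-vm --zone=us-central1-b
gcloud compute instances set-machine-type mean-vm \
    --machine-type=n1-standard-4 --zone=us-central1-b
gcloud compute instances start mean-vm --zone=us-central1-b
```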

Ideally, I want to start small (at low cost) and scale up as requirements grow, but the architecture has to be flexible enough to scale easily.

May I have some good solutions?


1 Answer

1
votes

If your environment is already containerized and you're willing to keep it that way, I'd suggest using Google Container Engine.

Google Container Engine is a powerful cluster manager and orchestration system for running your Docker containers. Container Engine schedules your containers into the cluster and manages them automatically based on requirements you define (such as CPU and memory). It's built on top of the open source Kubernetes.

You can spin up a small cluster of 3 nodes (single-core nodes will do) and deploy your containers as pods. You can probably get rid of the reverse proxy, since Kubernetes can do that for you through the notion of a 'service'.
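A minimal sketch of that setup, assuming placeholder cluster, deployment, and image names (`mean-cluster`, `my-mean-app`, etc. are not from the answer):

```shell
# Create a small 3-node cluster of single-core machines
gcloud container clusters create mean-cluster \
    --num-nodes=3 --machine-type=n1-standard-1 --zone=us-central1-b

# Run the app, MongoDB, and Redis as deployments (pods managed for you)
kubectl create deployment app --image=my-mean-app
kubectl create deployment mongo --image=mongo
kubectl create deployment redis --image=redis

# Expose the app through a Service instead of your own reverse proxy;
# type=LoadBalancer provisions a Google load balancer with a public IP
kubectl expose deployment app --type=LoadBalancer --port=80 --target-port=3000

# MongoDB and Redis only need cluster-internal Services
kubectl expose deployment mongo --port=27017
kubectl expose deployment redis --port=6379
```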

When you need more resources, you can simply resize your cluster by adding more nodes (instances) to it.
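Resizing is a two-step affair in practice: grow the node pool, then run more app replicas to use the new capacity. A sketch, reusing the hypothetical names from above:

```shell
# Add VMs to the cluster (this changes node count, not per-pod resources)
gcloud container clusters resize mean-cluster \
    --num-nodes=5 --zone=us-central1-b

# ...then scale the app deployment out across the new nodes
kubectl scale deployment app --replicas=4
```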

Google's HTTP(S) load balancer can terminate SSL for you and spread the load over multiple nodes.
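In Kubernetes terms, that HTTP(S) load balancer is provisioned by an Ingress. One way to sketch it from the command line (the secret name, certificate files, hostname, and Service name are all placeholders, and `kubectl create ingress` is a newer kubectl subcommand - a YAML manifest works on any version):

```shell
# Store the certificate and key as a TLS secret
kubectl create secret tls web-tls --cert=server.crt --key=server.key

# An Ingress backed by Google's HTTP(S) load balancer terminates TLS
# and routes traffic to the app Service created earlier
kubectl create ingress web --class=gce \
    --rule="example.com/*=app:80,tls=web-tls"
```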