0 votes
3 views
in AWS by (19.1k points)

My organization's website is a Django app running on front end web servers + a few background processing servers in AWS.

We're currently using Ansible for both:

  • system configuration (from a bare OS image)
  • frequent manually-triggered code deployments.

The same Ansible playbook can provision either a local Vagrant dev VM or a production EC2 instance from scratch.

We now want to implement autoscaling in EC2, and that requires some changes towards a "treat servers as cattle, not pets" philosophy.

The first prerequisite was to move from a statically managed Ansible inventory to a dynamic, EC2 API-based one; that's done.
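For reference, Ansible's dynamic-inventory contract is just an executable that prints inventory JSON when called with `--list`. Here's a minimal sketch of that protocol with stubbed instance data; a real version would query the EC2 API (e.g. `DescribeInstances`) instead of the hypothetical `fetch_instances` stub below:

```python
#!/usr/bin/env python
# Minimal Ansible dynamic-inventory sketch: print group/host JSON on --list.
import json
import sys

def fetch_instances():
    # Stub standing in for an EC2 API call -- sample data only.
    return [
        {"ip": "10.0.1.10", "role": "webserver"},
        {"ip": "10.0.1.11", "role": "webserver"},
        {"ip": "10.0.2.10", "role": "worker"},
    ]

def build_inventory(instances):
    # Group hosts by role; _meta.hostvars avoids per-host --host calls.
    inventory = {"_meta": {"hostvars": {}}}
    for inst in instances:
        group = inventory.setdefault(inst["role"], {"hosts": []})
        group["hosts"].append(inst["ip"])
        inventory["_meta"]["hostvars"][inst["ip"]] = {"ansible_host": inst["ip"]}
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(fetch_instances())))
    else:
        print(json.dumps({}))
```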

The next big question is how to deploy in this new world where throwaway instances come up and down in the middle of the night. The options I can think of are:

  1. Bake a new fully-deployed AMI for each deploy, create a new AS launch config and update the AS group with it. Sounds very, very cumbersome, but also very reliable because of the clean-slate approach, and it will ensure that any system changes the code requires will be there. Also, no additional steps are needed at instance boot, so instances are up and running more quickly.
  2. Use a base AMI that doesn't change very often, automatically pull the latest app code from git on bootup, start the web server. Once it's up, just do manual deploys as needed, like before. But what if the new code depends on a change in the system config (new package, permissions, etc.)? It looks like you then have to start tracking dependencies between code versions and system/AMI versions, whereas the "just do a full Ansible run" approach was more integrated and more reliable. Is it more than just a potential headache in practice?
  3. Use Docker? I have a strong hunch it can be useful, but I'm not sure yet how it would fit our picture. We're a relatively self-contained Django front-end app with just RabbitMQ + Memcache as services, which we're never going to run on the same host anyway. So what benefits are there in building a Docker image using Ansible that contains system packages + latest code, rather than having Ansible just do it directly on an EC2 instance?
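The version-dependency headache in option 2 can at least be made explicit with a boot-time guard: the code ships a minimum system-config version, the AMI records the version it was baked with, and the app refuses to start on a stale image. A minimal sketch, with hypothetical names and version numbers:

```python
# Hypothetical boot-time compatibility check for option 2: the deployed code
# declares the system-config version it needs; the AMI bakes in the version
# it was built with (e.g. written to a file by the Ansible run).

def system_config_is_compatible(baked_version, required_version):
    """Return True if the AMI's baked config is new enough for this code."""
    return baked_version >= required_version

def check_boot(baked_version, required_version):
    """Raise at startup instead of running new code on a stale AMI."""
    if not system_config_is_compatible(baked_version, required_version):
        raise RuntimeError(
            "AMI config v%d is older than the v%d this code requires; "
            "bake a new AMI before deploying." % (baked_version, required_version)
        )
```

This doesn't remove the coupling between code and AMI versions, but it turns a silent runtime failure into an explicit error at boot.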

How do you do it? Any insights / best practices?

1 Answer

0 votes
by (44.4k points)

I would just go with pre-baking the AMIs with Ansible and then use CloudFormation to deploy your stacks with Auto Scaling, monitoring, and your pre-baked AMIs. The advantage is that with most of the application stack pre-baked into the AMI, scaling up will happen faster.
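To make that concrete, the CloudFormation template only needs a launch configuration pointing at the freshly baked AMI and an Auto Scaling group referencing it; each deploy is then a stack update with the new AMI ID. A minimal sketch that builds such a template as JSON (resource names, instance type, and sizes are placeholders):

```python
import json

def build_template(min_size=2, max_size=6):
    """Return a CloudFormation template (as a dict) wiring a pre-baked AMI
    into a launch configuration and Auto Scaling group."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Parameters": {
            # The AMI baked by the Ansible run is passed in per deploy.
            "BakedAmiId": {"Type": "AWS::EC2::Image::Id"},
        },
        "Resources": {
            "LaunchConfig": {
                "Type": "AWS::AutoScaling::LaunchConfiguration",
                "Properties": {
                    "ImageId": {"Ref": "BakedAmiId"},
                    "InstanceType": "t3.small",  # placeholder
                },
            },
            "WebAutoScalingGroup": {
                "Type": "AWS::AutoScaling::AutoScalingGroup",
                "Properties": {
                    "LaunchConfigurationName": {"Ref": "LaunchConfig"},
                    "MinSize": str(min_size),
                    "MaxSize": str(max_size),
                    "AvailabilityZones": {"Fn::GetAZs": ""},
                },
            },
        },
    }

print(json.dumps(build_template(), indent=2))
```

Deploying a new version then means baking the AMI, then calling `UpdateStack` with the new `BakedAmiId` parameter and replacing instances.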

Docker is another approach; however, in my opinion, it adds a layer to your application that you might not need if you're already on EC2. Docker can be really useful if you want to containerize multiple applications on a single server: maybe you have spare capacity on a server, and Docker lets you run another application there without interfering with the existing ones.

Having said that, some people find Docker helpful not as a way to optimize resources on a single server, but because it lets you pre-bake your applications into containers. To deploy a new version, all you have to do is copy/replicate those containers across your servers, stop the old container versions, and start the new ones.
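That stop-old/start-new rollover is just a handful of Docker CLI commands per host. A sketch that generates the command sequence (image, tag, container name, and port mapping are hypothetical; a real deploy would run these over SSH/Ansible and health-check before removing the old container):

```python
def rollover_commands(image, tag, name):
    """Return the docker CLI commands to replace a running container
    with a newly pulled image version."""
    return [
        "docker pull %s:%s" % (image, tag),
        "docker stop %s || true" % name,   # tolerate first-ever deploy
        "docker rm %s || true" % name,
        "docker run -d --name %s -p 8000:8000 %s:%s" % (name, image, tag),
    ]

for cmd in rollover_commands("myorg/django-app", "v42", "web"):
    print(cmd)
```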
