1. Amazon ECS container instances are added indirectly: it is the job of the Amazon ECS container agent running on each instance to register itself with the cluster you created and named (see the concepts and lifecycle documentation for details). To achieve this, follow the steps in Launching an Amazon ECS Container Instance, whether manually or via automation, and pay particular attention to step 10:
By default, your container instance launches into your default cluster. If you wish to launch into your own cluster rather than the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
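To make explicit what that one-liner does: the ECS container agent reads /etc/ecs/ecs.config at startup, and the ECS_CLUSTER key tells it which cluster to register with (it defaults to "default" when unset). The sketch below mimics this locally, writing to a temporary file instead of /etc/ecs/ecs.config so it can run outside an EC2 instance; the path substitution is purely for illustration.

```shell
#!/bin/sh
# Stand-in for /etc/ecs/ecs.config so this sketch runs anywhere;
# on a real container instance the agent reads /etc/ecs/ecs.config.
ECS_CONFIG="$(mktemp)"

# Same append the user-data script performs:
echo "ECS_CLUSTER=your_cluster_name" >> "$ECS_CONFIG"

# Show the resulting agent configuration line:
grep ECS_CLUSTER "$ECS_CONFIG"
```

Once the agent has started on a real instance, you could confirm the registration with the AWS CLI, e.g. `aws ecs list-container-instances --cluster your_cluster_name`.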
2. Strictly speaking, you only need a single container instance for ECS to work, because the cluster itself is managed by AWS on your behalf. This would not be sufficient for high availability scenarios, though:
- Because the container hosts are regular Amazon EC2 instances, you should follow AWS best practices and spread them over two or three Availability Zones (AZs) so that the (rare) outage of an entire AZ does not take down your cluster; ECS can then migrate your containers to host instances in another AZ, provided your cluster has sufficient spare capacity.
- Many advanced cluster technologies that facilitate containers have their own service orchestration layers, which usually require an odd number of (service) instances, i.e. three or more, for a high availability setup. You can read more about this in the section Optimum Cluster Size within Administration, for instance (see also Running CoreOS with AWS EC2 Container Service).
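The odd-number recommendation comes from majority-based (quorum) consensus, as used by coordination layers such as etcd: a cluster of N nodes stays available only while a majority of floor(N/2)+1 nodes is up, so it tolerates floor((N-1)/2) failures. A quick sketch of the arithmetic shows why even node counts buy no extra fault tolerance:

```shell
#!/bin/sh
# Quorum size and tolerated failures for small cluster sizes.
# Note that 4 nodes tolerate no more failures than 3 do.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "nodes=$n quorum=$quorum tolerated_failures=$tolerated"
done
```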
3. This refers back to the high availability and service orchestration topics already mentioned in 2.; more precisely, you face the problem of service discovery, which becomes even more prevalent when using container technologies in general and micro-services in particular.