in AWS by (12.9k points)

I have a job-processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:

$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.zip ec2.sh
$ openssl enc -base64 -in ec2.zip -out ec2.b64

Many instances are launched...

$ ec2run ami-a83fabc0 -n 20 -f ec2.b64

Each instance decodes and decrypts the payload using the 'secret-password', which is hard-coded into an init script (sketched below). Although this does work, I have two issues with my approach:

1. 'zip -P' is not very secure
2. The password is hard-coded in the instance (it's always 'secret-password')
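
For reference, a minimal sketch of the instance-side init script implied above (the metadata URL is the standard EC2 user-data endpoint; file names mirror the example, and the script is an assumption, not the exact one in use):

#!/bin/sh
# Fetch the base64-encoded payload from the instance metadata service
curl -s http://169.254.169.254/latest/user-data -o ec2.b64
# Decode the payload and extract the credentials script with the hard-coded password
openssl enc -base64 -d -in ec2.b64 -out ec2.zip
unzip -P 'secret-password' ec2.zip
# Load the credentials into the environment
. ./ec2.sh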

Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt them is an approach I'm considering now, but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super-obvious part of the API?
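
For concreteness, a minimal sketch of the gpg variant under consideration (the key name 'ec2-worker' is hypothetical, and the matching private key is assumed to be baked into the AMI). On the launching machine:

$ gpg --recipient ec2-worker --output ec2.sh.gpg --encrypt ec2.sh

And in the instance's init script:

$ gpg --output ec2.sh --decrypt ec2.sh.gpg

One caveat worth noting: the private key stored on the instance plays the same role as the hard-coded 'secret-password', so this mainly replaces zip's weak encryption rather than removing the on-instance secret.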

1 Answer

by (18.2k points)

You can transfer the credentials using scp with key-pair authentication. This way you won't have to perform any custom encryption. Also, make sure that the permissions on the key files are set to 0400 at all times.
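
For illustration, a minimal sketch, assuming the private key file from an EC2 key pair and a placeholder instance address:

# ssh/scp refuse a private key that is readable by others
$ chmod 0400 my-keypair.pem
# Copy the credentials script to the instance over the encrypted SSH channel
$ scp -i my-keypair.pem ec2.sh ec2-user@<instance-public-dns>:/home/ec2-user/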
