
I'm running a Django web app on AWS Elastic Beanstalk that needs specific files available at runtime (specifically, an nltk corpus of stopwords). Since instances come and go, I copied the needed folder to the S3 bucket that Elastic Beanstalk created for my environment, and planned to add a copy command using awscli to my Elastic Beanstalk configuration file. But I can't get it to work.

Instances launched by my environment should have read access to the S3 bucket, because it is the bucket Elastic Beanstalk created automatically. Elastic Beanstalk also created an IAM role, aws-elasticbeanstalk-ec2-role, which serves as the instance profile attached to every instance it launches. This role includes the AWSElasticBeanstalkWebTier policy, which seems to grant both read and write access to the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*"
      ]
    }
  ]
}
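To make sure the problem is the command rather than the policy, the instance's access can be sanity-checked by hand. A minimal check, assuming the EB CLI is set up locally and the AWS CLI is present on the instance, with <my_bucket> standing in for the real bucket name:

# open a shell on a running instance (requires the EB CLI)
eb ssh

# then, on the instance, list the bucket using the instance profile's credentials
aws s3 ls s3://<my_bucket>/
aws s3 ls s3://<my_bucket>/nltk_data/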

I tried adding the following command to .ebextensions/my_app.config:

commands:
  01_copy_nltk_data:
    command: aws s3 cp s3://<my_bucket>/nltk_data /usr/local/share/

But when I deploy, I get the following error, even though I can see the folder in my S3 console:

Command failed on the instance. Return code: 1 Output: An error occurred (404) when calling the HeadObject operation: Key "nltk_data" does not exist

Any ideas?

1 Answer


AWS Support had the answer: my nltk_data folder has subfolders and files inside it, so the aws s3 cp command needed the --recursive option.
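Without --recursive, aws s3 cp treats s3://<my_bucket>/nltk_data as a single object key and issues a HeadObject call for it; since nltk_data is only a key prefix (shown as a folder in the console), that call returns the 404 above. For reference, a corrected version of the command from the question might look like the following. Note that --recursive copies the objects under the prefix into the destination directory, so the destination here names /usr/local/share/nltk_data explicitly to keep the folder name intact:

commands:
  01_copy_nltk_data:
    command: aws s3 cp s3://<my_bucket>/nltk_data /usr/local/share/nltk_data --recursive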
