+2 votes
in AWS by (19.1k points)

I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy from an S3 bucket:

aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

This script works perfectly on my local machine but fails with the following error on the instance launched from that AMI:

2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign:
HEAD


Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
    total_files, total_parts = self._enqueue_tasks(files)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
    for filename in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
    for file_base in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
    for src_path, extra_information in file_iterator:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
    yield self._list_single_object(s3_path)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
    response = self._client.head_object(**params)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
    model=operation_model, context=request_context
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
    return self._emit(event_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
    response = handler(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
    http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)

A client error (403) occurred when calling the HeadObject operation: Forbidden

However, when I run it with the --no-sign-request option, it works perfectly:

aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

Can someone please explain what is going on?

6 Answers

+4 votes
by (44.4k points)

First, check whether your attached policy grants access not just to the S3 bucket itself but also to the objects within it. To access objects, the policy's Resource must cover the object paths:

"Resource": "arn:aws:s3:::BUCKET_NAME/*"

rather than only:

"Resource": "arn:aws:s3:::BUCKET_NAME"

The first form grants access to all the objects in the given bucket; the second covers only bucket-level operations such as listing. A minimal policy combining both forms is sketched below.
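
As a sketch, a minimal read-only policy combining both Resource forms might look like this (BUCKET_NAME is a placeholder, and s3:ListBucket/s3:GetObject are just the typical read actions; adjust to what you actually need):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}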

If this is not the problem, check whether the EC2 instance and the bucket are in the same region; if they are not, requests like this one can fail with exactly this error, so make sure the instance and the bucket are in the same region.


by (62.9k points)
This suggestion helped me resolve the same error!
by (33.1k points)
Passing the bucket's region as a parameter worked for me.
Thanks!
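For example, something like this should work, assuming the bucket lives in us-west-2 (as its name suggests):

aws s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm . --region us-west-2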
by (47.2k points)
I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission.
by (106k points)
I was looking for this kind of explanation, thanks!
+2 votes
by (108k points)

I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the CodeDeploy buckets above were in a different region (not us-west-2). It appears that the access policies on the buckets (owned by Amazon) only allow access from the region they belong in. When I fixed the error in my template (it was a wrong parameter map), the error disappeared.
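One way to verify that the two regions match is sketched below; note that get-bucket-location may itself be denied on a bucket you don't own (here the region is also part of the bucket's name), and the instance's region is its availability zone minus the trailing letter:

aws s3api get-bucket-location --bucket aws-codedeploy-us-west-2

curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone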


+2 votes
by (32.1k points)
edited by
It looks like there is a HeadBucket permission, because that is what the error message suggests, but no such permission actually exists: the HEAD operation requires the s3:ListBucket permission.

Also, your IAM policy and your bucket policy might be conflicting, so make sure you check both of them.
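One way to tell the two permissions apart from the CLI (BUCKET_NAME and the key are placeholders): a listing exercises s3:ListBucket, while a direct download exercises s3:GetObject:

aws s3 ls s3://BUCKET_NAME/latest/

aws s3api get-object --bucket BUCKET_NAME --key latest/codedeploy-agent.noarch.rpm codedeploy-agent.noarch.rpm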
by (29.5k points)
This sounds like the problem. Can you confirm whether this works, @yuvraj?
0 votes
by (29.3k points)

I got this error message because my EC2 instance's clock was out of sync. Signed AWS requests include a timestamp, so a badly skewed clock can make otherwise valid requests fail.

I fixed the issue on Ubuntu with:

sudo ntpdate ntp.ubuntu.com    # sync the clock once, immediately

sudo apt-get install ntp       # keep it in sync going forward
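A quick way to confirm the skew before resyncing is to compare the instance's idea of UTC against the Date header that S3 returns:

date -u

curl -sI https://s3.amazonaws.com | grep -i '^date'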

0 votes
by (19.9k points)

I was also facing a similar issue. I gave the AWS CLI full S3 access through a managed policy (AmazonS3FullAccess) and it worked. This may be why you are facing the issue as well.
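If the instance uses an IAM role, attaching the AWS-managed full-access policy from the CLI looks like this (MyInstanceRole is a hypothetical role name):

aws iam attach-role-policy --role-name MyInstanceRole --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess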

0 votes
by (40.7k points)

I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden for my AWS CLI copy command aws s3 cp s3://bucket/file file. I was using an IAM role which had full S3 access through an inline policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

If I give it full S3 access through the managed policies instead, then the command works. I think this must be a bug on Amazon's side, because as far as I can tell the policies in both cases were exactly the same.
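To compare the two setups, you can dump both the inline and the attached managed policies for the role (MyInstanceRole and MyInlineS3Policy are hypothetical names):

aws iam list-role-policies --role-name MyInstanceRole

aws iam get-role-policy --role-name MyInstanceRole --policy-name MyInlineS3Policy

aws iam list-attached-role-policies --role-name MyInstanceRole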
