in Python by (47.6k points)

Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And that is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return

For some reason, it doesn't work this way: it still loads the whole response into memory before saving it to a file.

1 Answer

by (106k points)

You can download a large file in Python with requests by using the following code. Passing stream=True defers downloading the response body until you iterate over it, so memory usage stays bounded regardless of the size of the downloaded file:

import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    # stream=True defers fetching the body until iter_content is called
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                if chunk:  # filter out keep-alive chunks
                    f.write(chunk)
    return local_filename
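
A quick usage example (the URL here is just a placeholder):

filename = download_file('https://example.com/big-archive.zip')
print(filename)  # big-archive.zip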

One important point to note here: the number of bytes returned by each iter_content call is not guaranteed to be exactly chunk_size; chunks can be smaller than requested and can vary from one iteration to the next.
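
If you don't need to process the chunks yourself, you can also copy the raw byte stream straight to disk with shutil.copyfileobj. A minimal sketch, with the caveat that r.raw bypasses requests' automatic content decoding, so gzip/deflate-encoded responses are written as-is:

import shutil
import requests

def download_file_raw(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            # copy the undecoded socket stream to disk in fixed-size buffers
            shutil.copyfileobj(r.raw, f)
    return local_filename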
