in Python by (47.6k points)

I have a problem parsing thousands of text files (around 3,000 lines each, ~400KB per file) in a folder. I read them using readlines:

for filename in os.listdir(input_dir):
    path = os.path.join(input_dir, filename)  # listdir returns bare names
    if filename.endswith(".gz"):
        f = gzip.open(path, 'rb')
    else:
        f = open(path, 'rb')
    file_content = f.readlines()
    f.close()
    len_file = len(file_content)
    i = 0
    while i < len_file:
        line = file_content[i].split(delimiter)
        ... my logic ...
        i += 1

This works completely fine for samples of my input (50, 100 files). When I ran it on the whole input of more than 5K files, the time taken was nowhere close to a linear increase. I planned to do a performance analysis and ran a cProfile analysis. The time taken grows much faster than linearly with the number of files, getting dramatically worse by around 7K files.

Here is the cumulative time taken by readlines, first for 354 files (a sample of the input) and second for 7473 files (the whole input):

ncalls    tottime   percall   cumtime   percall   filename:lineno(function)
   354      0.192     0.001     0.192     0.001   {method 'readlines' of 'file' objects}
  7473   1329.380     0.178  1329.380     0.178   {method 'readlines' of 'file' objects}

Because of this, the time taken by my code does not scale linearly with the input. I read some notes on readlines(), where people claim that readlines() reads the whole file content into memory and hence generally consumes more memory than readline() or read().

I agree with this point, but shouldn't the garbage collector automatically clear the loaded content from memory at the end of each loop iteration, so that at any instant memory holds only the contents of the file currently being processed? But there seems to be some catch here. Can somebody give some insight into this issue?

Is this inherent behaviour of readlines(), or is my interpretation of the Python garbage collector wrong? Glad to know.
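To make the memory claim about readlines() concrete, here is a minimal sketch that compares the peak allocation of readlines() against line-by-line iteration on a generated throwaway file of ~3000 lines (similar in shape to the question's inputs). The file path and field layout are made up for the demonstration.

```python
import os
import tempfile
import tracemalloc

# Build a throwaway file of ~3000 delimited lines.
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w") as f:
    for i in range(3000):
        f.write("field1|field2|field3|%d\n" % i)

def peak_kb(fn):
    """Run fn and return its peak traced allocation in KiB."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak // 1024

def slurp():
    with open(path) as f:
        lines = f.readlines()  # every line string held in memory at once

def iterate():
    with open(path) as f:
        for line in f:         # only one line (plus a small buffer) at a time
            pass

print("readlines peak: %d KiB" % peak_kb(slurp))
print("iterator  peak: %d KiB" % peak_kb(iterate))
```

On any input of this size, the readlines() peak is far larger, because the whole list of line strings is alive at once, while the iterator keeps only the current line.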

Also, please suggest some alternative ways of doing the same in a memory- and time-efficient manner. TIA.

1 Answer

by (106k points)

Read line by line, not the whole file:

for line in open(file_name, 'rb'):
    # process line here

Even better, use with so the file is closed automatically:

with open(file_name, 'rb') as f:
    for line in f:
        # process line here

The above will read the file object using an iterator, one line at a time.
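Applying this to the question's loop, here is a hedged sketch that streams lines from every file in a directory, handling both gzipped and plain files as the original code does. The iter_lines helper, the demo directory, and the "|" delimiter are illustrative assumptions, not part of the original code.

```python
import gzip
import os
import tempfile

def iter_lines(input_dir):
    """Yield lines from every file in input_dir, one line at a time."""
    for filename in sorted(os.listdir(input_dir)):
        path = os.path.join(input_dir, filename)   # listdir returns bare names
        opener = gzip.open if filename.endswith(".gz") else open
        with opener(path, "rt") as f:              # 'rt' yields text lines for both
            for line in f:
                yield line

# Tiny demo directory: one plain file and one gzipped file.
input_dir = tempfile.mkdtemp()
with open(os.path.join(input_dir, "a.txt"), "w") as f:
    f.write("1|x\n2|y\n")
with gzip.open(os.path.join(input_dir, "b.gz"), "wt") as f:
    f.write("3|z\n")

delimiter = "|"  # hypothetical delimiter
rows = [line.rstrip("\n").split(delimiter) for line in iter_lines(input_dir)]
print(rows)  # [['1', 'x'], ['2', 'y'], ['3', 'z']]
```

Because iter_lines is a generator, only one line is in memory at a time regardless of how many files or how large they are, which is the behaviour the questioner was hoping the garbage collector would provide.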

