I'm fairly new to Python and data science.
I have a 33 GB CSV dataset that I want to load into a pandas DataFrame so I can work on it.
I tried the "casual" way with pandas.read_csv, and it's taking ages to parse.
I searched online and found an article.
It says that the most efficient way to read a large CSV file is to use csv.DictReader.
So I tried that:
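Roughly, what I tried looked like this (a sketch, using a small in-memory sample and made-up column names in place of my real file):

```python
import csv
import io

# In-memory stand-in for the real 33 GB file; "id" and "value" are hypothetical columns
sample = io.StringIO("id,value\n1,10\n2,20\n3,30\n")

# csv.DictReader yields one dict per row, so reading row-by-row keeps memory flat
rows = [row for row in csv.DictReader(sample)]
print(rows[0])  # each row is a dict keyed by the header
```

But collecting every row into a list (or passing them all to pd.DataFrame) still puts the whole file in memory, so I'm not sure this actually helps at 33 GB.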
Can anyone tell me the most efficient way to parse a large dataset into pandas?
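One thing I've read about (not sure if it's the right approach) is the chunksize parameter of pandas.read_csv, which returns an iterator of DataFrames instead of loading everything at once. A minimal sketch, again with a small in-memory sample standing in for the real file:

```python
import io
import pandas as pd

# In-memory stand-in for the large file; "id" and "value" are hypothetical columns
buf = io.StringIO("id,value\n1,10\n2,20\n3,30\n4,40\n")

# chunksize=2 makes read_csv yield DataFrames of 2 rows each
total = 0
for chunk in pd.read_csv(buf, chunksize=2):
    # Process each chunk and keep only the aggregate, not the full data
    total += chunk["value"].sum()

print(total)
```

Is something like this the recommended way, or is there a better option for a file this size?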