Right now I'm importing a fairly large CSV as a DataFrame every time I run the script. Is there a good way to keep that DataFrame available between runs so I don't have to wait for the CSV to be parsed every time the script starts?
Use to_pickle to pickle it:
df.to_pickle(file_name) # where to save it, usually as a .pkl
Then load it back with:
df = pd.read_pickle(file_name)
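For the use case in the question (skipping the CSV parse on repeat runs), here is a minimal caching sketch built on those two calls. The paths data.csv and data.pkl are placeholders for your own files:

import os
import pandas as pd

CSV_PATH = "data.csv"    # hypothetical source CSV
CACHE_PATH = "data.pkl"  # hypothetical pickle cache

if os.path.exists(CACHE_PATH):
    # Fast path: reuse the DataFrame pickled on a previous run
    df = pd.read_pickle(CACHE_PATH)
else:
    # Slow path: parse the CSV once, then cache it for later runs
    df = pd.read_csv(CSV_PATH)
    df.to_pickle(CACHE_PATH)

Note that a pickle written by one pandas version is not guaranteed to load under a different one, so delete the cache file if you upgrade pandas or change the source CSV.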