I am trying to load a table from a SQLite .db file stored on local disk. Is there a clean way to do this in PySpark?
Currently, I am using a solution that works but is not very elegant: I first read the table into pandas via sqlite3. One concern is that schema information is not carried over in the process (which may or may not be a problem). I am wondering whether there is a direct way to load the table without going through pandas.
import sqlite3
import pandas as pd
from pyspark.sql import SQLContext

db_path = 'alocalfile.db'
query = 'SELECT * FROM ATableToLoad'

conn = sqlite3.connect(db_path)
a_pandas_df = pd.read_sql_query(query, conn)

# SQLContext must be instantiated from an existing SparkContext (sc)
# before createDataFrame can be called on it
sqlContext = SQLContext(sc)
a_spark_df = sqlContext.createDataFrame(a_pandas_df)
There seems to be a way to do this using JDBC, but I have not figured out how to use it in PySpark.
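For reference, here is a minimal sketch of what I think the JDBC route would look like, assuming the Xerial SQLite JDBC driver (org.xerial:sqlite-jdbc, driver class org.sqlite.JDBC) is available on the Spark classpath; the version number, table name, and file path below are just placeholders:

from pyspark.sql import SparkSession

# Pull in the SQLite JDBC driver; the version here is illustrative
spark = (SparkSession.builder
         .appName("sqlite-jdbc-load")
         .config("spark.jars.packages", "org.xerial:sqlite-jdbc:3.36.0.3")
         .getOrCreate())

# Read the table directly over JDBC, so Spark infers the schema itself
a_spark_df = (spark.read.format("jdbc")
              .option("url", "jdbc:sqlite:/path/to/alocalfile.db")
              .option("dbtable", "ATableToLoad")
              .option("driver", "org.sqlite.JDBC")
              .load())

a_spark_df.printSchema()
a_spark_df.show(5)

Is this roughly the right approach, and is there anything SQLite-specific (driver packaging, URL format) I need to watch out for?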