
I want to connect my existing application to a MySQL database using Apache Spark. How can I do this?

1 Answer


In Scala, the following commands will read a MySQL table into a Spark DataFrame over JDBC:

import org.apache.spark.sql.SQLContext

// Create an SQLContext from the existing SparkContext (sc)
val sqlcontext = new org.apache.spark.sql.SQLContext(sc)

// Load the MySQL table "tblage" into a DataFrame via JDBC
val dataframe_mysql = sqlcontext.read
  .format("jdbc")
  .option("url", "jdbc:mysql://Public_IP:3306/YOUR_DB_NAME")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "tblage")
  .option("user", "your_user")
  .option("password", "your_password")
  .load()
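Note that this requires the MySQL JDBC driver (MySQL Connector/J) on Spark's classpath, e.g. by passing the connector JAR to spark-shell with --jars. Once the DataFrame is loaded, you can query it with Spark SQL from the same session. A minimal sketch, assuming a column named "age" exists in the tblage table (the column name and filter are illustrative, not from the original answer):

```scala
// Register the DataFrame as a temporary table so it can be
// referenced from SQL (SQLContext-era API, matching the answer above)
dataframe_mysql.registerTempTable("tblage")

// Run an SQL query against the registered table
val adults = sqlcontext.sql("SELECT * FROM tblage WHERE age >= 18")

// Print the resulting rows to the console
adults.show()
```

The same DataFrame can also be filtered with the DataFrame API directly, e.g. dataframe_mysql.filter("age >= 18"), without registering a temp table.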
