
in Big Data Hadoop & Spark by (11.4k points)
How can I kill a running process in the Spark shell on my local OSX machine without exiting?

For example, if I just do a simple .count() on an RDD, it can take a while and sometimes I want to kill it.

However, if I press Ctrl-C, it kills the whole shell.

Is there a way to kill the process but not the shell?

1 Answer

by (32.3k points)

You can use the Master web UI to visualize and kill jobs.
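For a spark-shell running locally, the more relevant page is usually the application web UI rather than a cluster Master. A minimal sketch, assuming default ports and that spark.ui.killEnabled has not been disabled:

# Assumption: default ports. The application UI of a running spark-shell is
# normally at http://localhost:4040 (the Jobs and Stages pages show a "kill"
# link for active jobs/stages); a standalone Master UI defaults to
# http://localhost:8080. On OS X, `open` launches the URL in your browser.
open http://localhost:4040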

Also, to kill an application that is failing repeatedly, you can run:

./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
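For example, with hypothetical values filled in (this applies to drivers submitted to a standalone cluster with --deploy-mode cluster; the driver ID is listed on the Master web UI):

# Hypothetical master URL and driver ID -- substitute the values shown on
# your own Master web UI.
./bin/spark-class org.apache.spark.deploy.Client kill spark://localhost:7077 driver-20180423111116-0001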

Reference: http://spark.apache.org/docs/latest/spark-standalone.html
