in Big Data Hadoop & Spark by (11.5k points)
How can I kill a running process in the Spark shell on my local OSX machine without exiting?

For example, if I just do a simple .count() on an RDD, it can take a while and sometimes I want to kill it.

However, if I press Ctrl-C, it kills the whole shell.

Is there a way to kill the process but not the shell?

1 Answer

by (26.3k points)

You can use the Master Web UI (on a standalone cluster, typically at http://&lt;master&gt;:8080) to visualize running jobs and kill them.

Also, on a standalone cluster, in order to kill a driver (for example one that is failing repeatedly), you can run:

./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
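Note that the command above targets a standalone cluster and kills a whole driver, not a single job inside a live shell. For the original question (cancelling a running action such as .count() without exiting the spark-shell on a local machine), the SparkContext itself exposes cancellation methods. A minimal sketch, to be typed into the spark-shell (the rdd name is just a placeholder for your own RDD):

// Run the slow action in a background thread so the REPL prompt stays free:
val t = new Thread {
  override def run(): Unit = {
    try println(rdd.count())
    catch { case e: Exception => println("Job cancelled: " + e.getMessage) }
  }
}
t.start()

// Later, from the same shell, cancel all running Spark jobs
// without killing the shell itself:
sc.cancelAllJobs()

If you want finer control, you can tag work with sc.setJobGroup("my-group", "slow counts") before starting it and later cancel only that group with sc.cancelJobGroup("my-group").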

Reference: http://spark.apache.org/docs/latest/spark-standalone.html
