in Big Data Hadoop & Spark by (11.4k points)

When I run my Shark queries, memory gets hoarded in main memory. This is my top command result:


Mem:  74237344k total, 70080492k used, 4156852k free,  399544k buffers
Swap:  4194288k total,      480k used, 4193808k free, 65965904k cached


This doesn't change even if I kill/stop the Shark, Spark, and Hadoop processes. Right now, the only way to clear the cache is to reboot the machine.

Has anyone faced this issue before?

1 Answer

0 votes
by (32.3k points)

Are you using the cache() method to persist your RDDs?

Keep in mind that cache() simply calls persist() with the default storage level, so to remove an RDD from the cache, call unpersist() on it.

Also note that most of the memory shown as "cached" in your top output is the Linux page cache, which the kernel releases automatically when applications need memory, so that figure by itself does not indicate a leak.
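As a minimal sketch of this pattern (the path and variable names here are hypothetical, and `sc` is assumed to be an existing SparkContext):

```scala
// Hypothetical example: "hdfs://host/data.txt" is a placeholder path.
val rdd = sc.textFile("hdfs://host/data.txt").cache() // cache() == persist() at the default storage level
rdd.count()      // the first action actually materializes the cached partitions
// ... run your queries against rdd ...
rdd.unpersist()  // explicitly releases the executor memory held by this RDD
```

Calling unpersist() (rather than waiting for the RDD to be garbage-collected) is the reliable way to free Spark's storage memory between jobs.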

