Yes, you can install and run Spark without Hadoop. In standalone mode (that is, without an external resource manager), Spark manages its own workers. However, if you want to run it on a multi-node cluster, you will need a resource manager such as YARN, along with a distributed storage layer (S3, HDFS, etc.). Keep in mind that Spark is an independent computing framework, whereas Hadoop is a distributed storage system (HDFS) bundled with the MapReduce computing framework.
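As a rough sketch of the difference, here is how the same application might be submitted in local/standalone mode versus on a YARN cluster. The application name `my_app.py` and the paths are placeholders; the `spark-submit` flags shown are standard Spark options.

```shell
# Local mode: no Hadoop, no resource manager.
# Spark runs everything in a single JVM using all local cores.
spark-submit --master "local[*]" my_app.py

# Standalone cluster mode: Spark's own built-in resource manager.
# Start a master and a worker first (scripts ship with Spark):
#   sbin/start-master.sh
#   sbin/start-worker.sh spark://<master-host>:7077
spark-submit --master spark://<master-host>:7077 my_app.py

# YARN mode: requires a Hadoop/YARN cluster and HADOOP_CONF_DIR set,
# typically with input data on a distributed store such as HDFS or S3.
spark-submit --master yarn --deploy-mode cluster my_app.py
```

The `--master` flag is what selects the resource manager; the application code itself usually does not need to change between these modes.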
Here is a video tutorial you can watch to learn more about Spark: