For local mode you only have one executor, and this executor is your driver, so you need to set the driver's memory instead.
The reason for this is that the Worker "lives" within the driver JVM process that you start when you start spark-shell, and the default memory used for that is 512 MB. You can increase that by setting spark.driver.memory to something higher, for example 5g. You can do that by either:
setting it in the properties file (default is spark-defaults.conf),
spark.driver.memory 5g
or by supplying the configuration setting at runtime:
$ ./bin/spark-shell --driver-memory 5g
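Once the shell is up you can check what actually took effect. A minimal sketch, assuming a spark-shell session where sc is already defined (getOption simply returns None if the property was never set):

// the value Spark resolved for the driver memory setting, if any
sc.getConf.getOption("spark.driver.memory")

// the maximum heap the JVM itself reports, in MB
Runtime.getRuntime.maxMemory / (1024L * 1024L)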
The reason for the 265.4 MB figure is that Spark dedicates spark.storage.memoryFraction * spark.storage.safetyFraction of the heap to storage memory, and by default these are 0.6 and 0.9:

512 MB * 0.6 * 0.9 ~ 276.5 MB

The reported value is a bit lower (~265.4 MB) because Spark applies those fractions to the heap size the JVM actually reports (Runtime.getRuntime.maxMemory), which is slightly less than the configured 512 MB.
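As a minimal sketch (assuming a spark-shell session with sc in scope and the classic spark.storage.* settings described above), you can reproduce the calculation yourself:

// fractions as resolved by the current configuration (defaults: 0.6 and 0.9)
val memoryFraction = sc.getConf.getDouble("spark.storage.memoryFraction", 0.6)
val safetyFraction = sc.getConf.getDouble("spark.storage.safetyFraction", 0.9)

// the heap the JVM reports, which is what the fractions are applied to
val maxHeapMb = Runtime.getRuntime.maxMemory / (1024.0 * 1024.0)

// approximate memory available for cached RDD blocks, in MB
val storageMb = maxHeapMb * memoryFraction * safetyFraction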
So be aware that not all of the driver memory will be available for RDD storage.
But when you start running this on a cluster, the spark.executor.memory setting takes over when calculating the amount to dedicate to Spark's memory cache.
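For example, the same kind of flag works there too (hypothetical master URL, just to show where the setting goes):

$ ./bin/spark-shell --master spark://master-host:7077 --executor-memory 4g --driver-memory 5g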