Docker originally used LinuX Containers (LXC), but later switched to runC (originally known as libcontainer), which runs in the same operating system as its host. This allows it to share a lot of the host operating system's resources.
A full virtualized system gets its own set of resources allocated to it and does minimal sharing. You get more isolation, but it is much heavier (requires more resources). With Docker, you get less isolation, but the containers are lightweight (require fewer resources). So you could easily run thousands of containers on a host. A full virtualized system usually takes minutes to start, whereas Docker/LXC/runC containers take seconds.
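As a rough illustration of that startup difference, you can time a container launch yourself (this assumes Docker is installed; the `alpine` image here is just a convenient small example):

```shell
# Time a full container lifecycle: create, run a command, tear down.
# The first run may be slower while the image is pulled from the registry;
# subsequent runs typically finish in well under a second, whereas booting
# a full VM to the same point would take minutes.
time docker run --rm alpine echo "hello from a container"
```

The container shares the host kernel, so "starting" it is closer to starting a process than booting an OS, which is where the seconds-vs-minutes gap comes from.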
There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources, a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then Docker/LXC/runC seems to be the way to go.
Why is deploying software to a Docker image (if that's the right term) easier than simply deploying to a consistent production environment?
Deploying a consistent production environment is easier said than done. Even if you use tools like Chef and Puppet, there are always OS updates and other things that change between hosts and environments.
Docker gives you the ability to snapshot the OS into a shared image and makes it easy to deploy on other Docker hosts. Locally, dev, QA, prod, etc.: all the same image. Sure you can do this with other tools, but not nearly as easily or fast. This is great for testing.
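As a sketch of what that looks like in practice, a minimal Dockerfile bakes the OS layer and the application into one image (the base image, paths, and names below are illustrative placeholders, not from the original answer):

```dockerfile
# Pin the base image so every environment gets the identical OS layer.
FROM ubuntu:22.04

# Install dependencies once, at build time, rather than per-host with
# a configuration tool; the result is frozen into the image.
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Copy the application in; this layer only changes when the code changes.
COPY app/ /opt/app/

CMD ["python3", "/opt/app/main.py"]
```

You would build this once (e.g. `docker build -t myapp:1.0 .`), push it to a registry, and then dev, QA, and prod all pull and run the exact same image, which is what makes the environments consistent.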
There are more differences, but I hope these are the major ones between a Docker image and a virtual machine.