Making a neural network nowadays is still more art than science. Because we don’t understand enough about why they work, we’re left with little tips and tricks that seem arbitrary, errors that are hard to troubleshoot, and many wasted experiments.
Neural networks work because physics works. Their convolutions and activation functions efficiently learn the relatively simple physical rules that govern cats, dogs, and even spherical cows. Their layers reflect the hierarchies we find in everyday life, organizing matter from atoms to galaxies. Neural networks have a large number of free parameters (the weights and biases between interconnected units), which gives them the flexibility, when trained correctly, to fit highly complex data that simpler models cannot. This complexity brings with it the difficulty of training such a network and of ensuring that the resulting model generalizes beyond the examples it was trained on; it is also why neural networks typically require far larger volumes of training data than other models do.
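To get a feel for how quickly those free parameters pile up, here is a minimal sketch that counts the weights and biases of a fully connected network. The layer sizes (a hypothetical 784-input, one-hidden-layer classifier, roughly MNIST-sized) are illustrative assumptions, not a reference architecture:

```python
def count_parameters(layer_sizes):
    """Total free parameters (weights + biases) of a fully
    connected network with the given layer sizes."""
    # One weight per connection between consecutive layers.
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    # One bias per unit in every layer after the input.
    biases = sum(layer_sizes[1:])
    return weights + biases

# Even a modest 784 -> 128 -> 10 network has over 100,000
# free parameters to fit during training.
print(count_parameters([784, 128, 10]))  # 101770
```

Each added layer or widened layer multiplies this count, which is exactly the flexibility, and the training burden, described above.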
In the end, it is this model complexity that allows neural nets to solve harder classification tasks and to apply more broadly (for example, directly to raw data such as image pixel intensities), but it also means that large volumes of training data are required and that training them can be a difficult task.