There's no reason to make the updates synchronous in a feedforward method (backpropagation on deep NNs, convolutional NNs, and so on): if you've implemented the system in parallel, you can update whenever and wherever you like, and it's unlikely to alter the result. Mind you, splitting the nodes across processors might not be quite the right thing to do, as you generally need to balance computation against communication to get a good speed-up from multiple processors. For a concrete feel of "update whenever you like", see the sketch below.
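Here is a minimal sketch of lock-free asynchronous SGD in the spirit of Hogwild! (Niu et al., 2011). The toy linear model, learning rate, and worker count are all illustrative assumptions, not anything from the question; the point is only that the workers never coordinate their writes.

```python
import numpy as np
from threading import Thread

# Toy linear regression trained by lock-free asynchronous SGD
# (Hogwild!-style). Every name and hyperparameter here is an
# illustrative assumption.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(1000, 3))
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = np.zeros(3)   # shared weights: every worker reads and writes these
lr = 0.01

def worker(order):
    # No locks, no barriers: each worker applies its SGD steps to the
    # shared weight vector whenever it happens to get scheduled.
    for i in order:
        grad = (X[i] @ w - y[i]) * X[i]
        w[:] = w - lr * grad   # racy write; the algorithm tolerates it

# CPython's GIL interleaves these threads rather than running them truly
# in parallel, but the updates remain completely uncoordinated, which is
# the property being demonstrated.
threads = [Thread(target=worker, args=(rng.permutation(len(X)),))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("learned:", np.round(w, 2), "true:", true_w)
```

Despite the races, the learned weights land essentially on the true ones, which is the sense in which asynchronous updates are "unlikely to alter the result".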
The question becomes (more) interesting when you consider recurrent networks. Real neural systems are not synchronous either, so asynchronous update is suitable; just make sure the asynchronous schedule doesn't mean that some units only rarely get re-evaluated. One simple way to guarantee that is sketched below.
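A minimal sketch, assuming a Hopfield-style recurrent net (my choice of example, not stated in the answer above): instead of sampling units uniformly at random, each sweep visits a fresh permutation of the units, so every unit is re-evaluated exactly once per sweep and none is starved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
pattern = rng.choice([-1, 1], size=n)          # pattern to store
W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

s = rng.choice([-1, 1], size=n)                # noisy initial state
for sweep in range(5):
    # Fair asynchronous schedule: a new random order every sweep,
    # updating one unit at a time against the current state.
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s >= 0 else -1

# The net settles to the stored pattern (or its negation).
print("recovered:", np.array_equal(s, pattern) or np.array_equal(s, -pattern))
```

Pure uniform random sampling of units would also work in expectation, but the permutation-per-sweep schedule makes the "no unit is only rarely re-evaluated" guarantee explicit.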