I am currently trying to set up a Neural Network for information extraction, and I am fairly fluent with the (basic) concepts of Neural Networks, except for one that puzzles me. It is probably obvious, but I can't seem to find information about it.
Where/how do Neural Networks store their memory? (Machine Learning)
There is quite a bit of information available online about Neural Networks and Machine Learning, but it all seems to skip over how the learned state is stored. For example, after restarting the program, where does it find its memory to continue learning/predicting? Many examples online don't seem to 'retain' any memory between runs, but I can't imagine that being 'safe' for real, large-scale deployment.
I am having a hard time wording my question, so please let me know if I need to elaborate a bit more. Thanks.
To follow up on the answers below:
Every Neural Network will have edge weights associated with it. These edge weights are adjusted during the training of the Neural Network.
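If I understand that correctly, the "memory" is nothing more than those weight numbers, something like the arrays in this minimal sketch (the shapes, names, and learning rate are just placeholders I made up to illustrate):

```python
import numpy as np

# A tiny one-layer network: its entire 'memory' is these arrays of numbers.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 1))   # edge weights: 3 inputs -> 1 output
bias = np.zeros(1)

def predict(x):
    return x @ weights + bias

# One crude training step: nudge the weights to reduce the prediction error.
x = np.array([[1.0, 2.0, 3.0]])
target = np.array([[1.0]])
error = predict(x) - target
weights -= 0.01 * (x.T @ error)     # only the weights and bias change
bias -= 0.01 * error.sum()
```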
This is exactly where I am struggling: how should I envision this 'secondary memory'? Is it like RAM? That doesn't seem logical. I ask because I haven't encountered a single example online that defines or specifies this secondary memory in anything concrete, such as an XML file or even just a huge array.
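To make the question concrete: is it really as simple as writing those arrays to a file after training and reading them back on the next run? Something along these lines is what I am imagining (the file name and the .npz format are just my guess, not something I found in any example):

```python
import numpy as np

# Pretend these are the edge weights learned during training.
weights = np.array([[0.5], [-1.2], [0.3]])
bias = np.array([0.1])

# Persist the learned state: the arrays are written to a file on disk.
np.savez("model_weights.npz", weights=weights, bias=bias)

# After restarting the program, read the arrays back and carry on
# predicting (or training) from where we left off.
state = np.load("model_weights.npz")
weights, bias = state["weights"], state["bias"]
```

Is that the right mental model, or is there a more standard mechanism that the tutorials are leaving out?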