I am currently learning about Liquid State Machines (LSMs), and I am trying to understand how they are used for learning.
I am pretty confused by what I have read on the web.
I'll write down what I understood below. It may be incorrect, and I'd be glad if you could correct me and explain what is actually true:
The liquid itself is not trained at all: it is just initialized with many "temporal neurons" (e.g. Leaky Integrate-and-Fire neurons) whose thresholds are selected randomly, and so are the connections between them (i.e. not every neuron has to share an edge with every other neuron).
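Here is a minimal sketch of what I mean by the untrained liquid, in Python (all names and parameter values, like `n_neurons` and `p_connect`, are just my own illustration, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200                  # size of the liquid
p_connect = 0.1                  # sparse random connectivity

# Random firing thresholds, one per LIF neuron -- never trained.
thresholds = rng.uniform(0.5, 1.5, size=n_neurons)

# Sparse random recurrent weights -- also fixed, never trained.
mask = rng.random((n_neurons, n_neurons)) < p_connect
W = rng.normal(0.0, 0.5, size=(n_neurons, n_neurons)) * mask
np.fill_diagonal(W, 0.0)         # no self-connections
```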
If you want to "learn" that occurrence Y happens x time-units after inputting I, you "wait" x time-units while the LIF neurons act as "detectors", and then see which neurons fired at that specific moment. Then you can train a classifier (e.g. a feed-forward network) to learn that this specific subset of firing neurons means that occurrence Y happened.
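And this is roughly how I picture that readout step. The sketch below is self-contained (the LIF update rule, the toy two-class task, and every parameter are assumptions I made up for illustration; a real LSM would be fed a spike train rather than a single value):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_neurons, p_connect, leak, x_steps = 200, 0.1, 0.9, 20

# Same kind of untrained liquid as above: random thresholds, sparse weights.
thresholds = rng.uniform(0.5, 1.5, size=n_neurons)
mask = rng.random((n_neurons, n_neurons)) < p_connect
W = rng.normal(0.0, 0.5, size=(n_neurons, n_neurons)) * mask
W_in = rng.normal(0.0, 1.0, size=n_neurons)   # fixed random input weights

def liquid_state_after(input_value, steps):
    """Inject input I at t=0, run the LIF liquid, return who fired at time x."""
    v = np.zeros(n_neurons)                   # membrane potentials
    spikes = np.zeros(n_neurons)
    for t in range(steps):
        drive = W_in * input_value if t == 0 else W @ spikes
        v = leak * v + drive                  # leaky integration
        spikes = (v >= thresholds).astype(float)
        v[spikes == 1.0] = 0.0                # reset the neurons that fired
    return spikes                             # binary firing pattern at time x

# Toy task: which of two inputs was presented x_steps time-units ago?
inputs = rng.choice([-1.0, 1.0], size=100)
X = np.array([liquid_state_after(i, x_steps) for i in inputs])
y = (inputs > 0).astype(int)                  # "occurrence Y" as a class label

# Only this readout is trained; the liquid above stays fixed.
readout = LogisticRegression(max_iter=1000).fit(X, y)
```

If I understood correctly, only the readout at the bottom is trained, and the liquid is never touched.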
You may use many "temporal neurons" in your "liquid", so there are many possible different subsets of firing neurons, and a specific subset of firing neurons therefore becomes almost unique to the moment x time-units after inputting your input I.
I don't know whether what I wrote above is true or whether it is total garbage.
Please tell me whether this is the correct usage and purpose of LSMs (and of the LIF neurons inside them).