So, a word2vec model has been trained on a specific corpus so that it can take words as input (one-hot encoding is one common way to encode the input) and represent each word as a vector of numbers, typically of high dimensionality.
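As a rough illustration, here is a minimal sketch using the gensim library (the text doesn't name an implementation, so gensim is an assumption here); it trains on a hypothetical toy corpus and shows that each word is mapped to a dense vector of a chosen dimensionality:

```python
# Minimal sketch, assuming the gensim library (not named in the text;
# any word2vec implementation would do).
from gensim.models import Word2Vec

# Hypothetical toy corpus; a real model would be trained on a large corpus.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "animals"],
]

# vector_size controls the dimensionality of the learned word vectors.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, seed=1)

vec = model.wv["cat"]  # the dense vector learned for "cat"
print(vec.shape)       # (100,) -- one number per dimension
```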
Does it carry information? Yes, we could say that it carries information about the words: similar words have similar vectors. The point is that we humans do not define the similarity; the neural network defines it for us, based on the corpus that we provide.
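To make "similar words have similar vectors" concrete, the following sketch computes cosine similarity, the standard measure for comparing word vectors; the vectors themselves are hypothetical stand-ins for learned embeddings:

```python
# Sketch of how similarity between word vectors is typically measured.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical vectors standing in for learned word embeddings.
v_cat = np.array([0.9, 0.1, 0.3])
v_dog = np.array([0.8, 0.2, 0.4])
v_car = np.array([-0.5, 0.9, -0.7])

print(cosine_similarity(v_cat, v_dog))  # high: similar words, similar vectors
print(cosine_similarity(v_cat, v_car))  # much lower: dissimilar words
```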
Does it carry meaning? No. While this representation can capture grammatical or syntactic information, it does not carry any meaning in itself.
To learn more about Word2vec, see the machine learning tutorial.