The embeddings themselves are learned in the same way as word2vec's embeddings: using a skip-gram model.
If you are familiar with the word2vec skip-gram model, great; if not, I recommend this great post, which explains it in detail, as from this point forward I assume you are familiar with it.
The most natural way I can think of to explain node2vec is to show how it generates a "corpus" from a graph; if we understand word2vec, we already know how to embed a corpus.
So how do we produce this corpus from a graph? That is exactly the innovative part of node2vec: it samples random walks from the graph, and each walk plays the role of a sentence in the corpus, as the sketch below illustrates.
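To make the pipeline concrete, here is a minimal sketch, not the paper's reference implementation: it uses a hypothetical `random_walk` helper that takes plain, uniform random walks over a toy networkx graph, then feeds the walks to gensim's Word2Vec skip-gram model. node2vec itself biases each step of the walk with its return parameter p and in-out parameter q, which the uniform walk here deliberately omits.

```python
import random

import networkx as nx
from gensim.models import Word2Vec

def random_walk(graph, start, walk_length):
    """Uniform random walk from `start`; node2vec would instead reweight
    each neighbor using its return parameter p and in-out parameter q."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # gensim expects string tokens

graph = nx.karate_club_graph()           # small built-in example graph
walks = [random_walk(graph, node, walk_length=10)
         for node in graph.nodes()
         for _ in range(20)]             # 20 walks starting from every node

# Each walk is treated as a sentence; sg=1 selects the skip-gram
# architecture, so from here on this is exactly word2vec.
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0)
print(model.wv["0"][:5])                 # first 5 dims of node 0's embedding
```

In the real node2vec sampler, p and q bias the walk toward breadth-first (more structural) or depth-first (more community-oriented) exploration; swapping the uniform `random.choice` above for that biased transition is the only change needed.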
For more information, see: https://towardsdatascience.com/node2vec-embeddings-for-graph-data-32a866340fef