Tokenization is a very common task in NLP: it chops a string of characters into pieces, called tokens, and at the same time throws away certain characters, such as punctuation.
In plain terms, it splits text into individual units, and each individual unit has a value associated with it.
For example, given an input like this:

Input: World, Americans, Countrymen, Borrow.

a tokenizer would produce the tokens "World", "Americans", "Countrymen", and "Borrow", with the commas and full stop discarded.
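The example above can be sketched with a minimal tokenizer using only Python's standard library (in the tutorial itself, NLTK's `word_tokenize` is the tool used; this regex version is just an illustration of the idea):

```python
import re

def tokenize(text):
    """Split text into word tokens, discarding punctuation."""
    return re.findall(r"\w+", text)

tokens = tokenize("World, Americans, Countrymen, Borrow.")
print(tokens)  # → ['World', 'Americans', 'Countrymen', 'Borrow']
```

Note how the commas and the trailing period never appear in the output: the `\w+` pattern matches only runs of word characters, so punctuation is thrown away as part of the split.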
This is an informative end-to-end tutorial on Natural Language Processing. It covers the concept of tokenization and includes an implementation of sentiment analysis using NLTK. And that is not all: since it is an end-to-end video, you will also learn about the components of NLP, Natural Language Understanding, Natural Language Generation, NLP packages, uni-grams, bi-grams, tri-grams, stemming, lemmatization, part-of-speech tagging, named entity recognition, and much more. Make sure to follow along; it will be very helpful.