Logit is a simple term that can mean different things. In mathematics, the logit is a function that maps probabilities ( [0, 1] ) to the real line ( (-inf, inf) ).

Example:

N = ln(M) - ln(1 - M), where M = 1 / (1 + e^-N). Here N is the logit and M is the probability.

Example:

A logit of 0 corresponds to a probability of 0.5.

A negative logit corresponds to a probability of less than 0.5.

A positive logit corresponds to a probability of greater than 0.5.
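The relationship between logits and probabilities can be sketched in a few lines of Python. This is just an illustrative sketch; the function names sigmoid and logit are my own labels for the two formulas above:

```python
import math

def sigmoid(n):
    """Map a logit N in (-inf, inf) to a probability M in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

def logit(m):
    """Map a probability M in (0, 1) back to a logit: ln(M) - ln(1 - M)."""
    return math.log(m) - math.log(1.0 - m)

print(sigmoid(0.0))           # 0.5: a logit of 0 corresponds to probability 0.5
print(sigmoid(-2.0))          # less than 0.5: negative logit
print(sigmoid(2.0))           # greater than 0.5: positive logit
print(logit(sigmoid(1.5)))    # recovers ~1.5: logit is the inverse of sigmoid
```

The last line shows that the two functions are inverses of each other, which is exactly the relationship stated in the formula above.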

In ML, logits are the vector of raw (unnormalized) predictions that a classification model generates, which is then passed to a normalization function. If the model is solving a multi-class classification problem, the logits are the input to the softmax function. The softmax function then generates a vector of probabilities with one value for each class. "Logits" is also sometimes used to refer to the element-wise inverse of the sigmoid function.
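The softmax step described above can be sketched as follows. The three example scores are hypothetical raw logits for a 3-class model, not from any particular network:

```python
import math

def softmax(logits):
    """Turn a vector of raw logits into a probability distribution."""
    # Subtract the max logit for numerical stability; softmax is
    # invariant to shifting all logits by the same constant.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]   # hypothetical logits for a 3-class classifier
probs = softmax(scores)
print(probs)               # one probability per class, largest logit wins
print(sum(probs))          # the probabilities sum to 1
```

Note that softmax preserves the ordering of the logits: the class with the largest raw score also gets the largest probability.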

Hope this answer helps.