The expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood estimates of the parameters of a statistical model when the data is incomplete, has missing values, or depends on unobserved latent variables. Maximum likelihood estimation finds the model parameters that best fit a given set of data.
The general idea behind this algorithm is as follows:
1. Start with an initial estimate of each parameter.
2. Compute the likelihood of each parameter producing each data point.
3. Calculate a weight for each data point and combine the weights with the given data (expectation); see the sketch after this list.
4. Find a better estimate of each parameter using the weight-adjusted data (maximization).
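To make steps 2 and 3 concrete, here is a minimal Python sketch of the weight (responsibility) computation for a two-component 1-D Gaussian mixture; the parameter values and data points are hypothetical, chosen only for illustration:

import numpy as np
from scipy.stats import norm

# Hypothetical current parameter estimates for a two-component
# 1-D Gaussian mixture (step 1: initial guesses)
means = np.array([0.0, 5.0])
stds = np.array([1.0, 1.0])
mix = np.array([0.5, 0.5])   # mixing proportions

data = np.array([-0.2, 0.4, 4.8, 5.3])

# Step 2: likelihood of each component producing each data point
lik = mix * norm.pdf(data[:, None], means, stds)   # shape (4, 2)

# Step 3: normalize per point to get the weights (responsibilities)
weights = lik / lik.sum(axis=1, keepdims=True)
print(weights)   # each row sums to 1

Each row of weights says how strongly each component is believed to have generated that data point; these are exactly the weights that the maximization step then uses.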
The EM algorithm alternates between the expectation (E) step, which constructs the function for the expected log-likelihood under the current parameter estimates, and the maximization (M) step, which finds the parameters that maximize the expected log-likelihood computed in the E step.
theta <- initial guess for the hidden parameters
while not converged:
    Q(theta' | theta) = E_{Z|X,theta}[log L(theta'; X, Z)]   # E-step
    theta <- argmax_{theta'} Q(theta' | theta)               # M-step
E-step: using the current parameter estimates, take the expectation over the latent variables Z to form the expected log-likelihood function Q.
M-step: find the theta' that maximizes Q(theta' | theta), and use it as the new parameter estimate.
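Putting the E-step and M-step together, the following is a minimal runnable sketch of the full EM loop for a two-component 1-D Gaussian mixture; the synthetic data, initial guesses, and convergence tolerance are all illustrative assumptions:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])

# Initial guesses for the hidden parameters (theta)
means = np.array([-1.0, 1.0])
stds = np.array([1.0, 1.0])
mix = np.array([0.5, 0.5])

prev_ll = -np.inf
for _ in range(200):
    # E-step: responsibilities = expected component memberships
    # of each point under the current theta
    lik = mix * norm.pdf(data[:, None], means, stds)   # shape (n, 2)
    resp = lik / lik.sum(axis=1, keepdims=True)

    # M-step: re-estimate theta from the weight-adjusted data
    n_k = resp.sum(axis=0)
    means = (resp * data[:, None]).sum(axis=0) / n_k
    stds = np.sqrt((resp * (data[:, None] - means) ** 2).sum(axis=0) / n_k)
    mix = n_k / len(data)

    # Convergence check on the observed-data log-likelihood
    ll = np.log(lik.sum(axis=1)).sum()
    if ll - prev_ll < 1e-6:
        break
    prev_ll = ll

print(means, stds, mix)

Because EM never decreases the observed-data log-likelihood, the loop can safely stop once the improvement between iterations falls below the chosen tolerance.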