
In most cases, the Baum-Welch algorithm is used to train a hidden Markov model (HMM).

In many papers, however, it is argued that the BW algorithm will only optimize until it gets stuck in a local optimum.

Does there exist an exact algorithm that actually succeeds in finding the global optimum (other than enumerating nearly all possible models and evaluating them)?

Of course, for most applications BW will work fine. We are, however, interested in finding lower bounds on the amount of information lost when reducing the number of states, so we always need the best possible model.

We are thus looking for an algorithm that is as efficient as possible for this (presumably NP-hard) problem: one that only enumerates over the (potentially exponential) number of extreme points, and not over a discretized grid of floating-point values for each probability in the model.
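For illustration, here is a minimal sketch (assuming the third-party hmmlearn package; CategoricalHMM is the name of its discrete-emission model in recent releases, older versions call it MultinomialHMM) of the behaviour in question: restarting Baum-Welch from different random initialisations on the same data typically ends at different log-likelihoods, and no single restart is guaranteed to reach the global optimum.

```python
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency

# Sample a toy observation sequence from a known 3-state, 2-symbol HMM.
rng = np.random.default_rng(0)
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
B = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
state, obs = 0, []
for _ in range(500):
    obs.append(rng.choice(2, p=B[state]))
    state = rng.choice(3, p=A[state])
obs = np.array(obs).reshape(-1, 1)

# Re-fit from several random initialisations; the final log-likelihoods
# usually differ, because each run only reaches a local optimum.
scores = []
for seed in range(10):
    model = hmm.CategoricalHMM(n_components=3, n_iter=200, tol=1e-4,
                               random_state=seed)
    model.fit(obs)
    scores.append(model.score(obs))

print(sorted(scores))  # spread across restarts is evidence of local optima
```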

1 Answer


This problem is almost certainly not convex, because there are multiple globally optimal solutions with the same score. Given any proposed solution, which specifies a probability distribution over observations for each hidden state, you can, for instance, rename hidden state 0 as hidden state 1 and vice versa, permuting the probability distributions accordingly, so that the observed behavior generated by the two solutions is identical. Thus, if there are N hidden states in the model, there are at least N! local optima produced by permuting the hidden states amongst themselves.
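To make the permutation argument concrete, here is a small pure-NumPy sketch (with made-up parameter values) that computes the likelihood of an observation sequence with the forward algorithm, then relabels the two hidden states and shows that the permuted model, although a different point in parameter space, assigns exactly the same likelihood:

```python
import numpy as np

def log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: returns log P(obs | pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    alpha = alpha / scale
    logp = np.log(scale)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        alpha = alpha / scale
        logp += np.log(scale)
    return logp

# Hypothetical 2-state, 2-symbol HMM.
pi = np.array([0.6, 0.4])        # initial state distribution
A  = np.array([[0.7, 0.3],       # state transition matrix
               [0.2, 0.8]])
B  = np.array([[0.9, 0.1],       # emission probabilities per state
               [0.3, 0.7]])
obs = [0, 1, 1, 0, 1, 0, 0, 1]

# Relabel the hidden states (swap state 0 and state 1): permute pi and the
# rows of B, and permute both the rows and the columns of A.
perm = np.array([1, 0])
pi_p = pi[perm]
A_p  = A[np.ix_(perm, perm)]
B_p  = B[perm]

# Both parameterisations assign exactly the same likelihood to the data,
# so they are distinct points in parameter space with identical scores.
print(log_likelihood(pi, A, B, obs))
print(log_likelihood(pi_p, A_p, B_p, obs))
```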

