0 votes
2 views
in AI and Deep Learning by (50.2k points)

What are the relevant differences, in terms of performance and use cases, between simulated annealing (with beam search) and genetic algorithms?

I know that SA can be thought of as a GA with a population size of one, but I don't know the key differences between the two.

Also, I am trying to think of a situation where SA will outperform GA, or GA will outperform SA. Just one simple example that helps me understand would be enough.

1 Answer

0 votes
by (108k points)
Simulated annealing starts from a single candidate solution and repeatedly applies a random variation whose magnitude and acceptance probability decrease over time. It is a technique for approximating the global optimum of a given function: specifically, a metaheuristic for global optimization over a large search space. The rate, amount, and type of random variation are part of the design process.

The genetic algorithm takes a population and repeatedly picks two members and "mates" them to produce a new member. The new member may be placed in a new population, or the "parents" may be removed from the original population; mating may produce one or two children. The manner in which parents are chosen, the way they "mate", the number of parents allowed to participate in mating, and many other factors can all be varied.

A Genetic Algorithm maintains a population of possible solutions and, at each step, selects pairs of possible solutions, combines them (crossover), and applies some random changes (mutation). The algorithm is based on the idea of "survival of the fittest": the selection process is done according to a fitness criterion (in optimization problems, usually just the value of the objective function evaluated at the current solution). The crossover is done in the hope that two good solutions, when combined, might give an even better solution.
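The select/crossover/mutate loop described above can be sketched roughly as follows in Python. The specific choices here (tournament selection of size 2, one-point crossover, bit-flip mutation, and all parameter names) are illustrative assumptions, not a canonical GA:

```python
import random

def genetic_algorithm(fitness, bits=20, pop_size=30, generations=100,
                      mutation_rate=0.05):
    """Minimal GA sketch: tournament selection, one-point crossover,
    bit-flip mutation. All parameters here are illustrative choices."""
    pop = [[random.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: the fitter of two random members (tournament of 2)
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # Crossover: splice the two parents at a random cut point
            cut = random.randint(1, bits - 1)
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with a small probability
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is simply the number of 1 bits.
best = genetic_algorithm(sum)
```

On the one-max problem the population converges quickly to an all-ones string, since selection keeps propagating the fitter parents while mutation keeps injecting variation.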

On the other hand, Simulated Annealing tracks only one solution in the space of possible solutions, and at each iteration decides whether to move to a neighboring solution or stay at the current one according to a probability that decays over time. This differs from a heuristic search (say, greedy search) in that it is far less prone to getting trapped in a local optimum: it can get unstuck from situations where all neighboring solutions are worse than the current one.
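That acceptance rule is usually the Metropolis criterion: always accept an improving neighbor, and accept a worse one with probability exp(-delta/T), where T is a temperature that decays over time. A minimal sketch (the geometric cooling schedule, step size, and parameter names are all illustrative assumptions):

```python
import math
import random

def simulated_annealing(energy, x0, neighbor, t0=10.0, cooling=0.995,
                        steps=5000):
    """Minimal SA sketch: tracks one solution; worse moves are accepted
    with probability exp(-delta/T), and T decays geometrically."""
    x, t = x0, t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = energy(cand) - energy(x)
        # Accept improvements always; accept worse moves with prob e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= cooling  # temperature decays, so worse moves become rarer
    return x

# Toy example: minimize a 1-D function with several local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
best = simulated_annealing(f, x0=5.0,
                           neighbor=lambda x: x + random.uniform(-0.5, 0.5))
```

Early on, the high temperature lets the search jump out of poor local minima; as T shrinks, the walk behaves more and more like greedy descent and settles into a deep basin.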
