

I am trying to learn about and implement a simple genetic algorithm library for my project. At this point, evolution and population selection are ready, and I'm trying to implement a good, simple mutation operator such as the Gaussian mutation operator (GMO) for my genetic evolution engine in Java and Scala.

I found some information on the Gaussian mutation operator (GMO) in the paper "A mutation operator based on a Pareto ranking for multi-objective evolutionary algorithms" (P.M. Mateo, I. Alberto), pages 6 and 7.

But I am having trouble finding further information on how to implement this Gaussian mutation operator, and other useful variants of it, in Java. What should I do?

I'm using the random.nextGaussian() function from java.util.Random, but this method only returns a random number between 0 and 1.

So,

a) How can I modify the precision of the returned number in this case? (For example, I want to get a random double between 0 and 1 with a step equal to 0.00001.)

b) And how can I specify mu and sigma for this function? I want to search locally around a value of my genome, not between -1 and 1. How can I adjust that local search around my genome value?

After some research, I found an answer to question b). It seems I can displace the Gaussian random number like this:

newGenomeValue = oldGenomeValue + (( gaussianRndNumber * sigma ) + mean )

where mean = my genome value.

(Cf. the method near the bottom of the page in "How can I generate random numbers with a normal or Gaussian distribution?".)
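For illustration, here is a minimal Java sketch of that displacement idea (my own assumption: rng is a java.util.Random instance, and the old genome value itself serves as the mean, so it is not added a second time):

double sigma = 0.1;  // spread of the local search (example value)
double newGenomeValue = oldGenomeValue + sigma * rng.nextGaussian();  // draws from N(oldGenomeValue, sigma^2)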

1 Answer


The solution to your first problem is simple: just round to the nearest 0.00001 to get your answer in those units. Here, x is the value you want to quantize:

double p = 0.00001;
double quantized_x = p * Math.rint(x / p);  // Math.rint returns the closest mathematical integer as a double

Coming to your second question, your approach is right, but you need to rescale the variable to the desired range. The only thing to add is the change-of-variables theorem from calculus: http://en.wikipedia.org/wiki/Integration_by_substitution

If you work out that formula for a Gaussian distribution with mean 0 and standard deviation 1, transformed by a linear shift and a rescaling, you will see that what you wrote is correct.
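Concretely, if Z is a sample from a Gaussian with mean 0 and standard deviation 1, then

mu + sigma * Z

is a sample from a Gaussian with mean mu and standard deviation sigma. Taking mu to be the current genome value and sigma to be the desired search radius gives exactly the local search around the genome value that you describe.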

Here is a part of the code that you can implement:

double next_gaussian()
{
    double x = rng.nextGaussian();            // rng is a java.util.Random instance; x ~ N(0, 1)
    double y = (x * 0.5) + 0.5;               // shift and scale to mean 0.5, std dev 0.5 (mostly within [0, 1])
    return Math.rint(y * 100000.0) * 0.00001; // quantize to a step size of 0.00001
}
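As a usage sketch (assuming a java.util.Random field named rng as above, a hypothetical mutateGene helper, and the same 0.00001 step size), you could combine the shift, scale, and quantization for a genome value like this:

import java.util.Random;

public class GaussianMutation {
    private final Random rng = new Random();
    private static final double STEP = 0.00001;  // quantization step

    // Perturbs geneValue with Gaussian noise of spread sigma (local search around geneValue),
    // then snaps the result to the nearest multiple of STEP.
    public double mutateGene(double geneValue, double sigma) {
        double mutated = geneValue + sigma * rng.nextGaussian();
        return STEP * Math.rint(mutated / STEP);
    }
}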

Hope this helps!

