In this case, D1 is your initial representation and D2 is the redundant code. Coding theory lets you calculate the probability of recovering the correct representation, given the size of D2 and the probability of D2 being corrupted.
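To make the idea concrete, here is a minimal sketch of the simplest possible scheme, a repetition code: D2 is just extra copies of the value, and decoding is a majority vote. The digit range, corruption model, and 3× repetition are illustrative assumptions, not anything from your setup.

```python
import random

def encode(value, repeats=3):
    """Repetition code: D2 is simply extra copies of the value."""
    return [value] * repeats

def corrupt(codeword, p):
    """With probability p, replace each copy with a random digit 0-9."""
    return [random.randrange(10) if random.random() < p else v
            for v in codeword]

def decode(codeword):
    """Recover the value by majority vote over the copies."""
    return max(set(codeword), key=codeword.count)

# Estimate the recovery probability empirically.
random.seed(0)
trials = 10_000
p_corrupt = 0.2
ok = sum(decode(corrupt(encode(7), p_corrupt)) == 7 for _ in range(trials))
print(ok / trials)  # roughly the chance that at most one of three copies is lost
```

Larger `repeats` buys a higher recovery probability at the cost of a larger D2, which is exactly the trade-off the theory quantifies.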
A good reference for binary error-correcting codes is David MacKay's Information Theory, Inference, and Learning Algorithms. Since you mentioned natural numbers from 0 to N rather than binary numbers, you can also search for "analog error-correcting codes," which might get you closer to exactly what you're asking for here.
Genetic algorithms can apparently also be applied to the problem of discovering good error-correcting codes, for example in this paper.
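As a rough illustration of that approach (a toy sketch, not the method from the linked paper): evolve a small set of binary codewords, using minimum pairwise Hamming distance as the fitness, since a larger minimum distance means more correctable errors. The population size, mutation rate, and code dimensions below are arbitrary assumptions.

```python
import random

def min_distance(code):
    """Minimum pairwise Hamming distance of a set of equal-length codewords."""
    return min(sum(a != b for a, b in zip(c1, c2))
               for i, c1 in enumerate(code) for c2 in code[i + 1:])

def random_code(n_words=4, length=8):
    return [tuple(random.randint(0, 1) for _ in range(length))
            for _ in range(n_words)]

def mutate(code, rate=0.05):
    """Flip each bit of each codeword with a small probability."""
    return [tuple(b ^ (random.random() < rate) for b in w) for w in code]

random.seed(1)
population = [random_code() for _ in range(30)]
for generation in range(200):
    population.sort(key=min_distance, reverse=True)
    survivors = population[:10]                       # keep the fittest codes
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]     # refill by mutation

best = max(population, key=min_distance)
print(min_distance(best))
```

This mutation-and-selection loop omits crossover for brevity; a full genetic algorithm would also recombine codewords between parent codes.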
Hope this answer helps you!