I am fairly new to TensorFlow and ML in general, so I apologize in advance for a (likely) trivial question.
I use the dropout technique to reduce overfitting in my network, and it seems to work just fine during training. Afterwards, I would like to run the trained network on some data, like this:
def Ask(self, image):
    return self.session.run(self.model, feed_dict={self.inputPh: image})
Obviously, this yields different results on each call, because the dropout is still in place. One solution I can think of is to build two separate models - one for training and one for actual later use of the network - but such a solution seems impractical to me.
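One idea I have come across is to expose the keep probability as a value fed at run time (in TensorFlow, a `tf.placeholder` wired into `tf.nn.dropout`), so training feeds e.g. 0.5 and inference feeds 1.0 to make the graph deterministic. Here is a minimal numpy sketch of that idea just to show my understanding - the `dropout` helper and the `keep_prob` name are mine, not from my actual code:

```python
import numpy as np

def dropout(x, keep_prob):
    """Inverted dropout: zero each unit with probability 1 - keep_prob,
    and rescale the survivors by 1 / keep_prob so the expected activation
    is unchanged. keep_prob=1.0 makes this the identity, which is what
    you would feed at test time via the placeholder in TensorFlow."""
    if keep_prob >= 1.0:
        return x  # no-op: deterministic inference
    mask = (np.random.rand(*x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob

x = np.ones((2, 4), dtype=np.float32)
train_out = dropout(x, keep_prob=0.5)  # stochastic: entries are 0.0 or 2.0
test_out = dropout(x, keep_prob=1.0)   # deterministic: identical to x
```

With this wiring I would keep a single model and simply feed `keep_prob = 1.0` inside `Ask` - but I am not sure this is how it is usually done.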
What's the common approach to solving this problem?