I am having trouble understanding rewards in reinforcement learning. Can anyone explain with an example?

1 Answer


In reinforcement learning, we train a model through rewards and punishments. Whenever the agent makes a correct decision, it receives a positive reward; for a wrong decision, it receives a negative reward (a penalty). From this feedback, the model learns how to act in each situation.
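Here is a minimal sketch of that idea in Python (a toy two-action problem I made up for illustration, not taken from any library): the agent earns +1 for a correct decision and -1 for a wrong one, and keeps a running value estimate for each action that it learns from.

    import random

    # Toy setup (hypothetical): action "A" is the correct decision 80% of the time.
    values = {"A": 0.0, "B": 0.0}   # the agent's current estimate of each action
    alpha = 0.1                     # learning rate
    epsilon = 0.1                   # exploration probability

    for step in range(2000):
        # Mostly pick the best-looking action, sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(["A", "B"])
        else:
            action = max(values, key=values.get)

        # Reward point for a correct decision, negative point otherwise.
        reward = 1.0 if (action == "A") == (random.random() < 0.8) else -1.0

        # Move the estimate a small step toward the observed reward.
        values[action] += alpha * (reward - values[action])

    print(values)  # values["A"] should end up clearly above values["B"]

After enough steps, the agent prefers "A" purely because of the reward signal, without ever being told which action is correct.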

Suppose you want to teach your dog new tricks. Since dogs don't understand human language, you need a different strategy: you put the dog in a situation, let it try different responses, and when the response is right, you give it some food as a reward. Whenever the dog is in that situation again, it will tend to repeat the rewarded behavior. It will likewise learn from negative experiences what not to do in those situations.
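To make the analogy concrete, here is the dog scenario as a tiny state-action table in Python (the state names, actions, and reward values are all made up for illustration):

    import random

    # Hypothetical state/actions: on hearing "sit", sitting earns food (+1),
    # while barking is a mild negative experience (-0.1).
    q = {("hear_sit", "sit"): 0.0, ("hear_sit", "bark"): 0.0}
    alpha = 0.2  # learning rate

    for episode in range(500):
        state = "hear_sit"
        action = random.choice(["sit", "bark"])    # the dog tries things at random
        reward = 1.0 if action == "sit" else -0.1  # food vs. no food
        q[(state, action)] += alpha * (reward - q[(state, action)])

    # After training, the dog "does the same thing" in that situation:
    best = max(["sit", "bark"], key=lambda a: q[("hear_sit", a)])
    print(q)
    print("learned trick:", best)  # expected: "sit"

The learned table is the dog's experience: the rewarded action ends up with the highest value, so it is the one chosen in that situation.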

