Gamma (γ) is the discount factor. It takes a value between 0 and 1; the higher the value, the less you discount future rewards. Gamma is seen as part of the problem, not of the algorithm. A reinforcement learning algorithm tries, for each state, to optimize the cumulative discounted reward:
R1 + gamma*R2 + gamma^2*R3 + gamma^3*R4 + ...
where Rn is the reward received at time step n from the current state. So, for one choice of gamma the algorithm may optimize one thing, and for another choice it will optimize something else.
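As a minimal sketch of this formula, the snippet below computes the discounted return for a finite list of rewards; the reward values and gamma settings are illustrative, not taken from any particular environment.

```python
def discounted_return(rewards, gamma):
    """Return R1 + gamma*R2 + gamma^2*R3 + ... for a finite reward sequence."""
    total = 0.0
    for step, reward in enumerate(rewards):
        total += (gamma ** step) * reward
    return total

# A higher gamma weights later rewards more heavily (less discounting).
rewards = [1.0, 1.0, 1.0, 10.0]
print(discounted_return(rewards, 0.5))   # later rewards count for little
print(discounted_return(rewards, 0.99))  # later rewards count almost fully
```

Running it with the two gamma values shows the point of the paragraph above: the same reward sequence yields a very different objective depending on how much you discount.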
Lambda (λ) is a credit assignment variable. It also takes a value between 0 and 1: the higher the value, the more credit you assign to states and actions further back in time. Lambda is part of the algorithm, not of the problem. The lambda parameter decides how much you bootstrap on earlier learned values versus using the current Monte Carlo roll-out. This reflects a trade-off between more bias (low lambda) and more variance (high lambda). In many cases, setting lambda to zero already gives a fine algorithm, but setting it higher helps speed up learning; a tabular sketch of such an update is shown below.
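The following is a hedged sketch of a tabular TD(lambda) value update with accumulating eligibility traces, one common way the lambda trade-off shows up in practice. The state indexing, the (state, reward, next_state, done) episode format, and the alpha/gamma/lam values are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

def td_lambda_episode(V, episode, alpha=0.1, gamma=0.9, lam=0.8):
    """Update value estimates V in place for one episode of transitions."""
    eligibility = np.zeros_like(V)
    for state, reward, next_state, done in episode:
        # TD error: bootstrap on the current estimate of the next state.
        target = reward + (0.0 if done else gamma * V[next_state])
        td_error = target - V[state]
        # Accumulating trace: recently visited states get more credit.
        eligibility[state] += 1.0
        # Every state is updated in proportion to its eligibility;
        # lam = 0 reduces to one-step TD, lam -> 1 approaches Monte Carlo.
        V += alpha * td_error * eligibility
        eligibility *= gamma * lam
    return V

# Toy usage: a 3-state chain ending in a terminal reward.
V = np.zeros(3)
episode = [(0, 0.0, 1, False), (1, 0.0, 2, False), (2, 1.0, 2, True)]
print(td_lambda_episode(V, episode))
```

With lam = 0 only the most recent state is updated at each step; with higher lam, earlier states in the episode also receive a share of each TD error, which is the bias-variance trade-off described above.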