The common way of dealing with this problem is with actor-critic methods, which extend naturally to continuous action spaces.
Reinforcement Learning often targets real-world domains with continuous state and action spaces in which optimal behaviour must be found. Many techniques have been developed to handle continuous state spaces in RL algorithms, but they can hardly be extended to continuous action spaces, where, besides computing a good approximation of the value function, a fast method for identifying the highest-valued action is needed.
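To make the actor-critic idea concrete, below is a minimal sketch of a one-step actor-critic with a Gaussian policy over a continuous action, using linear features and a TD(0) critic. The toy 1-D task, the features, and all hyperparameters are illustrative assumptions, not taken from the paper discussed here.

```python
# Minimal sketch of a one-step actor-critic for a continuous action space.
# The toy 1-D "stay near the origin" task and all hyperparameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def features(s):
    # Simple polynomial features of the scalar state.
    return np.array([1.0, s, s * s])

w = np.zeros(3)       # critic weights: V(s) ~ w . phi(s)
theta = np.zeros(3)   # actor weights: policy mean mu(s) = theta . phi(s)
sigma = 0.5           # fixed exploration std of the Gaussian policy
alpha_w, alpha_theta, gamma = 0.1, 0.01, 0.95

for episode in range(200):
    s = rng.uniform(-2.0, 2.0)
    for t in range(50):
        phi = features(s)
        mu = theta @ phi
        a = rng.normal(mu, sigma)          # sample a continuous action
        s_next = s + 0.1 * a               # toy dynamics: action nudges the state
        r = -s_next ** 2                   # reward: stay close to the origin
        td_error = r + gamma * (w @ features(s_next)) - (w @ phi)
        w += alpha_w * td_error * phi      # critic: TD(0) update
        # Actor: policy-gradient step using grad log N(a; mu, sigma) wrt theta.
        theta += alpha_theta * td_error * ((a - mu) / sigma ** 2) * phi
        s = s_next

print("learned policy mean at s=1.0:", theta @ features(1.0))
```

The key point is that the actor outputs a continuous action directly (here, the mean of a Gaussian), so no maximisation over a discretised action set is ever required.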
A notable variant is the actor-critic approach in which the policy of the actor is estimated through sequential Monte Carlo methods, as proposed in “Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods”.
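As a rough illustration of the sequential Monte Carlo idea, the sketch below keeps a set of candidate actions ("particles"), weights them by their estimated value, and periodically resamples so that high-valued actions survive and are perturbed slightly. This is not the exact SMC-learning algorithm from the paper; the weighting scheme, value estimates, and resampling schedule here are simplifying assumptions.

```python
# Rough sketch of SMC-style action selection: maintain action particles,
# weight them by estimated value, and resample toward high-valued actions.
# The details below are simplifying assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)

n_particles = 20
actions = rng.uniform(-1.0, 1.0, size=n_particles)   # initial action particles
q_values = np.zeros(n_particles)                      # running value estimates

def select_action():
    # Sample a particle with probability proportional to a softmax of its value.
    w = np.exp(q_values - q_values.max())
    return rng.choice(n_particles, p=w / w.sum())

def update(idx, reward, alpha=0.2):
    # Incremental estimate of the chosen particle's value from observed reward.
    q_values[idx] += alpha * (reward - q_values[idx])

def resample(noise=0.05):
    # Importance resampling: duplicate high-valued particles, then perturb
    # them slightly so the particle set keeps exploring nearby actions.
    global actions, q_values
    w = np.exp(q_values - q_values.max())
    keep = rng.choice(n_particles, size=n_particles, p=w / w.sum())
    actions = actions[keep] + rng.normal(0.0, noise, size=n_particles)
    q_values = q_values[keep]

# Toy usage: the unknown "best" action is 0.3, a stand-in for whatever the
# critic would report in a full actor-critic setup.
for step in range(2000):
    i = select_action()
    reward = -(actions[i] - 0.3) ** 2 + rng.normal(0.0, 0.01)
    update(i, reward)
    if (step + 1) % 200 == 0:
        resample()

print("best surviving action:", actions[np.argmax(q_values)])
```

The appeal of this style of policy representation is that finding a good action never requires an explicit maximisation over a continuous space; the particle set itself concentrates around high-valued actions over time.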
The paper evaluates this idea on a control problem in which a controller must drive a boat from the left bank to the right bank of a river with a strong nonlinear current. The proposed SMC-learning algorithm and Continuous Q-learning are compared on this boat-steering task.
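For a sense of what such a benchmark looks like, here is a deliberately simplified, hypothetical boat-crossing environment: the agent chooses a continuous heading while a current, strongest mid-river, pushes the boat downstream. The dynamics, reward, and constants are illustrative assumptions and do not reproduce the benchmark used in the paper.

```python
# Simplified, hypothetical boat-crossing environment for illustration only.
import numpy as np

class BoatCrossing:
    def __init__(self, width=1.0, speed=0.05):
        self.width = width          # river width: left bank x=0, right bank x=width
        self.speed = speed          # boat speed per step
        self.reset()

    def reset(self):
        self.x, self.y = 0.0, 0.0   # start on the left bank
        return np.array([self.x, self.y])

    def current(self, x):
        # Nonlinear current: zero at both banks, strongest in the middle.
        return 0.08 * np.sin(np.pi * x / self.width)

    def step(self, heading):
        # heading is a continuous action in radians (0 = straight across).
        self.x += self.speed * np.cos(heading)
        self.y += self.speed * np.sin(heading) - self.current(self.x)
        done = self.x >= self.width
        # Reward: land on the right bank close to the target quay at y = 0.
        reward = -abs(self.y) if done else -0.01
        return np.array([self.x, self.y]), reward, done

env = BoatCrossing()
state = env.reset()
for _ in range(100):
    state, reward, done = env.step(heading=0.2)  # fixed heading for illustration
    if done:
        break
print("landed at y =", round(state[1], 3), "reward =", round(reward, 3))
```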
Thus, the ability to handle continuous action spaces is an important part of what makes Reinforcement Learning useful for real-world control problems in the software domain.