
I'm working on a game where the AI is run on each update of the game loop. During this update, I have the chance to turn the AI-controlled entity and/or make it accelerate in the direction it is facing. I want it to reach a final location (within a reasonable range) and, at that location, have a specific velocity and direction (again, neither needs to be exact). That is, given the current:

  • P0(x, y) = Current position vector

  • V0(x, y) = Current velocity vector (units/second)

  • θ0 = Current direction (radians)

  • τmax = Max turn speed (radians/second)

  • αmax = Max acceleration (units/second^2)

  • |V|max = Absolute max speed (units/second)

  • Pf(x, y) = Target position vector

  • Vf(x, y) = Target velocity vector (units/second)

  • θf = Target rotation (radians)

Select an immediate:

  • τ = A turn speed within [-τmax, τmax]

  • α = An acceleration scalar within [0, αmax] (must accelerate in the direction it's currently facing)

Such that these are minimized:

  • t = Total time to move to the destination

  • |Pt-Pf| = Distance from target position at the end

  • |Vt-Vf| = Deviation from target velocity at end

  • |θt - θf| = Deviation from target rotation at end (wrapped to (-π, π))

The parameters can be re-computed during each iteration of the game loop. A picture says 1000 words so for the example given the current state as the blue dude, reach approximately the state of the red dude within as short a time as possible (arrows are velocity):

[figure omitted: blue = current state, red = target state, arrows = velocity]

Assuming a constant α and τ for Δt (Δt → 0 for an ideal solution) and splitting position/velocity into components, this gives (I think, my math is probably off):

[equations image omitted: the five per-step update equations for x, y, Vx, Vy, and θ]

(EDIT: that last one should be θ = θ0 + τΔt)
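Those per-step updates can be sanity-checked with a minimal Euler-integration sketch. The `step` function name and its tuple-based signature are my own, not from the question; it holds α and τ constant over a small Δt, as assumed above:

```python
import math

def step(x, y, vx, vy, theta, alpha, tau, dt):
    """Advance the entity one tick, holding alpha and tau constant over dt.

    Simple Euler integration; accurate enough when dt is small
    (the question assumes dt -> 0 for the ideal solution).
    """
    theta_new = theta + tau * dt                 # heading: theta = theta0 + tau*dt
    vx_new = vx + alpha * math.cos(theta) * dt   # thrust along current facing
    vy_new = vy + alpha * math.sin(theta) * dt
    x_new = x + vx * dt                          # position from previous velocity
    y_new = y + vy * dt
    return x_new, y_new, vx_new, vy_new, theta_new
```

With α = τ = 0 the entity just coasts along its current velocity, which is a quick way to check the signs.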

So, how do I select an immediate α and τ (remember these will be recomputed every iteration of the game loop, usually > 100 fps)? The simplest, most naive way I can think of is:

  1. Select a Δt equal to the average of the last few Δts between updates of the game loop (i.e. very small)

  2. Compute the above 5 equations for the next step, for all combinations of (α, τ) = {0, αmax} x {-τmax, 0, τmax} (only 6 combinations with 5 equations each, so it shouldn't take too long; and since they are run so often, the rather restrictive ranges will be amortized in the end)

  3. Assign weights to position, velocity, and rotation. Perhaps these weights could be dynamic (i.e. the further from the position the entity is, the more position is weighted).

  4. Greedily choose the combination that minimizes the weighted sum for the state Δt from now.
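The naive approach above can be sketched as a one-step lookahead over the six candidates. The function name, the state/target tuples, and the default weights are all my own assumptions; the weights are exactly the arbitrary knobs step 3 worries about:

```python
import math

def choose_controls(state, target, a_max, tau_max, dt,
                    w_pos=1.0, w_vel=1.0, w_rot=1.0):
    """Greedy one-step lookahead over the 6 (alpha, tau) candidates.

    state  = (x, y, vx, vy, theta)       -- current P0, V0, theta0
    target = (xf, yf, vxf, vyf, thetaf)  -- desired Pf, Vf, thetaf
    Returns the (alpha, tau) pair minimizing the weighted cost.
    """
    x, y, vx, vy, th = state
    xf, yf, vxf, vyf, thf = target
    best, best_cost = None, float("inf")
    for alpha in (0.0, a_max):
        for tau in (-tau_max, 0.0, tau_max):
            # Euler-integrate one step with constant alpha, tau
            th1 = th + tau * dt
            vx1 = vx + alpha * math.cos(th) * dt
            vy1 = vy + alpha * math.sin(th) * dt
            x1, y1 = x + vx * dt, y + vy * dt
            # wrap the heading error into (-pi, pi]
            dth = (th1 - thf + math.pi) % (2 * math.pi) - math.pi
            cost = (w_pos * math.hypot(x1 - xf, y1 - yf)
                    + w_vel * math.hypot(vx1 - vxf, vy1 - vyf)
                    + w_rot * abs(dth))
            if cost < best_cost:
                best, best_cost = (alpha, tau), cost
    return best
```

Note that over a single tiny step the position term barely distinguishes the candidates (position lags velocity by one step), which is one more symptom of the greedy, one-step horizon.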

It's potentially fast and simple; however, there are a few glaring problems with this:

  • Arbitrary selection of weights

  • It's a greedy algorithm that (by its very nature) can't backtrack

  • It doesn't take into account the problem space

  • If it frequently changes acceleration or turns, the animation could look "jerky".

Note that while the algorithm can (and probably should) save state between iterations, Pf, Vf, and θf can change every iteration (e.g. if the entity is trying to follow or position itself near another entity), so the algorithm needs to be able to adapt to changing conditions.

Any ideas? Is there a simple solution for this I'm missing?

Thanks.

1 Answer


What you need here is a PD controller. Draw a line from the current position to the target; the direction of that line, in radians, is your target heading. The current heading error is Eh = current heading - line heading. Then set the turn rate to Kp·Eh + Kd·(dEh/dt). Do this every step with a new line.

The above explanation is for the heading.

Here, acceleration means: accelerate until the entity has reached max speed or until it won't be able to stop in time. You threw up a bunch of integrals, so I'm sure you'll be fine with that calculation.
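That stopping-distance calculation works out to the classic v²/(2·αmax) check. A sketch (the function name and signature are mine; and since the question's model only allows α in [0, αmax], "stop in time" here means cutting thrust and coasting rather than actively braking):

```python
def throttle(speed, dist_to_target, a_max, v_max):
    """Bang-bang speed control: full acceleration unless we're at max
    speed or inside the braking distance v^2 / (2 * a_max)."""
    braking_dist = speed * speed / (2.0 * a_max)
    if speed >= v_max or dist_to_target <= braking_dist:
        return 0.0   # cut thrust (no reverse thrust in this model)
    return a_max
```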

Now, for dealing with a changing target position: you're translating the target into a line, and that line becomes the target for the PD controller. If the target moves, the line has to move as well. The nice thing about a PD controller is that it is BUILT to handle these changing situations; that's the whole point. Change the line as you see fit, and the PD controller will adapt automatically from any starting point.
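The heading part of this can be sketched in a few lines. The class name and the gains Kp, Kd are hypothetical tuning values, not from the answer; I also wrap the error into (-π, π] so the entity always turns the short way, and clamp the result to the τmax limit from the question:

```python
import math

class HeadingPD:
    """Minimal PD controller for heading. Kp and Kd are gains you tune."""

    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_err = None

    def turn_rate(self, heading, target_heading, dt, tau_max):
        # wrap the error into (-pi, pi] so we always turn the short way
        err = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
        # derivative term: rate of change of the error since last tick
        d_err = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        tau = self.kp * err + self.kd * d_err
        return max(-tau_max, min(tau_max, tau))  # clamp to the turn limit
```

Recompute `target_heading` from a fresh line to the target every frame, and the controller tracks the moving target with no extra bookkeeping.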

