
I've made a robot controlled by Arduino and Processing, which moves in a room by rotating itself (like a sphere).

What I need is to get the robot's new location after it moves across the floor (say within a 3 m x 3 m room). I'm using a 9DOF sensor (3 axes of accelerometer data, 3 axes of gyroscope data, and 3 axes of magnetometer data) to determine its roll, pitch, and yaw, and also its heading.

How can I accurately identify the robot's location in Cartesian (x, y, z) coordinates relative to its starting position? I cannot use GPS, since each rotation moves the robot less than 20 cm and it will be used indoors.

I found some indoor ranging and 3D positioning solutions, such as Pozyx or a fixed camera, but I need something cost-efficient.

Is there any way to convert the 9DOF data into the new location, or another sensor or algorithm that could do this?

1 Answer

A common approach is "sensor fusion", which combines the data from several sensors into a better estimate of, for example, the position. It will, however, still accumulate error over time if you rely on the accelerometer and gyro alone: double-integrating acceleration means a constant bias b grows into a position error of about ½·b·t², so even a 0.01 m/s² bias gives roughly 4.5 m of error after 30 s. The magnetic vector helps with heading, but the position estimate will probably still drift.

I found the following guide online, which gives an introduction to sensor fusion with Kalman filters on an Arduino:

http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1114&context=aerosp
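
As a rough illustration (not the filter from that guide), a minimal one-dimensional Kalman filter that fuses a gyro rate with an accelerometer-derived angle can be written as a small C++ struct. The noise constants below are placeholder values you would tune for your own sensor, and the angle/rate inputs are assumed to come from whatever driver your 9DOF board uses.

// Minimal 1-D Kalman filter: fuses a gyro rate with an accelerometer angle.
// Illustrative sketch only; tune Q_angle, Q_bias and R_measure for your IMU.
struct KalmanAngle {
  float Q_angle = 0.001f;   // process noise for the angle (assumed value)
  float Q_bias  = 0.003f;   // process noise for the gyro bias (assumed value)
  float R_measure = 0.03f;  // accelerometer angle measurement noise (assumed)

  float angle = 0.0f;       // filtered angle estimate (degrees)
  float bias  = 0.0f;       // estimated gyro bias (degrees/s)
  float P[2][2] = {{0, 0}, {0, 0}};  // error covariance matrix

  // accelAngle: angle from the accelerometer (deg), gyroRate: deg/s, dt: s
  float update(float accelAngle, float gyroRate, float dt) {
    // Predict: integrate the bias-corrected gyro rate.
    float rate = gyroRate - bias;
    angle += dt * rate;
    P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + Q_angle);
    P[0][1] -= dt * P[1][1];
    P[1][0] -= dt * P[1][1];
    P[1][1] += Q_bias * dt;

    // Update: correct the prediction with the accelerometer measurement.
    float S  = P[0][0] + R_measure;
    float K0 = P[0][0] / S;
    float K1 = P[1][0] / S;
    float y  = accelAngle - angle;
    angle += K0 * y;
    bias  += K1 * y;

    float P00 = P[0][0], P01 = P[0][1];
    P[0][0] -= K0 * P00;
    P[0][1] -= K0 * P01;
    P[1][0] -= K1 * P00;
    P[1][1] -= K1 * P01;
    return angle;
  }
};

// Usage inside a loop, with readings from your own IMU driver:
//   pitch = pitchFilter.update(accelPitchDeg, gyroPitchRateDegPerSec, dt);

Running one such filter per axis gives a much smoother roll/pitch estimate than the accelerometer alone. Note, though, that it estimates orientation, not position, so some external reference (beacons, a camera, or the SLAM approach mentioned below) is still needed to keep the position estimate from drifting.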

More generally, an autonomous navigation module for this kind of robot typically combines several artificial intelligence and autonomy features:

  • Autonomous exploration. The algorithm identifies areas that have not yet been explored and defines an optimal exploration strategy.

  • Pathfinding. The algorithm looks for the optimal path (e.g. Dijkstra's algorithm; a small sketch is given after this list).

  • Simultaneous Localization and Mapping (SLAM). With the help of lidar, the robot can recognize its nearby surroundings and use the sensor data to localize itself on a map that is built up as it moves.

  • Automatic path following

  • Obstacle detection and avoidance

  • Real-time path re-planning. If obstacle avoidance leads to a significant deviation from the initially planned path, this feature searches the established map for a new path that allows the robot to reach its final goal.

  • Detection of areas of interest

  • Object classification through visual recognition (learning and recognition algorithm).
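
For the pathfinding item above, a minimal sketch of Dijkstra's algorithm on a small occupancy grid could look like the following. This is an illustration only, not the module's actual implementation; the grid, start, and goal cells are made-up values.

// Illustrative Dijkstra search on a small occupancy grid (C++17).
// Cells marked 1 are obstacles; each step in the four cardinal directions costs 1.
#include <cstdio>
#include <queue>
#include <vector>

int main() {
  const int W = 5, H = 5;
  const int grid[5][5] = {
    {0, 0, 0, 0, 0},
    {0, 1, 1, 1, 0},
    {0, 0, 0, 1, 0},
    {1, 1, 0, 1, 0},
    {0, 0, 0, 0, 0},
  };
  const int sx = 0, sy = 0;   // start cell (made-up)
  const int gx = 4, gy = 4;   // goal cell (made-up)

  std::vector<int> dist(W * H, 1000000000), prev(W * H, -1);
  using Node = std::pair<int, int>;  // (distance, cell index)
  std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;

  dist[sy * W + sx] = 0;
  pq.push({0, sy * W + sx});
  const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

  while (!pq.empty()) {
    auto [d, u] = pq.top();
    pq.pop();
    if (d > dist[u]) continue;  // skip stale queue entries
    int ux = u % W, uy = u / W;
    for (int k = 0; k < 4; ++k) {
      int nx = ux + dx[k], ny = uy + dy[k];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
      int v = ny * W + nx;
      if (d + 1 < dist[v]) {
        dist[v] = d + 1;
        prev[v] = u;
        pq.push({dist[v], v});
      }
    }
  }

  // Walk predecessors back from the goal to recover the path.
  std::vector<int> path;
  for (int v = gy * W + gx; v != -1; v = prev[v]) path.push_back(v);
  std::printf("shortest path: %d steps over %zu cells\n",
              dist[gy * W + gx], path.size());
  return 0;
}

On a real robot the grid would come from the map built by SLAM, and the same structure works with A* by adding a heuristic to the queue priority.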

For more information about approaches to mobile robot localization in indoor environments, refer to the following link: https://pdfs.semanticscholar.org/c6d3/77e67711f5f688141257e6c61809e80869e4.pdf
