Abstract
This work proposes a novel, learning-based method that improves the navigation time of unmanned aerial vehicles in dense environments by planning swift maneuvers with motion primitives. In the proposed planning framework, desirable motion primitives are discovered through reinforcement learning. A two-stage training procedure, combining learning in simulation with real flights, is conducted to build a library of swift motion primitives. The library is then consulted in real time, and its primitives are deployed by an intelligent control authority switch mechanism whenever a portion of the trajectory calls for a swift maneuver. Because the library is constructed from realistic Gazebo simulations together with real flights, the modeling uncertainties that could degrade planning performance are minimized. Moreover, since the library consists of motion primitives, it is computationally inexpensive to store and query during planning, compared to solving the optimal motion planning problem algebraically. Overall, the proposed method enables exceptionally swift maneuvers and improves navigation time in dense environments by up to 20%, as demonstrated in real flights with a Diatone FPV250 quadrotor equipped with a PX4 FMU.
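The real-time lookup and control authority switch described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the primitive fields, the nearest-neighbor query, and the `switch_threshold` parameter are all hypothetical assumptions about how such a library might be stored and consulted.

```python
# Hypothetical sketch of a motion-primitive library with a control
# authority switch. All names, fields, and thresholds are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionPrimitive:
    gap_width: float    # passage width the maneuver was trained for (m)
    entry_speed: float  # forward speed at which the maneuver starts (m/s)
    duration: float     # execution time of the maneuver (s)

# A tiny library, standing in for primitives distilled from the
# two-stage (simulation + real-flight) training.
LIBRARY = [
    MotionPrimitive(gap_width=0.6, entry_speed=3.0, duration=0.8),
    MotionPrimitive(gap_width=0.9, entry_speed=4.5, duration=0.6),
    MotionPrimitive(gap_width=1.2, entry_speed=6.0, duration=0.5),
]

def lookup_primitive(gap_width: float) -> MotionPrimitive:
    """Cheap nearest-neighbor query into the primitive library."""
    return min(LIBRARY, key=lambda p: abs(p.gap_width - gap_width))

def control_authority(
    gap_ahead: Optional[float], switch_threshold: float = 1.5
) -> Tuple[str, Optional[MotionPrimitive]]:
    """Decide which controller owns the vehicle for the next segment.

    If a narrow passage (below `switch_threshold` meters) lies ahead,
    hand authority to the best-matching swift primitive; otherwise the
    nominal trajectory-tracking controller keeps control.
    """
    if gap_ahead is not None and gap_ahead < switch_threshold:
        return ("primitive", lookup_primitive(gap_ahead))
    return ("nominal", None)
```

Because the switch reduces to a threshold check plus a library query, it stays far cheaper per planning cycle than re-solving an optimal control problem online, which is the trade-off the abstract highlights.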
| Original language | English |
|---|---|
| Journal | Autonomous Robots |
| Volume | 43 |
| Issue | 7 |
| Pages (from-to) | 1733-1745 |
| Number of pages | 13 |
| ISSN | 0929-5593 |
| DOIs | |
| Publication status | Published - Oct 2019 |
Keywords
- Agile maneuvers
- Path planning
- Quadrotor
- Reinforcement learning
- Design
- Flight
- Trajectory generation