Improved Adaptive-Reinforcement Learning Control for Morphing Unmanned Air Vehicles

IEEE Trans Syst Man Cybern B Cybern. 2008 Aug;38(4):1014-20. doi: 10.1109/TSMCB.2008.922018.

Abstract

This paper presents an improved Adaptive-Reinforcement Learning Control methodology for the problem of unmanned air vehicle morphing control. The reinforcement learning morphing control function, which learns the optimal shape-change policy, is integrated with an adaptive dynamic inversion control trajectory tracking function. An episodic unsupervised learning simulation using the Q-learning method is developed to replace an earlier and less accurate Actor-Critic algorithm. Sequential Function Approximation, a Galerkin-based scattered data approximation scheme, replaces a K-Nearest Neighbors (KNN) method and is used to generalize the learning from previously experienced quantized states and actions to the continuous state-action space, much of which may not have been experienced before. The improved method showed smaller errors and improved learning of the optimal shape compared to the KNN-based approach.
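To make the episodic Q-learning component concrete, the sketch below shows a minimal tabular Q-learning loop over a quantized state-action space. It is an illustrative assumption, not the paper's implementation: the state/action sets, reward function, transition model, and all parameter values (`n_states`, `n_actions`, `alpha`, `gamma`, `eps`) are hypothetical placeholders, and the Sequential Function Approximation used in the paper to generalize to the continuous space is not reproduced here.

```python
import numpy as np

# Minimal episodic Q-learning sketch over quantized states/actions.
# All quantities below are hypothetical stand-ins for the paper's UAV morphing problem.
n_states, n_actions = 20, 4          # quantized shape states / shape-change actions (assumed sizes)
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate (assumed values)

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Placeholder environment: returns next quantized state and reward."""
    next_state = (state + action) % n_states
    reward = -abs(next_state - n_states // 2)  # hypothetical penalty for deviating from the 'optimal shape'
    return next_state, reward

for episode in range(500):
    s = int(rng.integers(n_states))
    for t in range(50):
        # epsilon-greedy selection over the quantized action set
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```

In the paper, the table-like learned values over quantized states and actions are further generalized to unvisited points of the continuous state-action space by a scattered data approximation scheme (Sequential Function Approximation), which the sketch above does not attempt to model.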

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Aircraft*
  • Algorithms*
  • Computer Simulation
  • Feedback
  • Models, Theoretical*
  • Programming, Linear*
  • Reinforcement, Psychology*
  • Robotics / methods*
  • Systems Theory*