Offline Reinforcement Learning With Behavior Value Regularization

IEEE Trans Cybern. 2024 Apr 26:PP. doi: 10.1109/TCYB.2024.3385910. Online ahead of print.

Abstract

Offline reinforcement learning (offline RL) aims to find task-solving policies from prerecorded datasets without online environment interaction. Unfortunately, extrapolation errors can cause over-optimistic Q-value estimates when learning from a fixed dataset, limiting the performance of the learned policy. To tackle this issue, this article proposes an offline actor-critic with behavior value regularization (OAC-BVR) method. In the policy evaluation stage, the difference between the Q-function and the value of the behavior policy is used as a regularization term, driving the learned value function toward the value of the behavior policy. The convergence of the proposed policy evaluation with behavior value regularization (PE-BVR) and the resulting value-function difference are both analyzed. Compared with existing offline actor-critic methods, the proposed OAC-BVR method incorporates the value of the behavior policy, thereby simultaneously alleviating over-optimistic Q-value estimates and reducing Q-function bias. Experimental results on the D4RL MuJoCo and Maze2d datasets demonstrate the validity of the proposed PE-BVR and the performance advantage of OAC-BVR over state-of-the-art offline RL algorithms. The code of OAC-BVR is available at https://github.com/LongyangHuang/OAC-BVR.
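A plausible form of the regularized policy-evaluation objective described in the abstract, given here only as an illustrative sketch (the squared-penalty form, the coefficient $\lambda$, and the behavior-value estimate $V^{\beta}$ are assumptions, not the paper's exact formulation), is

$$
\min_{\theta}\; \mathbb{E}_{(s,a,r,s')\sim\mathcal{D}}\Big[\big(Q_{\theta}(s,a) - r - \gamma\, Q_{\bar{\theta}}(s',a')\big)^{2} \;+\; \lambda\,\big(Q_{\theta}(s,a) - V^{\beta}(s)\big)^{2}\Big], \qquad a'\sim\pi(\cdot\mid s'),
$$

where $\mathcal{D}$ is the fixed offline dataset, $Q_{\bar{\theta}}$ denotes a target network, $V^{\beta}$ is the value of the behavior policy, and the second term is the behavior value regularizer that pulls the learned Q-function toward the behavior policy's value, counteracting over-optimistic estimates on out-of-distribution actions.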