A1447
Title: Blessing from human-AI interaction: Super reinforcement learning in confounded environments
Authors: Jiayi Wang - The University of Texas at Dallas (United States) [presenting]
Zhengling Qi - The George Washington University (United States)
Chengchun Shi - LSE (United Kingdom)
Abstract: As AI becomes more prevalent throughout society, effective methods of integrating humans and AI systems that leverage their respective strengths and mitigate risks have become an important priority. The paradigm of super reinforcement learning is introduced, which takes advantage of human-AI interaction for data-driven sequential decision making. This approach uses the observed action, whether from an AI or a human, as input to achieve a stronger oracle in policy learning for the decision maker (human or AI). In decision processes with unmeasured confounding, the actions taken by past agents can reveal valuable information about what was not disclosed. By incorporating this information into the policy search in a novel and legitimate manner, the proposed super reinforcement learning yields a super-policy that is guaranteed to outperform both the standard optimal policy and the behavior policy (e.g., the past agents' actions). This stronger oracle is called a blessing from human-AI interaction. Furthermore, to address unmeasured confounding when learning super-policies from batch data, a number of nonparametric and causal identification results are established. Building on these novel identification results, several super-policy learning algorithms are developed, and their theoretical properties, such as finite-sample regret guarantees, are systematically studied.
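A minimal toy sketch (not from the paper) of why conditioning on the behavior action can help under unmeasured confounding: in the simulated bandit below, an unmeasured confounder `u` determines the reward, the behavior agent's action `a_beh` is a noisy proxy for `u`, and a super-policy that takes `a_beh` as an extra input beats both the best `a_beh`-blind policy and the behavior policy. The confounder structure, signal accuracies, and fusion rule are all hypothetical illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unmeasured confounder u; reward(a) = 1{a == u}.
u = rng.integers(0, 2, size=n)
# Observed context x determines whose information is reliable.
x = rng.integers(0, 2, size=n)

def noisy_copy(u, acc):
    """Return u with probability acc, else the flipped value."""
    return np.where(rng.random(u.shape) < acc, u, 1 - u)

# Observed state signal s: accurate (90%) when x == 0, weak (60%) when x == 1.
s = np.where(x == 0, noisy_copy(u, 0.9), noisy_copy(u, 0.6))
# Behavior agent's action (the agent partially sees u): the mirror image.
a_beh = np.where(x == 0, noisy_copy(u, 0.6), noisy_copy(u, 0.9))

def value(a):
    return (a == u).mean()

# Standard optimal policy (ignores a_beh): following s is the best it can do.
v_standard = value(s)
# Behavior policy: follow the past agent's action.
v_behavior = value(a_beh)
# Super-policy: uses the observed action as an extra input, trusting s where
# s is reliable and a_beh where the behavior agent is reliable.
a_super = np.where(x == 0, s, a_beh)
v_super = value(a_super)

print(f"standard {v_standard:.2f}, behavior {v_behavior:.2f}, super {v_super:.2f}")
```

Here both baseline values are about 0.75, while the super-policy reaches about 0.90, mirroring the abstract's claim that the super-policy can strictly dominate both the standard optimal policy and the behavior policy.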