CMStatistics 2022
B1635
Title: Learning Bellman complete representations for offline policy evaluation
Authors: Jonathan Chang - Cornell University (United States)
Kaiwen Wang - Cornell University (United States)
Nathan Kallus - Cornell University (United States) [presenting]
Wen Sun - Cornell University (United States)
Abstract: Representation learning for offline reinforcement learning is studied, focusing on the task of off-policy evaluation (OPE). Recent work shows that, in contrast to supervised learning, realizability of the Q-function is not enough for learning it. Two sufficient conditions for sample-efficient OPE are Bellman completeness and coverage. Prior work often assumes that representations satisfying these conditions are given, with results being mostly theoretical in nature. We propose BCRL, which directly learns from data an approximately linear Bellman complete representation with good coverage. With this learned representation, we perform OPE using Least Squares Policy Evaluation (LSPE) with linear functions in our learned representation. We present an end-to-end theoretical analysis, showing that our two-stage algorithm enjoys polynomial sample complexity provided some representation in the rich class considered is linear Bellman complete. Empirically, we extensively evaluate our algorithm on challenging, image-based continuous control tasks from the DeepMind Control Suite. We show that our representation enables better OPE compared to previous representation learning methods developed for off-policy RL. BCRL achieves competitive OPE error with the state-of-the-art method Fitted Q-Evaluation (FQE), and beats FQE when evaluating beyond the initial state distribution. Our ablations show that both the linear Bellman completeness and coverage components of our method are crucial.
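
The second stage of the algorithm, LSPE with linear functions in a fixed representation, can be illustrated with a minimal sketch. The sketch below is not the authors' implementation; the function name lspe_ope and the feature arrays phi_sa, phi_next_pi, and phi_init_pi are illustrative assumptions standing in for features produced by a learned encoder such as the one BCRL trains.

```python
import numpy as np

def lspe_ope(phi_sa, rewards, phi_next_pi, phi_init_pi,
             gamma=0.99, n_iters=100, reg=1e-3):
    """Least-squares policy evaluation with a fixed linear representation.

    phi_sa      : (n, d) features of the logged (s, a) pairs
    rewards     : (n,)   observed rewards
    phi_next_pi : (n, d) features of (s', pi(s')) under the evaluation policy
    phi_init_pi : (m, d) features of (s0, pi(s0)) at the initial distribution
    Returns the estimated value of the evaluation policy at the initial states.
    """
    n, d = phi_sa.shape
    # Precompute the regularized least-squares projection matrix once.
    A_inv = np.linalg.inv(phi_sa.T @ phi_sa + reg * np.eye(d))
    w = np.zeros(d)
    for _ in range(n_iters):
        # Bellman backup target under the evaluation policy.
        targets = rewards + gamma * (phi_next_pi @ w)
        # Project the target back onto the linear function class.
        w = A_inv @ (phi_sa.T @ targets)
    # Average the learned linear Q-function over the initial state distribution.
    return float(np.mean(phi_init_pi @ w))
```

If the representation is (approximately) linear Bellman complete and the data provide good coverage, each projection step above incurs little approximation error, which is why the quality of the learned features determines the quality of the OPE estimate.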