B1949
Title: Learning reward functions from demonstrations of multi-agent interactions
Authors: Negar Mehr - University of Illinois Urbana-Champaign (United States) [presenting]
Abstract: To transform lives, robots need to interact with other agents in complex shared environments. In various scenarios, such as autonomous cars sharing roads with pedestrians and human-driven vehicles, delivery drones navigating shared aerial spaces, or robots operating within shared warehouses, the need for intelligent interaction among agents is evident. While reinforcement learning can facilitate efficient interactions when agents' objectives are explicitly known, this is not always the case, especially in human-robot interaction, where human rewards may be hidden. In such scenarios, inverse reinforcement learning (IRL) can be used to learn a human's reward function from their demonstrations. However, in interactive applications, agents are not isolated, and the decisions of all agents are mutually coupled; the game-theoretic coupling between agents' behaviors must therefore be taken into account. The focus is on how robots can learn and infer the reward functions of other agents in their surroundings, accounting for the preferences and objectives of these agents. The goal is to develop a mathematical theory and numerical algorithms that deduce these interrelated preferences from observations of agents' interactions. The approach will enhance the ability of robots to adapt and collaborate effectively in dynamic and interactive environments.
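As an illustrative sketch only (not necessarily the formulation adopted in this work), reward learning under game-theoretic coupling is often posed as an inverse dynamic game: each observed joint trajectory is assumed to be an (approximate) Nash equilibrium of a game with parameterized agent-wise rewards. The parameters $\theta_i$, the reward models $r_i$, and the residual objective below are assumptions introduced here for illustration.

\begin{align*}
&\text{Agent } i\text{'s objective, with the shared dynamics } x_{t+1} = f(x_t, u_{1,t},\dots,u_{N,t}) \text{ substituted in:} \\
&\qquad J_i(u_1,\dots,u_N;\theta_i) \;=\; \sum_{t=0}^{T} r_i\big(x_t, u_{1,t},\dots,u_{N,t};\theta_i\big), \\
&\text{Nash assumption on the demonstrated joint trajectory } u^\star: \quad
u_i^\star \in \arg\max_{u_i} J_i\big(u_i, u_{-i}^\star;\theta_i\big) \;\; \forall i, \\
&\text{Reward inference via stationarity residuals:} \quad
\hat{\theta} \in \arg\min_{\theta_1,\dots,\theta_N} \; \sum_{i=1}^{N} \big\| \nabla_{u_i} J_i\big(u^\star;\theta_i\big) \big\|^2 .
\end{align*}

The residual objective replaces the bilevel Nash constraint with its first-order optimality conditions, which keeps the inference problem tractable; the actual theory and algorithms presented in the talk may differ.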