M.Sc. Student: Lifschitz Daphna
Subject: General Purpose Policy Summaries for Reinforcement Learning
Department: Department of Industrial Engineering and Management
Supervisor: Dr. Ofra Amir
AI agents support high-stakes decision-making processes, from driving cars to prescribing drugs, making it increasingly important for human users to understand their behavior. Policy summarization methods aim to convey the strengths and weaknesses of agents trained using Reinforcement Learning by demonstrating their behavior in a subset of informative states. The key question for summarization methods is therefore which subset of a typically vast state space to include in the summary.
This thesis explores two approaches to extracting policy summaries, drawing on different computational models of human inference from the cognitive science literature. Both approaches aim to maximize users' ability to predict the agent's actions in unseen states. The first approach assumes that users will deploy a form of imitation learning based on the states observed in the summary; that is, they will predict the agent's actions in new states based on its actions in similar states shown in the summary. The second approach assumes that users will deploy inverse reinforcement learning; that is, they will try to infer the agent's reward function and, based on this function, predict its behavior in new states.
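To make the first user model concrete, the following is a minimal sketch (not the thesis's actual implementation) of an imitation-learning-style predictor: given the states and actions shown in a summary, it predicts the agent's action in a new state by copying the action taken in the most similar summarized state. The function name, the feature representation, and the use of Euclidean distance as the similarity measure are all illustrative assumptions.

```python
import numpy as np

def imitation_predict(summary_states, summary_actions, new_state):
    """Hypothetical imitation-learning user model: predict the agent's
    action in new_state by copying its action in the nearest
    (by Euclidean distance over state features) state in the summary."""
    states = np.asarray(summary_states, dtype=float)
    # Distance from the new state to every state shown in the summary.
    dists = np.linalg.norm(states - np.asarray(new_state, dtype=float), axis=1)
    # Copy the action demonstrated in the most similar summarized state.
    return summary_actions[int(np.argmin(dists))]

# Toy summary: two states described by 2-D feature vectors, with the
# agent's demonstrated action in each.
states = [[0.0, 0.0], [1.0, 1.0]]
actions = ["left", "right"]
print(imitation_predict(states, actions, [0.9, 0.8]))  # nearest to [1, 1] -> "right"
```

An inverse-reinforcement-learning user model would differ in that, rather than matching on state similarity, it would first fit a reward function consistent with the summarized behavior and then predict the action that maximizes that inferred reward in the new state.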
We examine how the choice of computational model affects users' ability to reconstruct agents' policies from summaries. Through computational simulations, we show that a mismatch between the model used to extract a summary and the model used to reconstruct the policy degrades reconstruction quality. Through a human-subject study, we show that people use different models to reconstruct policies in different contexts, and that matching the summary-extraction model to the model people actually use can improve performance. Taken together, our results suggest that user models must be considered carefully in policy summarization in order to create summaries that give users a global understanding of agent behavior.