Heterogeneity and strategic sophistication in multi-agent reinforcement learning
Abstract
Decision-making powered by artificial intelligence (AI) is becoming increasingly prevalent in socio-technical systems such as finance, smart transportation, security, and robotics. There is therefore a critical need to develop the theoretical foundations of how multiple AI decision-makers interact with each other and with humans, in order to ensure their reliable use in these systems. Since multiple AI decision-makers act autonomously without central coordination, heterogeneity of their algorithms is inevitable. We establish a theoretical framework for the impact of heterogeneity on multi-agent sequential decision-making under uncertainty. First, we examine the potential heterogeneity of independent learning algorithms, which assume that opponents play according to some stationary strategy. To this end, we present a broad family of algorithms that encompasses widely studied dynamics such as fictitious play and Q-learning. While existing convergence results cover only homogeneous cases, in which every agent uses the same algorithm, we show that agents can still converge to equilibrium when they follow any two different members of this family. This strengthens the predictive power of game-theoretic equilibrium analysis for heterogeneous systems. We then analyze how a strategically sophisticated agent can manipulate independent learners, revealing a vulnerability of such independent reinforcement learning algorithms. Finally, we demonstrate the practical implications of our findings by implementing our results in stochastic security games, highlighting their potential for real-life applications, and by exploring the impact of strategic AI on human-AI interactions in cyber-physical systems.
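To make the heterogeneity claim concrete, below is a minimal Python sketch, not the thesis's actual algorithm family: two independent learners repeatedly play a simple 2x2 coordination game, one following classical fictitious play (best response to the opponent's empirical action frequencies) and the other a Q-learning-style smoothed (softmax) response to running value estimates. The payoff matrix, temperature tau, and step size alpha are illustrative assumptions; despite the heterogeneous update rules, the pair tends to settle on one of the pure coordination equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric 2x2 coordination game: both players earn 1 for matching actions.
# (Hypothetical example game; not from the thesis.)
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

counts = np.ones(2)               # player 0's empirical counts of player 1's actions
q = np.zeros(2)                   # player 1's running action-value estimates
T, tau, alpha = 5000, 0.1, 0.05   # horizon, softmax temperature, step size (assumed)

for t in range(T):
    # Player 0: fictitious play -- best response to the empirical opponent mix.
    belief = counts / counts.sum()
    a0 = int(np.argmax(payoff @ belief))

    # Player 1: Q-learning-style smoothed (softmax) response to its estimates.
    p1 = np.exp(q / tau)
    p1 /= p1.sum()
    a1 = int(rng.choice(2, p=p1))

    # Independent updates: each agent observes only the realized actions.
    counts[a1] += 1
    q[a1] += alpha * (payoff[a1, a0] - q[a1])

print("player 0 belief about player 1:", belief)
print("player 1 softmax policy:", p1)
```

Running the sketch, both the fictitious-play belief and the softmax policy concentrate on the same action, i.e., a pure Nash equilibrium of the coordination game, illustrating the kind of convergence-under-heterogeneity result the abstract describes.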