Abstract
Recent work in multi-agent intention scheduling has shown that enabling agents to predict the actions of other agents when choosing their own actions may be beneficial. However, existing approaches to 'intention-aware' scheduling assume that the programs of other agents are known, or are 'similar' to that of the agent making the prediction. While this assumption is reasonable in some circumstances, it is less plausible when the agents are not co-designed. In this paper, we present a new approach to multi-agent intention scheduling in which agents predict the actions of other agents based on a high-level specification of the tasks performed by an agent in the form of a reward machine (RM), rather than on its (assumed) program. We show how a reward machine can be used to generate tree and rollout policies for an MCTS-based scheduler. We evaluate our approach in a range of multi-agent environments, and show that RM-based scheduling outperforms previous intention-aware scheduling approaches in settings where agents are not co-designed.
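To illustrate the idea of using an RM as a prediction model, the sketch below shows a minimal reward machine and a rollout policy that prefers actions whose high-level event advances the predicted agent's RM. This is an illustrative sketch only; the names (`RewardMachine`, `rm_rollout_policy`, `event_of`) and the greedy progression heuristic are assumptions, not the paper's implementation or API.

```python
from dataclasses import dataclass, field
import random


@dataclass
class RewardMachine:
    """Minimal reward machine: finite states, transitions labelled by
    high-level events, and a reward attached to each transition."""
    initial: str
    transitions: dict          # (state, event) -> (next_state, reward)
    terminal: set = field(default_factory=set)

    def step(self, state, event):
        # Events with no matching transition leave the RM state unchanged.
        return self.transitions.get((state, event), (state, 0.0))


def rm_rollout_policy(rm, rm_state, available_actions, event_of, rng=random):
    """Rollout-policy sketch: prefer actions whose high-level event
    progresses the other agent's reward machine; otherwise act randomly."""
    progressing = [a for a in available_actions
                   if (rm_state, event_of(a)) in rm.transitions]
    return rng.choice(progressing or list(available_actions))


# Toy usage: predicting an agent whose task is "get the key, then open the door".
rm = RewardMachine(
    initial="u0",
    transitions={("u0", "got_key"): ("u1", 0.0),
                 ("u1", "opened_door"): ("u2", 1.0)},
    terminal={"u2"},
)
predicted = rm_rollout_policy(
    rm, "u0",
    available_actions=["move_left", "pick_up_key"],
    event_of=lambda a: "got_key" if a == "pick_up_key" else "noop",
)
```

In a full MCTS-based scheduler, the same RM-progress signal could in principle also bias the tree policy; here it only shapes the rollouts, purely for illustration.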
Original language | English |
---|---|
Title of host publication | Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22) |
Editors | Luc De Raedt
Pages | 215-222 |
ISBN (Electronic) | 9781956792003 |
DOIs | |
Publication status | Published - 2022 |
Externally published | Yes |