Multiagent systems: learning, strategic behavior, cooperation, and network formation

Abstract

Many applications, ranging from crowdsourcing to recommender systems, involve informationally decentralized agents that repeatedly interact with each other in order to reach their goals. These networked agents base their decisions on incomplete information, gathered through interactions with their neighbors or through cooperation, which is often costly. This chapter discusses decentralized learning algorithms that enable the agents to achieve their goals through repeated interaction. First, we discuss cooperative online learning algorithms that help the agents discover beneficial connections with each other and exploit these connections to maximize their rewards. For this setting, we explain the relationship between learning speed, network topology, and cooperation cost. Then, we focus on how informationally decentralized agents form cooperation networks through learning. We explain how learning features prominently in many real-world interactions and greatly affects the evolution of social networks: links that otherwise would not have formed may now appear, and a much greater variety of network configurations can be reached. We show that the impact of learning on efficiency and social welfare can be either positive or negative. Finally, we demonstrate the use of these methods in popularity prediction, recommender systems, expert selection, and multimedia content aggregation.
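As a concrete illustration of the cooperative online learning setting described above, the sketch below implements a simple UCB1-style bandit learner in which each agent can, at a fixed cooperation cost, pool reward statistics with a neighbor before choosing an action. The class name, cost model, and sharing rule are illustrative assumptions for this sketch, not the algorithms analyzed in the chapter.

import math
import random

class CooperativeUCBAgent:
    """UCB1-style learner that can pool reward statistics with a neighbor at a cost (illustrative sketch)."""

    def __init__(self, n_arms, cooperation_cost=0.05):
        self.n_arms = n_arms
        self.cooperation_cost = cooperation_cost  # assumed fixed per-query cost
        self.counts = [0] * n_arms                # pulls of each arm
        self.means = [0.0] * n_arms               # empirical mean reward per arm
        self.total_cost = 0.0                     # accumulated cooperation cost
        self.t = 0                                # rounds played

    def _ucb_index(self, arm, counts, means):
        if counts[arm] == 0:
            return float("inf")                   # force exploration of untried arms
        return means[arm] + math.sqrt(2.0 * math.log(self.t) / counts[arm])

    def select_arm(self, neighbor=None):
        """Choose an arm, optionally merging a neighbor's statistics first."""
        self.t += 1
        counts, means = list(self.counts), list(self.means)
        if neighbor is not None:                  # cooperation: pool observations
            for a in range(self.n_arms):
                total = counts[a] + neighbor.counts[a]
                if total > 0:
                    means[a] = (counts[a] * means[a]
                                + neighbor.counts[a] * neighbor.means[a]) / total
                counts[a] = total
        return max(range(self.n_arms),
                   key=lambda a: self._ucb_index(a, counts, means))

    def update(self, arm, reward, cooperated=False):
        """Record the observed reward and any cooperation cost paid this round."""
        if cooperated:
            self.total_cost += self.cooperation_cost
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

# Example: two agents face the same three arms with unknown Bernoulli reward
# probabilities; every fifth round each agent pays the cooperation cost to
# use its neighbor's statistics.
true_means = [0.2, 0.5, 0.8]
agents = [CooperativeUCBAgent(3), CooperativeUCBAgent(3)]
for t in range(1000):
    for i, agent in enumerate(agents):
        neighbor = agents[1 - i] if t % 5 == 0 else None
        arm = agent.select_arm(neighbor)
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        agent.update(arm, reward, cooperated=neighbor is not None)

Varying how often the agents cooperate against the cost they pay in this toy example mirrors the trade-off between learning speed, network topology, and cooperation cost discussed in the chapter.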

Publisher

Elsevier

Book Title

Cooperative and Graph Signal Processing: Principles and Applications

Keywords

Contextual bandits, Informational decentralization, Network formation, Regret, Global feedback, Group feedback, Opinion dynamics

Language

English