Authors: Tekin, Cem; Zhang, S.; Xu, J.; van der Schaar, M.; Djurić, P. M.; Richard, C.
Date available: 2019-05-21
Date issued: 2018
ISBN: 9780128136775
eISBN: 9780128136782
Handle: http://hdl.handle.net/11693/51453
Chapter: 26
Title: Multiagent systems: learning, strategic behavior, cooperation, and network formation
Type: Book Chapter
DOI: 10.1016/B978-0-12-813677-5.00026-2
Language: English
Keywords: Contextual bandits; Informational decentralization; Network formation; Regret; Global feedback; Group feedback; Opinion dynamics
Abstract: Many applications, ranging from crowdsourcing to recommender systems, involve informationally decentralized agents repeatedly interacting with each other in order to reach their goals. These networked agents base their decisions on incomplete information, which they gather through interactions with their neighbors or through cooperation, which is often costly. This chapter presents a discussion of decentralized learning algorithms that enable the agents to achieve their goals through repeated interaction. First, we discuss cooperative online learning algorithms that help the agents discover beneficial connections with each other and exploit these connections to maximize reward. For this case, we explain the relation between the learning speed, the network topology, and the cooperation cost. Then, we focus on how informationally decentralized agents form cooperation networks through learning. We explain how learning features prominently in many real-world interactions and greatly affects the evolution of social networks: links that otherwise would not have formed may now appear, and a much greater variety of network configurations can be reached. We show that the impact of learning on efficiency and social welfare can be either positive or negative. We also demonstrate the use of the aforementioned methods in popularity prediction, recommender systems, expert selection, and multimedia content aggregation.
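As a rough illustration of the cooperative online learning idea summarized in the abstract (not the chapter's actual contextual-bandit algorithms), the sketch below runs epsilon-greedy bandit agents on a ring topology and lets each agent fold its neighbors' reward observations into its own estimates. With cooperation on, each agent learns from three samples per round instead of one, which is the intuition behind the relation between learning speed and network topology. All names and parameters here are hypothetical.

```python
import random

class Agent:
    """One decentralized agent running epsilon-greedy over n_arms arms,
    optionally pooling reward observations shared by its neighbors."""
    def __init__(self, n_arms, epsilon=0.1, rng=None):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.counts = [0] * n_arms          # pulls observed per arm (own + shared)
        self.values = [0.0] * n_arms        # running mean reward per arm
        self.rng = rng or random.Random()

    def select_arm(self):
        # Explore uniformly with probability epsilon, else exploit.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def run(n_agents, n_arms, arm_means, horizon, cooperate, seed=0):
    """Simulate Bernoulli bandits; return average per-agent reward."""
    rng = random.Random(seed)
    agents = [Agent(n_arms, rng=random.Random(seed + i)) for i in range(n_agents)]
    total = 0.0
    for _ in range(horizon):
        pulls = []
        for ag in agents:
            arm = ag.select_arm()
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            pulls.append((arm, reward))
            total += reward
        for i, ag in enumerate(agents):
            ag.update(*pulls[i])
            if cooperate:
                # Ring topology: observe the two neighbors' (arm, reward) pairs.
                for j in ((i - 1) % n_agents, (i + 1) % n_agents):
                    ag.update(*pulls[j])
    return total / (n_agents * horizon)
```

With a fixed seed the simulation is reproducible, so cooperative and solo runs can be compared directly; in this toy model cooperation simply triples each agent's effective sample rate, whereas the chapter also accounts for the cost of cooperation, which this sketch omits.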