Title: Online Contextual Influence Maximization in social networks
Authors: Sarıtaç, Ömer; Karakurt, Altuğ; Tekin, Cem
Type: Conference Paper
Date issued: 2017
Date of Conference: 27-30 Sept. 2016
ISBN: 978-1-5090-4551-8
DOI: 10.1109/ALLERTON.2016.7852372
Handle: http://hdl.handle.net/11693/37638
Date deposited: 2018-04-12
Language: English
Keywords: Approximation algorithms; Probability; Technology transfer; Continuous functions; Influence maximization; Offline; Regret bounds; Sublinear; Social networking (online)

Abstract: In this paper, we propose the Online Contextual Influence Maximization Problem (OCIMP). In OCIMP, the learner faces a series of epochs, in each of which a different influence campaign is run to promote a certain product in a given social network. In each epoch, the learner first distributes a limited number of free samples of the product among a set of seed nodes in the social network. Then the influence spread process takes place over the network, during which other users are influenced and purchase the product. The goal of the learner is to maximize the expected total number of influenced users over all epochs. We depart from prior work in two aspects: (i) the learner does not know how the influence spreads over the network, i.e., it is unaware of the influence probabilities; (ii) influence probabilities depend on the context. We develop a learning algorithm for OCIMP, called Contextual Online INfluence maximization (COIN). COIN can use any approximation algorithm that solves the offline influence maximization problem as a subroutine to obtain the set of seed nodes in each epoch. When the influence probabilities are Hölder continuous functions of the context, we prove that COIN achieves sublinear regret with respect to an approximation oracle that knows the influence probabilities for all contexts. Moreover, our regret bound holds for any sequence of contexts. We also test the performance of COIN on several social networks and show that it outperforms other methods. © 2016 IEEE.
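To make the interaction protocol described in the abstract concrete, the sketch below shows a generic OCIMP-style epoch loop: the learner maintains estimates of context-dependent edge influence probabilities, feeds them to an offline influence maximization subroutine to pick seed nodes, observes the spread, and updates its estimates. This is an illustrative sketch only, not the paper's COIN algorithm; the function names (greedy_oracle, simulate_spread, run_ocimp), the independent cascade spread model, the discrete/hashable contexts, and the simple smoothed estimator standing in for COIN's exploration mechanism are all assumptions.

```python
# Illustrative OCIMP epoch loop (assumptions noted above); not the paper's COIN code.
import random
from collections import defaultdict

def simulate_spread(graph, probs, seeds):
    """Independent cascade simulation; returns the set of influenced nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph[u]:
            if v not in active and random.random() < probs[(u, v)]:
                active.add(v)
                frontier.append(v)
    return active

def greedy_oracle(graph, probs, k, n_samples=50):
    """Hypothetical offline IM subroutine: greedy seed selection by
    Monte Carlo estimates of expected spread under the given probabilities."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(len(simulate_spread(graph, probs, seeds | {v}))
                       for _ in range(n_samples)) / n_samples
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

def run_ocimp(graph, true_probs_for_context, contexts, k):
    """Epoch loop: estimate context-dependent influence probabilities online
    and pass them to the offline oracle to choose seeds in each epoch."""
    counts = defaultdict(int)      # (edge, context) -> observation count
    successes = defaultdict(int)   # (edge, context) -> observed activations
    total_influenced = 0
    for context in contexts:
        # Smoothed estimates, optimistic for unexplored edges (a stand-in
        # for a proper exploration strategy).
        est = {(u, v): (successes[((u, v), context)] + 1) /
                       (counts[((u, v), context)] + 2)
               for u in graph for v in graph[u]}
        seeds = greedy_oracle(graph, est, k)
        spread = simulate_spread(graph, true_probs_for_context(context), seeds)
        total_influenced += len(spread)
        # Simplified edge-level feedback: record outcomes on edges leaving
        # influenced nodes.
        for u in spread:
            for v in graph[u]:
                counts[((u, v), context)] += 1
                successes[((u, v), context)] += int(v in spread)
    return total_influenced
```

Here `graph` is a dict mapping each node to its out-neighbors and `true_probs_for_context` returns the (unknown to the learner) edge probabilities for a given context; both are hypothetical interfaces used only to make the epoch structure runnable.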