Browsing by Subject "State-dependent"
Now showing 1 - 2 of 2
Item (Open Access)
Common knowledge and state-dependent equilibria (Springer, Berlin, Heidelberg, 2012)
Dalkıran, Nuh Aygun; Hoffman, M.; Paturi, R.; Ricketts, D.; Vattani, A.

Many puzzling social behaviors, such as avoiding eye contact, using innuendos, and insignificant events that trigger revolutions, seem to relate to common knowledge and coordination, but the exact relationship has yet to be formalized. Herein, we present such a formalization. We state necessary and sufficient conditions for what we call state-dependent equilibria: equilibria where players play different strategies in different states of the world. In particular, if everybody behaves a certain way (e.g., does not revolt) in the usual state of the world, then in order for players to be able to behave a different way (e.g., revolt) in another state of the world, it is both necessary and sufficient for it to be common p-believed that it is not the usual state of the world, where common p-belief is a relaxation of common knowledge introduced by Monderer and Samet [16]. Our framework applies to many-player r-coordination games, a generalization of coordination games that we introduce, and to common (r,p)-beliefs, a generalization of common p-beliefs that we introduce. We then apply these theorems to two particular signaling structures to obtain novel results. © 2012 Springer-Verlag. (An illustrative sketch of the common p-belief construction appears after the listings.)

Item (Open Access)
State-dependent control of a single-stage hybrid system with Poisson arrivals (2011)
Gokbayrak, K.

We consider a single-stage hybrid manufacturing system where jobs arrive according to a Poisson process. These jobs undergo a deterministic, controllable process. We define a stochastic hybrid optimal control problem and decompose it hierarchically into a lower-level and a higher-level problem. The lower-level problem is a deterministic optimal control problem solved by means of calculus of variations. We concentrate on the stochastic discrete-event control problem at the higher level, where the objective is to determine the service times of jobs. Employing a cost structure composed of process costs that are decreasing and strictly convex in service times, and system-time costs that are linear in system times, we show that receding horizon controllers are state-dependent controllers, where the state is defined as the system size. In order to improve upon receding horizon controllers, we search for better state-dependent control policies and present two methods to obtain them. These stochastic-approximation-type methods utilize gradient estimators based on Infinitesimal Perturbation Analysis or Imbedded Markov Chain techniques. A numerical example demonstrates the performance improvements due to the proposed methods. © 2011 Springer Science+Business Media, LLC. (A simulation sketch in the same spirit follows the listings.)
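For the first item, here is a minimal sketch of the common p-belief construction of Monderer and Samet [16] on a finite state space with a common prior and partitional information. The state names, prior, and partitions are hypothetical and chosen only for illustration; the greatest-fixed-point iteration below is the standard "everybody p-believes" construction, not code from the paper.

```python
from fractions import Fraction

def cell_of(partition, state):
    """Return the information cell of `partition` that contains `state`."""
    for cell in partition:
        if state in cell:
            return cell
    raise ValueError(f"state {state!r} not covered by the partition")

def p_believes(prior, partition, event, state, p):
    """A player p-believes `event` at `state` if
    Pr(event | information cell containing state) >= p."""
    cell = cell_of(partition, state)
    cell_prob = sum(prior[w] for w in cell)
    cond_prob = sum(prior[w] for w in cell if w in event) / cell_prob
    return cond_prob >= p

def everybody_p_believes(prior, partitions, event, p):
    """The event 'every player p-believes `event`'."""
    return {w for w in prior
            if all(p_believes(prior, part, event, w, p) for part in partitions)}

def common_p_belief(prior, partitions, event, p):
    """States at which `event` is common p-belief: the largest event F that is
    contained both in 'everybody p-believes event' and in 'everybody
    p-believes F' (greatest-fixed-point iteration on a finite state space)."""
    current = everybody_p_believes(prior, partitions, event, p)
    while True:
        nxt = current & everybody_p_believes(prior, partitions, current, p)
        if nxt == current:
            return current
        current = nxt

# Hypothetical two-player example: state 0 is the "usual" state of the world.
prior = {0: Fraction(3, 5), 1: Fraction(1, 5), 2: Fraction(1, 5)}
partitions = [
    [{0}, {1, 2}],   # player 1's information partition
    [{0, 1}, {2}],   # player 2's information partition
]
not_usual = {1, 2}   # the event "it is not the usual state"
print(common_p_belief(prior, partitions, not_usual, Fraction(1, 2)))  # {2}
```

With p = 1 this reduces to the usual common knowledge operator; in the toy example above, the event "not the usual state" is common 1/2-believed only at state 2.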
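For the second item, a minimal simulation sketch of a single-server FIFO queue with Poisson arrivals under a state-dependent service-time policy, where the state is the number of jobs in system when service starts. The cost combines a decreasing, strictly convex process cost and a linear system-time cost, matching the cost structure in the abstract. The parameter names (lam, theta, c_time), the particular process cost theta/s, and the finite-difference gradient are illustrative assumptions; the paper's methods rely on IPA or imbedded-Markov-chain gradient estimators rather than this crude finite-difference stand-in.

```python
import bisect
import random

def simulate_cost(service_of, lam=0.8, theta=2.0, c_time=1.0, n_jobs=2000, seed=0):
    """Average cost per job in a single-server FIFO queue with Poisson(lam)
    arrivals, where each job's service time is service_of(n) and n is the
    number of jobs in system when the job enters service. Per-job cost is
    theta / s (decreasing, strictly convex in the service time s)
    + c_time * system time (linear in the system time)."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append(t)

    total, last_departure = 0.0, 0.0
    for k, a_k in enumerate(arrivals):
        start = max(a_k, last_departure)
        # Jobs in system at the start of service: this job plus later
        # arrivals already waiting (all earlier jobs have departed by now).
        n = 1 + bisect.bisect_right(arrivals, start) - (k + 1)
        s = service_of(n)
        last_departure = start + s
        total += theta / s + c_time * (last_departure - a_k)
    return total / n_jobs

def improve(levels, iters=30, step=0.05, delta=0.02, **sim_kw):
    """Stochastic-approximation loop over a state-dependent policy stored as
    `levels[i]`: the service time used when min(n, len(levels)) == i + 1.
    Gradients are estimated by finite differences with common random numbers,
    a simple stand-in for IPA / imbedded-Markov-chain estimators."""
    levels = list(levels)
    for it in range(iters):
        base = lambda n, s=list(levels): s[min(n, len(s)) - 1]
        c0 = simulate_cost(base, seed=it, **sim_kw)
        grad = []
        for i in range(len(levels)):
            bumped = list(levels)
            bumped[i] += delta
            pert = lambda n, s=bumped: s[min(n, len(s)) - 1]
            c1 = simulate_cost(pert, seed=it, **sim_kw)
            grad.append((c1 - c0) / delta)
        levels = [max(0.05, v - step * g) for v, g in zip(levels, grad)]
    return levels

# Start from a state-independent policy and let the search differentiate
# the service time by system size.
print(improve([1.0, 1.0, 1.0], lam=0.8))
```

Under this cost trade-off one would expect the search to shorten service times at larger system sizes, paying a higher process cost to reduce system-time cost, which is the qualitative shape of the state-dependent policies discussed in the abstract.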