Authors: Abroshan, M.; Yip, K. H.; Tekin, Cem; Van Der Schaar, M.
Date accessioned: 2023-02-16
Date available: 2023-02-16
Date issued: 2022-01-10
Title: Conservative policy construction using variational autoencoders for logged data with missing values
Type: Article
ISSN: 2162-237X
eISSN: 2162-2388
DOI: 10.1109/TNNLS.2021.3136385
Handle: http://hdl.handle.net/11693/111379
Language: English
Keywords: Missing values; Observational data; Policy construction; Variational autoencoder
Abstract: In high-stakes applications of data-driven decision-making such as healthcare, it is of paramount importance to learn a policy that maximizes the reward while avoiding potentially dangerous actions when there is uncertainty. Two main challenges are usually associated with this problem. First, learning through online exploration is not possible due to the critical nature of such applications, so we need to resort to observational datasets with no counterfactuals. Second, such datasets are usually imperfect and are additionally cursed with missing values in the attributes of features. In this article, we consider the problem of constructing personalized policies from logged data when there are missing values in the attributes of features in both the training and test data. The goal is to recommend an action (treatment) when X̃, a degraded version of X with missing values, is observed. We consider three strategies for dealing with missingness. In particular, we introduce the conservative strategy, where the policy is designed to safely handle the uncertainty due to missingness. To implement this strategy, we need to estimate the posterior distribution p(X | X̃), and we use a variational autoencoder to achieve this. In particular, our method is based on partial variational autoencoders (PVAEs), which are designed to capture the underlying structure of features with missing values.
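The abstract refers to estimating the posterior p(X | X̃) with a partial variational autoencoder. As an illustration only, the following is a minimal sketch of a PVAE-style model written in PyTorch; it is not the authors' implementation, and the class name, layer sizes, squared-error reconstruction term, and set-style encoder over observed entries are assumptions made for the example.

import torch
import torch.nn as nn


class PartialVAE(nn.Module):
    """Sketch of a partial VAE: encodes only the observed entries of X̃."""

    def __init__(self, n_features, embed_dim=16, latent_dim=8, hidden=64):
        super().__init__()
        # One learnable embedding per feature index, combined with the feature value.
        self.feature_embed = nn.Parameter(torch.randn(n_features, embed_dim))
        self.value_map = nn.Linear(embed_dim + 1, embed_dim)
        # Per-entry transform, masked and summed so the encoder ignores missing entries.
        self.enc = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder reconstructs all features, observed or not.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_features)
        )

    def encode(self, x, mask):
        # x, mask: (batch, n_features); mask is 1 where a feature is observed.
        e = self.feature_embed.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.value_map(torch.cat([e, x.unsqueeze(-1)], dim=-1))
        h = self.enc(h) * mask.unsqueeze(-1)  # zero out unobserved entries
        h = h.sum(dim=1)                      # permutation-invariant aggregation
        return self.mu(h), self.logvar(h)

    def forward(self, x, mask):
        mu, logvar = self.encode(x * mask, mask)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar


def elbo_loss(x, mask, recon, mu, logvar):
    # Reconstruction error on observed entries only, plus the standard KL term.
    rec = ((recon - x) ** 2 * mask).sum(dim=1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return (rec + kl).mean()

Once such a model is trained on the logged data, the decoder can be used to sample plausible completions of X given an observed X̃, which is the posterior information the conservative strategy relies on; the policy-construction step itself is described in the article and is not sketched here.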