Browsing by Subject "Distributed"
Now showing 1 - 5 of 5
Item (Open Access): Federated learning of generative image priors for MRI reconstruction (Institute of Electrical and Electronics Engineers Inc., 2022-11-09)
Elmas, Gökberk; Dar, Salman U. H.; Korkmaz, Yilmaz; Ceyani, E.; Susam, Burak; Ozbey, Muzaffer; Avestimehr, S.; Çukur, Tolga
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, although privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models that map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator. Since conditional models generalize poorly across different acceleration rates or sampling densities, imaging operators must be fixed between training and testing and are typically matched across sites. To improve patient privacy, performance, and flexibility in multi-site collaborations, here we introduce Federated learning of Generative IMage Priors (FedGIMP) for MRI reconstruction. FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and prior adaptation following injection of the imaging operator. The global MRI prior is learned via an unconditional adversarial model that synthesizes high-quality MR images based on latent variables. A novel mapper subnetwork produces site-specific latents to maintain specificity in the prior. During inference, the prior is first combined with subject-specific imaging operators to enable reconstruction, and it is then adapted to individual cross-sections by minimizing a data-consistency loss. Comprehensive experiments on multi-institutional datasets demonstrate enhanced performance of FedGIMP against both centralized and FL methods based on conditional models.

Item (Open Access): Federated MRI reconstruction with deep generative models (2023-07)
Elmas, Gökberk
Multi-institutional efforts can facilitate training of deep MRI reconstruction models, although privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models that map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator. Since conditional models generalize poorly across different acceleration rates or sampling densities, imaging operators must be fixed between training and testing and are typically matched across sites. To improve patient privacy, performance, and flexibility in multi-site collaborations, here we introduce Federated learning of Generative IMage Priors (FedGIMP) for MRI reconstruction. FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and prior adaptation following injection of the imaging operator. The global MRI prior is learned via an unconditional adversarial model that synthesizes high-quality MR images based on latent variables. A novel mapper subnetwork produces site-specific latents to maintain specificity in the prior. During inference, the prior is first combined with subject-specific imaging operators to enable reconstruction, and it is then adapted to individual cross-sections by minimizing a data-consistency loss. Comprehensive experiments on multi-institutional datasets demonstrate enhanced performance of FedGIMP against both centralized and FL methods based on conditional models.
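The inference procedure described in both abstracts above (the learned prior is combined with the subject-specific imaging operator and then adapted per cross-section by minimizing a data-consistency loss) can be summarized in a rough sketch. The following is a minimal PyTorch-style illustration, not the authors' implementation; the generator interface, the latent handling, and a forward operator built from coil sensitivities, a 2D FFT, and an undersampling mask are assumptions made here for concreteness.

    # Minimal sketch of inference-time prior adaptation, assuming a pretrained
    # unconditional generator G(z) -> MR image (hypothetical interface).
    import torch

    def data_consistency_loss(image, kspace_meas, mask, coil_sens):
        # Assumed accelerated imaging operator: coil weighting, 2D FFT, undersampling.
        coil_images = coil_sens * image                       # (C, H, W), complex
        kspace_pred = torch.fft.fft2(coil_images, norm="ortho")
        return torch.norm(mask * (kspace_pred - kspace_meas)) ** 2

    def adapt_prior(generator, latent, kspace_meas, mask, coil_sens,
                    steps=100, lr=1e-3):
        # Optimize the prior's weights and the latent for this cross-section only.
        latent = latent.clone().requires_grad_(True)
        opt = torch.optim.Adam(list(generator.parameters()) + [latent], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            image = generator(latent)                         # synthesize an MR image
            loss = data_consistency_loss(image, kspace_meas, mask, coil_sens)
            loss.backward()
            opt.step()
        return generator(latent).detach()

Updating both the generator weights and the latent is only one plausible reading of "prior adaptation"; the abstracts do not specify which parameters are updated, the optimizer, or the number of steps.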
Item (Open Access): Logarithmic regret bound over diffusion based distributed estimation (IEEE, 2014)
Sayın, Muhammed O.; Vanlı, Nuri Denizcan; Kozat, Süleyman Serdar
We provide a logarithmic upper bound on the regret function of the diffusion implementation of distributed estimation. For certain learning rates, the bound shows guaranteed convergence of the distributed least mean square (DLMS) algorithms to the performance of the best estimator generated with hindsight of the spatial and temporal data. We use a new cost definition for distributed estimation based on widely used statistical performance measures and the corresponding global regret function. Then, for certain learning rates, we provide an upper bound on the global regret function without any statistical assumptions.

Item (Open Access): Online balancing two independent criteria (Springer, 2008-10)
Tse, Savio S. H.
We study the online bicriteria load balancing problem, taking as the scenario a cluster of distributed homogeneous file servers, and propose two online approximate algorithms for balancing their loads and required storage spaces. We first revisit the best existing solution for document placement and restate it in our first algorithm with added flexibility. The second algorithm bounds the load and storage space of each server to less than three times their trivial lower bounds, respectively; more importantly, for each server the value of at least one of the two parameters stays far from its worst case. The time complexity of both algorithms is O(log M). © 2008 Springer Berlin Heidelberg.

Item (Open Access): Single bit and reduced dimension diffusion strategies over distributed networks (IEEE, 2013)
Sayin, M. O.; Kozat, S. S.
We introduce novel diffusion-based adaptive estimation strategies for distributed networks that impose significantly less communication load yet achieve performance comparable to full information-exchange configurations. After local estimates of the desired data are produced at each node, a single bit of information (or a reduced-dimensional data vector) is generated using certain random projections of the local estimates. This newly generated data is diffused and then used by neighboring nodes to recover the original full information. We provide the complete state-space description and the mean stability analysis of our algorithms.
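The single-bit exchange described in the last entry, which also underlies the diffusion-based estimation analyzed in the regret-bound entry, can be illustrated with a toy construction: each node runs a local LMS update, broadcasts only the sign of a random projection of its estimate, and neighbors fold that bit back into their own estimates. The NumPy sketch below uses an assumed ring topology, assumed step sizes, and a sign-agreement recovery rule chosen here for illustration; it is not the state-space algorithm analyzed in the papers.

    # Toy sketch of single-bit diffusion over a distributed network (assumptions
    # throughout: ring topology, shared projection seed, sign-agreement updates).
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, T = 10, 5, 2000               # nodes, parameter dimension, iterations
    w_true = rng.standard_normal(M)     # common parameter to be estimated
    mu, eta = 0.01, 0.05                # adaptation and combination step sizes
    W = np.zeros((N, M))                # local estimates, one row per node
    neighbors = [[(k - 1) % N, k, (k + 1) % N] for k in range(N)]

    for t in range(T):
        # Adapt: each node performs an LMS update from its own streaming data.
        for k in range(N):
            x = rng.standard_normal(M)                        # regressor at node k
            d = x @ w_true + 0.1 * rng.standard_normal()      # noisy observation
            W[k] += mu * (d - x @ W[k]) * x

        # Diffuse: each node transmits a single bit, the sign of a random
        # projection of its estimate; the projections are assumed known to
        # neighbors (e.g. generated from a shared seed).
        p = rng.standard_normal((N, M))
        bits = np.sign(np.einsum("km,km->k", p, W))

        for k in range(N):
            for l in neighbors[k]:
                if l == k:
                    continue
                # Nudge the local estimate so its projection agrees in sign
                # with the neighbor's transmitted bit.
                W[k] += eta * (bits[l] - np.sign(p[l] @ W[k])) * p[l]

    print("mean estimation error:", np.linalg.norm(W - w_true, axis=1).mean())

The point of the sketch is the communication pattern: per iteration each node shares one bit rather than an M-dimensional vector, which is the trade-off both diffusion entries above address.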