Chengyang, PhD Student, University of Michigan
Using Markov Chain Monte Carlo (MCMC) to sample the Bayesian posterior can be computationally impractical when coupled with expensive forward models that describe complex physical systems. We adopt an alternative strategy that builds posterior approximations using samples from the joint distribution of model parameters and observables, thereby sidestepping Markov chains altogether. We employ the conditional generative adversarial network (cGAN), which learns the mapping from latent space and observation space to the posterior distributions conditioned on different possible observable realizations, by solving a minimax game between two deep neural networks. Only one cGAN needs to be trained offline; different observations encountered during online usage can then be passed into it to achieve fast approximate posterior sampling. In other words, our method solves repeated Bayesian inference under different observable realizations swiftly, and therefore becomes extremely useful for multi-inference computations such as experimental design. We demonstrate the use of our method on algebraic benchmark problems.
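The offline-training, online-sampling workflow described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the linear-Gaussian forward model, network sizes, optimizer settings, and all variable names are assumptions chosen for brevity, using PyTorch.

```python
# Hedged sketch of amortized posterior sampling with a conditional GAN.
# Assumption: a toy forward model y = 2*theta + noise stands in for the
# expensive physical simulator; all hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Offline phase: draw samples from the joint p(theta, y) ---
n = 2000
theta = torch.randn(n, 1)                   # prior samples of the parameter
y = 2.0 * theta + 0.1 * torch.randn(n, 1)   # simulated observables

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                         nn.Linear(32, out_dim))

G = mlp(2, 1)   # generator: (latent z, observation y) -> posterior sample
D = mlp(2, 1)   # discriminator: (theta, y) -> real/fake logit

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Minimax game between the two networks (short demo run)
for step in range(200):
    z = torch.randn(n, 1)
    fake_theta = G(torch.cat([z, y], dim=1))

    # Discriminator step: real joint pairs vs generated pairs
    d_real = D(torch.cat([theta, y], dim=1))
    d_fake = D(torch.cat([fake_theta.detach(), y], dim=1))
    loss_D = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool the discriminator
    d_fake = D(torch.cat([fake_theta, y], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# --- Online phase: fast approximate posterior sampling for a new
# observation, with no further training and no Markov chain ---
y_obs = torch.full((500, 1), 1.0)           # the observed value, repeated
z = torch.randn(500, 1)
posterior_samples = G(torch.cat([z, y_obs], dim=1)).detach()
print(posterior_samples.shape)              # torch.Size([500, 1])
```

Because the generator is conditioned on the observation, the same trained network can be reused for any new observable realization, which is what makes the method attractive for multi-inference tasks such as experimental design.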