Generative adversarial networks (GANs) have found numerous applications in solving inverse problems in science and engineering. These applications are driven by the ability of GANs to learn complex distributions and to map the original feature space to a low-dimensional latent space. In this manuscript we consider the use of GANs as priors in physics-driven Bayesian inference problems. In this approach, the posterior distribution is sampled by mapping the inverse problem to the latent space of the GAN and then exploring that space efficiently with a Hamiltonian Monte Carlo (HMC) sampler. We apply this approach to linear and nonlinear inverse problems, including an example with experimental data acquired from an application in biophysical imaging. Furthermore, we analyze the weak convergence of the approximate prior to the true prior and elucidate its dependence on the capacity of the network and on the number of training samples.
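
To make the latent-space formulation concrete, the following is a minimal sketch (not the implementation used in the manuscript) of HMC posterior sampling over the latent variables of a pretrained generator for a linear inverse problem. The toy generator g, the forward operator A, the noise level sigma, the problem dimensions, and all sampler settings are illustrative assumptions; the posterior over z combines a Gaussian measurement likelihood with the standard-normal latent prior.

    import torch

    torch.manual_seed(0)
    latent_dim, data_dim = 8, 20

    # Stand-in for a pretrained GAN generator g: z -> x (illustrative;
    # a real application would load trained generator weights here).
    G = torch.randn(data_dim, latent_dim)
    def g(z):
        return torch.tanh(G @ z)

    # Assumed linear forward operator A and a synthetic noisy measurement y.
    A = torch.randn(data_dim, data_dim)
    sigma = 0.1
    z_star = torch.randn(latent_dim)
    y = A @ g(z_star) + sigma * torch.randn(data_dim)

    def neg_log_post(z):
        # -log p(z | y) up to a constant: Gaussian likelihood plus
        # the N(0, I) latent prior induced by the GAN.
        r = y - A @ g(z)
        return 0.5 * (r @ r) / sigma**2 + 0.5 * (z @ z)

    def grad_U(z):
        # Gradient of the negative log-posterior via automatic differentiation.
        z = z.detach().requires_grad_(True)
        return torch.autograd.grad(neg_log_post(z), z)[0]

    def hmc_step(z, step=0.01, n_leapfrog=20):
        # One HMC transition: sample momentum, leapfrog integrate,
        # then apply a Metropolis accept/reject correction.
        p0 = torch.randn(latent_dim)
        z_new, p = z.clone(), p0 - 0.5 * step * grad_U(z)
        for i in range(n_leapfrog):
            z_new = z_new + step * p
            if i < n_leapfrog - 1:
                p = p - step * grad_U(z_new)
        p = p - 0.5 * step * grad_U(z_new)
        # Hamiltonian H(z, p) = U(z) + |p|^2 / 2; accept with prob min(1, exp(dH)).
        dh = (neg_log_post(z) + 0.5 * (p0 @ p0)) - (neg_log_post(z_new) + 0.5 * (p @ p))
        return z_new.detach() if torch.log(torch.rand(())) < dh else z

    # Run the chain and form a posterior-mean reconstruction after burn-in.
    z, samples = torch.zeros(latent_dim), []
    for it in range(2000):
        z = hmc_step(z)
        if it >= 1000:
            samples.append(z)
    x_mean = g(torch.stack(samples).mean(dim=0))
    print("relative reconstruction error:",
          (torch.norm(x_mean - g(z_star)) / torch.norm(g(z_star))).item())

Because the chain runs in the low-dimensional latent space rather than the original feature space, each leapfrog step requires only gradients of the generator, which is what makes HMC practical in this setting.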