2021
Conference Paper
Title
Towards atmospheric turbulence simulation using a conditional variational autoencoder
Abstract
Atmospheric turbulence often limits the performance of long-range imaging systems. Realistic turbulence simulations provide a means to evaluate this effect and to assess turbulence mitigation algorithms. Current methods typically use phase screens or turbulent point spread functions (PSFs) to simulate the image distortion and blur caused by turbulence. While the former requires long computation times, the latter relies on empirical models or libraries of PSF shapes and their associated tip and tilt motion, which may be overly simplistic for some applications. In this work, an approach is evaluated that avoids these issues. Generative neural network models can produce highly realistic imitations of real (image) data within short computation times. To treat anisoplanatic imaging for the considered application, the model output is an imitation PSF grid that is applied to the input image to yield the turbulent image. Certain shape features of the model output can be controlled by traversing subsets of the model input space or latent space. The use of a conditional variational autoencoder (cVAE) appears very promising for yielding fast computation times and realistic PSFs and is therefore examined in this work. The cVAE is trained on field-trial camera images of a remote LED array; these images are treated as grids of real PSFs. First, the images are pre-processed and the PSF properties are determined for each frame. The main goal of the cVAE is the generation of PSF grids under conditional properties, e.g. moments of the PSFs. Different approaches are discussed and employed for a qualitative evaluation of the realism of the PSF grids generated by the trained models. A comparison of required simulation computing times is presented, and further considerations regarding the simulation method are discussed.
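The sampling interface of a cVAE decoder, as described in the abstract, maps a latent vector together with a condition vector (e.g. desired PSF moments) to a grid of PSFs. The following is a minimal, untrained toy sketch of that interface only; the single-hidden-layer architecture, all dimensions, and the softmax output are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyCVAEDecoder:
    """Untrained sketch of a cVAE decoder's sampling interface.

    Assumed (not from the paper): layer sizes, tanh hidden layer, and a
    per-PSF softmax so each output is a non-negative, unit-sum intensity
    pattern. decode(z, c) maps a latent sample z plus a condition c
    (e.g. target PSF moments) to a grid x grid array of PSFs.
    """
    def __init__(self, latent_dim=8, cond_dim=2, grid=3, psf_size=9, hidden=64):
        self.grid, self.psf_size = grid, psf_size
        out_dim = grid * grid * psf_size * psf_size
        self.W1 = rng.normal(0.0, 0.1, (hidden, latent_dim + cond_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (out_dim, hidden))
        self.b2 = np.zeros(out_dim)

    def decode(self, z, c):
        # Concatenate latent sample and condition, pass through one hidden layer.
        h = np.tanh(self.W1 @ np.concatenate([z, c]) + self.b1)
        logits = (self.W2 @ h + self.b2).reshape(self.grid, self.grid, -1)
        # Softmax per PSF: non-negative values summing to one (flux-normalized).
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        psfs = e / e.sum(axis=-1, keepdims=True)
        return psfs.reshape(self.grid, self.grid, self.psf_size, self.psf_size)

dec = ToyCVAEDecoder()
z = rng.normal(size=8)        # latent sample; traversing z varies PSF shape
c = np.array([2.5, 2.5])      # hypothetical condition, e.g. target second moments
psf_grid = dec.decode(z, c)   # shape (3, 3, 9, 9)
```

Traversing `z` while holding `c` fixed corresponds to the latent-space traversal mentioned in the abstract; in a trained model this would vary PSF shape while (approximately) preserving the conditioned properties.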
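The abstract names PSF moments as an example of the conditional properties. A standard way to compute such low-order moments from a PSF image is sketched below (the centroid captures tip/tilt shift, the second central moments capture blur width); this is a generic image-moment computation, not code from the paper.

```python
import numpy as np

def psf_moments(psf):
    """Centroid (first moments) and second central moments of a 2-D PSF.

    These low-order moments are one plausible choice for the conditional
    properties mentioned in the abstract: the centroid relates to tip/tilt
    motion, the second central moments to blur width and anisotropy.
    """
    psf = np.asarray(psf, dtype=float)
    total = psf.sum()
    ys, xs = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    cy = (ys * psf).sum() / total                      # centroid row
    cx = (xs * psf).sum() / total                      # centroid column
    myy = (((ys - cy) ** 2) * psf).sum() / total       # 2nd central moment (rows)
    mxx = (((xs - cx) ** 2) * psf).sum() / total       # 2nd central moment (cols)
    mxy = ((ys - cy) * (xs - cx) * psf).sum() / total  # cross moment
    return cy, cx, myy, mxx, mxy

# Sanity check: an isotropic Gaussian PSF centered at (16, 16) with
# sigma = 3 has centroid (16, 16) and second moments close to sigma**2 = 9.
y, x = np.mgrid[0:33, 0:33]
sigma = 3.0
gauss = np.exp(-((y - 16) ** 2 + (x - 16) ** 2) / (2 * sigma ** 2))
cy, cx, myy, mxx, mxy = psf_moments(gauss)
```

In the paper's pipeline, moments like these would be extracted per PSF during pre-processing and then supplied to the cVAE as conditioning inputs.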
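The abstract states that the generated PSF grid is applied to the input image to yield the turbulent image. A deliberately simple sketch of that anisoplanatic degradation step, under the assumption that each image tile is blurred with its locally valid PSF, follows; practical implementations typically blend overlapping tiles to avoid seams, which is omitted here.

```python
import numpy as np

def conv2d_same(img, kern):
    """Naive 'same'-size 2-D convolution with edge padding.

    Sufficient for small PSF kernels in this sketch; FFT-based
    convolution would be used for speed in practice.
    """
    kh, kw = kern.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape, dtype=float)
    flipped = kern[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * flipped).sum()
    return out

def apply_psf_grid(img, psf_grid):
    """Blur each image tile with its own PSF from a grid (anisoplanatic model).

    psf_grid is a nested list: psf_grid[iy][ix] is the 2-D PSF valid for
    tile (iy, ix). Each PSF is normalized to unit sum to conserve flux.
    """
    gy, gx = len(psf_grid), len(psf_grid[0])
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for iy in range(gy):
        for ix in range(gx):
            y0, y1 = iy * h // gy, (iy + 1) * h // gy
            x0, x1 = ix * w // gx, (ix + 1) * w // gx
            psf = psf_grid[iy][ix] / psf_grid[iy][ix].sum()
            out[y0:y1, x0:x1] = conv2d_same(img[y0:y1, x0:x1], psf)
    return out

# Usage check: a grid of delta PSFs must leave the image unchanged.
delta = np.zeros((5, 5))
delta[2, 2] = 1.0
img = np.arange(64, dtype=float).reshape(8, 8)
blurred = apply_psf_grid(img, [[delta, delta], [delta, delta]])
```

The per-tile formulation mirrors the abstract's use of a PSF grid to model spatially varying (anisoplanatic) blur, where a single PSF for the whole field of view would not suffice.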