Publications Search Results

Now showing 1 - 10 of 14
  • Publication
    Imager performance assessment with TRM4: recent developments
    (2023)
    Steiner, Dov; Assaban, Dan; An, Margarita
    Model-based performance assessment is an important tool in the design and performance comparison of electro-optical and infrared imagers. It alleviates the need for expensive field measurements and can be used for parameter trade studies. TRM4 serves this purpose and features a validated approach to considering aliasing in the range performance assessment of focal plane arrays. TRM4.v3 was released in October 2021 and came with a set of new model features. One of the new capabilities is the performance assessment of imagers used for aerial imaging by computing the National Imagery Interpretability Rating Scale (NIIRS) as a performance measure. The NIIRS values are calculated in TRM4 using the latest version of the General Image Quality Equation. In this paper, the results of a recent validation effort for the NIIRS calculation are reported. Current TRM4 development focuses on performance modeling of subpixel target scenarios and color cameras. For subpixel targets, the calculation of the signal-to-noise ratio (SNR) for the additional target signal is presented, considering different target locations with respect to the detector matrix. Best-case and worst-case detection ranges are derived from specified threshold SNRs of detection algorithms. First results are shown using the TRM4.v3 sky background scenario feature to model ground-to-air imaging of flying targets. A first step toward modeling color cameras is the extension of the AMOP (Average Modulation at Optimum Phase) calculation, which is used in TRM4 to describe the spatial signal transfer characteristics along the imaging chain. It is shown how the sampling by sensors with Bayer filters and demosaicing can be included in the AMOP calculation procedure.
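    To make the NIIRS computation above concrete, here is a minimal Python sketch using the widely published GIQE 4.0 form; TRM4 uses the latest GIQE version, whose terms and coefficients differ, so the coefficients and all input values below are illustrative assumptions, not TRM4's implementation.

      import math

      def niirs_giqe4(gsd_in, rer, h, g, snr):
          """NIIRS via the published GIQE 4.0 (illustrative only).
          gsd_in: ground sample distance [inches], rer: relative edge
          response, h: edge overshoot, g: noise gain, snr: signal-to-noise
          ratio (geometric means where applicable)."""
          a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
          return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
                  - 0.656 * h - 0.344 * g / snr)

      # Example with assumed values: 10 in GSD, RER 0.95, SNR 50
      print(round(niirs_giqe4(10.0, 0.95, 1.0, 1.0, 50.0), 2))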
  • Publication
    Comparison of algorithms for contrast enhancement based on triangle orientation discrimination assessments by convolutional neural networks
    Within the last decades, a large number of techniques for contrast enhancement have been proposed. There are some comparisons of such algorithms for a few images and figures of merit. However, many of these figures of merit cannot assess the usability of altered image content for specific tasks, such as object recognition. In this work, the effect of contrast enhancement algorithms is evaluated by means of triangle orientation discrimination (TOD), which is a current method for imager performance assessment. The conventional TOD approach requires observers to recognize equilateral triangles pointing in four different directions, whereas here convolutional neural network models are used for the classification task. These models are trained on artificial images with single triangles. Many methods for contrast enhancement depend strongly on the content of the entire image. Therefore, the images are superimposed over natural backgrounds with varying standard deviations to provide different signal-to-background ratios. These images are then degraded by Gaussian blur and noise representing degrading camera effects and sensor noise. Different algorithms, such as contrast-limited adaptive histogram equalization or local range modification, are applied. The accuracies of the trained models on these images are then compared for the different contrast enhancement algorithms. Accuracy gains are found for low signal-to-background ratios and sufficiently large triangles, whereas impairments are found for high signal-to-background ratios and small triangles. The similar accuracies across the several image databases used for backgrounds indicate a high generalization ability of our TOD model. Finally, implications of replacing triangles with real target signatures when using such advanced digital signal processing algorithms are discussed. The results are a step toward the assessment of those algorithms for generic target recognition.
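    The pipeline sketched in this abstract (triangle on a background, blur and noise degradation, contrast enhancement) can be illustrated in a few lines of Python. The OpenCV calls are real; the triangle coordinates, degradation strengths and CLAHE parameters are assumptions, and plain noise stands in for a natural background image.

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)

      # Triangle target superimposed on a background patch (stand-in
      # noise here instead of a natural image).
      bg = rng.normal(120.0, 25.0, (128, 128)).astype(np.float32)
      mask = np.zeros((128, 128), np.uint8)
      pts = np.array([[64, 45], [44, 80], [84, 80]], np.int32)
      cv2.fillConvexPoly(mask, pts, 1)
      img = np.where(mask == 1, 180.0, bg).astype(np.float32)

      # Degradations: Gaussian blur (camera) and white sensor noise.
      img = cv2.GaussianBlur(img, (0, 0), sigmaX=1.5)
      img = img + rng.normal(0.0, 8.0, img.shape).astype(np.float32)
      img8 = np.clip(img, 0, 255).astype(np.uint8)

      # One algorithm under test: contrast-limited adaptive histogram
      # equalization (CLAHE); parameters are illustrative.
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = clahe.apply(img8)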
  • Publication
    Impact of motion blur on recognition rates of CNN-based TOD classifier models
    This work investigates the impact of various types of motion blur on the recognition rate of triangle orientation discrimination (TOD) models. Models based on convolutional neural networks (CNNs) have been proposed as an automated and faster alternative to observer experiments for range performance assessment. They may also give insights into the impact of system degradations on the performance of automated target recognition algorithms. However, the effects of many image distortions on the recognition rate of such models are relatively unknown. The recognition rate of CNN-based TOD models is examined for different forms of motion blur, such as jitter, linear motion and sinusoidal motion. Simulated images are used for model training and validation. Triangles with four orientations and varying sizes and positions are used as targets, superimposed on natural background images taken from the image database "Open Images V7". Motion blur of varying strength is applied to both the triangle alone and the entire image to simulate movements of the target and of the imager. Additionally, common degradation effects of imagers are applied, such as white sensor noise and blur due to diffraction and detector footprint. The recognition rates of the models are compared for target motion and global motion as well as for the different motion types. Furthermore, dependencies of the recognition rate on blur strength, triangle size and noise level are shown. The study reveals interrelationships and differences between target motion and global motion regarding TOD classifications. The inclusion of motion blur in training can also increase model accuracy in validation. These findings are crucial for range performance assessment of thermal imagers for fast-moving targets.
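    As a small illustration of the linear-motion case named above, the sketch below builds a uniform linear-motion PSF and convolves it with a frame; the kernel length and angle are assumed values, and a random image stands in for the simulated scene. Applying the kernel to the triangle layer only (before compositing) would model target motion instead of global motion.

      import numpy as np
      from scipy.ndimage import convolve, rotate

      def linear_motion_kernel(length, angle_deg):
          # Uniform linear-motion PSF: a line of given length/direction.
          k = np.zeros((length, length), np.float32)
          k[length // 2, :] = 1.0
          k = rotate(k, angle_deg, reshape=False, order=1)
          return k / k.sum()

      rng = np.random.default_rng(1)
      image = rng.random((128, 128)).astype(np.float32)  # stand-in frame
      blurred = convolve(image, linear_motion_kernel(9, 30.0),
                         mode="reflect")  # global motion blur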
  • Publication
    SIMTAD: a simulation tool for evaluating target detection performance of imaging systems
    (2022)
    An, Margarita
    Target detection is a crucial task in defense applications such as surveillance, infrared search and track, and missile approach warning systems. Typically, the target image extends over only a few sensor pixels of the imaging system, and detection is performed by appropriate algorithms. In order to study the impact of imaging system design parameters and environmental conditions on the detection performance, a simulation tool is developed. Apart from computing detection ranges based on the signal-to-noise ratio (SNR) the detection algorithm is expected to require, the tool is also meant for simulating image sequences of engaging targets. It therefore provides means to investigate the interplay between system design parameters and algorithms for target detection. The simulation is based on a rigorous calculation of the target image in the focal plane, with consideration of the optical transfer functions of the imaging chain components. Integrating the target image over the active pixel areas yields the additional signal of the detector pixels caused by the target. Based on these values and the average background noise, the SNR is obtained as a function of the target distance. Image data is generated by overlaying the additional signals on a background image. We exemplify the application of the simulation tool by studying the effect of various system parameters and environmental conditions on the resulting SNR and detection range. Corresponding simulated image data is presented as well.
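    A minimal sketch of the detection-range logic described above, under the simplifying assumption that the additional pixel signal of a point target falls off as 1/R^2 (atmospheric transmission and optics effects omitted); all numbers are illustrative.

      import numpy as np

      def snr(r_m, s_ref=5000.0, r_ref=1000.0, noise=20.0):
          # Additional target signal over background noise, 1/R^2 model.
          return (s_ref * (r_ref / r_m) ** 2) / noise

      ranges = np.linspace(500.0, 20000.0, 2000)
      snr_required = 8.0  # threshold SNR of the detection algorithm (assumed)
      detectable = ranges[snr(ranges) >= snr_required]
      print(f"detection range ~ {detectable.max():.0f} m")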
  • Publication
    Prototype measurement setup to assess near-eye display imaging quality: an update
    Near-eye displays, i.e. displays positioned in close proximity to the observer's eye, are a technology continuing to gain significance in industrial and defense applications, e.g. for augmented reality and digital night vision. Fraunhofer IOSB has recently developed a specialized measurement setup for assessing the display capabilities of such devices as part of the optoelectronic imaging chain, with the primary focus on the Modulation Transfer Function (MTF). The setup consists of an imaging system with a high-resolution CMOS camera and a motorized positioning system. It is intended to run different measurement procedures semi-automatically, performing the desired measurements at specified points on the display. This paper presents the extended work on near-eye display imaging quality assessment following the initial publication. Using a commercial virtual reality headset as a sample display, we further refined the previously described MTF measurement procedures, one based on bar-pattern images and another using a slanted-edge image. Refinements include improvements to the processing of the camera images as well as to the method of extracting contrast measurements. Furthermore, we implemented an additional, line-image-based method for determining the device's MTF. The impact of the refinements is examined, and the results of the different methods are discussed with the goal of finding the most suitable measurement procedures for our setup and highlighting the individual merits of the different measurement methods.
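    The line-image-based MTF determination mentioned above reduces, in its simplest form, to Fourier-transforming a line spread function (LSF); with an edge image, the LSF is first obtained by differentiating the edge profile. The sketch below shows this core step on a synthetic edge profile; a real slanted-edge procedure adds sub-pixel binning across the tilted edge, which is omitted here.

      import numpy as np

      x = np.arange(-32, 32)
      esf = 0.5 * (1 + np.tanh(x / 2.0))   # stand-in measured edge profile
      lsf = np.gradient(esf)               # edge -> line spread function
      lsf = lsf * np.hanning(lsf.size)     # window against noise leakage
      mtf = np.abs(np.fft.rfft(lsf))
      mtf = mtf / mtf[0]                   # normalize to 1 at DC
      freq = np.fft.rfftfreq(lsf.size)     # cycles per pixel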
  • Publication
    Simulation of caustics caused by high-energy laser reflection from melting metallic targets adapted by a machine learning approach
    We present a model that calculates the reflected intensity of a high-energy laser irradiating a metallic target. It will enable us to build a laser safety model that can be used to determine nominal ocular hazard distances for high-energy laser engagements. The reflection was first measured in an experiment at 2 m distance from the target. After some irradiation time, the target begins to melt and the reflected light exhibits intensity patterns composed of caustics, which vary rapidly and are difficult to predict. A specific model is developed that produces similar caustic patterns at 2 m distance and can be used to calculate the reflected intensity at arbitrary distances. This model uses a power spectral density (PSD) to describe the melting metal surface. From this PSD, a phase screen is generated and applied to the electric field of the laser beam, which is then propagated to a distance of 2 m. The simulated intensity distributions are compared to the measured intensity distributions. To quantify the similarity between simulation and experiment, different metrics are investigated. These metrics were chosen by evaluating their correlation with the input parameters of the model. An artificial neural network is then trained, validated and tested on simulated data using the aforementioned metrics to find the input parameters of the PSD that lead to the most similar caustics. Additionally, we evaluated another approach based on an autoencoder, initially tested on the MNIST dataset, to eventually generate a phase screen directly from the caustics images.
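    A bare-bones version of the PSD-to-caustics chain described above can be written as follows; the Gaussian PSD, its width, the phase scaling, the beam parameters and the grid are all assumptions for illustration, not the parameters of the actual model.

      import numpy as np

      n, dx, wl = 512, 10e-6, 1.07e-6      # grid, pitch [m], wavelength [m]
      fx = np.fft.fftfreq(n, dx)
      FX, FY = np.meshgrid(fx, fx)
      f2 = FX**2 + FY**2

      # Phase screen from an assumed (Gaussian) surface PSD.
      rng = np.random.default_rng(0)
      noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
      psd = np.exp(-f2 / (2 * 2e3**2))
      screen = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
      screen *= 2.0 / screen.std()         # scale to ~2 rad rms (assumed)

      # Gaussian beam acquires the screen and propagates 2 m
      # (angular-spectrum / Fresnel transfer function method).
      x = (np.arange(n) - n / 2) * dx
      X, Y = np.meshgrid(x, x)
      field = np.exp(-(X**2 + Y**2) / (2 * 0.5e-3**2)) * np.exp(1j * screen)
      H = np.exp(-1j * np.pi * wl * 2.0 * f2)
      intensity = np.abs(np.fft.ifft2(np.fft.fft2(field) * H)) ** 2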
  • Publication
    Comparison of algorithms for contrast enhancement based on TOD assessments by convolutional neural networks
    A current approach for performance assessment of imagers is triangle orientation discrimination (TOD). This approach requires observers or human visual system (HVS) models to recognize equilateral triangles pointing in four different directions. Imagers may apply embedded advanced digital signal processing (ADSP) for contrast enhancement, noise reduction, edge sharpening, etc. Unfortunately, the applied methods are in general not documented and hence unknown. Within the last decades, a vast number of techniques for contrast enhancement have been proposed. There are some comparisons of such algorithms for a few images and figures of merit. However, many of these figures of merit cannot assess the usability of altered image content for specific tasks such as object recognition. In this work, different algorithms for contrast enhancement are compared in terms of TOD assessments by convolutional neural networks (CNNs) as models. These models are trained on artificial images with single triangles. Many methods for contrast enhancement depend strongly on the content of the entire image. Therefore, the images are superimposed on natural backgrounds with varying standard deviations to provide different signal-to-background ratios. These images are then degraded by Gaussian blur and noise representing degrading camera effects and sensor noise. Different algorithms are applied, such as contrast-limited adaptive histogram equalization or local range modification. The accuracies of the trained models on these images are then compared for the different ADSP algorithms. Accuracy gains are found for low signal-to-background ratios and sufficiently large triangles, while impairments are found for high signal-to-background ratios and small triangles. Finally, implications of replacing triangles with real target signatures when using such ADSP algorithms are discussed. The results can be a step towards the assessment of those algorithms for generic target recognition.
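    For illustration, a CNN classifier of the kind used here has four outputs, one per triangle orientation. The small PyTorch architecture below is an assumption (the abstract does not specify layers), meant only to show the classification setup.

      import torch
      import torch.nn as nn

      class TODNet(nn.Module):
          """Toy 4-class TOD classifier (architecture assumed)."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.head = nn.Sequential(
                  nn.Flatten(), nn.Linear(32 * 32 * 32, 4),  # 4 orientations
              )

          def forward(self, x):  # x: (batch, 1, 128, 128)
              return self.head(self.features(x))

      logits = TODNet()(torch.randn(8, 1, 128, 128))  # -> shape (8, 4)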
  • Publication
    Prototype measurement setup to assess near-eye display imaging quality
    Near-eye displays, i.e. displays positioned in close proximity to the observer's eye, are a technology steadily gaining significance in industrial and defense applications, e.g. for augmented reality and digital night vision. In light of the increasing use of such displays and their ongoing technological development, a specialized measurement setup is designed as a basis for evaluating these types of displays as part of the optoelectronic imaging chain. We developed a prototype measurement setup to analyze different properties of near-eye displays, with our primary focus on the Modulation Transfer Function (MTF) as a first step. The setup consists of an imaging system with a high-resolution CMOS camera and a motorized positioning system. It is intended to run different measurement procedures semi-automatically, performing the desired measurements at specified points on the display. This paper presents a comparison between different MTF measurement methods in terms of their applicability to different pixel structures. As a first step, the measurement setup's imaging capabilities are determined using a slanted-edge target. A commercial virtual reality headset is then used as a sample display to test and compare different standard MTF measurement methods, such as the slanted-edge and bar-target methods. The results are discussed with the goal of finding the best measurement procedures for our setup.
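    For the bar-target method named above, the measured quantity per bar frequency is essentially the Michelson modulation of a profile across the pattern, as in this sketch; the ideal square-wave profile and the blur strength are stand-ins for real camera data.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def bar_contrast(profile):
          # Michelson contrast from the extrema of a bar-pattern profile.
          i_max, i_min = profile.max(), profile.min()
          return (i_max - i_min) / (i_max + i_min)

      x = np.linspace(0.0, 4 * np.pi, 400)
      profile = 100 + 30 * np.sign(np.sin(x))    # ideal 4-bar pattern
      blurred = gaussian_filter1d(profile, sigma=15)
      print(bar_contrast(profile), bar_contrast(blurred))  # blur lowers contrast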
  • Publication
    Towards atmospheric turbulence simulation using a conditional variational autoencoder
    Atmospheric turbulence often limits the performance of long-range imaging systems. Realistic turbulence simulations provide means to evaluate this effect and to assess turbulence mitigation algorithms. Current methods typically use phase screens or turbulent point spread functions (PSFs) to simulate the image distortion and blur due to turbulence. While the first method entails long computation times, the latter requires empirical models or libraries of PSF shapes and their associated tip and tilt motion, which might be overly simplistic for some applications. In this work, an approach is evaluated that aims to avoid these issues. Generative neural network models are able to generate highly realistic imitations of real (image) data with short calculation times. To treat anisoplanatic imaging for the considered application, the model output is an imitation PSF grid that is applied to the input image to yield the turbulent image. Certain shape features of the model outcome can be controlled by traversing within subsets of the model input space or latent space. The use of a conditional variational autoencoder (cVAE) appears very promising for yielding fast computation times and realistic PSFs and is therefore examined in this work. The cVAE is trained on field trial camera images of a remote LED array. These images are considered as grids of real PSFs. First, the images are pre-processed and their PSF properties are determined for each frame. The main goal of the cVAE is the generation of PSF grids under conditional properties, e.g. moments of the PSFs. Different approaches are discussed and employed for a qualitative evaluation of the realism of the PSF grids generated by the trained models. A comparison of the required simulation computing times is presented and further considerations regarding the simulation method are discussed.
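    The conditioning mechanism described above (generating PSF grids under given properties) can be sketched in PyTorch as follows: the condition vector c, e.g. PSF moments, is concatenated to both the encoder input and the latent code. All layer sizes and the flattened-PSF representation are assumptions, not the paper's architecture.

      import torch
      import torch.nn as nn

      class CVAE(nn.Module):
          def __init__(self, x_dim=1024, c_dim=4, z_dim=16, h=256):
              super().__init__()
              self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h), nn.ReLU())
              self.mu = nn.Linear(h, z_dim)
              self.logvar = nn.Linear(h, z_dim)
              self.dec = nn.Sequential(
                  nn.Linear(z_dim + c_dim, h), nn.ReLU(),
                  nn.Linear(h, x_dim), nn.Sigmoid(),
              )

          def forward(self, x, c):
              hx = self.enc(torch.cat([x, c], dim=1))
              mu, logvar = self.mu(hx), self.logvar(hx)
              z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
              return self.dec(torch.cat([z, c], dim=1)), mu, logvar

      # Generation: sample z from the prior, condition on desired moments.
      model = CVAE()
      z = torch.randn(1, 16)
      c = torch.tensor([[0.3, 0.1, 0.05, 0.02]])      # assumed moments
      psf_flat = model.dec(torch.cat([z, c], dim=1))  # flattened 32x32 PSF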
  • Publication
    Imager performance assessment with TRM4 version 3. An overview
    (2021)
    Steiner, Dov
    Model-based performance assessment is a valuable approach in the process of designing or comparing electro-optical and infrared imagers since it alleviates the need for expensive field measurement campaigns. TRM4 serves this purpose and is primarily used to calculate the range performance based on parameters of the imaging system and the environmental conditions. It features a validated approach to considering aliasing in the performance assessment of sampled imagers. This paper highlights new features and major changes in TRM4.v3, which is to be released in autumn 2021. TRM4.v3 includes the calculation of an image quality metric based on the National Imagery Interpretability Rating Scale (NIIRS). The NIIRS value computation is based on the latest version of the General Image Quality Equation. This extends the performance assessment capability of TRM4 in particular to imagers used for aerial imaging. The three-dimensional target modelling was revised to cope with a wider range of scenarios: from ground imaging of aerial targets against a sky background to aerial imaging of ground targets, including ground-to-ground imaging. For imagers working in the visible to SWIR spectral range, TRM4.v3 not only provides an improved basis for comparison between lab measurements and modelling, but also allows direct integration of measured device data. This is achieved by introducing and computing, in analogy to the Minimum Temperature Difference Perceived used for thermal imagers, the so-called Minimum Contrast Perceived (MCP). This device figure of merit is similar to the Minimum Resolvable Contrast (MRC) but is also applicable at frequencies above the Nyquist frequency. Using measured MCP or MRC data, range performance can be calculated for devices such as cameras, telescopic sights and night vision goggles. In addition, the intensified camera module introduced in a previous publication was further elaborated, and a comparison to laboratory measurement results is presented. Lastly, the graphical user interface was improved to provide a better user experience; specifically, interactive user assistance in the form of tooltips was introduced.
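    To illustrate how measured MRC data can translate into a range, here is a heavily simplified sketch: the highest spatial frequency at which the apparent target contrast still exceeds the MRC is taken as resolvable and converted into a range via a cycles-on-target criterion. The MRC curve, contrast, target size and cycle criterion are all assumed values, atmospheric contrast attenuation is neglected, and TRM4's actual procedure is more elaborate.

      import numpy as np

      mrc_freq = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # cycles/mrad
      mrc = np.array([0.01, 0.02, 0.05, 0.15, 0.60])      # assumed MRC

      target_contrast = 0.2   # apparent contrast at the sensor (assumed)
      critical_dim = 2.3      # target critical dimension [m]
      n_cycles = 3.0          # cycles-on-target criterion (assumed)

      # Highest resolvable frequency: where the MRC equals the contrast.
      f_res = np.interp(target_contrast, mrc, mrc_freq)   # cycles/mrad
      range_km = f_res * critical_dim / n_cycles          # [km]
      print(f"{f_res:.2f} cy/mrad -> range {range_km:.2f} km")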