2019
Conference Paper
Title
Assessing detection performance of night vision VIS/LWIR-fusion
Abstract
Thermal imagers (TI) and low light imagers (LLI; e.g. Night Vision Goggles, NVG) are today's technologies for nighttime applications. Both have their own advantages and disadvantages. LLI is typically ineffective in dark areas, whereas TI also operates in complete darkness. On the other hand, TI often does not provide enough scene detail, hindering situational awareness. Hence, it is an obvious step to combine the two systems for common use. Today such combined systems are available as so-called Fusion Goggles: a LWIR bolometer optically fused to an image-intensifier-based NVG. Future development will probably replace the NVG with solid-state night vision technologies enabling sophisticated image fusion. Performance assessment and modeling of fused systems remains an open problem, mainly because of the strong scene and task dependency of fusion algorithms. The idea followed here is to divide the overall task into detection and situational-awareness subtasks and analyze them separately. For this purpose, an experimental common-line-of-sight system consisting of a LWIR bolometer and a low light CMOS camera was set up. Its first use was detection performance assessment. A data collection of a human at different distances and positions was analyzed in a perception experiment with the two original bands and eight different fusion methods. Twenty observers participated in the experiment. Their average detection probability clearly depends on the imagery. Although the resolution in LWIR is three times lower than that of the visual band, the achieved detection performance is much better. This advantage carries over to the fused imagery as well: all fusion algorithms give better performance than the visual band alone. However, all fusion algorithms decrease observer performance to varying degrees compared with LWIR alone. This result is in good agreement with a Graph-Based Visual Saliency image analysis. Thus, it seems possible to assess fusion performance for the detection task by saliency calculations.
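The abstract does not specify the eight fusion methods evaluated. As a minimal sketch of pixel-level VIS/LWIR fusion, the following shows simple weighted averaging of two co-registered grayscale frames; the function name, weight parameter, and toy data are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def weighted_average_fusion(vis, lwir, alpha=0.5):
    """Fuse a visual and a thermal (LWIR) frame by pixel-wise weighted
    averaging -- one of the simplest possible fusion schemes.
    Inputs are float arrays in [0, 1] with identical shape, i.e.
    already co-registered (as in a common-line-of-sight setup)."""
    if vis.shape != lwir.shape:
        raise ValueError("frames must be co-registered to the same shape")
    return alpha * vis + (1.0 - alpha) * lwir

# Toy example: a dark visual band and a bright thermal target.
vis = np.full((4, 4), 0.1)
lwir = np.full((4, 4), 0.9)
fused = weighted_average_fusion(vis, lwir, alpha=0.5)
print(fused[0, 0])  # 0.5
```

More elaborate schemes (e.g. multi-scale or saliency-weighted fusion) replace the fixed `alpha` with spatially varying weights, which is where the scene dependency noted in the abstract enters.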