On Learning Joint Multi-biometric Representations by Deep Fusion
Multi-biometrics combines different biometric sources to enhance recognition, template protection, and indexing performance. One of the main challenges is the need for a joint discriminative feature representation of multi-biometric data. This is typically achieved by feature-level fusion, which imposes limitations on the combinations of biometric characteristics and algorithms. Including multiple imaging sources within deep-learning networks has generally been limited to multiple image sources of the same physical object, e.g., multi-spectral object detection. Previous biometric works used deep learning only to extract representations of single biometric characteristics. In contrast, our work studies creating representations of one identity by sampling different physical objects, i.e., biometric characteristics. We successfully adapt three architectures to produce jointly learned representations and discuss them for different levels of data correlation: modalities, instances, and presentations. Our evaluation demonstrates the applicability of jointly learning biometric representations, especially when the data correlation is low.