Fraunhofer-Gesellschaft
2020
Conference Paper
Title

What identifies a whale by its fluke? On the benefit of interpretable machine learning for whale identification

Abstract
Interpretable and explainable machine learning have proven to be promising approaches for verifying the quality of a data-driven model in general, as well as for obtaining more information about the quality of individual observations in practice. In this paper, we apply these approaches in the marine sciences to support the monitoring of whales. Whale population monitoring is an important element of whale conservation, and the identification of individual whales plays an important role in this process, for example to trace the migration of whales over time and space. Classical approaches use photographs and manual matching, with special focus on the shape of the whale flukes and their unique pigmentation. However, this is not feasible for comprehensive monitoring. Machine learning methods, especially deep neural networks, can efficiently automate the observation of large numbers of whales. Despite their success on many tasks such as identification, further potential, such as interpretability and its benefits, has not yet been exploited. Our main contribution is an analysis of interpretation tools, especially occlusion sensitivity maps, and of the question how the gained insights can help a whale researcher. For our analysis, we use images of humpback whale flukes provided by the Kaggle challenge "Humpback Whale Identification". By means of spectral cluster analysis of heatmaps, which indicate which parts of an image are important for a decision, we show that the heatmaps can be grouped in a meaningful way. Moreover, the characteristics automatically determined by a neural network appear to correspond to those considered important by a whale expert.
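The interpretation tool highlighted in the abstract is the occlusion sensitivity map. As a rough illustration of the idea only (not the authors' implementation; `score_fn`, patch size, stride, and fill value are placeholders), one slides an occluding patch over the image and records how much the model's score for the target whale drops:

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=8, stride=8, fill=0.0):
    """Slide a constant occluding patch over the image and record the drop
    in the model's score; large drops mark regions the model relies on."""
    h, w = image.shape[:2]
    base = score_fn(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = fill
            # importance of this patch = score drop caused by hiding it
            heatmap[i, j] = base - score_fn(occluded)
    return heatmap
```

With a toy `score_fn` that only looks at the top-left corner of the image, the resulting heatmap is maximal exactly over that corner, which is the behavior the map is meant to expose.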
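The abstract also mentions spectral cluster analysis of the heatmaps. A minimal sketch of the underlying idea, assuming an RBF similarity between flattened heatmaps and a two-way split via the sign of the Fiedler vector (the paper's actual pipeline, similarity measure, and number of clusters may differ):

```python
import numpy as np

def spectral_bipartition(heatmaps, sigma=1.0):
    """Split flattened heatmaps into two groups using the sign of the
    Fiedler vector (second-smallest eigenvector) of the normalized
    graph Laplacian built from RBF similarities."""
    X = np.stack([np.asarray(h, dtype=float).ravel() for h in heatmaps])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                  # similarity graph
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)                           # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)                   # sign of Fiedler vector
```

Because the Fiedler vector is orthogonal to the all-positive leading eigenvector, it must change sign, and for two well-separated groups of heatmaps it is nearly constant within each group, yielding the meaningful grouping the abstract refers to.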
Author(s)
Kierdorf, J.
Institute of Geodesy and Geoinformation, University of Bonn
Garcke, J.
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI  
Behley, J.
Institute of Geodesy and Geoinformation, University of Bonn
Cheeseman, T.
Happywhale and Southern Cross University
Roscher, R.
Institute of Computer Science, University of Osnabrueck
Mainwork
XXIV ISPRS Congress 2020. Commission II  
Conference
International Society for Photogrammetry and Remote Sensing (ISPRS Congress) 2020  
Open Access
DOI
10.5194/isprs-annals-V-2-2020-1005-2020
Language
English
Keyword(s)
  • deep learning
  • humpback whales
  • interpretability
  • machine learning
  • neural networks
  • visualization