Fraunhofer-Gesellschaft
November 2025
Journal Article
Title

Accelerated quantification of reinforcement degradation in additively manufactured Ni-WC metal matrix composites via SEM and vision transformers

Abstract
Machine learning (ML) applications have shown potential in analyzing complex patterns in additively manufactured (AMed) structures. Metal matrix composites (MMCs) can enhance functional parts by combining a metal matrix with reinforcement particles. However, their processing can induce several co-existing anomalies in the microstructure, which are difficult to analyze through optical metallography. Scanning electron microscopy (SEM) can better highlight the degradation of reinforcement particles, but the analysis can be labor-intensive, time-consuming, and highly dependent on expert knowledge. Deep learning-based semantic segmentation has the potential to expedite the analysis of SEM images and hence support their characterization in industry. This capability is particularly desirable for rapid and precise quantification of defect features in SEM images. In this study, key state-of-the-art semantic segmentation methods based on self-attention vision transformers (ViTs) are investigated for their segmentation performance on SEM images, with a focus on segmenting defect pixels. Specifically, the SegFormer, MaskFormer, Mask2Former, UPerNet, DPT, Segmenter, and SETR models were evaluated. A reference fully convolutional model, DeepLabV3+, widely used in semantic segmentation tasks, was also included in the comparison. An SEM dataset representing AMed MMCs was generated through extensive experimentation and is made available in this work. Our comparison shows that several transformer-based models perform better than the reference CNN model, with UPerNet (94.33 % carbide dilution accuracy) and SegFormer (93.46 %) consistently outperforming the other models in segmenting damage to the carbide particles in the SEM images. The findings on the validation and test sets show that the most frequent misclassification errors occur at the boundaries between defective and defect-free pixels.
The models were also evaluated on their prediction confidence as a practical measure to support decision-making and model selection. As a result, the UPerNet model with the Swin backbone is recommended for segmenting SEM images of AMed MMCs in scenarios where accuracy and robustness are desired, whereas the SegFormer model is recommended for its lighter design and competitive performance. In the future, the analysis can be extended by including higher-capacity as well as smaller models in the comparison. Similarly, variations in specific hyperparameters can be investigated to reinforce the rationale for selecting a specific configuration.
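The two evaluation measures described in the abstract, per-class segmentation accuracy (e.g. for the carbide dilution class) and pixel-wise prediction confidence, can be illustrated with a minimal NumPy sketch. The class labels, class IDs, and toy masks below are hypothetical stand-ins, not taken from the paper's dataset or its exact metric definitions:

```python
import numpy as np

def per_class_pixel_accuracy(pred, target, class_id):
    """Fraction of ground-truth pixels of `class_id` that are predicted
    correctly (a recall-style per-class accuracy, commonly reported for
    semantic segmentation)."""
    mask = target == class_id
    if mask.sum() == 0:
        return float("nan")  # class absent from the ground truth
    return float((pred[mask] == class_id).mean())

def mean_prediction_confidence(logits):
    """Mean over all pixels of the maximum softmax probability,
    given per-pixel class logits of shape (C, H, W)."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    return float(probs.max(axis=0).mean())

# Toy 4x4 masks: 0 = matrix, 1 = intact carbide, 2 = carbide dilution (defect)
target = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1],
                   [0, 2, 2, 1],
                   [0, 0, 0, 0]])
pred   = np.array([[0, 0, 1, 1],
                   [0, 2, 1, 1],   # one defect pixel missed at the boundary
                   [0, 2, 2, 1],
                   [0, 0, 0, 0]])

acc = per_class_pixel_accuracy(pred, target, class_id=2)
print(f"carbide dilution accuracy: {acc:.2%}")  # 3 of 4 defect pixels -> 75.00%

# Confidence from random 3-class logits on a 2x2 image (always in (1/3, 1])
logits = np.random.default_rng(0).normal(size=(3, 2, 2))
print(f"mean confidence: {mean_prediction_confidence(logits):.3f}")
```

The single misclassified pixel sits on the defect boundary, mirroring the abstract's observation that boundary pixels dominate the misclassification errors; averaging the maximum softmax probability is one simple way to compare models by confidence, as done for model selection in the study.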
Author(s)
Safdar, Mutahar
McGill University
Kazimi, Bashir
Forschungszentrum Jülich
Ruzaeva, Karina
Forschungszentrum Jülich
Wood, Gentry
Apollo Machine and Welding Ltd.
Zimmermann, Max Gero
Fraunhofer-Institut für Lasertechnik ILT  
Lamouche, Guy
National Research Council Canada
Wanjara, Priti
National Research Council Canada
Sandfeld, Stefan
Forschungszentrum Jülich
Zhao, Yaoyao Fiona
McGill University
Journal
Materials Characterization
Open Access
File(s)
Download (29.64 MB)
Rights
CC BY-NC-ND 4.0: Creative Commons Attribution-NonCommercial-NoDerivatives
DOI
10.1016/j.matchar.2025.115645
10.24406/publica-5832
Language
English
Institute(s)
Fraunhofer-Institut für Lasertechnik ILT
Keyword(s)
  • Additive manufacturing
  • Scanning electron microscopy (SEM) images
  • Metal matrix composites
  • Carbide damage
  • Semantic segmentation
  • Vision transformers
