
Application of deep learning algorithms for lithographic mask characterization

Woldeamanual, D.S.; Erdmann, A.; Maier, A.


Smith, D.G.; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Computational Optics II : 15-17 May 2018, Frankfurt, Germany
Bellingham, WA: SPIE, 2018 (Proceedings of SPIE 10694)
ISBN: 978-1-5106-1925-8
ISBN: 978-1-5106-1926-5
Paper 1069408, 12 pp.
Conference "Computational Optics" <2, 2018, Frankfurt>
Fraunhofer IISB

The appearance of defects on the photomask is a key challenge in lithographic printing. Printable defects affect the lithographic process by causing errors in both the phase and magnitude of the light and in the size and location of the printed features. Presently, 193 nm optical inspection tools are still the main instruments for detecting pattern defects on EUV masks [1]. However, small pattern defects on EUV masks cannot be detected due to the resolution limit of 193 nm inspection tools. We propose and investigate the application of convolutional neural networks (CNNs) to characterize and classify defects on lithographic masks. This paper details the training and evaluation of CNNs that classify defects in simulated aerial images of an EUV setting. The simulation software Dr.LiTHO is used to simulate aerial images of defect-free masks and of masks with different types and locations of defects. Specifically, we compute images of regular arrays of squares imaged with typical settings of EUV lithography (λ = 13.5 nm, NA = 0.33). We consider five types of absorber defects (extrusion, intrusion, oversize, undersize, and center spot). The architecture of the CNN contains four convolutional layers with mixed filter sizes of (3×3) and (5×5). The convolution stride and the spatial padding are 1 pixel for all convolutional layers. Spatial pooling is carried out by four max-pooling layers. Two separate networks are trained to detect the defect type and the defect location, and a third algorithm combines their results. When an image is presented to the implemented algorithm and the trained networks, it returns the defect type together with its location. An accuracy of 99.9% on the training set and 99.3% on the test set is achieved for detection of the defect type. The network trained for location detection achieves a training accuracy of 98.7% and a test accuracy of 98.0%.
Given a sufficient amount of training images, the trained CNNs classify the type and location of defects in the aerial image with high accuracy. The proposed method can also be applied to other defect types and simulation settings.
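The architecture described in the abstract — four convolutional layers with mixed (3×3) and (5×5) filters, stride 1 and 1-pixel padding, each followed by max pooling, ending in a classifier over the five defect types — can be sketched as a minimal numpy forward pass. The channel counts, the 64×64 input size, the ReLU activations, and the dense softmax head are assumptions for illustration; the weights are randomly initialized, so this shows only the layer structure, not the trained defect detector.

```python
import numpy as np

def conv2d(x, w, pad=1):
    # x: (C_in, H, W); w: (C_out, C_in, k, k); stride 1, zero padding of 1 pixel
    c_out, _, k, _ = w.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = xp.shape[1] - k + 1, xp.shape[2] - k + 1
    out = np.empty((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o])
    return np.maximum(out, 0.0)  # ReLU (assumed activation)

def maxpool2(x):
    # 2x2 spatial max pooling, dropping any odd trailing row/column
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

def defect_type_net(img):
    # Four conv layers with mixed 3x3 / 5x5 filters, each followed by max pooling,
    # as stated in the abstract; channel counts (8/16/32/64) are assumptions.
    x = img[None]  # single input channel (grayscale aerial image)
    for c_out, c_in, k in [(8, 1, 3), (16, 8, 5), (32, 16, 3), (64, 32, 5)]:
        w = rng.standard_normal((c_out, c_in, k, k)) * 0.1
        x = maxpool2(conv2d(x, w))
    # dense softmax head over the five absorber-defect classes from the abstract
    feats = x.ravel()
    W = rng.standard_normal((5, feats.size)) * 0.1
    return softmax(W @ feats)

classes = ["extrusion", "intrusion", "oversize", "undersize", "center spot"]
probs = defect_type_net(rng.standard_normal((64, 64)))  # simulated aerial image stand-in
print(classes[int(np.argmax(probs))])
```

The paper's second network for defect location would share this layout but predict position instead of class; the abstract's third algorithm simply combines the two outputs into a (type, location) report.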