Publications Search Results

Now showing 1 - 10 of 21
  • Publication
    The impact of a number of samples on unsupervised feature extraction, based on deep learning for detection defects in printed circuit boards
    (2022)
    Volkau, Ihar
    ;
    Mujeeb, Abdul
    ;
    Dai, Wenting
    ;
    Sourin, Alexei
    Deep learning provides new ways for defect detection in automatic optical inspection (AOI). However, the existing deep learning methods require thousands of images of defects for training the algorithms. This limits the usability of these approaches in manufacturing, due to the lack of images of defects before the actual manufacturing starts. In contrast, we propose to train an unsupervised deep learning model for defect detection using a much smaller number of images without defects. We propose an unsupervised deep learning model, based on transfer learning, that extracts typical semantic patterns from defect-free samples (one-class training). The model is built upon a pre-trained VGG16 model. It is further trained on custom datasets with different sizes of possible defects (printed circuit boards and soldered joints) using only a small number of normal samples. We have found that defect detection can be performed very well on a smooth background; however, in cases where the defect manifests as a change of texture, the detection can be less accurate. The proposed study uses a self-supervised deep learning approach to identify whether the sample under analysis contains any deviations (of types not defined in advance) from the normal design. The method would improve the robustness of the AOI process in detecting defects.
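
A minimal sketch of the general idea in the abstract above: the convolutional part of a pre-trained VGG16 is used as a fixed feature extractor, simple statistics are fitted on a few defect-free images, and regions whose features deviate from those statistics are flagged. The layer choice, statistics and threshold below are illustrative assumptions, not the paper's actual pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Convolutional part of a pre-trained VGG16 used as a fixed feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def patch_features(img):
    """Return a (512, 7, 7) feature map for one PIL image."""
    return vgg(preprocess(img).unsqueeze(0)).squeeze(0)

def fit_normal_statistics(normal_images):
    """Per-cell mean/std of features computed from a few defect-free samples."""
    feats = torch.stack([patch_features(im) for im in normal_images])  # (N, 512, 7, 7)
    return feats.mean(dim=0), feats.std(dim=0) + 1e-6

def defect_map(img, mean, std, threshold=3.0):
    """Coarse (7, 7) mask of cells whose features deviate strongly from normal."""
    z = (patch_features(img) - mean).abs() / std
    return z.mean(dim=0) > threshold
```

The key design choice this illustrates is one-class training: only normal samples are needed to fit the statistics, so no defect images are required in advance.
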
  • Publication
    Detection and Segmentation of Image Anomalies Based on Unsupervised Defect Reparation
    (2021)
    Dai, Wenting
    ;
    Sourin, Alexei
    Anomaly detection is a challenging task in the field of data analysis, especially when it comes to unsupervised pixel-level segmentation of anomalies in images. In this paper, we present a novel multi-stage image resynthesis framework for detecting and segmenting image anomalies. In contrast to existing reconstruction-based approaches, our method is based on repairing suspicious regions of defective images so that the defects can be localized in the residual map between the inputs and the repaired outputs. To avoid the reconstruction artifacts caused by defects, we propose to generate each pixel of the image from its context in the first coarse reconstruction stage. Then, while excluding all safe pixels, our method repairs suspicious regions that have large deviations from the original input image in subsequent stages. After several iterations, the defects will be detected in the final residual map. The experimental results show that we achieved better performance than state-of-the-art benchmarks using the publicly available MVTec dataset as well as a real-world equipment surface dataset. In addition, the method also demonstrates an excellent capability of repairing defects in abnormal samples.
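
A rough sketch of the repair-and-compare idea described above. The `repair(image, mask)` function is a placeholder standing in for the paper's multi-stage resynthesis network, which is not reproduced here; the threshold and iteration count are illustrative assumptions.

```python
import numpy as np

def residual_map(image, repaired):
    """Per-pixel absolute difference between the input and its repaired version (HxWx3 -> HxW)."""
    return np.abs(image.astype(np.float32) - repaired.astype(np.float32)).mean(axis=-1)

def segment_anomalies(image, repair, threshold=0.1, iterations=3):
    """Iteratively repair suspicious regions; the final residual gives the anomaly mask.

    `repair(image, mask)` is a hypothetical stand-in for the resynthesis network:
    mask=None means a coarse, context-only reconstruction of every pixel.
    """
    repaired = repair(image, mask=None)                # first coarse, context-based pass
    for _ in range(iterations):
        residual = residual_map(image, repaired)
        suspicious = residual > threshold              # pixels still deviating from the input
        repaired = repair(image, mask=suspicious)      # re-repair only the suspicious regions
    return residual_map(image, repaired) > threshold   # final pixel-level segmentation
```
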
  • Publication
    Anomaly Detection and Segmentation Based on Defect Repaired Image Resynthesis
    (2021)
    Dai, Wenting
    ;
    Sourin, Alexei
    Anomaly detection is a challenging task in data analysis, especially when it comes to unsupervised pixel-level segmentation of anomalies in images. In this paper, we present a novel multi-stage defect-repaired image resynthesis framework for the detection and segmentation of anomalies in images. In contrast to the existing reconstruction-based approaches, our reconstruction is free from artifacts caused by defective regions, so the defects can be identified from the residual map between input samples and their resynthesized defect-eliminated outputs. Our method outperforms state-of-the-art benchmarks in most categories using the publicly available MVTec dataset. In addition, the method also demonstrates an excellent capability of repairing defects in abnormal samples.
  • Publication
    Self-supervised Pairing Image Clustering and its Application in Cyber Manufacturing
    (2020)
    Dai, Wenting
    ;
    Jiao, Yutao
    ;
    Sourin, Alexei
    Artificial intelligence is being increasingly applied in manufacturing to maximize industrial productivity. Image clustering, as a fundamental research direction in unsupervised learning, has been used in various fields. Since no label information is required in clustering, it can perform a preliminary analysis of the data while saving considerable manpower. In this paper, we propose a novel end-to-end clustering network called Self-supervised Pairing Image Clustering (SPIC) for industrial applications, which produces clustering predictions for input images with an advanced pair classification network. To train this network, a self-supervised pairing module is built to form balanced pairs accurately and efficiently without label information. Since trivial solutions cannot be avoided in most unsupervised learning methods, two additional information-theoretic constraints regularize the training, ensuring that the clustering prediction is unambiguous and close to the real data distribution. Experimental results indicate that the proposed SPIC outperforms state-of-the-art approaches on the manufacturing datasets NEU and DAGM. It also shows excellent generalization capability on other general public datasets, such as MNIST, Omniglot, CIFAR10, and CIFAR100.
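
A minimal sketch of the kind of information-theoretic regularizers mentioned above: keep each prediction confident while keeping cluster usage balanced across a batch. The pairing module and the actual SPIC losses are not reproduced; this is an illustrative interpretation only.

```python
import torch.nn.functional as F

def clustering_regularizers(logits):
    """logits: (N, K) cluster scores for a batch of N images and K clusters."""
    p = F.softmax(logits, dim=1)                                  # per-sample cluster probabilities
    per_sample_entropy = -(p * p.clamp_min(1e-8).log()).sum(1).mean()
    marginal = p.mean(dim=0)                                      # average cluster usage in the batch
    marginal_entropy = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    # Minimizing per-sample entropy makes each prediction unambiguous;
    # maximizing marginal entropy keeps cluster usage balanced, discouraging
    # the trivial solution of assigning every image to a single cluster.
    return per_sample_entropy - marginal_entropy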
  • Publication
    Mid-air interaction with optical tracking for 3D modeling
    (2018)
    Cui, Jian
    ;
    Sourin, Alexei
    Compared to common 2D interaction done with a mouse and other 2D tracking devices, 3D hand tracking with low-cost optical cameras can provide more degrees of freedom, as well as natural gestures, when shape modeling is done in virtual spaces. However, though quite precise, optical tracking devices cannot avoid problems intrinsic to hand interaction, such as hand tremor and jump release, and they also introduce an additional problem of hand occlusion. We investigate how to minimize the negative impact of these problems, and eventually propose to use the hands in a way similar to how it is done when playing the Theremin, an electronic musical instrument controlled without physical contact by the hands of the performer. We suggest that the dominant hand controls manipulation and deformation of objects while the non-dominant hand controls grasping, releasing and precision of interaction. Based on this method, we describe a generic set of reliable and precise interaction gestures for various manipulation and deformation tasks. We then show through a user study that, for tasks involving 3D manipulations and deformations, hand interaction is faster than common 2D interaction done with a mouse.
  • Publication
    Interactive rendering of translucent materials under area lights using voxels and Poisson disk samples
    (2018)
    Koa, Ming Di
    ;
    Johan, Henry
    ;
    Sourin, Alexei
    Interactive rendering of translucent materials in virtual worlds has always proved to be challenging, and rendering their indirect illumination produces further challenges. In our work, we develop a voxel illumination framework for translucent materials illuminated by area lights. Our voxel illumination uses two existing voxel structures: the Enhanced Subsurface Light Propagation Volumes (ESLPV), which handles the local translucent material appearance, and the Light Propagation Volumes (LPV), which handles indirect illumination for the surrounding diffuse surfaces. By using a set of sparse translucent Poisson disk samples (TPDS) and diffuse Poisson disk samples (DPDS) for the ESLPV and LPV, illumination can be gathered from area lights effectively. This allows the direct illumination of the translucent material to be rendered in the ESLPV, while the diffuse indirect illumination of the surrounding scene is rendered in the LPV. Based on experiments, a small number of Poisson disk samples in each voxel is sufficient to produce good results. A uniform set of Poisson disk samples on the translucent objects is resampled and chosen as Translucent Planar Lights (TPLs), which are used to distribute lighting from translucent objects into the LPV by an additional gathering process. Our technique allows direct and indirect illumination from highly scattering translucent materials to be rendered interactively under area lighting at good quality. Compared to offline renderers, we can achieve similar effects, such as low-frequency scattered-light illumination from translucent materials, without precomputations.
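
The sparse sample sets mentioned above are Poisson disk samples. Purely as an illustration of that sampling concept, and not of the paper's surface sampling or gathering steps, here is a minimal dart-throwing sketch for generating such samples in a 2D domain.

```python
import numpy as np

def poisson_disk_2d(min_dist, n_target, extent=1.0, max_tries=10000, seed=0):
    """Dart throwing: rejection-sample points in [0, extent]^2 kept at least `min_dist` apart."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(max_tries):
        p = rng.uniform(0.0, extent, size=2)
        if all(np.linalg.norm(p - q) >= min_dist for q in samples):
            samples.append(p)                  # accept only points far from all previous samples
            if len(samples) == n_target:
                break
    return np.array(samples)
```
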
  • Publication
    Interactive screenspace fragment rendering for direct illumination from area lights using gradient aware subdivision and radial basis function interpolation
    (2017)
    Koa, Ming Di
    ;
    Johan, Henry
    ;
    Sourin, Alexei
    Interactive rendering of direct illumination from area lights in virtual worlds has always proven to be challenging. In this paper, we propose a deferred multi-resolution approach for rendering direct illumination from area lights. Our approach subdivides the screenspace into multi-resolution 2D fragments, in which higher-resolution fragments are generated and placed in regions with geometric, depth and visibility-to-light discontinuities. Compared to former techniques that use an inter-fragment binary visibility test, our intra-fragment technique is able to detect shadows more efficiently while using fewer fragments. We also make use of gradient information across our binary visibility tests to further allocate higher-resolution fragments to regions with larger visibility discontinuities. Our technique utilizes the stream-compaction feature of the transform feedback shader (TFS) in the graphics shading pipeline to filter out fragments in multiple streams for soft-shadow refinement. The bindless texture extension in the graphics pipeline allows us to easily process all these generated fragments in an unsorted manner. A single-pass screenspace irradiance upsampling scheme, which uses radial basis functions (RBFs) with an adaptive variance scaling factor, is proposed for interpolating the generated fragments. This reduces artifacts caused by large fragments while requiring fewer fragments to produce reasonable results. Our technique does not require precomputations and is able to render diffuse materials at interactive rates.
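
A minimal sketch of Gaussian RBF interpolation with a per-sample variance scale, in the spirit of the upsampling step described above. The fragment generation, visibility tests, and the paper's actual adaptive scaling rule are omitted; the function below is only an illustrative stand-in.

```python
import numpy as np

def rbf_interpolate(sample_pos, sample_val, query_pos, sigma):
    """Interpolate sparse samples at query positions using normalized Gaussian RBF weights.

    sample_pos: (S, 2), sample_val: (S,) or (S, C), query_pos: (Q, 2),
    sigma: scalar or (S,) per-sample scale (the 'adaptive variance' knob).
    """
    d2 = ((query_pos[:, None, :] - sample_pos[None, :, :]) ** 2).sum(-1)  # (Q, S) squared distances
    w = np.exp(-d2 / (2.0 * np.asarray(sigma) ** 2))                      # Gaussian weights
    w /= w.sum(axis=1, keepdims=True) + 1e-8                              # normalize per query point
    return w @ sample_val                                                 # interpolated values
```
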
  • Publication
    Procedural modeling of architecture with round geometry
    (2017)
    Edelsbrunner, Johannes
    ;
    Havemann, Sven
    ;
    Sourin, Alexei
    ;
    Fellner, Dieter W.
    Creation of procedural 3D building models can significantly reduce the cost of modeling, since it allows for generating a variety of similar shapes from one procedural description. The common field of application for procedural modeling is the modeling of straight building facades, which are very well suited for shape grammars, a special kind of procedural modeling system. In order to generate round building geometry, we present a way to set up different coordinate systems in shape grammars. Besides Cartesian, these are primarily cylindrical and spherical coordinate systems for generating structures such as towers or domes that can procedurally adapt to different dimensions and parameters. Users can apply common splitting idioms from shape grammars in their familiar way to create round instead of straight geometry. The second enhancement we propose is to provide a way for users to give high-level inputs that are used to automatically arrange and adapt parts of the models.
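
As an illustration of the coordinate-system idea only, and not of the paper's grammar syntax, the sketch below maps cylindrical coordinates to Cartesian positions so that grammar-style splits along the angle wrap facade elements around a tower; the function and parameter names are assumptions.

```python
import math

def cylindrical_to_cartesian(radius, angle, height):
    """Map a point (r, phi, h) on a tower surface to Cartesian (x, y, z)."""
    return (radius * math.cos(angle), radius * math.sin(angle), height)

def split_around_tower(radius, height, n_segments):
    """Evenly split the tower circumference, returning one anchor point per facade segment."""
    return [cylindrical_to_cartesian(radius, 2.0 * math.pi * i / n_segments, height)
            for i in range(n_segments)]
```
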
  • Publication
    Tangible Images of Real Life Scenes
    (2017)
    Zhang, Xingzi
    ;
    Goesele, Michael
    ;
    Sourin, Alexei
    Haptic technologies allow for adding a new "touching" modality to virtual scenes. However, 3D reconstruction of a real-life scene often results in millions of polygons, which cannot be simultaneously visualized and haptically rendered. In this paper, we propose a way of haptic interaction with real-life scenes in which multiple original images of the real scenes are augmented with reconstructed polygon meshes. We present our solution to the problems of aligning the haptic model with the images and of interactive haptic rendering of large polygon meshes with reconstruction artifacts. In particular, the presented collision detection algorithm is not restricted by any hypothesis and is robust enough to support smooth interaction with millions of polygons. The feasibility and usability of the proposed solution are evaluated in a user study.
  • Publication
    Foreword to the Special Issue on 2016 International Conference on Cyberworlds. Editorial
    (2017)
    Fujishiro, Issei
    ;
    Sourin, Alexei
    The six accepted papers in this special issue highlight two pivotal aspects of cyberworlds research, namely shape modeling and multimodal interaction & rendering.