The Anomaly Detector module is an unsupervised deep-learning method: detailed annotations are not required, and the end user only needs to assign the OK class to at least one image.

This module looks for anomalies in images using a previously trained model.

The model is trained on defect-free images as a reference, and defects are then found in the remaining images.

The video below demonstrates the overall functionality of this module:

Tutorial

Alternatively, if you would like a step-by-step tutorial example for this module:

Open the application > Create new project > Start Tutorial

You will be guided through the main functionalities of our software using this module.

You can enable showing the tutorial at project start at any time via:

Open the application > Start the project > Settings > Show tutorial dialog

[Screenshot: settings.png]

The tutorial dialog opens only if there are no images in the project.

Training

Training requires classifying at least one image as the OK class.

There are three ways to perform this classification:

1. [CLICKING] - When the image is selected, click the OK option in the ‘Annotations’ box.

2. [NUMPAD] - When the image is selected, press numpad key ‘1’ to categorize it as an OK image.

3. [SMART SORTING] - Click the ‘SMART SORTING’ button and type the filename prefix of the OK images into the field. You can also use regular expressions instead of prefixes by checking the ‘Regex’ checkbox, or filter images by their tags.

[Screenshots: annotations anomaly.png, anomaly smart sorting.png]

Tip: You can also select multiple images at once and classify them in bulk: select the first image, hold ‘Shift’, and select the last image of the range.

Attention: For 3. [SMART SORTING], note that matching is literal and by prefix. If the filenames look like ‘testpart_OK52’, you have to type ‘testpart_OK’; typing only ‘OK’ will not match, because the names do not start with ‘OK’.

Another approach is to check the ‘Regex’ checkbox, which lets you write regular expressions instead of prefixes. Then writing only ‘OK’ matches all images that contain ‘OK’ anywhere in their name.
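
To make the difference concrete, here is a small, purely illustrative sketch of prefix versus regex matching on hypothetical filenames (it only mirrors the matching behaviour described above; it is not the application's own code):

    import re

    filenames = ["testpart_OK52.png", "testpart_NG03.png", "part_OK_retake.png"]

    # Prefix matching: the typed text must appear at the very start of the name,
    # so only files beginning with 'testpart_OK' are selected.
    prefix_matches = [f for f in filenames if f.startswith("testpart_OK")]
    # -> ['testpart_OK52.png']

    # Regex matching: 'OK' matches anywhere in the name.
    pattern = re.compile("OK")
    regex_matches = [f for f in filenames if pattern.search(f)]
    # -> ['testpart_OK52.png', 'part_OK_retake.png']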

Assigning at least one image as ‘OK’ already enables you to Start Training, but for optimal results we usually recommend classifying at least 20% of the images as OK, depending on the surface variability.

Feature size

The view-finder size should be chosen according to how detailed the inspection should be.

If you choose a size that is too small, the algorithm cannot take the surroundings of a detail into account, so some defects may not be identified.

On the other hand, if you select a size that is too large, some details can be overlooked, resulting in a large number of false positives.

This is similar to human inspection: some defects are visible from a larger distance, while others can only be seen through a magnifying glass.

The view-finder size also directly affects the processing time, since it determines how detailed the inspection is.

There is no general rule for setting the right size; however, the size of the defects you are searching for can be used as a guideline. You need to try out several sizes to learn what works best for your inspection scenario.

Ideally, the view-finder size should be slightly larger than the largest defect.

Detection Results - Heatmap

The result of the detection is a heatmap. During validation, the heatmap is plotted over the image for better illustration. Areas that the heatmap marks as defective can be highlighted by rectangles. These rectangles are added to the detectedRectangles item in the Context.
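
If you post-process the results in a script, a minimal sketch of reading these rectangles could look as follows. It assumes the Context behaves like a dictionary and that each rectangle carries x, y, width and height fields; the exact structure and field names may differ in your version, so treat this only as an illustration:

    # Hypothetical post-processing of detection results.
    # `context` is assumed to be dict-like; the field names are assumptions.
    def count_large_defects(context, min_area=100):
        rectangles = context.get("detectedRectangles", [])
        large = [
            r for r in rectangles
            if r.get("width", 0) * r.get("height", 0) >= min_area
        ]
        return len(large)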

Automatic sensitivity

This function helps you set the correct threshold, i.e. the value separating OK and NG images. Changing the threshold also changes how sensitive the model is.

After training, the threshold is calculated based only on the training images. For better separation of OK and NG images, click Recalculate threshold and classify both OK and NG (Not Good/defective) images in the dialog window; these classifications are used to calculate the new threshold.

As a result, you get a graph containing two curves. The green curve shows how many OK images are wrongly classified as NG for a given threshold value, and the red curve shows how many NG images are wrongly classified as OK. Ideally, you want to choose a threshold at which both curves are minimized. You can select the threshold by clicking on the graph.
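
For intuition, the sketch below shows how such curves could be computed from per-image anomaly scores. The scores and threshold range are made up for illustration; the application computes the real curves internally:

    # Hypothetical anomaly scores from validation images (illustrative values).
    ok_scores = [0.10, 0.15, 0.22, 0.30]   # defect-free images
    ng_scores = [0.45, 0.55, 0.60, 0.85]   # defective images

    thresholds = [i / 100 for i in range(101)]
    green_curve = []  # OK images wrongly classified as NG (score above threshold)
    red_curve = []    # NG images wrongly classified as OK (score at or below threshold)
    for t in thresholds:
        green_curve.append(sum(1 for s in ok_scores if s > t))
        red_curve.append(sum(1 for s in ng_scores if s <= t))

    # A good threshold keeps both counts near zero; here e.g. 0.35 misclassifies
    # no image in either set, which is roughly where you would click on the graph.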
