Detector

The Detector module allows you to locate and optionally classify objects in an image.

 

Training

Annotations

The annotation should be just large enough to fit the whole defect without too much excess background (especially when the background is very variable). You can draw rectangles by hand, or you can double-click the left or right mouse button and a new rectangle with the same size as the last one you made will be added automatically.

To delete an annotation, press Delete or Backspace while the annotation is selected. If Classify is enabled, you can assign a class to each annotated defect, similarly to the Classifier module. Then you can start the training.

[Image: Detector without classification]
[Image: Detector with classification]

Feature size

During annotation it is important to find the ideal feature size. This value determines the size of the defects the model is able to reliably find. To find out more, see the Feature Size page.

Using a very small feature size for large defects is allowed, but it is not ideal for performance and reliability.

An incorrect feature size can make certain rectangles invalid. If this occurs, PEKAT will notify you before the training and offer possible solutions:

Filter invalid images will take you back to Labeling with the images containing invalid annotations filtered in the list on the right side, so you can easily go through those images and fix the annotations.

Change feature size will set the feature size to the largest possible value that allows using all of the annotations for training (or to 32, the smallest possible feature size).

Continue will proceed with the training without using the invalid annotations.

Include

If your dataset contains images with no defects, you can add them to the training with the Include button.

This way the model learns what defect-free images look like, so it should produce fewer false positives.

Auto-annotations

To speed up the annotation process, there is a possibility to train a model on a small number of annotations first and then use the predictions of this model to quickly make more annotations for further training.

If we select a trained model from a list of models and then go to the Labeling tab, we can see predictions of that selected model on our images. They are marked with red rectangles with percentages.

Clicking Set annotations will automatically set annotations for all the detected rectangles in the selected image.

It is possible to edit auto-annotations afterward, add more annotations, delete them if some of them are wrong, or change their classes if Classify is enabled, the same as with normal annotations.

After auto-annotating enough images, we can start another training, which should offer better results than the initial training with fewer annotations.

Classify

This adds the option to classify each detected object into one of the classes. If classification is enabled, every marked object needs a class assigned to it. Class management is the same as in the Classifier module, and the annotations change color based on the assigned class for better visibility (you can change class names and colors in the Class Manager).

For more details visit Classifier. This can also be achieved by combining Detector and Classifier as separate modules as described in the video below. However, classifying objects directly in the detector is easier and faster.

Verifying Detector Model

To verify the quality of a detector model, you can use the Confusion Matrix available in the Inspection tab:

You can also compare the detected annotations with the annotated ones and more. You can learn more about the confusion matrix on the related page:

https://pekatvision.atlassian.net/wiki/spaces/KB32/pages/934186628/Confusion+Matrix#Detector-confusion-matrix

Configuring Evaluation Settings

A trained detector model detects features in an image and assigns them rectangles. Each rectangle has two basic parameters:

  • Confidence

  • Position (together with height and width)

You can then filter out rectangles detected by the model using these parameters. To do this, you can use the two sliders on the right side of the window:

  • Confidence threshold slider - rectangles with a confidence value lower than the threshold will be filtered out.

  • IoU (Intersection over Union) slider - if two or more rectangles overlap, the ones with lower confidence will be filtered out. The value specifies the percentage of overlap. For example, with a value of 50, the filter will be applied to all rectangles that share at least 50% of their area with another rectangle.

In conclusion: the Confidence threshold is useful for filtering out rectangles with low confidence, while IoU is useful for filtering out many overlapping rectangles.
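The two filters described above can be sketched in Python. Note that the rectangle format (x, y, width, height) and the `detections` structure are illustrative assumptions, not PEKAT VISION's actual output format, and the IoU threshold is expressed here as a fraction (0-1) rather than the slider's percentage:

```python
def iou(a, b):
    # a, b: rectangles as (x, y, width, height) -- assumed format
    ix = max(a[0], b[0])
    iy = max(a[1], b[1])
    ix2 = min(a[0] + a[2], b[0] + b[2])
    iy2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_detections(detections, confidence_threshold, iou_threshold):
    """Apply the two sliders: drop low-confidence rectangles, then
    suppress overlapping rectangles in favor of the most confident one."""
    # 1. Confidence threshold: drop rectangles below the threshold.
    kept = [d for d in detections if d["confidence"] >= confidence_threshold]
    # 2. IoU filter: process rectangles from most to least confident and
    # keep each one only if it does not overlap an already-kept rectangle
    # by at least the IoU threshold.
    kept.sort(key=lambda d: d["confidence"], reverse=True)
    result = []
    for d in kept:
        if all(iou(d["rect"], r["rect"]) < iou_threshold for r in result):
            result.append(d)
    return result
```

For example, with two strongly overlapping detections and one far-away low-confidence detection, a confidence threshold of 0.5 and an IoU threshold of 0.5 would keep only the single most confident rectangle.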

Confidence - more info

A trained model assigns each feature it detects a confidence value. This value represents how certain the model is that the feature is similar to the features used in training.

IoU - more info

IoU, or Intersection over Union, is a method used to calculate the overlap of rectangles.

As the name suggests, the IoU value is calculated by taking the intersection area (the shared area) of two rectangles and dividing it by the area of their union (the total area covered by both rectangles together).
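The calculation above can be written out directly; the (x, y, width, height) rectangle format is an assumption made for illustration:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned rectangles,
    each given as (x, y, width, height)."""
    # Bounds of the intersection rectangle
    ix = max(a[0], b[0])
    iy = max(a[1], b[1])
    ix2 = min(a[0] + a[2], b[0] + b[2])
    iy2 = min(a[1] + a[3], b[1] + b[3])
    # Intersection area is zero when the rectangles do not overlap
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    # Union = sum of both areas minus the doubly counted intersection
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

# Two 100x100 rectangles offset by 50 px share a 50x50 area:
# intersection = 2500, union = 10000 + 10000 - 2500 = 17500
print(iou((0, 0, 100, 100), (50, 50, 100, 100)))  # ≈ 0.143
```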

 

Related content

Anomaly Detector
Confusion Matrix
Context