Training Overview

Training Overview - Supervised & Unsupervised (Deep Learning)

To perform supervised deep learning training, end-users first need to annotate a set of images, called the Training Images; the remaining images are called the Test Images.

The machine learning algorithm learns from the annotated data in the Training Images set.

The resulting Prediction Model is then applied to the Test Images, which provides the first set of statistics. These statistics can be used to verify whether the bad parts in the Test Images are indeed identified by the Prediction Model.
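
To illustrate the concept only (this is not the PEKAT VISION interface or API), below is a minimal Python sketch of the same supervised workflow, using placeholder data and a generic scikit-learn classifier:

    # Minimal sketch of the supervised workflow described above.
    # Placeholder data and a generic classifier - not the PEKAT VISION API.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, recall_score

    X = np.random.rand(200, 16)        # placeholder image features
    y = np.random.randint(0, 2, 200)   # annotations: 1 = bad part, 0 = good part

    # Split into Training Images and Test Images.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)   # learn from the annotated Training Images
    y_pred = model.predict(X_test)                       # apply the Prediction Model to the Test Images

    # First set of statistics: were the bad parts identified?
    print("accuracy:", accuracy_score(y_test, y_pred))
    print("bad-part recall:", recall_score(y_test, y_pred))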

For the unsupervised model, on the other hand, unannotated (unlabeled) data is provided and the algorithm tries to make sense of it by extracting features and patterns on its own.
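
For contrast, a minimal unsupervised sketch in Python, where no labels are given and a generic clustering algorithm finds structure on its own (illustrative only, not the PEKAT VISION implementation):

    # Minimal sketch of unsupervised learning: no annotations are provided,
    # the algorithm groups the data by the patterns it extracts itself.
    import numpy as np
    from sklearn.cluster import KMeans

    unlabeled = np.random.rand(300, 16)   # placeholder, unannotated image features
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unlabeled)
    print(groups[:10])                    # clusters discovered without any labels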

The PEKAT VISION software trains models using both Supervised and Unsupervised methods.

The unsupervised option among the Artificial Intelligence Modules requires only one end-user input: assigning at least 1 image to the 'OK' class.

The remaining modules are supervised: the end-user is required to perform detailed annotations; in return, the results are highly accurate.

In order to provide a better understanding, find below a top-level schematic of how supervised training works in machine learning:


Training Overview Flowchart

Training mode in PEKAT VISION is available for the following modules:

Anomaly of Surface

Classifier

Detector

Surface Detection

Below you can find the Training Overview Flowchart, which demonstrates the overall training steps. For detailed training information on a specific module, please access the desired module's page.


Training Graph - Loss Function Chart

Training begins after clicking the 'Start Training' button. From that point, a dialog window will display a training chart, but only for the following modules:

Classifier

Detector

Surface Detection (chart not available for Type 2 - Experimental, where the number of training cycles is defined by the end-user)

The displayed chart gives you an indication of the ideal moment to stop the training.

The graph should gradually decrease over the training duration; as the graph decreases, the training results are improving.

Note that training can be finished at any time; however, one of the following scenarios is the optimal moment:

  1. Preferably, when the chart is no longer decreasing (sideways graph)

  2. Ideally, when the graph drops below at least one of the green lines

As shown below, the graph decreased over time and had already been below one of the green lines for a while; therefore, the end-user should click the 'Finish Training' button in order to evaluate the results.
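
As a rough illustration of the "stop when the curve goes sideways" rule, here is a small Python sketch of such a stop decision; the window size and threshold are arbitrary assumptions, not PEKAT VISION internals:

    # Minimal sketch: finish training once the loss curve goes sideways.
    def should_stop(loss_history, window=10, min_improvement=1e-3):
        """True when the average loss has stopped decreasing noticeably."""
        if len(loss_history) < 2 * window:
            return False
        recent = sum(loss_history[-window:]) / window
        previous = sum(loss_history[-2 * window:-window]) / window
        return (previous - recent) < min_improvement

    losses = []
    for step in range(10_000):
        losses.append(1.0 / (step + 1))   # placeholder for the real training loss
        if should_stop(losses):
            print("loss curve is flat - click 'Finish Training' around step", step)
            break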


Please note the chart is not available for the Anomaly of Surface module, since by default it offers the following:

Fast Training - a minimum number of training cycles pre-determined by the software

Deep Training - the end-user defines the number of training cycles

As described above, for both options the number of epochs is determined before training starts; therefore, the loss function graph is not available.
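
In other words, with a pre-determined number of cycles the training simply runs to completion, so there is no interactive stop decision to make. A tiny Python sketch of that difference (the epoch counts are made-up placeholders):

    # Minimal sketch: a fixed number of training cycles is known up front,
    # so no loss chart is needed to decide when to stop.
    FAST_TRAINING_EPOCHS = 20    # assumed placeholder, pre-determined by the software
    DEEP_TRAINING_EPOCHS = 200   # assumed placeholder, defined by the end-user

    def train_fixed(epochs):
        for epoch in range(epochs):
            pass                 # one training cycle
        return "trained model"

    model = train_fixed(FAST_TRAINING_EPOCHS)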

Avoiding Overfitting

We would like to highlight that our algorithm is designed to prevent overfitting.

Another reason is that machine learning is initially applied to the Training Images set; therefore, the Error Loss Function graph displayed during deep training relates to the Training Images set.

If training is not interrupted once the graph stops decreasing, the model simply stops learning, meaning you will not achieve better results. However, this does not weaken the model; in other words, it does not increase the presence of false positives - it just does not provide better results.

Therefore, if the graph is no longer decreasing, the model has stopped improving.

That said, overfitting zones typically appear in machine learning when the Training Images set and the Test Images set (validation) are compared simultaneously and the loss on the Test (validation) set passes its lowest point and starts to rise again.
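
For illustration, a small Python sketch of how an overfitting zone could be located by comparing the two curves (synthetic loss values, not real training output):

    # Minimal sketch: the overfitting zone starts where the validation
    # (Test Images) loss reaches its minimum and rises again, even though
    # the training loss keeps decreasing.
    train_loss = [1.0 / (epoch + 1) for epoch in range(50)]                             # keeps decreasing
    val_loss = [1.0 / (epoch + 1) + 0.0005 * (epoch - 20) ** 2 for epoch in range(50)]  # U-shaped

    best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
    print("validation loss is lowest at epoch", best_epoch)
    print("training past epoch", best_epoch, "enters the overfitting zone")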


Extend Model

The Extend Model function is only available for the Classifier and Detector modules.

This function allows you to perform further training on an already trained model.

If you choose this option, the Network Type setting will be automatically set to match the already trained model, as shown in the pictures below:


Extend Model for Detector Module


Extend Model for Classifier Module
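
Conceptually, extending a model means continuing training from already learned weights instead of starting from scratch. A minimal Python sketch of that idea, using a generic incremental learner (not the PEKAT VISION Extend Model implementation):

    # Minimal sketch: continue training an existing model on new annotated data.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    X1 = np.random.rand(100, 16)          # original annotated images (features)
    y1 = np.random.randint(0, 2, 100)
    X2 = np.random.rand(100, 16)          # newly annotated images (features)
    y2 = np.random.randint(0, 2, 100)

    model = SGDClassifier(random_state=0)
    model.partial_fit(X1, y1, classes=[0, 1])   # initial training
    # "Extend Model": keep the same network/architecture and learned weights,
    # and simply train further on the new data.
    model.partial_fit(X2, y2)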

Augmentation - Training Parameters

For a detailed explanation of augmentation (training parameters), visit the Augmentation Glossary article.
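
As a rough illustration of what augmentation does in general, a short Python sketch that creates brightness-shifted and flipped copies of an image (the parameter names here are assumptions for the example, not the options listed in the Augmentation Glossary):

    # Minimal sketch: augmentation produces modified copies of training images
    # so the model becomes resistant to such variations.
    import numpy as np

    def augment(image, brightness_shift=0.1, flip=True):
        out = np.clip(image + brightness_shift, 0.0, 1.0)   # brightness variation
        if flip:
            out = np.fliplr(out)                            # geometric variation
        return out

    image = np.random.rand(64, 64)                          # placeholder grayscale image
    augmented = augment(image, brightness_shift=0.2)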

Model Validation

By default, each training module is inactive and a model must be selected to activate (validate) it.

The model can be selected using the radio button and then clicking the 'Preview' button. Validation is designed to check the model's results.

When a model is validated (selected), the statistics will be based on the selected model, as shown in the picture below:


Fast Training Mode & Deep Training Mode

At first, we recommend running several models in Fast Training mode, using different view-finder sizes and training parameters (brightness resistance, resistance to deviation, etc.) to find out which combination works best in your case. Once you find suitable settings but would like to achieve even more precise results, you may try Deep Training mode with more training cycles, while keeping the same training parameters (settings).
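
As a sketch of that workflow (hypothetical function and parameter names, not PEKAT VISION settings): run a cheap Fast Training sweep over candidate settings, then re-run only the best combination with more training cycles.

    # Minimal sketch: sweep settings with Fast Training, then Deep Training
    # with the best settings and more cycles. Names and values are placeholders.
    def train_and_score(view_finder_size, brightness_resistance, epochs):
        # placeholder: in reality, train a model and return its validation score
        return view_finder_size * brightness_resistance / (1.0 + 1.0 / epochs)

    candidates = [(64, 0.1), (128, 0.1), (128, 0.3)]
    best = max(candidates, key=lambda c: train_and_score(*c, epochs=20))   # Fast Training sweep
    deep_score = train_and_score(*best, epochs=200)                        # Deep Training, same settings
    print("best settings:", best, "deep-training score:", deep_score)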

Information on the training process and the conditions used for a particular model is available by clicking the info icon in the list of models, as shown in the picture below.
