...

The algorithm (machine learning) learns from the annotated data, which comes from the Training set of images.

The set of Testing Images is then used to create a Prediction Model, which is finally used to provide a first set of statistics. These statistics can be used to verify whether the Bad Parts from the Test Images are indeed identified by the Prediction Model.
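As a conceptual sketch only (generic Python with synthetic data, not PEKAT VISION's internal code or API), the supervised workflow can be illustrated as follows: a classifier learns from annotated training samples, and the Testing set is then used to check how many Bad Parts the resulting Prediction Model identifies.

```python
# Illustrative sketch only -- not PEKAT VISION's internal code.
# Synthetic "images" are represented here by simple feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 annotated samples: label 1 = "Bad Part", 0 = "OK"
features = rng.normal(size=(200, 16))
labels = (features[:, 0] + 0.5 * rng.normal(size=200) > 0.8).astype(int)

# Split into a Training set (annotated) and a Testing set
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)

# The algorithm learns from the annotated Training set
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The Prediction Model is applied to the Testing set to produce statistics
predictions = model.predict(X_test)
bad_total = int((y_test == 1).sum())
bad_found = int(((y_test == 1) & (predictions == 1)).sum())
print(f"Bad Parts in Test Images: {bad_total}, identified by the model: {bad_found}")
```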

On the other hand, for the unsupervised model, unannotated (unlabeled) data is provided and the algorithm tries to make sense of it by extracting features and patterns on its own.

The PEKAT VISION software offers training of models using both Supervised and Unsupervised methods. A detailed description of the modules can be found in the AI Modules section.

The available Unsupervised module is Anomaly of Surface, where the only required end-user input is to assign at least 1 image to the 'OK' class.

The remaining modules are supervised, where the end-user is required to perform detailed annotations; however, the results are highly accurate.
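To illustrate the unsupervised idea behind a module such as Anomaly of Surface (a generic one-class sketch under simplified assumptions, not the module's actual algorithm), a detector can be fitted on 'OK' samples only and then flag anything that deviates from what it has seen:

```python
# Generic one-class sketch: learn from "OK" samples only, flag deviations.
# This is not the Anomaly of Surface algorithm, just the general principle.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Unannotated "OK" surface patches, represented as feature vectors
ok_patches = rng.normal(loc=0.0, scale=1.0, size=(300, 8))

# New patches to inspect: some normal, some with an unusual pattern
normal_new = rng.normal(loc=0.0, scale=1.0, size=(5, 8))
anomalous_new = rng.normal(loc=4.0, scale=1.0, size=(5, 8))

# The model learns what "OK" looks like without any labels
detector = OneClassSVM(gamma="scale", nu=0.05).fit(ok_patches)

# +1 = looks like the OK class, -1 = anomaly
print("normal patches:   ", detector.predict(normal_new))
print("anomalous patches:", detector.predict(anomalous_new))
```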

...

The Training mode for PEKAT is available for the following Modules:

...

Below you can find a Training Overview Flowchart, which demonstrates the overall training steps. However, for detailed training information about a specific module, please access the desired module's page.


Training Graph - Loss Function Chart

...

Surface Detection (Chart not available for option Type 2 - Experimental - the number of training cycles is defined by the end-user before starting the training)

The displayed chart gives you an indication of the ideal moment to stop the training model from running.

The graph should gradually decrease over the training duration; as the graph decreases, the training model's results are improving.

...

As shown below, the graph decreased over time and had already been below one of the green lines for a while; therefore, the end-user should click the 'Finish Training' button in order to evaluate the results.
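The decision of when to press 'Finish Training' can be thought of as watching the loss curve flatten out. The following sketch is a simplified illustration of that idea, not PEKAT's actual stopping logic; the window size and improvement threshold are arbitrary example values.

```python
# Simplified illustration of "stop when the loss curve flattens".
def should_finish_training(loss_history, window=10, min_improvement=1e-3):
    """Return True when the average loss over the last `window` cycles
    is no longer meaningfully lower than in the window before it."""
    if len(loss_history) < 2 * window:
        return False  # not enough history yet
    previous = sum(loss_history[-2 * window:-window]) / window
    recent = sum(loss_history[-window:]) / window
    return (previous - recent) < min_improvement

# Example: a loss curve that decreases and then levels off
losses = [1.0 / (1 + 0.3 * i) for i in range(40)] + [0.08] * 20
print(should_finish_training(losses[:30]))  # still improving -> False
print(should_finish_training(losses))       # flattened out  -> True
```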

Please note the chart is not available for the Module Anomaly of Surface, since by default we have the following:

Fast Training - Minimum number of training cycles pre-determined by the software

...

Another reason is that Machine Learning is applied initially to the Training Images set; therefore, while performing deep training, the Error Loss Function graph is displayed in relation to the Training Images set.

If training is not interrupted when the graph is no longer decreasing, the model will stop learning, meaning that you will not achieve better results. However, this will not weaken the model; in other words, it will not increase the presence of false positives - it will simply not provide better results.

Therefore, if the graph is no longer decreasing, it means the model has stopped improving.

Considering that, overfitting zones are likely to occur in machine learning when a simultaneous comparison between the Training Images set and the Testing Images set (Validation) is in place and the Testing Set (Validation) loss reaches its lowest point.
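In other words, when both curves are available, the point of interest is where the validation (Testing set) loss reaches its minimum while the training loss keeps falling. The sketch below uses made-up loss values purely to show that typical overfitting pattern; it is not data from PEKAT VISION.

```python
# Illustrative comparison of training vs. validation (Testing set) loss.
# The values are invented to show the typical overfitting pattern.
train_loss = [0.90, 0.60, 0.42, 0.31, 0.24, 0.19, 0.15, 0.12, 0.10, 0.08]
valid_loss = [0.95, 0.70, 0.52, 0.41, 0.36, 0.34, 0.35, 0.38, 0.43, 0.50]

# The best moment to stop is where the validation loss is lowest;
# beyond that point the training loss still falls, but the model overfits.
best_cycle = min(range(len(valid_loss)), key=lambda i: valid_loss[i])
print(f"Validation loss reaches its lowest point at cycle {best_cycle}")
print(f"Training loss at that cycle: {train_loss[best_cycle]}")
```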

...

If you choose this option, the Network Type augmentation will be automatically set in compliance with the already trained model, as shown in the pictures below:

...

Augmentation - Training Parameters

For a detailed explanation regarding augmentation (training parameters), visit the following article: Augmentation Glossary.

...

At first, we recommend running different models in Fast Training mode, using different sizes of the view-finder and different training parameters (brightness resistance, resistance to deviation, etc.) to find out which combination works best in your case. When you find suitable settings but would like to achieve even more precise results, you may try Deep Training mode with more training cycles, while keeping the same training parameters (settings).
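As a rough outline of that workflow (the parameter names and the train_and_score function below are hypothetical placeholders, not PEKAT VISION API calls), a Fast Training sweep over a few settings could be organized like this before committing to a Deep Training run:

```python
# Hypothetical sketch of comparing Fast Training runs over a few settings.
# `train_and_score` stands in for running a fast training and reading the
# resulting statistics; it is NOT a PEKAT VISION function.
from itertools import product

def train_and_score(viewfinder_size, brightness_resistance, deviation_resistance):
    # Placeholder scoring so the example runs; real scores would come
    # from the training statistics of each Fast Training run.
    return 1.0 / (viewfinder_size / 64 + brightness_resistance + deviation_resistance)

viewfinder_sizes = [64, 128, 256]
brightness_levels = [0.0, 0.5, 1.0]
deviation_levels = [0.0, 0.5]

results = []
for size, bright, dev in product(viewfinder_sizes, brightness_levels, deviation_levels):
    score = train_and_score(size, bright, dev)
    results.append((score, size, bright, dev))

# Keep the best Fast Training combination, then rerun it in Deep Training
# mode with more cycles but the same parameters.
best = max(results)
print("Best Fast Training settings (score, size, brightness, deviation):", best)
```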

...