...
To calculate the statistics, you first need to classify the images as OK and NOK.
...
The statistics result shows a confusion matrix, which illustrates how the predicted results (evaluated by the application) and the actual results (annotated for statistics) correspond, or do not correspond, to each other.
There can be 4 results, as shown in this image:
...
True positive (TP) - A user classified the image as ‘Good’ and the application evaluated the image as ‘Good’.
False positive (FP) - A user classified the image as ‘Bad’ but the application evaluated the image as ‘Good’.
False negative (FN) - A user classified the image as ‘Good’ but the application evaluated the image as ‘Bad’.
True negative (TN) - A user classified the image as ‘Bad’ and the application evaluated the image as ‘Bad’.
The matrix shows how many images ended up in each of those categories.
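The four categories, and the recall and precision statistics derived from them, can be sketched in a short illustrative Python snippet. The function and variable names here are ours for illustration only, not part of the application's API:

```python
def confusion_counts(actual, predicted, positive="Good"):
    """Count TP, FP, FN, TN for binary Good/Bad labels.

    'actual' holds the user's annotations, 'predicted' the application's
    evaluations (illustrative names, not the application's data format).
    """
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    return tp, fp, fn, tn

actual    = ["Good", "Good", "Bad", "Bad", "Good"]
predicted = ["Good", "Bad",  "Bad", "Good", "Good"]
tp, fp, fn, tn = confusion_counts(actual, predicted)

recall = tp / (tp + fn)     # share of actually Good images the application found
precision = tp / (tp + fp)  # share of 'Good' predictions that were correct
```

With the sample lists above, the counts are TP=2, FP=1, FN=1, TN=1, giving a recall and precision of 2/3 each.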
...
After the statistics are successfully calculated, you can create an automatically generated report. The resulting report will be in HTML format.
You can choose how many images will be shown and whether to display only testing images (tick ‘Testing only’) or training images as well. The entered number of images is then randomly selected from all images classified in the statistics (for testing images) and from all training images (if training images are enabled).
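The random selection described above can be sketched as follows. This is an illustrative snippet assuming a plain list of images; `pick_report_images` is a hypothetical helper, not part of the application:

```python
import random

def pick_report_images(images, count):
    """Randomly select up to 'count' images for the report.

    Hypothetical helper for illustration; the application performs
    this selection internally when generating the report.
    """
    if count >= len(images):
        return list(images)  # fewer images than requested: include them all
    return random.sample(images, count)  # random subset, no duplicates
```

If the requested count exceeds the number of available images, all images are simply included.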
Available options for report generation are:
Choose the number of images to be included (if not all, a random subset is selected)
Decide whether to display training images or only testing images
Change the maximum image size - images in the report will be scaled to that size
...
Show statistics such as recall, precision, the confusion matrix, and processing time
Show the modules used in the flow
Set the default language (this can also be later changed inside the report)
...
There are several options inside the finished report:
Toggle whether rectangles and heatmaps are visible on the evaluated images
Filter out images based on conditions (e.g. show only images where ‘predicted’ differs from ‘actual’, or images predicted or annotated as Good/Bad)
Sort images by date or name
Search for a specific image by name or regex
Switch the language of the report
The report also shows the model of GPU that was used and the creation date of the report.
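The filter for incorrect predictions can be sketched like this. The record format and the helper name are assumptions made for illustration, not the application's real data structures:

```python
def incorrect_only(records):
    """Keep only images where the predicted class differs from the actual one.

    'records' is an illustrative list of dicts with 'actual' and
    'predicted' keys; the report applies an equivalent filter internally.
    """
    return [r for r in records if r["predicted"] != r["actual"]]
```

For example, given one correctly and one incorrectly predicted image, only the mismatched record remains after filtering.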
The report also shows recall, precision, and processing times. To better inspect the results, the following features are available:
Hover your mouse over an image and an icon appears, allowing you to focus on the selected image.
In the detailed view, you can zoom in and out and move around the image.
The image resolution is visible in the detailed view in the top left corner.
...