...
To calculate the statistics, you first need to classify the images as OK or NOK.
...
The statistics result shows a confusion matrix, which illustrates how the predicted (evaluated by the application) and actual (annotated for statistics) results correspond (or do not) to each other.
There can be 4 results, as shown in this image:
...
True positive (TP) - A user classified the image as ‘Good’ and the application evaluated the image as ‘Good’.
False positive (FP) - A user classified the image as ‘Bad’ but the application evaluated the image as ‘Good’.
False negative (FN) - A user classified the image as ‘Good’ but the application evaluated the image as ‘Bad’.
True negative (TN) - A user classified the image as ‘Bad’ and the application evaluated the image as ‘Bad’.
The matrix shows how many images ended up in each of those categories.
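The counting described above can be sketched in a few lines of Python. This is a minimal illustration, not the application's actual implementation; the label strings ‘Good’/‘Bad’ and the `confusion_counts` helper are assumptions for the example.

```python
# Hypothetical sketch: count the four confusion-matrix cells from
# per-image labels, where `actual` holds the user's annotations and
# `predicted` holds the application's evaluations.
from collections import Counter

def confusion_counts(actual, predicted):
    cells = Counter()
    for a, p in zip(actual, predicted):
        if a == "Good" and p == "Good":
            cells["TP"] += 1      # true positive
        elif a == "Bad" and p == "Good":
            cells["FP"] += 1      # false positive
        elif a == "Good" and p == "Bad":
            cells["FN"] += 1      # false negative
        else:
            cells["TN"] += 1      # true negative
    return cells

actual    = ["Good", "Good", "Bad", "Bad", "Good"]
predicted = ["Good", "Bad", "Good", "Bad", "Good"]
print(dict(confusion_counts(actual, predicted)))
```

Each image falls into exactly one of the four cells, so the four counts always sum to the total number of evaluated images.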
...
Inside the finished report, you can toggle whether rectangles and heatmaps are shown on the evaluated images, filter the images (e.g. show only images where the predicted result differs from the actual one, images predicted as Good/Bad, or images actually Good/Bad), sort them by date or name, search for a specific image by name or regex, or switch the language of the report. The report also shows the model of GPU that was used and the creation date of the report.
...