Report
Calculating statistics
To calculate the statistics, you first need to classify images as OK or NOK, either by hand or using the Set by filename function. You can also select multiple images at once using the Shift key and classify them in bulk. After you submit your classification, click the Show result button.
To calculate all statistics, evaluation must be enabled in at least one active module. If evaluation is not enabled, only the processing times are computed for annotated images.
Information in statistics
The statistics result shows a confusion matrix, which illustrates how the predicted results (evaluated by the application) correspond to the actual results (annotated for statistics).
Each image falls into one of four possible results:
True positive (TP) - the user classified the image as ‘Good’ and the application evaluated it as ‘Good’
False positive (FP) - the user classified the image as ‘Bad’ but the application evaluated it as ‘Good’
False negative (FN) - the user classified the image as ‘Good’ but the application evaluated it as ‘Bad’
True negative (TN) - the user classified the image as ‘Bad’ and the application evaluated it as ‘Bad’
The matrix shows how many images ended up in each of those categories.
When you click on a specific cell of the confusion matrix, the images that ended up in that category will be shown in the image list on the right.
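To make the tally concrete, here is a minimal Python sketch of how annotated and evaluated labels would be counted into the four cells; the variable names and data are illustrative only and are not part of the application:

    from collections import Counter

    # Illustrative labels: 'actual' is the user's annotation,
    # 'predicted' is the application's evaluation.
    actual    = ["Good", "Good", "Bad", "Good", "Bad", "Bad"]
    predicted = ["Good", "Bad", "Good", "Good", "Bad", "Bad"]

    # Count each (actual, predicted) pair into a confusion-matrix cell.
    cells = Counter(zip(actual, predicted))
    tp = cells[("Good", "Good")]  # true positive
    fp = cells[("Bad", "Good")]   # false positive
    fn = cells[("Good", "Bad")]   # false negative
    tn = cells[("Bad", "Bad")]    # true negative

    print(f"TP={tp} FP={fp} FN={fn} TN={tn}")  # TP=2 FP=1 FN=1 TN=2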
Next to the matrix is a table showing values for recall, precision and processing times (min, max and average).
Recall = TP / (TP + FN)
The percentage of images classified as ‘Good’ by the user that were also evaluated as ‘Good’ by the application.
Precision = TP / (TP + FP)
The percentage of images evaluated as ‘Good’ by the application that were actually ‘Good’ (classified as ‘Good’ by the user).
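As a worked example with illustrative counts: out of 100 images the user classified as ‘Good’, suppose the application evaluated 90 as ‘Good’ (TP = 90, FN = 10) and also evaluated 5 actually ‘Bad’ images as ‘Good’ (FP = 5). Then recall = 90 / (90 + 10) = 90% and precision = 90 / (90 + 5) ≈ 94.7%. The same computation as a minimal Python sketch (counts are illustrative, not from any real run):

    # Illustrative counts; real values come from the confusion matrix.
    tp, fp, fn = 90, 5, 10

    recall = tp / (tp + fn)     # 90 / 100 = 0.90
    precision = tp / (tp + fp)  # 90 / 95 ≈ 0.947

    print(f"recall={recall:.1%}, precision={precision:.1%}")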
Download report
After the statistics are successfully calculated, you can create an automatically generated report. The resulting report is in HTML format.
You can choose how many images are shown and whether to display only testing images (tick ‘Testing only’) or training images as well. The entered number of images is then selected at random from all images classified in statistics (for testing images) and from all training images (if training images are enabled).
You can also change the maximum image size; images in the report will be downscaled to that size. Other options include setting the default language (the language can still be changed to another one inside the report) and showing or hiding specific parts of the report (recall & precision, confusion matrix, processing time and the graph of used modules).
Inside the finished report, you can toggle whether rectangles and heatmaps are shown on the evaluated images, filter the images (e.g. show only images where ‘predicted’ differs from ‘actual’, images predicted as Good/Bad, or images actually Good/Bad), sort them by date or name, search for a specific image by name or regex, and switch the language of the report. The report also shows the model of the GPU that was used and the report’s creation date.
If you hover the mouse over an image, an icon appears; clicking it opens the image in a larger detail view. There you can zoom in and out with the mouse scroll wheel and pan the image with the right mouse button. The image resolution is shown in the top left corner.