Calculating statistics

To calculate the statistics, you first need to classify images as OK or NOK, either by hand or using the Smart sorting function. You can also select multiple images at once with the Shift key and classify them in bulk. After you submit your classification, click the Show Result button.

To calculate all statistics, evaluation must be enabled in an active module. If evaluation is not enabled, only processing times are computed for the annotated images.

Information in statistics

The statistics result shows a confusion matrix, which illustrates how the predicted results (evaluated by the application) correspond, or fail to correspond, to the actual results (annotated for statistics).

There are four possible results:

  • True positive (TP) - the user classified the image as ‘Good’ and the application evaluated it as ‘Good’.

  • False positive (FP) - the user classified the image as ‘Bad’ but the application evaluated it as ‘Good’.

  • False negative (FN) - the user classified the image as ‘Good’ but the application evaluated it as ‘Bad’.

  • True negative (TN) - the user classified the image as ‘Bad’ and the application evaluated it as ‘Bad’.

The matrix shows how many images ended up in each of those categories.
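
The mapping into these four categories can be expressed in a few lines of code. The following Python sketch is illustrative only and is not part of the application; the ‘Good’/‘Bad’ labels mirror the terms above, and the sample data is hypothetical.

    from collections import Counter

    # Pairs of (actual label annotated by the user,
    #           label predicted by the application) - hypothetical data
    annotations = [
        ("Good", "Good"),  # true positive
        ("Bad",  "Good"),  # false positive
        ("Good", "Bad"),   # false negative
        ("Bad",  "Bad"),   # true negative
    ]

    def category(actual, predicted):
        # Positive = evaluated as 'Good' by the application
        if predicted == "Good":
            return "TP" if actual == "Good" else "FP"
        return "FN" if actual == "Good" else "TN"

    matrix = Counter(category(a, p) for a, p in annotations)
    print(matrix)  # Counter({'TP': 1, 'FP': 1, 'FN': 1, 'TN': 1})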

Info

When you click on a specific cell of the confusion matrix, the images that ended up in that category will be shown in the image list on the right.

Next to the matrix is a table showing values for recall, precision, and processing times (min, max, and average).

Recall = TP / (TP + FN)

  • The percentage of images classified as ‘Good’ by the user that were also evaluated as ‘Good’ by the application.

Precision = TP / (TP + FP)

  • The percentage of images evaluated as ‘Good’ by the application that were also classified as ‘Good’ by the user.
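
As a quick worked example, the sketch below computes both metrics from hypothetical confusion-matrix counts (the numbers are made up for illustration):

    # Hypothetical counts read off a confusion matrix
    tp, fp, fn = 42, 3, 5

    recall = tp / (tp + fn)     # 42 / 47 ≈ 0.894
    precision = tp / (tp + fp)  # 42 / 45 ≈ 0.933

    print(f"Recall:    {recall:.1%}")     # Recall:    89.4%
    print(f"Precision: {precision:.1%}")  # Precision: 93.3%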

Download report

After the statistics are successfully calculated, you can generate a report automatically. The resulting report is in HTML format.

You can choose how many images will be shown and whether to display only testing images (tick ‘Testing only’) or training images as well. The entered number of images is then randomly selected from all images classified in statistics (for testing images) and from all training images (if training images are enabled).

You can also change the maximum image size - images in the report will be downscaled to that size. Other options include setting the default language (the language can still be changed later within the report) and showing or hiding specific parts of the report (recall & precision, confusion matrix, processing time, and graph of used modules).


Inside the finished report, you can toggle whether rectangles and heatmaps are shown on the evaluated images, filter the images (e.g. show only images where ‘predicted’ differs from ‘actual’, images predicted as Good/Bad, or images actually Good/Bad), sort them by date or name, search for a specific image by name or regex, or switch the language of the report. The report also shows the GPU model that was used and the report’s creation date.

If you hover your mouse over an image, an icon appears; clicking it opens the image in a larger view. In this detailed view, you can zoom in and out with the mouse scroll wheel and pan the image with the right mouse button. The image resolution is shown in the top left corner.