Report
Calculating statistics
To calculate the statistics, tick the “Evaluation” button and classify the OK and NG images, either by hand or using the Smart sorting function. You can also select multiple images at once with the Shift key and classify them in bulk. After you submit your classification, click the Show Result button.
To calculate all statistics, evaluation must be enabled in at least one active module. If no evaluation is enabled, the following warning message will pop up:
However, you can still calculate the evaluation times even when evaluation is not active in any module. Just tick the “Evaluation” button and choose the images you want to include in the statistics:
Information in statistics
The statistics result shows a Confusion Matrix, which illustrates how the predicted results (evaluated by the application) and the actual results (annotated for statistics) correspond (or do not correspond) to each other.
There are four possible results, as shown in this image:
True positive (TP) - The user classified the image as OK and the application evaluated the image as OK.
False positive (FP) - The user classified the image as NG but the application evaluated the image as OK.
False negative (FN) - The user classified the image as OK but the application evaluated the image as NG.
True negative (TN) - The user classified the image as NG and the application evaluated the image as NG.
The matrix shows how many images ended up in each of those categories.
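As an illustration, here is a minimal sketch of how these four counts could be tallied from paired labels; the list names and the "OK"/"NG" values are illustrative assumptions, not the application's actual data model:

```python
# Minimal sketch: tallying a 2x2 confusion matrix from paired labels.
# user_labels, app_results, and the "OK"/"NG" values are assumed for illustration.
from collections import Counter

user_labels = ["OK", "OK", "NG", "NG", "OK"]   # annotated by hand (actual)
app_results = ["OK", "NG", "OK", "NG", "OK"]   # evaluated by the application (predicted)

counts = Counter(zip(app_results, user_labels))
tp = counts[("OK", "OK")]   # True positive:  app OK, user OK
fp = counts[("OK", "NG")]   # False positive: app OK, user NG
fn = counts[("NG", "OK")]   # False negative: app NG, user OK
tn = counts[("NG", "NG")]   # True negative:  app NG, user NG
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
```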
When you click on a specific cell of the confusion matrix, the images that ended up in that category will be shown in the image list on the right.
Next to the matrix is a table showing values for recall, precision, and processing times (min, max, and average).
Recall = TP / (TP + FN)
The percentage of images classified as ‘Good’ by the user that were also evaluated as ‘Good’ by the application.
Precision = TP / (TP + FP)
The percentage of images evaluated as ‘Good’ by the application that were also classified as ‘Good’ by the user.
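A minimal sketch of both formulas; the function names and the sample counts are illustrative assumptions:

```python
# Minimal sketch: recall and precision from confusion-matrix counts.
# The sample counts below are illustrative assumptions.
def recall(tp: int, fn: int) -> float:
    # Share of user-annotated 'Good' images the application also evaluated as 'Good'.
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(tp: int, fp: int) -> float:
    # Share of application-evaluated 'Good' images the user also annotated as 'Good'.
    return tp / (tp + fp) if (tp + fp) else 0.0

print(f"Recall = {recall(tp=8, fn=2):.0%}")        # 80%
print(f"Precision = {precision(tp=8, fp=4):.2%}")  # 66.67%
```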
Additionally, the statistics show information about evaluation time, namely the average, minimum, and maximum time it took to evaluate a single image (among the images annotated in the report display).
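A minimal sketch of those three timing values, assuming a hypothetical list of per-image evaluation times in milliseconds:

```python
# Minimal sketch: min, max, and average evaluation time per image.
# The per-image times (in milliseconds) are illustrative assumptions.
evaluation_times_ms = [41.2, 38.7, 55.0, 40.1, 43.9]

print(f"min: {min(evaluation_times_ms):.1f} ms")
print(f"max: {max(evaluation_times_ms):.1f} ms")
print(f"avg: {sum(evaluation_times_ms) / len(evaluation_times_ms):.1f} ms")
```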
Download report
After the statistics are successfully calculated, you can create an automatically generated report. The resulting report will be in HTML format.
Available options for report generation are:
Choose the number of images to be included (if not all, a random subset is selected; this subset still honors the overall split between training and testing images, so if 40% of the images across all modules are used for training, then 40% of the images in the report will be training images; see the sketch after this list)
Decide whether to display training images or only testing images
Change the maximum image size - images in the report will be scaled to that size.
Show statistics: recall, precision, confusion matrix, and processing time
Show the modules used in the flow
Set the default language (this can also be changed later inside the report)
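As an illustration of the split-preserving subset selection mentioned in the first option, here is a minimal sketch; the helper name, the list contents, and the 40/60 ratio are illustrative assumptions:

```python
# Minimal sketch: picking a random report subset that preserves the
# training/testing ratio. Names and values are illustrative assumptions.
import random

def report_subset(training, testing, total):
    # Keep the same share of training images in the subset as in the full set.
    train_share = len(training) / (len(training) + len(testing))
    n_train = round(total * train_share)
    n_test = total - n_train
    return random.sample(training, n_train) + random.sample(testing, n_test)

training_images = [f"train_{i}.png" for i in range(40)]
testing_images = [f"test_{i}.png" for i in range(60)]
subset = report_subset(training_images, testing_images, total=20)
# The 20-image subset keeps roughly the original 40/60 training/testing split.
```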
There are several options inside the finished report:
Toggle whether rectangles and heatmaps are visible on the evaluated images
Filter images based on conditions (by date, by name, or only incorrect predictions)
The report also shows Recall, Precision, and Processing times, as well as the GPU used during report creation. To better inspect the results, the following features are available:
Hover your mouse over an image and an icon appears, allowing you to focus on the selected image.
In the detailed view, you can zoom in and out and move around the image.
The image resolution is visible in the detailed view in the top left corner.