A confusion matrix is a table used to evaluate the performance of a classification model. It visualizes that performance by comparing predicted labels with actual labels.
In Pekat there are two variations:
- Overall evaluation confusion matrix
- Classifier-specific confusion matrix
Info |
---|
Predicted represents the application results. Actual represents the user-defined correct results. |
Evaluation confusion matrix
True positive (TP) - A user classified the image as OK and the application evaluated the image as OK.
False positive (FP) - A user classified the image as NG but the application evaluated the image as OK.
False negative (FN) - A user classified the image as OK but the application evaluated the image as NG.
True negative (TN) - A user classified the image as NG and the application evaluated the image as NG.
The matrix shows how many images ended up in each of those categories.
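The four categories above can be sketched as a small counting routine. This is an illustration, not PEKAT's implementation; the label values `"OK"`/`"NG"` and the function name are assumptions.

```python
# Minimal sketch: count the four evaluation categories from
# user-defined ("actual") and application ("predicted") OK/NG labels.
from collections import Counter

def evaluation_matrix(actual, predicted):
    """Return TP/FP/FN/TN counts for paired OK/NG label lists."""
    counts = Counter(zip(actual, predicted))  # (actual, predicted) pairs
    return {
        "TP": counts[("OK", "OK")],  # user OK, application OK
        "FP": counts[("NG", "OK")],  # user NG, application OK
        "FN": counts[("OK", "NG")],  # user OK, application NG
        "TN": counts[("NG", "NG")],  # user NG, application NG
    }

actual    = ["OK", "OK", "NG", "NG", "OK"]
predicted = ["OK", "NG", "OK", "NG", "OK"]
print(evaluation_matrix(actual, predicted))
# → {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 1}
```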
Info |
---|
When you click on a specific cell of the confusion matrix, the images that ended up in that category will be shown in the image list on the right. |
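The cell-to-image-list relationship can be pictured as grouping images by their (actual, predicted) pair, so selecting a cell yields exactly the images counted there. A hypothetical sketch with made-up image names:

```python
# Group image names by their (actual, predicted) label pair,
# mirroring how clicking a matrix cell filters the image list.
from collections import defaultdict

def group_by_cell(images):
    """images: iterable of (name, actual, predicted) tuples."""
    cells = defaultdict(list)
    for name, actual, predicted in images:
        cells[(actual, predicted)].append(name)
    return cells

images = [
    ("img_001.png", "OK", "OK"),
    ("img_002.png", "OK", "NG"),
    ("img_003.png", "OK", "OK"),
]
cells = group_by_cell(images)
print(cells[("OK", "OK")])  # images behind the TP cell
# → ['img_001.png', 'img_003.png']
```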
Classifier confusion matrix
...
This matrix works the same way but can contain multiple classes (as defined by the current classifier). In the image above you can see four classes: Screw, Nut, Spring, Pad.
Again, the table represents the actual (user-defined) and predicted (application-evaluated) results and shows the corresponding counts.
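The same counting idea extends to multiple classes: rows for actual (user-defined) labels, columns for predicted (application) labels. A sketch using the four example classes; the helper is illustrative, not PEKAT code:

```python
# Build a multi-class confusion matrix: rows = actual classes,
# columns = predicted classes, cell = number of matching images.
from collections import Counter

CLASSES = ["Screw", "Nut", "Spring", "Pad"]

def confusion_matrix(actual, predicted, classes=CLASSES):
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in classes] for a in classes]

actual    = ["Screw", "Nut", "Nut", "Spring", "Pad"]
predicted = ["Screw", "Nut", "Spring", "Spring", "Pad"]
for cls, row in zip(CLASSES, confusion_matrix(actual, predicted)):
    print(f"{cls:>7}: {row}")
```

Here one Nut was predicted as Spring, so the Nut row carries a count of 1 in the Spring column; all other images sit on the diagonal.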
Clicking on Export creates and downloads an .xlsx file with the same values as shown in the table.