Metrics

AP Metrics

Stores all the information necessary to calculate the AP for one IoU threshold and one class.

Examples

import torch

from alodataset import CocoBaseDataset
from alonet.detr import DetrR50
from alonet.metrics import ApMetrics

from aloscene import Frame

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# Model/Dataset definition to evaluate
dataset = CocoBaseDataset(sample=True)
model = DetrR50(weights="detr-r50")
model.eval().to(device)

# Metric to compute
metric = ApMetrics()

for frame in dataset.stream_loader():
    # Frame to batch
    frame = Frame.batch_list([frame]).to(device)

    # Boxes inference and get GT from frame
    pred_boxes = model.inference(model(frame))[0]  # Predictions of the first batch
    gt_boxes = frame[0].boxes2d  # GT of first batch

    # Add samples to evaluate metrics
    metric.add_sample(p_bbox=pred_boxes, t_bbox=gt_boxes)

# Print results
metric.calc_map(print_result=True)

Results obtained for AP in boxes

       all    .50    .55    .60    .65    .70    .75    .80    .85    .90    .95
box    40.21  49.98  49.12  47.68  46.66  44.23  43.89  36.38  33.78  30.14  20.20

class alonet.metrics.compute_map.APDataObject

Bases: object

add_gt_positives(num_positives)

Call this once per image

Parameters
num_positives: int

Number of ground-truth positives

get_metrics()

Get metrics from samples stored

Notes

Warning

Result not cached yet.

Return type

float

is_empty()

Check if data_points is empty

Return type

bool

push(score, is_true)

Append score and classification result to calculate AP

Parameters
score: float

Score of one element

is_true: bool

Whether the class category was correctly predicted
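
A minimal sketch of how these methods fit together for a single class and IoU threshold; the scores and match flags below are illustrative, not real detections.

from alonet.metrics.compute_map import APDataObject

ap_data = APDataObject()

# One image with two ground-truth positives for this class
ap_data.add_gt_positives(2)

# Push each detection: confidence score and whether it matched a ground truth
ap_data.push(0.92, True)
ap_data.push(0.75, False)

if not ap_data.is_empty():
    ap = ap_data.get_metrics()  # AP for this class at this IoU threshold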

class alonet.metrics.compute_map.ApMetrics(iou_thresholds=[0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95], compute_per_size_ap=False)

Bases: object

Compute AP Metrics.

Parameters
iou_thresholds: list

List of thresholds to use. Default: [0.5, 0.55, …, 0.9, 0.95]

compute_per_size_ap: bool

If True, the AP is also computed per box area/size, by default False

Attributes
class_names: list

List of classes
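
A short sketch of overriding the defaults; the threshold values here are only illustrative.

from alonet.metrics import ApMetrics

# Evaluate only AP50 and AP75, and also break the AP down per box area/size
metric = ApMetrics(iou_thresholds=[0.5, 0.75], compute_per_size_ap=True)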

add_sample(p_bbox, t_bbox, p_mask=None, t_mask=None)

Add sample to compute the AP.

Parameters
p_bbox: BoundingBoxes2D

Predicted boxes

t_bbox: BoundingBoxes2D

Target boxes, with the labels_names property set on their labels

p_mask: Mask

Predicted masks, to also compute the mask AP metric

t_mask: Mask

Target masks, to also compute the mask AP metric
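
A sketch of adding boxes and masks together for one frame, assuming the model also returns predicted masks and the frame carries segmentation masks (the pred_masks and gt_masks names are illustrative).

# Hypothetical variables: pred_masks and gt_masks must be aloscene Mask objects
metric.add_sample(p_bbox=pred_boxes, t_bbox=gt_boxes, p_mask=pred_masks, t_mask=gt_masks)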

calc_map(print_result=False)

Calculate the mAP metrics

Parameters
print_result: bool, optional

Print results, by default False

Returns
Tuple[Dict, Dict, Dict, Dict, Dict]
  • Summary with all mAP results

  • Summary with all mAP results per class

  • mAP per box size

  • Cross-classification AP50

  • Cross-classification AP50 per class
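
A sketch of consuming the returned tuple; the variable names are illustrative and the dictionary keys depend on the dataset classes.

all_maps, per_class_maps, per_size_maps, cross_ap50, cross_ap50_per_class = metric.calc_map()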

init_data_objects(class_names)

Initialize the data objects used to compute the AP, given a class_names list

Parameters
class_names: list

List of class names used to initialize the data objects
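
A sketch of calling it directly with an illustrative class list; in the example above this step is presumably handled when the first sample is added.

metric.init_data_objects(["person", "car", "bicycle"])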

PQ Metrics

Compute the Panoptic Quality (PQ), Segmentation Quality (SQ) and Recognition Quality (RQ) metrics.

Examples

import torch

from alodataset import CocoPanopticDataset, Split
from alonet.detr_panoptic import LitPanopticDetr
from alonet.metrics import PQMetrics

from aloscene import Frame

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# Model/Dataset definition to evaluate
dataset = CocoPanopticDataset(split=Split.VAL)
model = LitPanopticDetr(weights="detr-r50-panoptic").model
model.eval().to(device)

# Metric to compute
metric = PQMetrics()

for frame in dataset.stream_loader():
    # Frame to batch
    frame = Frame.batch_list([frame]).to(device)

    # Masks inference and get GT from frame
    _, pred_masks = model.inference(model(frame))
    pred_masks = pred_masks[0] # Predictions of the first batch
    gt_masks = frame[0].segmentation  # GT of first batch

    # Add samples to evaluate metrics
    metric.add_sample(p_mask=pred_masks, t_mask=gt_masks)

# Print results
metric.calc_map(print_result=True)

Summary of expected result

Category   Total cat.   PQ      SQ      RQ
Things     80           0.481   0.795   0.595
Stuff      53           0.358   0.779   0.449
All        133          0.432   0.789   0.537

class alonet.metrics.compute_pq.PQMetrics

Bases: object

add_sample(p_mask, t_mask, **kwargs)

Add new prediction and target masks to the PQ metrics estimation process

Parameters
p_mask: Mask

Masks predicted by the network at inference

t_mask: Mask

Target masks with labels and labels_names properties

Raises
Exception

p_mask and t_mask must be Mask objects with the same shape and must have the labels attribute. In addition, t_mask must have at least the two labels category and isthing

calc_map(print_result=False)

Calculate the PQ, RQ and SQ metrics

Parameters
print_result: bool, optional

Print results, by default False

Returns
Tuple[Dict, Dict]
  • Summary with all metric results (averaged)

  • Summary with all metric results per class
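
A sketch of consuming the returned tuple; the variable names are illustrative.

average_summary, per_class_summary = metric.calc_map()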

pq_average(isthing=None, print_result=False)

Calculate the SQ, RQ and PQ metrics over the categories, restricted to thing, stuff or all if desired

Parameters
isthing: bool

Calculate metrics for the ‘thing’ categories (if True) or ‘stuff’ categories (if False). By default the metric is computed over both

print_result: bool

Print result in console, by default False

Returns
Tuple[Dict, Dict]

Dictionaries with general PQ average metrics and for each class, respectively
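
A sketch of computing the metrics over the ‘thing’ categories only, following the signature above.

things_summary, things_per_class = metric.pq_average(isthing=True, print_result=True)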

update_data_objects(cat_labels, isthing_labels)

Update the data object categories, appending new categories from each sample

Parameters
cat_labels: Labels

Category labels to append

isthing_labels: Labels

Description of thing/stuff labels to append

class alonet.metrics.compute_pq.PQStatCat

Bases: object

Keep TP, FP, FN and IoU metrics per class