Airborne Object Tracking Challenge

Evaluate your predictions locally (validation flights)

Demo on using metrics codebase for generating scores locally

shivam

Airborne Object Tracking Dataset

🤫 Setting up

In [1]:
import json
import random
import os, sys
from IPython.display import display, clear_output, HTML
from random import randrange, choice
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (25, 25)
import numpy as np
import seaborn as sns

# Because Life, Universe and Everything!
random.seed(42)

def mdprint(text):
    display({
        'text/markdown': text,
        'text/plain': text
    }, raw=True)

!git clone http://gitlab.aicrowd.com/amazon-prime-air/airborne-detection-starter-kit.git
os.chdir("airborne-detection-starter-kit/data")
Cloning into 'airborne-detection-starter-kit'...
remote: Enumerating objects: 337, done.
remote: Total 337 (delta 0), reused 0 (delta 0), pack-reused 337
Receiving objects: 100% (337/337), 21.90 MiB | 9.04 MiB/s, done.
Resolving deltas: 100% (139/139), done.
In [2]:
# Dataset helpers for the Airborne Object Tracking Dataset
sys.path.append(os.path.dirname(os.path.realpath(os.getcwd())))
sys.path.append(os.path.dirname(os.path.realpath(os.getcwd())) + "/core")
!pip install -r ../requirements.txt > /dev/null
from core.dataset import Dataset
notebook_path = os.path.dirname(os.path.realpath("__file__"))

local_path = notebook_path + '/part2'
s3_path = 's3://airborne-obj-detection-challenge-training/part2/'
dataset = Dataset(local_path, s3_path)
ERROR: botocore 1.20.95 has requirement urllib3<1.27,>=1.25.4, but you'll have urllib3 1.24.3 which is incompatible.
2021-06-16 19:29:18.507 | INFO     | core.dataset:load_gt:20 - Loading ground truth...
2021-06-16 19:29:18.508 | INFO     | core.file_handler:download_file_if_needed:33 - [download_from_s3] File not found locally, downloading: ImageSets/groundtruth.json
In [3]:
# Download validation flights
# - e0d815053c1c46cfbd0b586b72718feb
# - ac23cb93c5c242d2b1bf0633fae9b1e6

flight = dataset.get_flight_by_id("e0d815053c1c46cfbd0b586b72718feb")
flight.download()

flight = dataset.get_flight_by_id("ac23cb93c5c242d2b1bf0633fae9b1e6")
flight.download()

⏱ Generate the predictions

In [4]:
os.chdir("/content/airborne-detection-starter-kit/")
!ln -s $PWD/data/part2/Images $PWD/data/val
In [5]:
!python test.py
Successfully generated predictions!

👀 Time to evaluate locally!

In [6]:
# Cleanup...
!rm -rf data/evaluation/
!mkdir -p data/evaluation/gt
!mkdir -p data/evaluation/result
In [7]:
# Generate groundtruth.json for relevant flights
def generate_partial_gt():
    flights = ['e0d815053c1c46cfbd0b586b72718feb', 'ac23cb93c5c242d2b1bf0633fae9b1e6']
    gt = json.loads(open("data/part2/ImageSets/groundtruth.json").read())
    for sample in list(gt['samples'].keys()):
        if sample not in flights:
            del gt['samples'][sample]

    with open(("data/evaluation/gt/groundtruth.json"), 'w') as fp:
        json.dump(gt, fp)
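The filtering above can be sanity-checked on a toy ground-truth dict without touching the real dataset. This is a minimal sketch of the same logic; the flight ids below are placeholders for illustration, not real dataset ids:

```python
def keep_flights(gt, flights):
    """Drop every sample whose flight id is not in `flights` (same logic as above)."""
    for sample in list(gt['samples'].keys()):
        if sample not in flights:
            del gt['samples'][sample]
    return gt

# Toy ground truth with three hypothetical flights
toy_gt = {'samples': {'flight_a': {'entities': []},
                      'flight_b': {'entities': []},
                      'flight_c': {'entities': []}}}

keep_flights(toy_gt, ['flight_a', 'flight_c'])
print(sorted(toy_gt['samples']))  # ['flight_a', 'flight_c']
```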
In [8]:
# Convert generated results to the bbox format expected by the metrics codebase
def convert_and_copy_generated_results_to_metrics_folder():
    flight_results = json.loads(open("data/results/run0/result.json").read())
    for i in range(len(flight_results)):
        for j in range(len(flight_results[i]['detections'])):
            # The result file stores corner coordinates: 'x', 'y' hold the
            # top-left point, while the 'w', 'h' fields actually hold the
            # bottom-right point.
            x = flight_results[i]['detections'][j]['x']
            y = flight_results[i]['detections'][j]['y']
            w = flight_results[i]['detections'][j]['w'] - x
            h = flight_results[i]['detections'][j]['h'] - y

            # The metrics codebase expects center coordinates plus width/height.
            flight_results[i]['detections'][j]['x'] = x + w/2
            flight_results[i]['detections'][j]['y'] = y + h/2
            flight_results[i]['detections'][j]['w'] = w
            flight_results[i]['detections'][j]['h'] = h

    with open("data/evaluation/result/result.json", 'w') as fp:
        json.dump(flight_results, fp)
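To see what the conversion does, here is the same arithmetic on a single hypothetical box. The corner box (x1, y1, x2, y2) becomes a center box (cx, cy, w, h); the numbers are made up for illustration:

```python
def to_center_format(x1, y1, x2, y2):
    """Corner box (x1, y1, x2, y2) -> center box (cx, cy, w, h),
    mirroring the arithmetic in the cell above."""
    w = x2 - x1
    h = y2 - y1
    return (x1 + w / 2, y1 + h / 2, w, h)

print(to_center_format(10, 20, 50, 60))  # (30.0, 40.0, 40, 40)
```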
In [9]:
# Let's run the metrics codebase!!
generate_partial_gt()
convert_and_copy_generated_results_to_metrics_folder()
!python core/metrics/run_airborne_metrics.py --dataset-folder data/evaluation/gt --results-folder data/evaluation/result --summaries-folder data/evaluation/summaries
2021-06-16 19:39:16,872:INFO:run_airborne_metrics.py:160 Encounter ground truth: data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:39:16,873:INFO:calculate_encounters.py:83 Asserting data/evaluation/gt/groundtruth.json format
2021-06-16 19:39:16,873:INFO:pandas_utils.py:87 Reading ground truth
2021-06-16 19:39:16,873:INFO:pandas_utils.py:61 Reading provided data/evaluation/gt/groundtruth.json
2021-06-16 19:39:16,873:INFO:pandas_utils.py:68 Loading .json
2021-06-16 19:39:16,881:INFO:pandas_utils.py:75 Normalizing json. This operation is time consuming. The result .csv will be saved Please consider providing .csv file next time
2021-06-16 19:39:16,958:INFO:calculate_encounters.py:268 Saving groundtruth in .csv format, please use .csv in the future
2021-06-16 19:39:16,981:INFO:calculate_encounters.py:273 Filtering ground truth to get intruders in the specified range <= 700m.
2021-06-16 19:39:17,005:INFO:utils.py:157 NumExpr defaulting to 4 threads.
2021-06-16 19:39:17,011:INFO:calculate_encounters.py:277 Finding encounters and adding their information to the ground truth
2021-06-16 19:39:17,055:INFO:calculate_encounters.py:290 Saving ground truth + encounters dataframe to data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:39:17,089:INFO:calculate_encounters.py:302 Saving only valid encounters info dataframe to data/evaluation/gt/valid_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:39:17,090:INFO:calculate_encounters.py:307 Saving only valid encounters info in json format to data/evaluation/gt/valid_encounters_maxRange700_maxGap3_minEncLen30.json
2021-06-16 19:39:17,094:INFO:match_groundtruth_results.py:516 Reading input ground truth and results
2021-06-16 19:39:17,094:INFO:pandas_utils.py:87 Reading ground truth
2021-06-16 19:39:17,094:INFO:pandas_utils.py:61 Reading provided data/evaluation/gt/groundtruth.csv
2021-06-16 19:39:17,104:INFO:match_groundtruth_results.py:522 Number of evaluated images is 2399
2021-06-16 19:39:17,104:INFO:pandas_utils.py:96 Reading detection results
2021-06-16 19:39:17,104:INFO:pandas_utils.py:61 Reading provided data/evaluation/result/result.json
2021-06-16 19:39:17,104:INFO:pandas_utils.py:68 Loading .json
2021-06-16 19:39:17,105:INFO:pandas_utils.py:75 Normalizing json. This operation is time consuming. The result .csv will be saved Please consider providing .csv file next time
2021-06-16 19:39:17,111:INFO:match_groundtruth_results.py:527 Saving airborne classifier results in .csv format, please use .csv in the future
2021-06-16 19:39:17,114:INFO:match_groundtruth_results.py:529 Number of evaluated unique detections is 273
2021-06-16 19:39:17,114:INFO:match_groundtruth_results.py:530 Filtering results based on results score 0.00
2021-06-16 19:39:17,117:INFO:match_groundtruth_results.py:536 Enumerating detections with detection_id
2021-06-16 19:39:17,118:INFO:match_groundtruth_results.py:547 Using track_id as track_id
2021-06-16 19:39:17,125:INFO:match_groundtruth_results.py:552 Augmenting with track length
2021-06-16 19:39:17,135:INFO:match_groundtruth_results.py:554 Filtering results with track length below 0
2021-06-16 19:39:17,135:INFO:match_groundtruth_results.py:557 Computing ground truth and detection match based on extended_iou_minObjArea_100
2021-06-16 19:39:17,142:INFO:match_groundtruth_results.py:464 Pairing each ground truth intruder with each detection in the respective frame
2021-06-16 19:39:17,157:INFO:match_groundtruth_results.py:471 Augmenting with original iou for comparison
2021-06-16 19:39:17,165:INFO:match_groundtruth_results.py:477 Extending bounding boxes based on groundtruth area
2021-06-16 19:39:17,165:INFO:match_groundtruth_results.py:296 Extending bounding boxes based on ground truth area
2021-06-16 19:39:17,167:INFO:match_groundtruth_results.py:307 Number of objects with ground truth area less than 100 is 91
2021-06-16 19:39:17,174:INFO:match_groundtruth_results.py:322 There are no detections with area below 100 that are being matched to extended ground truth
2021-06-16 19:39:17,174:INFO:match_groundtruth_results.py:480 Augmenting with extended iou with minimum object area of 100
2021-06-16 19:39:17,181:INFO:match_groundtruth_results.py:200 IoU matching: match minimum iou = 0.20, and no match maximum iou = 0.02 
2021-06-16 19:39:17,191:INFO:match_groundtruth_results.py:487 Matching done
2021-06-16 19:39:17,192:INFO:match_groundtruth_results.py:563 Saving ground truth and detection match results to data/evaluation/result/result_metrics_min_track_len_0/gt_det_matches_extended_iou_minObjArea_100_matchThresh_0_2_noMatchThresh_0_02.csv
2021-06-16 19:39:17,251:INFO:calculate_airborne_metrics.py:715 Reading ground truth detection matches from data/evaluation/result/result_metrics_min_track_len_0/gt_det_matches_extended_iou_minObjArea_100_matchThresh_0_2_noMatchThresh_0_02.csv
2021-06-16 19:39:17,262:WARNING:calculate_airborne_metrics.py:722 Reading ground truth with encounters from data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:39:17,269:INFO:calculate_airborne_metrics.py:727 Maximum range of encounter is 699.85
2021-06-16 19:39:17,269:INFO:calculate_airborne_metrics.py:742 The provided minimum detection score 0.00000 will be used
2021-06-16 19:39:17,269:INFO:calculate_airborne_metrics.py:745 Frame level metrics calculation for score threshold = 0.7001715688383829
2021-06-16 19:39:17,286:INFO:calculate_airborne_metrics.py:254 FAR calculation: Using unique flight ids in the provided data frame to calculate total number of processed flights
2021-06-16 19:39:17,286:INFO:calculate_airborne_metrics.py:260 FAR calculation: Total number of processed flights is 2
2021-06-16 19:39:17,287:INFO:calculate_airborne_metrics.py:261 FAR calculation: Total number of processed hours is 0.067
2021-06-16 19:39:17,287:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.700
2021-06-16 19:39:17,289:INFO:calculate_airborne_metrics.py:176 Calculating the number of unique tracks ids that that correspond to at least one not matched detection
2021-06-16 19:39:17,297:INFO:calculate_airborne_metrics.py:197 Number of unique track_ids that correspond to at least one false detection 2
2021-06-16 19:39:17,297:INFO:calculate_airborne_metrics.py:267 FAR = 30.00000
2021-06-16 19:39:17,297:INFO:calculate_airborne_metrics.py:227 FPPI calculation: Using unique image names in the provided data frame to calculate total number of processed frames
2021-06-16 19:39:17,297:INFO:calculate_airborne_metrics.py:231 FPPI calculation: Total number of processed frames is 2399
2021-06-16 19:39:17,298:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.700
2021-06-16 19:39:17,300:INFO:calculate_airborne_metrics.py:151 Calculating the number of detections that did not match ground truth
2021-06-16 19:39:17,304:INFO:calculate_airborne_metrics.py:171 No match calculation: Number of detections without a match = 273 out of 273 unique detections
2021-06-16 19:39:17,304:INFO:calculate_airborne_metrics.py:234 FPPI = 0.11380
2021-06-16 19:39:17,304:INFO:calculate_airborne_metrics.py:343 PD calculation: Intruders Range =  [0.0, 699.8]
2021-06-16 19:39:17,308:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 272
2021-06-16 19:39:17,308:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.700
2021-06-16 19:39:17,311:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:39:17,316:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 0 
2021-06-16 19:39:17,316:INFO:calculate_airborne_metrics.py:314 PD = 0.000 = 0 / 272
2021-06-16 19:39:17,316:INFO:calculate_airborne_metrics.py:360 PD calculation: gt_area > 200 and id.str.contains("Flock") == False and id.str.contains("Bird") == False
2021-06-16 19:39:17,324:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 1325
2021-06-16 19:39:17,325:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.700
2021-06-16 19:39:17,329:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:39:17,335:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 0 
2021-06-16 19:39:17,335:INFO:calculate_airborne_metrics.py:314 PD = 0.000 = 0 / 1325
2021-06-16 19:39:17,336:INFO:calculate_airborne_metrics.py:360 PD calculation: gt_area <= 200 and id.str.contains("Flock") == False and id.str.contains("Bird") == False
2021-06-16 19:39:17,344:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 99
2021-06-16 19:39:17,344:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.700
2021-06-16 19:39:17,347:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:39:17,351:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 0 
2021-06-16 19:39:17,351:INFO:calculate_airborne_metrics.py:314 PD = 0.000 = 0 / 99
2021-06-16 19:39:17,353:INFO:calculate_airborne_metrics.py:500 Thresholding score
2021-06-16 19:39:17,360:INFO:calculate_airborne_metrics.py:507 Number of encounters to detect 2
2021-06-16 19:39:17,361:INFO:calculate_airborne_metrics.py:509 Combining encounters with results
2021-06-16 19:39:17,369:INFO:calculate_airborne_metrics.py:513 Grouping data frame with matches to getdetection matches per encounter
2021-06-16 19:39:17,371:INFO:calculate_airborne_metrics.py:516 Augmenting with moving frame level detection rate, this might take some time
2021-06-16 19:39:17,389:INFO:calculate_airborne_metrics.py:520 Merge frame_level detection rate 
2021-06-16 19:39:17,395:INFO:calculate_airborne_metrics.py:526 Grouping data frame with matches to get matched track_ids per frame and object
2021-06-16 19:39:17,434:INFO:calculate_airborne_metrics.py:531 Grouping data frame with matches to get matched track_ids per encounter and frame
2021-06-16 19:39:17,468:INFO:calculate_airborne_metrics.py:599 Checking if encounters were detected
2021-06-16 19:39:17,483:INFO:calculate_airborne_metrics.py:599 Checking if encounters were detected
2021-06-16 19:39:17,496:INFO:calculate_airborne_metrics.py:771 Saving results
2021-06-16 19:39:17,498:INFO:calculate_airborne_metrics.py:794 Data frame with information on encounter detection is saved to data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_30_0.csv and data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_30_0_tracking.csv
2021-06-16 19:39:17,499:INFO:calculate_airborne_metrics.py:798 Data frame with information on encounter detection is saved to data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_30_0.json
2021-06-16 19:39:17,500:INFO:calculate_airborne_metrics.py:801 Calculating final summary
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:819 Summary
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:825 The minimum detection score is 0.700
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:827 FPPI: 0.11380
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:829 HFAR: 30.00000
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:833 Planned Aircraft: 957
2021-06-16 19:39:17,524:INFO:calculate_airborne_metrics.py:834 Non-Planned Airborne: 1856
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:835 Non-Planned Aircraft: 467
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:837 All Aircraft: 1424
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:843 AFDR, aircraft with range <= 699.85: 0.00000 = 0 / 272
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:853 AFDR, aircraft with area > 200: 0.00000 = 0 / 1325
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:858 AFDR, aircraft with area <= 200: 0.00000 = 0 / 99
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:859 Detected Encounters based on Detections: 
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Below Horizon: 0 / 0  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Mixed: 0 / 1  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Above Horizon: 0 / 1  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: All: 0 / 2  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:863 Detected Encounters based on Tracking: 
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Below Horizon: 0 / 0  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Mixed: 0 / 1  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: Above Horizon: 0 / 1  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:706 Max. range 300: All: 0 / 2  = 0.000
2021-06-16 19:39:17,525:INFO:calculate_airborne_metrics.py:868 Saving summary to data/evaluation/result/result_metrics_min_track_len_0/summary_far_30_0_min_intruder_fl_dr_0p5_in_win_30.json
In [10]:
!cat data/evaluation/summaries/*_for_ranking.csv
#,Algorithm,Score,FPPI,AFDR
0,result_metrics_min_track_len_0,0.7001715688383829,0.1137974155898291,0.0
#,Algorithm,Score,HFAR,"EDR All, Tracking"
0,result_metrics_min_track_len_0,0.7001715688383829,30.0,0.0
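If you want to consume the ranking file programmatically instead of `cat`-ing it, note that it is two small CSV tables back to back, each with its own `#,...` header row. A minimal sketch of parsing it (the sample string below is copied from the output above; adapt the path/glob for real use):

```python
import csv
import io

# Sample content of a *_for_ranking.csv file (two tables, two header rows)
raw = """#,Algorithm,Score,FPPI,AFDR
0,result_metrics_min_track_len_0,0.7001715688383829,0.1137974155898291,0.0
#,Algorithm,Score,HFAR,"EDR All, Tracking"
0,result_metrics_min_track_len_0,0.7001715688383829,30.0,0.0
"""

records = []
header = None
for row in csv.reader(io.StringIO(raw)):
    if not row:
        continue
    if row[0] == '#':          # a new header row starts a new table
        header = row
    else:                      # data row: pair it with the most recent header
        records.append(dict(zip(header, row)))

print(records[0]['FPPI'], records[1]['HFAR'])  # 0.1137974155898291 30.0
```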

Try out SiamMOT results OR your own generated result files

You can of course run everything locally, but if you want to try this out on Colab, you can download the result files as shown in the example below.

In [30]:
# Cleanup...
!rm -rf data/evaluation/
!mkdir -p data/evaluation/gt
!mkdir -p data/evaluation/result
In [31]:
# Downloading SiamMOT result files for the validation flights
!wget https://gitlab.aicrowd.com/snippets/37555/raw -O data/results/run0/result.json
--2021-06-16 19:53:03--  https://gitlab.aicrowd.com/snippets/37555/raw
Resolving gitlab.aicrowd.com (gitlab.aicrowd.com)... 18.194.109.98, 18.193.11.37
Connecting to gitlab.aicrowd.com (gitlab.aicrowd.com)|18.194.109.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84561 (83K) [text/plain]
Saving to: 'data/results/run0/result.json'

data/results/run0/r 100%[===================>]  82.58K  --.-KB/s    in 0.1s    

2021-06-16 19:53:04 (593 KB/s) - 'data/results/run0/result.json' saved [84561/84561]

In [32]:
# Let's run the metrics codebase!!
generate_partial_gt()
convert_and_copy_generated_results_to_metrics_folder()
!python core/metrics/run_airborne_metrics.py --dataset-folder data/evaluation/gt --results-folder data/evaluation/result --summaries-folder data/evaluation/summaries
2021-06-16 19:53:15,009:INFO:run_airborne_metrics.py:160 Encounter ground truth: data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:53:15,010:INFO:calculate_encounters.py:83 Asserting data/evaluation/gt/groundtruth.json format
2021-06-16 19:53:15,010:INFO:pandas_utils.py:87 Reading ground truth
2021-06-16 19:53:15,010:INFO:pandas_utils.py:61 Reading provided data/evaluation/gt/groundtruth.json
2021-06-16 19:53:15,010:INFO:pandas_utils.py:68 Loading .json
2021-06-16 19:53:15,019:INFO:pandas_utils.py:75 Normalizing json. This operation is time consuming. The result .csv will be saved Please consider providing .csv file next time
2021-06-16 19:53:15,089:INFO:calculate_encounters.py:268 Saving groundtruth in .csv format, please use .csv in the future
2021-06-16 19:53:15,117:INFO:calculate_encounters.py:273 Filtering ground truth to get intruders in the specified range <= 700m.
2021-06-16 19:53:15,131:INFO:utils.py:157 NumExpr defaulting to 4 threads.
2021-06-16 19:53:15,136:INFO:calculate_encounters.py:277 Finding encounters and adding their information to the ground truth
2021-06-16 19:53:15,170:INFO:calculate_encounters.py:290 Saving ground truth + encounters dataframe to data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:53:15,205:INFO:calculate_encounters.py:302 Saving only valid encounters info dataframe to data/evaluation/gt/valid_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:53:15,206:INFO:calculate_encounters.py:307 Saving only valid encounters info in json format to data/evaluation/gt/valid_encounters_maxRange700_maxGap3_minEncLen30.json
2021-06-16 19:53:15,210:INFO:match_groundtruth_results.py:516 Reading input ground truth and results
2021-06-16 19:53:15,210:INFO:pandas_utils.py:87 Reading ground truth
2021-06-16 19:53:15,210:INFO:pandas_utils.py:61 Reading provided data/evaluation/gt/groundtruth.csv
2021-06-16 19:53:15,221:INFO:match_groundtruth_results.py:522 Number of evaluated images is 2399
2021-06-16 19:53:15,221:INFO:pandas_utils.py:96 Reading detection results
2021-06-16 19:53:15,221:INFO:pandas_utils.py:61 Reading provided data/evaluation/result/result.json
2021-06-16 19:53:15,221:INFO:pandas_utils.py:68 Loading .json
2021-06-16 19:53:15,222:INFO:pandas_utils.py:75 Normalizing json. This operation is time consuming. The result .csv will be saved Please consider providing .csv file next time
2021-06-16 19:53:15,229:INFO:match_groundtruth_results.py:527 Saving airborne classifier results in .csv format, please use .csv in the future
2021-06-16 19:53:15,233:INFO:match_groundtruth_results.py:529 Number of evaluated unique detections is 348
2021-06-16 19:53:15,233:INFO:match_groundtruth_results.py:530 Filtering results based on results score 0.00
2021-06-16 19:53:15,236:INFO:match_groundtruth_results.py:536 Enumerating detections with detection_id
2021-06-16 19:53:15,237:INFO:match_groundtruth_results.py:547 Using track_id as track_id
2021-06-16 19:53:15,244:INFO:match_groundtruth_results.py:552 Augmenting with track length
2021-06-16 19:53:15,254:INFO:match_groundtruth_results.py:554 Filtering results with track length below 0
2021-06-16 19:53:15,255:INFO:match_groundtruth_results.py:557 Computing ground truth and detection match based on extended_iou_minObjArea_100
2021-06-16 19:53:15,261:INFO:match_groundtruth_results.py:464 Pairing each ground truth intruder with each detection in the respective frame
2021-06-16 19:53:15,275:INFO:match_groundtruth_results.py:471 Augmenting with original iou for comparison
2021-06-16 19:53:15,283:INFO:match_groundtruth_results.py:477 Extending bounding boxes based on groundtruth area
2021-06-16 19:53:15,283:INFO:match_groundtruth_results.py:296 Extending bounding boxes based on ground truth area
2021-06-16 19:53:15,285:INFO:match_groundtruth_results.py:307 Number of objects with ground truth area less than 100 is 91
2021-06-16 19:53:15,292:INFO:match_groundtruth_results.py:322 There are no detections with area below 100 that are being matched to extended ground truth
2021-06-16 19:53:15,292:INFO:match_groundtruth_results.py:480 Augmenting with extended iou with minimum object area of 100
2021-06-16 19:53:15,301:INFO:match_groundtruth_results.py:200 IoU matching: match minimum iou = 0.20, and no match maximum iou = 0.02 
2021-06-16 19:53:15,311:INFO:match_groundtruth_results.py:487 Matching done
2021-06-16 19:53:15,312:INFO:match_groundtruth_results.py:563 Saving ground truth and detection match results to data/evaluation/result/result_metrics_min_track_len_0/gt_det_matches_extended_iou_minObjArea_100_matchThresh_0_2_noMatchThresh_0_02.csv
2021-06-16 19:53:15,374:INFO:calculate_airborne_metrics.py:715 Reading ground truth detection matches from data/evaluation/result/result_metrics_min_track_len_0/gt_det_matches_extended_iou_minObjArea_100_matchThresh_0_2_noMatchThresh_0_02.csv
2021-06-16 19:53:15,386:WARNING:calculate_airborne_metrics.py:722 Reading ground truth with encounters from data/evaluation/gt/groundtruth_with_encounters_maxRange700_maxGap3_minEncLen30.csv
2021-06-16 19:53:15,393:INFO:calculate_airborne_metrics.py:727 Maximum range of encounter is 699.85
2021-06-16 19:53:15,393:INFO:calculate_airborne_metrics.py:742 The provided minimum detection score 0.00000 will be used
2021-06-16 19:53:15,393:INFO:calculate_airborne_metrics.py:745 Frame level metrics calculation for score threshold = 0.9851007461547852
2021-06-16 19:53:15,411:INFO:calculate_airborne_metrics.py:254 FAR calculation: Using unique flight ids in the provided data frame to calculate total number of processed flights
2021-06-16 19:53:15,411:INFO:calculate_airborne_metrics.py:260 FAR calculation: Total number of processed flights is 2
2021-06-16 19:53:15,411:INFO:calculate_airborne_metrics.py:261 FAR calculation: Total number of processed hours is 0.067
2021-06-16 19:53:15,411:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.985
2021-06-16 19:53:15,414:INFO:calculate_airborne_metrics.py:176 Calculating the number of unique tracks ids that that correspond to at least one not matched detection
2021-06-16 19:53:15,421:INFO:calculate_airborne_metrics.py:197 Number of unique track_ids that correspond to at least one false detection 0
2021-06-16 19:53:15,421:INFO:calculate_airborne_metrics.py:267 FAR = 0.00000
2021-06-16 19:53:15,421:INFO:calculate_airborne_metrics.py:227 FPPI calculation: Using unique image names in the provided data frame to calculate total number of processed frames
2021-06-16 19:53:15,422:INFO:calculate_airborne_metrics.py:231 FPPI calculation: Total number of processed frames is 2399
2021-06-16 19:53:15,422:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.985
2021-06-16 19:53:15,425:INFO:calculate_airborne_metrics.py:151 Calculating the number of detections that did not match ground truth
2021-06-16 19:53:15,429:INFO:calculate_airborne_metrics.py:171 No match calculation: Number of detections without a match = 0 out of 348 unique detections
2021-06-16 19:53:15,429:INFO:calculate_airborne_metrics.py:234 FPPI = 0.00000
2021-06-16 19:53:15,429:INFO:calculate_airborne_metrics.py:343 PD calculation: Intruders Range =  [0.0, 699.8]
2021-06-16 19:53:15,433:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 272
2021-06-16 19:53:15,433:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.985
2021-06-16 19:53:15,436:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:53:15,441:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 268 
2021-06-16 19:53:15,441:INFO:calculate_airborne_metrics.py:314 PD = 0.985 = 268 / 272
2021-06-16 19:53:15,441:INFO:calculate_airborne_metrics.py:360 PD calculation: gt_area > 200 and id.str.contains("Flock") == False and id.str.contains("Bird") == False
2021-06-16 19:53:15,451:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 1325
2021-06-16 19:53:15,451:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.985
2021-06-16 19:53:15,454:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:53:15,459:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 344 
2021-06-16 19:53:15,459:INFO:calculate_airborne_metrics.py:314 PD = 0.260 = 344 / 1325
2021-06-16 19:53:15,459:INFO:calculate_airborne_metrics.py:360 PD calculation: gt_area <= 200 and id.str.contains("Flock") == False and id.str.contains("Bird") == False
2021-06-16 19:53:15,466:INFO:calculate_airborne_metrics.py:303 PD calculation: Number of intruders to detect = 99
2021-06-16 19:53:15,466:INFO:calculate_airborne_metrics.py:206 Filtering score threshold = 0.985
2021-06-16 19:53:15,470:INFO:calculate_airborne_metrics.py:272 Calculating the number of intruders that were matched by detections
2021-06-16 19:53:15,474:INFO:calculate_airborne_metrics.py:287 Detected intruders calculation: Number of detected intruders = 4 
2021-06-16 19:53:15,474:INFO:calculate_airborne_metrics.py:314 PD = 0.040 = 4 / 99
2021-06-16 19:53:15,476:INFO:calculate_airborne_metrics.py:500 Thresholding score
2021-06-16 19:53:15,483:INFO:calculate_airborne_metrics.py:507 Number of encounters to detect 2
2021-06-16 19:53:15,483:INFO:calculate_airborne_metrics.py:509 Combining encounters with results
2021-06-16 19:53:15,490:INFO:calculate_airborne_metrics.py:513 Grouping data frame with matches to getdetection matches per encounter
2021-06-16 19:53:15,492:INFO:calculate_airborne_metrics.py:516 Augmenting with moving frame level detection rate, this might take some time
2021-06-16 19:53:15,510:INFO:calculate_airborne_metrics.py:520 Merge frame_level detection rate 
2021-06-16 19:53:15,516:INFO:calculate_airborne_metrics.py:526 Grouping data frame with matches to get matched track_ids per frame and object
2021-06-16 19:53:15,558:INFO:calculate_airborne_metrics.py:531 Grouping data frame with matches to get matched track_ids per encounter and frame
2021-06-16 19:53:15,592:INFO:calculate_airborne_metrics.py:599 Checking if encounters were detected
2021-06-16 19:53:15,608:INFO:calculate_airborne_metrics.py:599 Checking if encounters were detected
2021-06-16 19:53:15,623:INFO:calculate_airborne_metrics.py:771 Saving results
2021-06-16 19:53:15,624:INFO:calculate_airborne_metrics.py:794 Data frame with information on encounter detection is saved to data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_0_0.csv and data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_0_0_tracking.csv
2021-06-16 19:53:15,625:INFO:calculate_airborne_metrics.py:798 Data frame with information on encounter detection is saved to data/evaluation/result/result_metrics_min_track_len_0/airborne_metrics_moving_30_fl_dr_0p5_encounter_detections_far_0_0.json
2021-06-16 19:53:15,626:INFO:calculate_airborne_metrics.py:801 Calculating final summary
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:819 Summary
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:825 The minimum detection score is 0.985
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:827 FPPI: 0.00000
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:829 HFAR: 0.00000
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:833 Planned Aircraft: 957
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:834 Non-Planned Airborne: 1856
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:835 Non-Planned Aircraft: 467
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:837 All Aircraft: 1424
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:843 AFDR, aircraft with range <= 699.85: 0.98529 = 268 / 272
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:853 AFDR, aircraft with area > 200: 0.25962 = 344 / 1325
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:858 AFDR, aircraft with area <= 200: 0.04040 = 4 / 99
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:859 Detected Encounters based on Detections: 
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:706 Max. range 300: Below Horizon: 0 / 0  = 0.000
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:706 Max. range 300: Mixed: 1 / 1  = 1.000
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:706 Max. range 300: Above Horizon: 1 / 1  = 1.000
2021-06-16 19:53:15,650:INFO:calculate_airborne_metrics.py:706 Max. range 300: All: 2 / 2  = 1.000
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:863 Detected Encounters based on Tracking: 
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:706 Max. range 300: Below Horizon: 0 / 0  = 0.000
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:706 Max. range 300: Mixed: 1 / 1  = 1.000
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:706 Max. range 300: Above Horizon: 1 / 1  = 1.000
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:706 Max. range 300: All: 2 / 2  = 1.000
2021-06-16 19:53:15,651:INFO:calculate_airborne_metrics.py:868 Saving summary to data/evaluation/result/result_metrics_min_track_len_0/summary_far_0_0_min_intruder_fl_dr_0p5_in_win_30.json
In [33]:
!cat data/evaluation/summaries/*_for_ranking.csv
#,Algorithm,Score,FPPI,AFDR
0,result_metrics_min_track_len_0,0.9851007461547852,0.0,0.9852941176470589
#,Algorithm,Score,HFAR,"EDR All, Tracking"
0,result_metrics_min_track_len_0,0.9851007461547852,0.0,1.0

The tricky bits

  • Use a groundtruth.json that contains flight information only for the flights you are evaluating on (an example is shared in this notebook).
  • The metrics codebase uses a different bbox format than the one in the ground truth & submissions. Use the convert_and_copy_generated_results_to_metrics_folder() function to convert, as in the example above.

Getting different results in your submission vs. your local run for SiamMOT?

  • Check the PyTorch version, maskrcnn, etc. We have provided a Dockerfile in the repository which generates exactly the same result.json as the one downloaded in this notebook.
  • Have you downloaded the full flights or only partial flights locally?
  • Check that you are using MIN_TRACK_LEN = 30 and MIN_SCORE = 0.985.

Are you sure everything is the same?

Please let us know on Discord or Discourse and we can help debug further. Cheers!

πŸ‘‹ πŸ‘‹ πŸ‘‹ πŸ‘‹

