Loading

Object Detection

Solution for submission 157271

A detailed solution for submission 157271 submitted for challenge Object Detection

konstantin_diachkov

Starter Code for Object Detection

What we are going to Learn

Note : Create a copy of the notebook and use the copy for submission. Go to File > Save a Copy in Drive to create a new copy

Downloading Dataset

Installing aicrowd-cli

In [1]:
!pip install aicrowd-cli
%load_ext aicrowd.magic
Requirement already satisfied: aicrowd-cli in /usr/local/lib/python3.7/dist-packages (0.1.10)
Requirement already satisfied: GitPython==3.1.18 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (3.1.18)
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Requirement already satisfied: tqdm<5,>=4.56.0 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (4.62.2)
Requirement already satisfied: requests<3,>=2.25.1 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (2.26.0)
Requirement already satisfied: rich<11,>=10.0.0 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (10.10.0)
Requirement already satisfied: pyzmq==22.1.0 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (22.1.0)
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Requirement already satisfied: requests-toolbelt<1,>=0.9.1 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.9.1)
Requirement already satisfied: gitdb<5,>=4.0.1 in /usr/local/lib/python3.7/dist-packages (from GitPython==3.1.18->aicrowd-cli) (4.0.7)
Requirement already satisfied: typing-extensions>=3.7.4.0 in /usr/local/lib/python3.7/dist-packages (from GitPython==3.1.18->aicrowd-cli) (3.7.4.3)
Requirement already satisfied: smmap<5,>=3.0.1 in /usr/local/lib/python3.7/dist-packages (from gitdb<5,>=4.0.1->GitPython==3.1.18->aicrowd-cli) (4.0.0)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.0.5)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2021.5.30)
Requirement already satisfied: colorama<0.5.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.4.4)
Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.9.1)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
In [2]:
%aicrowd login
Please login here: https://api.aicrowd.com/auth/NdO9b6vBtOqzIO3pPwPuUMrl4yK0QS4LBJoayEfIMvA
API Key valid
Saved API Key successfully!
In [ ]:
!rm -rf data
!mkdir data
%aicrowd ds dl -c object-detection -o data
In [ ]:
!unzip data/train.zip -d data/train > /dev/null
!unzip data/test.zip -d data/test > /dev/null

Downloading & Importing Libraries

In this baseline, we will be using detectron2 to train and generate our predictions. Detectron2 is a library to Facebookresearch mainly used for object detection, semantic segmentation and other similar computer vision tasks

In [ ]:
!pip install -U torch torchvision
!pip install git+https://github.com/facebookresearch/fvcore.git
import torch, torchvision
torch.__version__
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.9.0+cu102)
Requirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (0.10.0+cu102)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch) (3.7.4.3)
Requirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision) (1.19.5)
Collecting git+https://github.com/facebookresearch/fvcore.git
  Cloning https://github.com/facebookresearch/fvcore.git to /tmp/pip-req-build-1ihkwp6p
  Running command git clone -q https://github.com/facebookresearch/fvcore.git /tmp/pip-req-build-1ihkwp6p
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from fvcore==0.1.5) (1.19.5)
Collecting yacs>=0.1.6
  Downloading yacs-0.1.8-py3-none-any.whl (14 kB)
Collecting pyyaml>=5.1
  Downloading PyYAML-5.4.1-cp37-cp37m-manylinux1_x86_64.whl (636 kB)
     |████████████████████████████████| 636 kB 5.3 MB/s 
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from fvcore==0.1.5) (4.62.2)
Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.7/dist-packages (from fvcore==0.1.5) (1.1.0)
Requirement already satisfied: Pillow in /usr/local/lib/python3.7/dist-packages (from fvcore==0.1.5) (7.1.2)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from fvcore==0.1.5) (0.8.9)
Collecting iopath>=0.1.7
  Downloading iopath-0.1.9-py3-none-any.whl (27 kB)
Collecting portalocker
  Downloading portalocker-2.3.2-py2.py3-none-any.whl (15 kB)
Building wheels for collected packages: fvcore
  Building wheel for fvcore (setup.py) ... done
  Created wheel for fvcore: filename=fvcore-0.1.5-py3-none-any.whl size=64553 sha256=9ae1044912cf1ab6a98e7f2fb8d5ad079c21737a77b195c1d9c72ab4fbc3f56a
  Stored in directory: /tmp/pip-ephem-wheel-cache-dbf7vqec/wheels/24/1d/09/8167de727fe5b74f832b6fcb5d9069d8f03ca29f337bfe484d
Successfully built fvcore
Installing collected packages: pyyaml, portalocker, yacs, iopath, fvcore
  Attempting uninstall: pyyaml
    Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
Successfully installed fvcore-0.1.5 iopath-0.1.9 portalocker-2.3.2 pyyaml-5.4.1 yacs-0.1.8
Out[ ]:
'1.9.0+cu102'
In [ ]:
!git clone https://github.com/facebookresearch/detectron2 detectron2_repo
!pip install -e detectron2_repo
fatal: destination path 'detectron2_repo' already exists and is not an empty directory.
Obtaining file:///content/detectron2_repo
Requirement already satisfied: Pillow>=7.1 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (7.1.2)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (3.2.2)
Requirement already satisfied: pycocotools>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (2.0.2)
Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (1.1.0)
Requirement already satisfied: yacs>=0.1.6 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (0.1.8)
Requirement already satisfied: tabulate in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (0.8.9)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (1.3.0)
Requirement already satisfied: tqdm>4.29.0 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (4.62.2)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (2.6.0)
Requirement already satisfied: fvcore<0.1.6,>=0.1.5 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (0.1.5)
Requirement already satisfied: iopath<0.1.10,>=0.1.7 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (0.1.9)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (0.16.0)
Requirement already satisfied: pydot in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (1.3.0)
Requirement already satisfied: omegaconf>=2.1 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (2.1.1)
Requirement already satisfied: hydra-core>=1.1 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (1.1.1)
Requirement already satisfied: black==21.4b2 in /usr/local/lib/python3.7/dist-packages (from detectron2==0.5) (21.4b2)
Requirement already satisfied: mypy-extensions>=0.4.3 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (0.4.3)
Requirement already satisfied: typing-extensions>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (3.7.4.3)
Requirement already satisfied: click>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (7.1.2)
Requirement already satisfied: pathspec<1,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (0.9.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (1.4.4)
Requirement already satisfied: toml>=0.10.1 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (0.10.2)
Requirement already satisfied: typed-ast>=1.4.2 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (1.4.3)
Requirement already satisfied: regex>=2020.1.8 in /usr/local/lib/python3.7/dist-packages (from black==21.4b2->detectron2==0.5) (2021.8.28)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from fvcore<0.1.6,>=0.1.5->detectron2==0.5) (1.19.5)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.7/dist-packages (from fvcore<0.1.6,>=0.1.5->detectron2==0.5) (5.4.1)
Requirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from hydra-core>=1.1->detectron2==0.5) (5.2.2)
Requirement already satisfied: antlr4-python3-runtime==4.8 in /usr/local/lib/python3.7/dist-packages (from hydra-core>=1.1->detectron2==0.5) (4.8)
Requirement already satisfied: portalocker in /usr/local/lib/python3.7/dist-packages (from iopath<0.1.10,>=0.1.7->detectron2==0.5) (2.3.2)
Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.7/dist-packages (from pycocotools>=2.0.2->detectron2==0.5) (57.4.0)
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.7/dist-packages (from pycocotools>=2.0.2->detectron2==0.5) (0.29.24)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2==0.5) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2==0.5) (2.8.2)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2==0.5) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->detectron2==0.5) (1.3.2)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->detectron2==0.5) (1.15.0)
Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from importlib-resources->hydra-core>=1.1->detectron2==0.5) (3.5.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (1.35.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (1.8.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (0.4.6)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (2.26.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (0.37.0)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (1.40.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (3.3.4)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (3.17.3)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (0.6.1)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (0.12.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->detectron2==0.5) (1.0.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.5) (4.2.2)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.5) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.5) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.5) (1.3.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->detectron2==0.5) (4.8.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard->detectron2==0.5) (0.4.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.5) (2021.5.30)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.5) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.5) (1.24.3)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.5) (2.0.5)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.5) (3.1.1)
Installing collected packages: detectron2
  Attempting uninstall: detectron2
    Found existing installation: detectron2 0.5
    Can't uninstall 'detectron2'. No files were found to uninstall.
  Running setup.py develop for detectron2
Successfully installed detectron2-0.5
In [ ]:
# Detection
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()

# import some common libraries
import numpy as np
import os, json, cv2, random
from glob import glob
from PIL import Image
from natsort import natsorted
from tqdm.notebook import tqdm
from google.colab.patches import cv2_imshow
from sklearn.model_selection import train_test_split
import pandas as pd

# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog

Creating the dataset

Here, we create registering our training dataset that detectron2 will be using during training

In [ ]:
ann_dir = "./data"

f = open(os.path.join(ann_dir,'train.json'),'r')
data = json.loads(f.read())

for i in data['annotations']:
    i['iscrowd'] = 0

ant = pd.DataFrame(data['annotations'])

splits = train_test_split(ant['image_id'].unique(), test_size=0.25, random_state=201)
print(len(splits[0]),len(splits[1]))

train_annotations = []
val_annotations = []
for i in data['annotations']:
    if i['image_id'] in splits[0]:
        train_annotations.append(i)
    elif i['image_id'] in splits[1]:
        val_annotations.append(i)
        
print(len(train_annotations),len(val_annotations))

train_ids = [i['id'] for i in train_annotations]
val_ids = [i['id'] for i in val_annotations]

train_images = []
val_images = []

for i in data['images']:
    img_id = int(i['file_name'].split(".")[0])
    if img_id in splits[0]:
        train_images.append(i)
    elif img_id in splits[1]:
        val_images.append(i)
        
train_data = data.copy()
train_data['annotations'] = train_annotations
train_data['images'] = train_images

val_data = data.copy()
val_data['annotations'] = val_annotations
val_data['images'] = val_images

import json
with open('train_data.json', 'w') as f:
    json.dump(train_data, f)
    
with open('val_data.json', 'w') as f:
    json.dump(val_data, f)
2250 750
6580 2188
In [ ]:
!mkdir dataset
!mkdir dataset/train
!mkdir dataset/val
In [ ]:
import shutil

s_dir = "./data/train"
d_dir = "dataset/train"

for fname in tqdm(train_data['images']):
    shutil.copy2(os.path.join(s_dir,fname['file_name']), os.path.join(d_dir,fname['file_name']))
    
s_dir = "./data/train"
d_dir = "dataset/val"

for fname in tqdm(val_data['images']):
    shutil.copy2(os.path.join(s_dir,fname['file_name']), os.path.join(d_dir,fname['file_name']))
In [ ]:

In [ ]:
data_dir = "dataset"

register_coco_instances("train", {}, "train_data.json", os.path.join(data_dir, "train"))
register_coco_instances("val", {}, "val_data.json", os.path.join(data_dir, "val"))

Visualizing the dataset

In [ ]:
vehicle_metadata = MetadataCatalog.get("train")
dataset_dicts = DatasetCatalog.get("train")
WARNING [09/19 06:31:19 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:31:19 d2.data.datasets.coco]: train_data.json contains 6580 annotations, but only 4944 of them match to images in the file.
[09/19 06:31:19 d2.data.datasets.coco]: Loaded 2249 images in COCO format from train_data.json
In [ ]:
import random

for d in random.sample(dataset_dicts, 3):
    img = cv2.imread(d["file_name"])
    visualizer = Visualizer(img[:, :, ::-1], metadata=vehicle_metadata, scale=0.5)
    vis = visualizer.draw_dataset_dict(d)
    cv2_imshow(vis.get_image()[:, :, ::-1])

Creating the model

In [ ]:
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class CocoTrainer(DefaultTrainer):

  @classmethod
  def build_evaluator(cls, cfg, dataset_name, output_folder=None):

    if output_folder is None:
        os.makedirs("coco_eval", exist_ok=True)
        output_folder = "coco_eval"

    return COCOEvaluator(dataset_name, cfg, False, output_folder)
In [ ]:
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("train",)
cfg.DATASETS.TEST = ('val',)

cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")  # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.001


cfg.SOLVER.MAX_ITER = 8000
cfg.SOLVER.STEPS = []

cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4
cfg.TEST.EVAL_PERIOD = 100

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)
[09/19 06:31:50 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): FPN(
    (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (top_block): LastLevelMaxPool()
    (bottom_up): ResNet(
      (stem): BasicStem(
        (conv1): Conv2d(
          3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
      )
      (res2): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv1): Conv2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
      )
      (res3): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv1): Conv2d(
            256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
      )
      (res4): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
          (conv1): Conv2d(
            512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (4): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (5): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
      )
      (res5): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
          (conv1): Conv2d(
            1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (fc1): Linear(in_features=12544, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=5, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=16, bias=True)
    )
  )
)
WARNING [09/19 06:31:51 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:31:51 d2.data.datasets.coco]: train_data.json contains 6580 annotations, but only 4944 of them match to images in the file.
[09/19 06:31:51 d2.data.datasets.coco]: Loaded 2249 images in COCO format from train_data.json
[09/19 06:31:51 d2.data.build]: Removed 564 images with no usable annotations. 1685 images left.
[09/19 06:31:51 d2.data.build]: Distribution of instances among all 4 categories:
|  category  | #instances   |  category  | #instances   |   category    | #instances   |
|:----------:|:-------------|:----------:|:-------------|:-------------:|:-------------|
|  bicycle   | 1021         | motorcycle | 1179         | passenger_car | 1720         |
|   person   | 1024         |            |              |               |              |
|   total    | 4944         |            |              |               |              |
[09/19 06:31:51 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[09/19 06:31:51 d2.data.build]: Using training sampler TrainingSampler
[09/19 06:31:51 d2.data.common]: Serializing 1685 elements to byte tensors and concatenating them all ...
[09/19 06:31:51 d2.data.common]: Serialized dataset takes 0.58 MiB
model_final_280758.pkl: 167MB [00:06, 25.8MB/s]                           
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (5, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (5,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (16, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (16,) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint:
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}

Training the model 🚂

In [ ]:
trainer.train()
[09/19 06:32:15 d2.engine.train_loop]: Starting training from iteration 0
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
[09/19 06:33:00 d2.utils.events]:  eta: 1:16:24  iter: 19  total_loss: 2.5  loss_cls: 1.534  loss_box_reg: 0.7292  loss_rpn_cls: 0.175  loss_rpn_loc: 0.1827  time: 2.2394  data_time: 0.0412  lr: 1.9981e-05  max_mem: 4113M
[09/19 06:33:45 d2.utils.events]:  eta: 1:15:43  iter: 39  total_loss: 2.046  loss_cls: 1.182  loss_box_reg: 0.6015  loss_rpn_cls: 0.06797  loss_rpn_loc: 0.1236  time: 2.2522  data_time: 0.0302  lr: 3.9961e-05  max_mem: 4113M
[09/19 06:34:29 d2.utils.events]:  eta: 1:14:51  iter: 59  total_loss: 1.489  loss_cls: 0.7214  loss_box_reg: 0.5271  loss_rpn_cls: 0.05862  loss_rpn_loc: 0.1579  time: 2.2184  data_time: 0.0300  lr: 5.9941e-05  max_mem: 4113M
[09/19 06:35:14 d2.utils.events]:  eta: 1:14:12  iter: 79  total_loss: 1.262  loss_cls: 0.5522  loss_box_reg: 0.5339  loss_rpn_cls: 0.05197  loss_rpn_loc: 0.134  time: 2.2288  data_time: 0.0262  lr: 7.9921e-05  max_mem: 4113M
WARNING [09/19 06:35:59 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:35:59 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 06:35:59 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 06:35:59 d2.data.build]: Distribution of instances among all 4 categories:
|  category  | #instances   |  category  | #instances   |   category    | #instances   |
|:----------:|:-------------|:----------:|:-------------|:-------------:|:-------------|
|  bicycle   | 99           | motorcycle | 131          | passenger_car | 181          |
|   person   | 90           |            |              |               |              |
|   total    | 501          |            |              |               |              |
[09/19 06:35:59 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 06:35:59 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 06:35:59 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 06:35:59 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 06:35:59 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 06:36:03 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0019 s/iter. Inference: 0.3198 s/iter. Eval: 0.0003 s/iter. Total: 0.3220 s/iter. ETA=0:03:57
[09/19 06:36:08 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0026 s/iter. Inference: 0.3201 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:03:53
[09/19 06:36:13 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:03:48
[09/19 06:36:18 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0024 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:03:43
[09/19 06:36:23 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0024 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:03:38
[09/19 06:36:28 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0023 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:03:33
[09/19 06:36:34 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0023 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:27
[09/19 06:36:39 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0023 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:22
[09/19 06:36:44 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0023 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:03:17
[09/19 06:36:49 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0023 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:12
[09/19 06:36:54 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0023 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:06
[09/19 06:36:59 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0023 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:03:01
[09/19 06:37:05 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0023 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:56
[09/19 06:37:10 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0023 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:51
[09/19 06:37:15 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0024 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:46
[09/19 06:37:20 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:41
[09/19 06:37:25 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:35
[09/19 06:37:30 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:30
[09/19 06:37:36 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:25
[09/19 06:37:41 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:20
[09/19 06:37:46 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0023 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:15
[09/19 06:37:51 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0023 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:10
[09/19 06:37:56 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:04
[09/19 06:38:01 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:59
[09/19 06:38:07 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:54
[09/19 06:38:12 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:49
[09/19 06:38:17 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:44
[09/19 06:38:22 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:01:39
[09/19 06:38:28 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0024 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:01:34
[09/19 06:38:33 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:01:28
[09/19 06:38:38 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:01:23
[09/19 06:38:43 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:01:18
[09/19 06:38:48 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:01:13
[09/19 06:38:53 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:01:08
[09/19 06:38:59 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0024 s/iter. Inference: 0.3202 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:01:03
[09/19 06:39:04 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:57
[09/19 06:39:09 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:52
[09/19 06:39:14 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:47
[09/19 06:39:19 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0024 s/iter. Inference: 0.3202 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:42
[09/19 06:39:24 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:37
[09/19 06:39:30 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:32
[09/19 06:39:35 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0024 s/iter. Inference: 0.3202 s/iter. Eval: 0.0004 s/iter. Total: 0.3233 s/iter. ETA=0:00:26
[09/19 06:39:40 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:00:21
[09/19 06:39:45 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:00:16
[09/19 06:39:50 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:00:11
[09/19 06:39:55 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0024 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:00:06
[09/19 06:40:01 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3235 s/iter. ETA=0:00:00
[09/19 06:40:02 d2.evaluation.evaluator]: Total inference time: 0:04:01.120317 (0.323651 s / iter per device, on 1 devices)
[09/19 06:40:02 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.320504 s / iter per device, on 1 devices)
[09/19 06:40:02 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 06:40:02 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 06:40:03 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.32s)
creating index...
index created!
[09/19 06:40:03 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 06:40:04 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.70 seconds.
[09/19 06:40:04 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 06:40:04 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.24 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.005
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.016
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.002
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.013
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.008
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.046
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.134
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.167
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.123
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.361
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.302
[09/19 06:40:04 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm  |  APl  |
|:-----:|:------:|:------:|:-----:|:-----:|:-----:|
| 0.474 | 1.609  | 0.195  | 0.700 | 1.279 | 0.828 |
[09/19 06:40:04 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP    |
|:-----------|:------|:-----------|:------|:--------------|:------|
| bicycle    | 0.078 | motorcycle | 0.200 | passenger_car | 1.439 |
| person     | 0.178 |            |       |               |       |
[09/19 06:40:04 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 06:40:04 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 06:40:04 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 06:40:04 d2.evaluation.testing]: copypaste: 0.4740,1.6091,0.1945,0.6996,1.2794,0.8284
[09/19 06:40:04 d2.utils.events]:  eta: 1:13:26  iter: 99  total_loss: 1.235  loss_cls: 0.5108  loss_box_reg: 0.5011  loss_rpn_cls: 0.05452  loss_rpn_loc: 0.1402  time: 2.2308  data_time: 0.0353  lr: 9.9901e-05  max_mem: 4113M
[09/19 06:40:50 d2.utils.events]:  eta: 1:12:36  iter: 119  total_loss: 1.311  loss_cls: 0.53  loss_box_reg: 0.5407  loss_rpn_cls: 0.05079  loss_rpn_loc: 0.1377  time: 2.2373  data_time: 0.0262  lr: 0.00011988  max_mem: 4113M
[09/19 06:41:34 d2.utils.events]:  eta: 1:11:50  iter: 139  total_loss: 1.308  loss_cls: 0.5074  loss_box_reg: 0.5759  loss_rpn_cls: 0.05624  loss_rpn_loc: 0.1308  time: 2.2348  data_time: 0.0296  lr: 0.00013986  max_mem: 4113M
[09/19 06:42:19 d2.utils.events]:  eta: 1:11:03  iter: 159  total_loss: 1.312  loss_cls: 0.5081  loss_box_reg: 0.571  loss_rpn_cls: 0.03517  loss_rpn_loc: 0.1398  time: 2.2364  data_time: 0.0384  lr: 0.00015984  max_mem: 4113M
[09/19 06:43:05 d2.utils.events]:  eta: 1:10:24  iter: 179  total_loss: 1.253  loss_cls: 0.4538  loss_box_reg: 0.6161  loss_rpn_cls: 0.04179  loss_rpn_loc: 0.1152  time: 2.2409  data_time: 0.0267  lr: 0.00017982  max_mem: 4113M
WARNING [09/19 06:43:50 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:43:50 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 06:43:50 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 06:43:50 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 06:43:50 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 06:43:50 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 06:43:50 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 06:43:50 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 06:43:54 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0023 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:58
[09/19 06:43:59 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0024 s/iter. Inference: 0.3214 s/iter. Eval: 0.0004 s/iter. Total: 0.3244 s/iter. ETA=0:03:54
[09/19 06:44:04 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0025 s/iter. Inference: 0.3213 s/iter. Eval: 0.0004 s/iter. Total: 0.3243 s/iter. ETA=0:03:49
[09/19 06:44:10 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0026 s/iter. Inference: 0.3210 s/iter. Eval: 0.0004 s/iter. Total: 0.3242 s/iter. ETA=0:03:44
[09/19 06:44:15 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0024 s/iter. Inference: 0.3215 s/iter. Eval: 0.0004 s/iter. Total: 0.3246 s/iter. ETA=0:03:39
[09/19 06:44:20 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0024 s/iter. Inference: 0.3212 s/iter. Eval: 0.0004 s/iter. Total: 0.3242 s/iter. ETA=0:03:33
[09/19 06:44:25 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0025 s/iter. Inference: 0.3208 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:03:28
[09/19 06:44:30 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0025 s/iter. Inference: 0.3208 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:03:23
[09/19 06:44:35 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:03:17
[09/19 06:44:41 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:03:12
[09/19 06:44:46 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0026 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:03:07
[09/19 06:44:51 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0025 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:03:02
[09/19 06:44:56 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:02:57
[09/19 06:45:01 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0025 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:51
[09/19 06:45:07 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0026 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:02:46
[09/19 06:45:12 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:02:41
[09/19 06:45:17 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:36
[09/19 06:45:22 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:02:31
[09/19 06:45:27 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0025 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:02:26
[09/19 06:45:32 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:20
[09/19 06:45:38 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0025 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:15
[09/19 06:45:43 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:10
[09/19 06:45:48 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:05
[09/19 06:45:53 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:02:00
[09/19 06:45:58 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3236 s/iter. ETA=0:01:54
[09/19 06:46:04 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:49
[09/19 06:46:09 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:44
[09/19 06:46:14 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:39
[09/19 06:46:19 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0025 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:01:34
[09/19 06:46:24 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:01:29
[09/19 06:46:29 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:01:23
[09/19 06:46:35 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:18
[09/19 06:46:40 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:13
[09/19 06:46:45 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0025 s/iter. Inference: 0.3205 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:01:08
[09/19 06:46:50 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:01:03
[09/19 06:46:55 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:00:57
[09/19 06:47:01 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0025 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3238 s/iter. ETA=0:00:52
[09/19 06:47:06 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:47
[09/19 06:47:11 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:42
[09/19 06:47:16 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:37
[09/19 06:47:21 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:32
[09/19 06:47:27 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:26
[09/19 06:47:32 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:21
[09/19 06:47:37 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:16
[09/19 06:47:42 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:11
[09/19 06:47:47 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:06
[09/19 06:47:52 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3239 s/iter. ETA=0:00:00
[09/19 06:47:53 d2.evaluation.evaluator]: Total inference time: 0:04:01.363686 (0.323978 s / iter per device, on 1 devices)
[09/19 06:47:53 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.320669 s / iter per device, on 1 devices)
[09/19 06:47:54 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 06:47:54 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 06:47:54 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.31s)
creating index...
index created!
[09/19 06:47:55 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 06:47:56 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 1.01 seconds.
[09/19 06:47:56 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 06:47:56 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.21 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.023
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.062
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.008
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.025
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.055
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.025
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.117
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.227
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.243
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.195
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.451
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.381
[09/19 06:47:56 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm  |  APl  |
|:-----:|:------:|:------:|:-----:|:-----:|:-----:|
| 2.307 | 6.227  | 0.820  | 2.460 | 5.549 | 2.509 |
[09/19 06:47:56 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP    |
|:-----------|:------|:-----------|:------|:--------------|:------|
| bicycle    | 0.844 | motorcycle | 1.835 | passenger_car | 5.003 |
| person     | 1.545 |            |       |               |       |
[09/19 06:47:56 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 06:47:56 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 06:47:56 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 06:47:56 d2.evaluation.testing]: copypaste: 2.3066,6.2270,0.8201,2.4595,5.5493,2.5093
[09/19 06:47:56 d2.utils.events]:  eta: 1:09:37  iter: 199  total_loss: 1.214  loss_cls: 0.4378  loss_box_reg: 0.6233  loss_rpn_cls: 0.03109  loss_rpn_loc: 0.09439  time: 2.2446  data_time: 0.0296  lr: 0.0001998  max_mem: 4113M
[09/19 06:48:41 d2.utils.events]:  eta: 1:08:49  iter: 219  total_loss: 1.115  loss_cls: 0.3869  loss_box_reg: 0.5718  loss_rpn_cls: 0.03194  loss_rpn_loc: 0.1185  time: 2.2443  data_time: 0.0300  lr: 0.00021978  max_mem: 4113M
[09/19 06:49:25 d2.utils.events]:  eta: 1:07:56  iter: 239  total_loss: 1.198  loss_cls: 0.4246  loss_box_reg: 0.5802  loss_rpn_cls: 0.03547  loss_rpn_loc: 0.1338  time: 2.2396  data_time: 0.0330  lr: 0.00023976  max_mem: 4113M
[09/19 06:50:11 d2.utils.events]:  eta: 1:07:25  iter: 259  total_loss: 1.11  loss_cls: 0.3575  loss_box_reg: 0.5796  loss_rpn_cls: 0.03532  loss_rpn_loc: 0.1172  time: 2.2455  data_time: 0.0284  lr: 0.00025974  max_mem: 4113M
[09/19 06:50:56 d2.utils.events]:  eta: 1:06:32  iter: 279  total_loss: 1.156  loss_cls: 0.3536  loss_box_reg: 0.6426  loss_rpn_cls: 0.03612  loss_rpn_loc: 0.1418  time: 2.2456  data_time: 0.0326  lr: 0.00027972  max_mem: 4113M
WARNING [09/19 06:51:42 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:51:42 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 06:51:42 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 06:51:42 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 06:51:42 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 06:51:42 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 06:51:42 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 06:51:42 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 06:51:46 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0018 s/iter. Inference: 0.3174 s/iter. Eval: 0.0004 s/iter. Total: 0.3196 s/iter. ETA=0:03:56
[09/19 06:51:51 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0025 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3235 s/iter. ETA=0:03:53
[09/19 06:51:56 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0023 s/iter. Inference: 0.3197 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:03:48
[09/19 06:52:01 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0023 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:03:43
[09/19 06:52:06 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0023 s/iter. Inference: 0.3200 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:03:38
[09/19 06:52:11 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0025 s/iter. Inference: 0.3194 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:03:32
[09/19 06:52:17 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0024 s/iter. Inference: 0.3190 s/iter. Eval: 0.0004 s/iter. Total: 0.3221 s/iter. ETA=0:03:27
[09/19 06:52:22 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0024 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3222 s/iter. ETA=0:03:22
[09/19 06:52:27 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0025 s/iter. Inference: 0.3189 s/iter. Eval: 0.0004 s/iter. Total: 0.3220 s/iter. ETA=0:03:16
[09/19 06:52:32 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0025 s/iter. Inference: 0.3189 s/iter. Eval: 0.0004 s/iter. Total: 0.3221 s/iter. ETA=0:03:11
[09/19 06:52:37 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0025 s/iter. Inference: 0.3191 s/iter. Eval: 0.0004 s/iter. Total: 0.3223 s/iter. ETA=0:03:06
[09/19 06:52:42 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0025 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:03:01
[09/19 06:52:48 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0025 s/iter. Inference: 0.3193 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:56
[09/19 06:52:53 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0025 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:02:51
[09/19 06:52:58 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0026 s/iter. Inference: 0.3193 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:46
[09/19 06:53:03 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:40
[09/19 06:53:08 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:35
[09/19 06:53:13 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0026 s/iter. Inference: 0.3193 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:02:30
[09/19 06:53:19 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:25
[09/19 06:53:24 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0026 s/iter. Inference: 0.3193 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:20
[09/19 06:53:29 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:02:15
[09/19 06:53:34 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:02:09
[09/19 06:53:39 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:02:04
[09/19 06:53:44 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:01:59
[09/19 06:53:49 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0026 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:01:54
[09/19 06:53:55 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0025 s/iter. Inference: 0.3194 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:01:49
[09/19 06:54:00 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0025 s/iter. Inference: 0.3194 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:01:44
[09/19 06:54:05 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0025 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:01:39
[09/19 06:54:10 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0025 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:01:33
[09/19 06:54:15 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0025 s/iter. Inference: 0.3194 s/iter. Eval: 0.0004 s/iter. Total: 0.3225 s/iter. ETA=0:01:28
[09/19 06:54:21 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:23
[09/19 06:54:26 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:18
[09/19 06:54:31 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:13
[09/19 06:54:36 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:08
[09/19 06:54:41 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0025 s/iter. Inference: 0.3197 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:02
[09/19 06:54:46 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:57
[09/19 06:54:52 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0025 s/iter. Inference: 0.3197 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:52
[09/19 06:54:57 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:47
[09/19 06:55:02 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:42
[09/19 06:55:07 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:37
[09/19 06:55:12 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:31
[09/19 06:55:17 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0026 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:00:26
[09/19 06:55:23 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:21
[09/19 06:55:28 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:16
[09/19 06:55:33 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:11
[09/19 06:55:38 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:06
[09/19 06:55:43 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:00
[09/19 06:55:44 d2.evaluation.evaluator]: Total inference time: 0:04:00.530030 (0.322859 s / iter per device, on 1 devices)
[09/19 06:55:44 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.319582 s / iter per device, on 1 devices)
[09/19 06:55:45 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 06:55:45 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 06:55:45 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.31s)
creating index...
index created!
[09/19 06:55:45 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 06:55:46 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.65 seconds.
[09/19 06:55:46 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 06:55:46 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.21 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.031
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.085
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.011
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.025
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.083
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.042
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.142
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.261
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.275
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.227
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.485
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.378
[09/19 06:55:46 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm  |  APl  |
|:-----:|:------:|:------:|:-----:|:-----:|:-----:|
| 3.124 | 8.511  | 1.130  | 2.524 | 8.311 | 4.228 |
[09/19 06:55:46 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP    |
|:-----------|:------|:-----------|:------|:--------------|:------|
| bicycle    | 1.561 | motorcycle | 2.626 | passenger_car | 6.590 |
| person     | 1.721 |            |       |               |       |
[09/19 06:55:46 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 06:55:46 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 06:55:46 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 06:55:46 d2.evaluation.testing]: copypaste: 3.1242,8.5105,1.1302,2.5239,8.3110,4.2276
[09/19 06:55:46 d2.utils.events]:  eta: 1:06:00  iter: 299  total_loss: 1.101  loss_cls: 0.3593  loss_box_reg: 0.5613  loss_rpn_cls: 0.03662  loss_rpn_loc: 0.1109  time: 2.2485  data_time: 0.0284  lr: 0.0002997  max_mem: 4113M
[09/19 06:56:31 d2.utils.events]:  eta: 1:05:12  iter: 319  total_loss: 1.026  loss_cls: 0.3369  loss_box_reg: 0.5918  loss_rpn_cls: 0.0265  loss_rpn_loc: 0.1044  time: 2.2482  data_time: 0.0326  lr: 0.00031968  max_mem: 4113M
[09/19 06:57:16 d2.utils.events]:  eta: 1:04:25  iter: 339  total_loss: 1.136  loss_cls: 0.3398  loss_box_reg: 0.6353  loss_rpn_cls: 0.02853  loss_rpn_loc: 0.1289  time: 2.2479  data_time: 0.0329  lr: 0.00033966  max_mem: 4113M
[09/19 06:58:03 d2.utils.events]:  eta: 1:03:45  iter: 359  total_loss: 0.9925  loss_cls: 0.2872  loss_box_reg: 0.5638  loss_rpn_cls: 0.02792  loss_rpn_loc: 0.1161  time: 2.2522  data_time: 0.0370  lr: 0.00035964  max_mem: 4113M
[09/19 06:58:47 d2.utils.events]:  eta: 1:02:53  iter: 379  total_loss: 1.07  loss_cls: 0.3085  loss_box_reg: 0.6251  loss_rpn_cls: 0.02949  loss_rpn_loc: 0.1058  time: 2.2510  data_time: 0.0268  lr: 0.00037962  max_mem: 4113M
WARNING [09/19 06:59:33 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 06:59:33 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 06:59:33 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 06:59:33 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 06:59:33 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 06:59:33 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 06:59:33 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 06:59:33 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 06:59:37 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0024 s/iter. Inference: 0.3224 s/iter. Eval: 0.0004 s/iter. Total: 0.3253 s/iter. ETA=0:04:00
[09/19 06:59:42 d2.evaluation.evaluator]: Inference done 26/750. Dataloading: 0.0025 s/iter. Inference: 0.3215 s/iter. Eval: 0.0106 s/iter. Total: 0.3350 s/iter. ETA=0:04:02
[09/19 06:59:47 d2.evaluation.evaluator]: Inference done 42/750. Dataloading: 0.0024 s/iter. Inference: 0.3217 s/iter. Eval: 0.0062 s/iter. Total: 0.3305 s/iter. ETA=0:03:53
[09/19 06:59:52 d2.evaluation.evaluator]: Inference done 58/750. Dataloading: 0.0023 s/iter. Inference: 0.3220 s/iter. Eval: 0.0045 s/iter. Total: 0.3291 s/iter. ETA=0:03:47
[09/19 06:59:57 d2.evaluation.evaluator]: Inference done 74/750. Dataloading: 0.0026 s/iter. Inference: 0.3219 s/iter. Eval: 0.0035 s/iter. Total: 0.3282 s/iter. ETA=0:03:41
[09/19 07:00:03 d2.evaluation.evaluator]: Inference done 90/750. Dataloading: 0.0025 s/iter. Inference: 0.3214 s/iter. Eval: 0.0029 s/iter. Total: 0.3271 s/iter. ETA=0:03:35
[09/19 07:00:08 d2.evaluation.evaluator]: Inference done 106/750. Dataloading: 0.0025 s/iter. Inference: 0.3210 s/iter. Eval: 0.0025 s/iter. Total: 0.3262 s/iter. ETA=0:03:30
[09/19 07:00:13 d2.evaluation.evaluator]: Inference done 122/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0023 s/iter. Total: 0.3258 s/iter. ETA=0:03:24
[09/19 07:00:18 d2.evaluation.evaluator]: Inference done 138/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0020 s/iter. Total: 0.3256 s/iter. ETA=0:03:19
[09/19 07:00:23 d2.evaluation.evaluator]: Inference done 154/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0019 s/iter. Total: 0.3254 s/iter. ETA=0:03:13
[09/19 07:00:28 d2.evaluation.evaluator]: Inference done 170/750. Dataloading: 0.0025 s/iter. Inference: 0.3208 s/iter. Eval: 0.0017 s/iter. Total: 0.3253 s/iter. ETA=0:03:08
[09/19 07:00:34 d2.evaluation.evaluator]: Inference done 186/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0016 s/iter. Total: 0.3252 s/iter. ETA=0:03:03
[09/19 07:00:39 d2.evaluation.evaluator]: Inference done 202/750. Dataloading: 0.0025 s/iter. Inference: 0.3209 s/iter. Eval: 0.0015 s/iter. Total: 0.3252 s/iter. ETA=0:02:58
[09/19 07:00:44 d2.evaluation.evaluator]: Inference done 218/750. Dataloading: 0.0026 s/iter. Inference: 0.3209 s/iter. Eval: 0.0014 s/iter. Total: 0.3251 s/iter. ETA=0:02:52
[09/19 07:00:49 d2.evaluation.evaluator]: Inference done 234/750. Dataloading: 0.0026 s/iter. Inference: 0.3209 s/iter. Eval: 0.0014 s/iter. Total: 0.3250 s/iter. ETA=0:02:47
[09/19 07:00:54 d2.evaluation.evaluator]: Inference done 250/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0013 s/iter. Total: 0.3249 s/iter. ETA=0:02:42
[09/19 07:01:00 d2.evaluation.evaluator]: Inference done 266/750. Dataloading: 0.0026 s/iter. Inference: 0.3208 s/iter. Eval: 0.0013 s/iter. Total: 0.3248 s/iter. ETA=0:02:37
[09/19 07:01:05 d2.evaluation.evaluator]: Inference done 282/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0012 s/iter. Total: 0.3247 s/iter. ETA=0:02:31
[09/19 07:01:10 d2.evaluation.evaluator]: Inference done 298/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0012 s/iter. Total: 0.3247 s/iter. ETA=0:02:26
[09/19 07:01:15 d2.evaluation.evaluator]: Inference done 314/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0011 s/iter. Total: 0.3246 s/iter. ETA=0:02:21
[09/19 07:01:20 d2.evaluation.evaluator]: Inference done 330/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0011 s/iter. Total: 0.3246 s/iter. ETA=0:02:16
[09/19 07:01:25 d2.evaluation.evaluator]: Inference done 346/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0011 s/iter. Total: 0.3245 s/iter. ETA=0:02:11
[09/19 07:01:31 d2.evaluation.evaluator]: Inference done 362/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0010 s/iter. Total: 0.3245 s/iter. ETA=0:02:05
[09/19 07:01:36 d2.evaluation.evaluator]: Inference done 378/750. Dataloading: 0.0026 s/iter. Inference: 0.3205 s/iter. Eval: 0.0010 s/iter. Total: 0.3243 s/iter. ETA=0:02:00
[09/19 07:01:41 d2.evaluation.evaluator]: Inference done 394/750. Dataloading: 0.0026 s/iter. Inference: 0.3205 s/iter. Eval: 0.0010 s/iter. Total: 0.3244 s/iter. ETA=0:01:55
[09/19 07:01:46 d2.evaluation.evaluator]: Inference done 410/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0010 s/iter. Total: 0.3244 s/iter. ETA=0:01:50
[09/19 07:01:51 d2.evaluation.evaluator]: Inference done 426/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:45
[09/19 07:01:57 d2.evaluation.evaluator]: Inference done 442/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:39
[09/19 07:02:02 d2.evaluation.evaluator]: Inference done 458/750. Dataloading: 0.0026 s/iter. Inference: 0.3206 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:34
[09/19 07:02:07 d2.evaluation.evaluator]: Inference done 474/750. Dataloading: 0.0026 s/iter. Inference: 0.3207 s/iter. Eval: 0.0009 s/iter. Total: 0.3245 s/iter. ETA=0:01:29
[09/19 07:02:12 d2.evaluation.evaluator]: Inference done 490/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:24
[09/19 07:02:17 d2.evaluation.evaluator]: Inference done 506/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:19
[09/19 07:02:23 d2.evaluation.evaluator]: Inference done 522/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0009 s/iter. Total: 0.3244 s/iter. ETA=0:01:13
[09/19 07:02:28 d2.evaluation.evaluator]: Inference done 538/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0008 s/iter. Total: 0.3244 s/iter. ETA=0:01:08
[09/19 07:02:33 d2.evaluation.evaluator]: Inference done 554/750. Dataloading: 0.0027 s/iter. Inference: 0.3208 s/iter. Eval: 0.0008 s/iter. Total: 0.3245 s/iter. ETA=0:01:03
[09/19 07:02:38 d2.evaluation.evaluator]: Inference done 570/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0008 s/iter. Total: 0.3244 s/iter. ETA=0:00:58
[09/19 07:02:43 d2.evaluation.evaluator]: Inference done 586/750. Dataloading: 0.0027 s/iter. Inference: 0.3208 s/iter. Eval: 0.0008 s/iter. Total: 0.3245 s/iter. ETA=0:00:53
[09/19 07:02:48 d2.evaluation.evaluator]: Inference done 602/750. Dataloading: 0.0027 s/iter. Inference: 0.3208 s/iter. Eval: 0.0008 s/iter. Total: 0.3245 s/iter. ETA=0:00:48
[09/19 07:02:54 d2.evaluation.evaluator]: Inference done 617/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0011 s/iter. Total: 0.3247 s/iter. ETA=0:00:43
[09/19 07:02:59 d2.evaluation.evaluator]: Inference done 633/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3246 s/iter. ETA=0:00:37
[09/19 07:03:04 d2.evaluation.evaluator]: Inference done 649/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3246 s/iter. ETA=0:00:32
[09/19 07:03:09 d2.evaluation.evaluator]: Inference done 665/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3246 s/iter. ETA=0:00:27
[09/19 07:03:14 d2.evaluation.evaluator]: Inference done 681/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3246 s/iter. ETA=0:00:22
[09/19 07:03:19 d2.evaluation.evaluator]: Inference done 697/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3246 s/iter. ETA=0:00:17
[09/19 07:03:25 d2.evaluation.evaluator]: Inference done 713/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0010 s/iter. Total: 0.3245 s/iter. ETA=0:00:12
[09/19 07:03:30 d2.evaluation.evaluator]: Inference done 729/750. Dataloading: 0.0027 s/iter. Inference: 0.3206 s/iter. Eval: 0.0010 s/iter. Total: 0.3245 s/iter. ETA=0:00:06
[09/19 07:03:35 d2.evaluation.evaluator]: Inference done 745/750. Dataloading: 0.0027 s/iter. Inference: 0.3207 s/iter. Eval: 0.0009 s/iter. Total: 0.3245 s/iter. ETA=0:00:01
[09/19 07:03:37 d2.evaluation.evaluator]: Total inference time: 0:04:01.768445 (0.324521 s / iter per device, on 1 devices)
[09/19 07:03:37 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.320640 s / iter per device, on 1 devices)
[09/19 07:03:37 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 07:03:37 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 07:03:37 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.31s)
creating index...
index created!
[09/19 07:03:38 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 07:03:39 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.77 seconds.
[09/19 07:03:39 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 07:03:39 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.21 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.046
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.105
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.035
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.033
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.113
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.069
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.183
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.302
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.314
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.263
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.504
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.419
[09/19 07:03:39 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm   |  APl  |
|:-----:|:------:|:------:|:-----:|:------:|:-----:|
| 4.624 | 10.547 | 3.518  | 3.264 | 11.313 | 6.913 |
[09/19 07:03:39 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP    |
|:-----------|:------|:-----------|:------|:--------------|:------|
| bicycle    | 1.199 | motorcycle | 4.057 | passenger_car | 9.569 |
| person     | 3.672 |            |       |               |       |
[09/19 07:03:39 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 07:03:39 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 07:03:39 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 07:03:39 d2.evaluation.testing]: copypaste: 4.6243,10.5474,3.5177,3.2642,11.3127,6.9130
[09/19 07:03:39 d2.utils.events]:  eta: 1:02:08  iter: 399  total_loss: 1.022  loss_cls: 0.2932  loss_box_reg: 0.5394  loss_rpn_cls: 0.03032  loss_rpn_loc: 0.09765  time: 2.2522  data_time: 0.0393  lr: 0.0003996  max_mem: 4113M
[09/19 07:04:24 d2.utils.events]:  eta: 1:01:24  iter: 419  total_loss: 1.016  loss_cls: 0.3253  loss_box_reg: 0.5535  loss_rpn_cls: 0.02789  loss_rpn_loc: 0.1084  time: 2.2523  data_time: 0.0353  lr: 0.00041958  max_mem: 4113M
[09/19 07:05:08 d2.utils.events]:  eta: 1:00:33  iter: 439  total_loss: 0.8065  loss_cls: 0.2204  loss_box_reg: 0.4804  loss_rpn_cls: 0.02363  loss_rpn_loc: 0.08194  time: 2.2501  data_time: 0.0370  lr: 0.00043956  max_mem: 4113M
[09/19 07:05:54 d2.utils.events]:  eta: 0:59:52  iter: 459  total_loss: 0.9307  loss_cls: 0.2612  loss_box_reg: 0.5077  loss_rpn_cls: 0.03284  loss_rpn_loc: 0.1214  time: 2.2518  data_time: 0.0313  lr: 0.00045954  max_mem: 4113M
[09/19 07:06:40 d2.utils.events]:  eta: 0:59:08  iter: 479  total_loss: 1.032  loss_cls: 0.2767  loss_box_reg: 0.5698  loss_rpn_cls: 0.03027  loss_rpn_loc: 0.1157  time: 2.2532  data_time: 0.0356  lr: 0.00047952  max_mem: 4113M
WARNING [09/19 07:07:24 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 07:07:24 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 07:07:24 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 07:07:24 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 07:07:24 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 07:07:24 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 07:07:24 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 07:07:25 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 07:07:28 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0020 s/iter. Inference: 0.3186 s/iter. Eval: 0.0005 s/iter. Total: 0.3211 s/iter. ETA=0:03:57
[09/19 07:07:33 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0022 s/iter. Inference: 0.3214 s/iter. Eval: 0.0004 s/iter. Total: 0.3243 s/iter. ETA=0:03:54
[09/19 07:07:39 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0026 s/iter. Inference: 0.3209 s/iter. Eval: 0.0004 s/iter. Total: 0.3241 s/iter. ETA=0:03:49
[09/19 07:07:44 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0025 s/iter. Inference: 0.3204 s/iter. Eval: 0.0004 s/iter. Total: 0.3234 s/iter. ETA=0:03:43
[09/19 07:07:49 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0025 s/iter. Inference: 0.3207 s/iter. Eval: 0.0004 s/iter. Total: 0.3237 s/iter. ETA=0:03:38
[09/19 07:07:54 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0025 s/iter. Inference: 0.3203 s/iter. Eval: 0.0004 s/iter. Total: 0.3235 s/iter. ETA=0:03:33
[09/19 07:07:59 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0025 s/iter. Inference: 0.3201 s/iter. Eval: 0.0004 s/iter. Total: 0.3232 s/iter. ETA=0:03:27
[09/19 07:08:04 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:03:22
[09/19 07:08:10 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:03:17
[09/19 07:08:15 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0024 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:03:12
[09/19 07:08:20 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0024 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:03:07
[09/19 07:08:25 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0025 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:03:01
[09/19 07:08:30 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0025 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:02:56
[09/19 07:08:35 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0025 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3230 s/iter. ETA=0:02:51
[09/19 07:08:41 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0025 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:46
[09/19 07:08:46 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0025 s/iter. Inference: 0.3198 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:02:41
[09/19 07:08:51 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:35
[09/19 07:08:56 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:02:30
[09/19 07:09:01 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:02:25
[09/19 07:09:06 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:20
[09/19 07:09:12 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:02:15
[09/19 07:09:17 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:02:10
[09/19 07:09:22 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0025 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:02:04
[09/19 07:09:27 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0025 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:59
[09/19 07:09:32 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0025 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:54
[09/19 07:09:37 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:49
[09/19 07:09:43 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:44
[09/19 07:09:48 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0026 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:39
[09/19 07:09:53 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0026 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:33
[09/19 07:09:58 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0026 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:28
[09/19 07:10:03 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0026 s/iter. Inference: 0.3195 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:23
[09/19 07:10:08 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:18
[09/19 07:10:14 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:01:13
[09/19 07:10:19 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:08
[09/19 07:10:24 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:01:02
[09/19 07:10:29 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:57
[09/19 07:10:34 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:52
[09/19 07:10:39 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:47
[09/19 07:10:45 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0026 s/iter. Inference: 0.3197 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:00:42
[09/19 07:10:50 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0026 s/iter. Inference: 0.3197 s/iter. Eval: 0.0004 s/iter. Total: 0.3229 s/iter. ETA=0:00:37
[09/19 07:10:55 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:31
[09/19 07:11:00 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:26
[09/19 07:11:05 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:21
[09/19 07:11:10 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:16
[09/19 07:11:16 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:11
[09/19 07:11:21 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:06
[09/19 07:11:26 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0026 s/iter. Inference: 0.3196 s/iter. Eval: 0.0004 s/iter. Total: 0.3228 s/iter. ETA=0:00:00
[09/19 07:11:27 d2.evaluation.evaluator]: Total inference time: 0:04:00.534280 (0.322865 s / iter per device, on 1 devices)
[09/19 07:11:27 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.319610 s / iter per device, on 1 devices)
[09/19 07:11:27 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 07:11:27 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 07:11:28 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.28s)
creating index...
index created!
[09/19 07:11:28 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 07:11:29 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.68 seconds.
[09/19 07:11:29 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 07:11:29 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.16 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.057
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.123
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.047
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.042
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.132
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.091
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.212
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.324
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.333
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.271
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.549
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.394
[09/19 07:11:29 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm   |  APl  |
|:-----:|:------:|:------:|:-----:|:------:|:-----:|
| 5.699 | 12.338 | 4.703  | 4.220 | 13.187 | 9.130 |
[09/19 07:11:29 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP     |
|:-----------|:------|:-----------|:------|:--------------|:-------|
| bicycle    | 2.348 | motorcycle | 7.270 | passenger_car | 10.129 |
| person     | 3.049 |            |       |               |        |
[09/19 07:11:29 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 07:11:29 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 07:11:29 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 07:11:29 d2.evaluation.testing]: copypaste: 5.6988,12.3385,4.7029,4.2201,13.1870,9.1303
[09/19 07:11:29 d2.utils.events]:  eta: 0:58:18  iter: 499  total_loss: 0.9071  loss_cls: 0.2378  loss_box_reg: 0.5209  loss_rpn_cls: 0.02405  loss_rpn_loc: 0.1033  time: 2.2527  data_time: 0.0271  lr: 0.0004995  max_mem: 4113M
[09/19 07:12:14 d2.utils.events]:  eta: 0:57:33  iter: 519  total_loss: 0.865  loss_cls: 0.2554  loss_box_reg: 0.4881  loss_rpn_cls: 0.02497  loss_rpn_loc: 0.123  time: 2.2533  data_time: 0.0360  lr: 0.00051948  max_mem: 4113M
[09/19 07:13:00 d2.utils.events]:  eta: 0:56:51  iter: 539  total_loss: 0.8694  loss_cls: 0.25  loss_box_reg: 0.48  loss_rpn_cls: 0.02599  loss_rpn_loc: 0.09696  time: 2.2550  data_time: 0.0370  lr: 0.00053946  max_mem: 4113M
[09/19 07:13:46 d2.utils.events]:  eta: 0:56:05  iter: 559  total_loss: 0.8365  loss_cls: 0.2213  loss_box_reg: 0.4527  loss_rpn_cls: 0.02618  loss_rpn_loc: 0.09801  time: 2.2560  data_time: 0.0375  lr: 0.00055944  max_mem: 4113M
[09/19 07:14:31 d2.utils.events]:  eta: 0:55:18  iter: 579  total_loss: 0.8375  loss_cls: 0.2546  loss_box_reg: 0.4242  loss_rpn_cls: 0.02632  loss_rpn_loc: 0.092  time: 2.2553  data_time: 0.0325  lr: 0.00057942  max_mem: 4113M
WARNING [09/19 07:15:14 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 07:15:14 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 07:15:14 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 07:15:14 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 07:15:14 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 07:15:14 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 07:15:14 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 07:15:14 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 07:15:18 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0016 s/iter. Inference: 0.3220 s/iter. Eval: 0.0004 s/iter. Total: 0.3241 s/iter. ETA=0:03:59
[09/19 07:15:23 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0024 s/iter. Inference: 0.3187 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:03:52
[09/19 07:15:28 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0024 s/iter. Inference: 0.3182 s/iter. Eval: 0.0004 s/iter. Total: 0.3214 s/iter. ETA=0:03:47
[09/19 07:15:33 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0024 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:03:42
[09/19 07:15:39 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0024 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:03:37
[09/19 07:15:44 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0024 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:03:31
[09/19 07:15:49 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0025 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:03:26
[09/19 07:15:54 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0025 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3214 s/iter. ETA=0:03:21
[09/19 07:15:59 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0025 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:03:16
[09/19 07:16:04 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0025 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:03:11
[09/19 07:16:09 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0025 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:03:06
[09/19 07:16:15 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0025 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:03:01
[09/19 07:16:20 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0025 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:55
[09/19 07:16:25 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0025 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:50
[09/19 07:16:30 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0025 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:45
[09/19 07:16:35 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0025 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:40
[09/19 07:16:40 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0025 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:35
[09/19 07:16:46 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0025 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:02:30
[09/19 07:16:51 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:02:25
[09/19 07:16:56 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:19
[09/19 07:17:01 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:14
[09/19 07:17:06 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:09
[09/19 07:17:11 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:04
[09/19 07:17:16 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:01:59
[09/19 07:17:21 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:01:54
[09/19 07:17:27 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:01:49
[09/19 07:17:32 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:01:43
[09/19 07:17:37 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:38
[09/19 07:17:42 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:33
[09/19 07:17:47 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:28
[09/19 07:17:52 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:23
[09/19 07:17:58 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:18
[09/19 07:18:03 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:01:13
[09/19 07:18:08 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0026 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:01:07
[09/19 07:18:13 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:01:02
[09/19 07:18:18 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:57
[09/19 07:18:23 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:52
[09/19 07:18:28 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:47
[09/19 07:18:34 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:42
[09/19 07:18:39 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:36
[09/19 07:18:44 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:31
[09/19 07:18:49 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:26
[09/19 07:18:54 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:21
[09/19 07:18:59 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:00:16
[09/19 07:19:05 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3218 s/iter. ETA=0:00:11
[09/19 07:19:10 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0026 s/iter. Inference: 0.3185 s/iter. Eval: 0.0004 s/iter. Total: 0.3218 s/iter. ETA=0:00:06
[09/19 07:19:15 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0026 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3218 s/iter. ETA=0:00:00
[09/19 07:19:16 d2.evaluation.evaluator]: Total inference time: 0:03:59.781271 (0.321854 s / iter per device, on 1 devices)
[09/19 07:19:16 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:57 (0.318556 s / iter per device, on 1 devices)
[09/19 07:19:16 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 07:19:16 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 07:19:16 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.26s)
creating index...
index created!
[09/19 07:19:17 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 07:19:17 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.60 seconds.
[09/19 07:19:17 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 07:19:18 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.17 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.064
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.133
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.052
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.047
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.164
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.109
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.224
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.331
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.338
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.273
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.566
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.456
[09/19 07:19:18 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm   |  APl   |
|:-----:|:------:|:------:|:-----:|:------:|:------:|
| 6.427 | 13.332 | 5.226  | 4.659 | 16.372 | 10.917 |
[09/19 07:19:18 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP    | category      | AP     |
|:-----------|:------|:-----------|:------|:--------------|:-------|
| bicycle    | 2.016 | motorcycle | 8.944 | passenger_car | 10.444 |
| person     | 4.302 |            |       |               |        |
[09/19 07:19:18 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 07:19:18 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 07:19:18 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 07:19:18 d2.evaluation.testing]: copypaste: 6.4267,13.3318,5.2259,4.6592,16.3718,10.9166
[09/19 07:19:18 d2.utils.events]:  eta: 0:54:30  iter: 599  total_loss: 0.8681  loss_cls: 0.2336  loss_box_reg: 0.4806  loss_rpn_cls: 0.02619  loss_rpn_loc: 0.1034  time: 2.2527  data_time: 0.0319  lr: 0.0005994  max_mem: 4113M
[09/19 07:20:02 d2.utils.events]:  eta: 0:53:43  iter: 619  total_loss: 0.9184  loss_cls: 0.2357  loss_box_reg: 0.4933  loss_rpn_cls: 0.0285  loss_rpn_loc: 0.08695  time: 2.2514  data_time: 0.0339  lr: 0.00061938  max_mem: 4113M
[09/19 07:20:47 d2.utils.events]:  eta: 0:52:57  iter: 639  total_loss: 0.9002  loss_cls: 0.2661  loss_box_reg: 0.4828  loss_rpn_cls: 0.02355  loss_rpn_loc: 0.1047  time: 2.2519  data_time: 0.0362  lr: 0.00063936  max_mem: 4113M
[09/19 07:21:32 d2.utils.events]:  eta: 0:52:10  iter: 659  total_loss: 0.8878  loss_cls: 0.2707  loss_box_reg: 0.4883  loss_rpn_cls: 0.02135  loss_rpn_loc: 0.1208  time: 2.2512  data_time: 0.0328  lr: 0.00065934  max_mem: 4113M
[09/19 07:22:17 d2.utils.events]:  eta: 0:51:24  iter: 679  total_loss: 0.8325  loss_cls: 0.2462  loss_box_reg: 0.4928  loss_rpn_cls: 0.02211  loss_rpn_loc: 0.1033  time: 2.2507  data_time: 0.0301  lr: 0.00067932  max_mem: 4113M
WARNING [09/19 07:23:02 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 07:23:02 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 07:23:02 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 07:23:02 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 07:23:02 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 07:23:02 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 07:23:02 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 07:23:02 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 07:23:06 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0023 s/iter. Inference: 0.3205 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:03:58
[09/19 07:23:11 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0025 s/iter. Inference: 0.3194 s/iter. Eval: 0.0005 s/iter. Total: 0.3226 s/iter. ETA=0:03:53
[09/19 07:23:16 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0030 s/iter. Inference: 0.3199 s/iter. Eval: 0.0005 s/iter. Total: 0.3236 s/iter. ETA=0:03:48
[09/19 07:23:21 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0029 s/iter. Inference: 0.3196 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:03:43
[09/19 07:23:26 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0029 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3234 s/iter. ETA=0:03:38
[09/19 07:23:32 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0027 s/iter. Inference: 0.3196 s/iter. Eval: 0.0005 s/iter. Total: 0.3230 s/iter. ETA=0:03:32
[09/19 07:23:37 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0028 s/iter. Inference: 0.3193 s/iter. Eval: 0.0005 s/iter. Total: 0.3228 s/iter. ETA=0:03:27
[09/19 07:23:42 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0029 s/iter. Inference: 0.3194 s/iter. Eval: 0.0005 s/iter. Total: 0.3230 s/iter. ETA=0:03:22
[09/19 07:23:47 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0028 s/iter. Inference: 0.3193 s/iter. Eval: 0.0005 s/iter. Total: 0.3228 s/iter. ETA=0:03:17
[09/19 07:23:52 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0029 s/iter. Inference: 0.3193 s/iter. Eval: 0.0005 s/iter. Total: 0.3229 s/iter. ETA=0:03:12
[09/19 07:23:57 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0029 s/iter. Inference: 0.3194 s/iter. Eval: 0.0005 s/iter. Total: 0.3229 s/iter. ETA=0:03:06
[09/19 07:24:03 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0029 s/iter. Inference: 0.3195 s/iter. Eval: 0.0005 s/iter. Total: 0.3230 s/iter. ETA=0:03:01
[09/19 07:24:08 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0029 s/iter. Inference: 0.3195 s/iter. Eval: 0.0005 s/iter. Total: 0.3230 s/iter. ETA=0:02:56
[09/19 07:24:13 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0028 s/iter. Inference: 0.3201 s/iter. Eval: 0.0005 s/iter. Total: 0.3236 s/iter. ETA=0:02:51
[09/19 07:24:18 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0028 s/iter. Inference: 0.3200 s/iter. Eval: 0.0005 s/iter. Total: 0.3235 s/iter. ETA=0:02:46
[09/19 07:24:23 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0028 s/iter. Inference: 0.3200 s/iter. Eval: 0.0005 s/iter. Total: 0.3234 s/iter. ETA=0:02:41
[09/19 07:24:29 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0028 s/iter. Inference: 0.3199 s/iter. Eval: 0.0005 s/iter. Total: 0.3233 s/iter. ETA=0:02:36
[09/19 07:24:34 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0028 s/iter. Inference: 0.3199 s/iter. Eval: 0.0005 s/iter. Total: 0.3233 s/iter. ETA=0:02:31
[09/19 07:24:39 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0028 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:02:25
[09/19 07:24:44 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0027 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3231 s/iter. ETA=0:02:20
[09/19 07:24:49 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:02:15
[09/19 07:24:54 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3231 s/iter. ETA=0:02:10
[09/19 07:25:00 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:02:05
[09/19 07:25:05 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:59
[09/19 07:25:10 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:54
[09/19 07:25:15 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:49
[09/19 07:25:20 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:44
[09/19 07:25:25 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:39
[09/19 07:25:31 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:34
[09/19 07:25:36 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:28
[09/19 07:25:41 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0027 s/iter. Inference: 0.3199 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:23
[09/19 07:25:46 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0027 s/iter. Inference: 0.3199 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:18
[09/19 07:25:51 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3231 s/iter. ETA=0:01:13
[09/19 07:25:56 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:01:08
[09/19 07:26:02 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3231 s/iter. ETA=0:01:03
[09/19 07:26:07 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:57
[09/19 07:26:12 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:52
[09/19 07:26:17 d2.evaluation.evaluator]: Inference done 603/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:47
[09/19 07:26:22 d2.evaluation.evaluator]: Inference done 619/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:42
[09/19 07:26:27 d2.evaluation.evaluator]: Inference done 635/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:37
[09/19 07:26:33 d2.evaluation.evaluator]: Inference done 651/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:31
[09/19 07:26:38 d2.evaluation.evaluator]: Inference done 667/750. Dataloading: 0.0028 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:26
[09/19 07:26:43 d2.evaluation.evaluator]: Inference done 683/750. Dataloading: 0.0027 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:21
[09/19 07:26:48 d2.evaluation.evaluator]: Inference done 699/750. Dataloading: 0.0027 s/iter. Inference: 0.3198 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:16
[09/19 07:26:53 d2.evaluation.evaluator]: Inference done 715/750. Dataloading: 0.0027 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:11
[09/19 07:26:58 d2.evaluation.evaluator]: Inference done 731/750. Dataloading: 0.0028 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:06
[09/19 07:27:04 d2.evaluation.evaluator]: Inference done 747/750. Dataloading: 0.0028 s/iter. Inference: 0.3197 s/iter. Eval: 0.0005 s/iter. Total: 0.3232 s/iter. ETA=0:00:00
[09/19 07:27:05 d2.evaluation.evaluator]: Total inference time: 0:04:00.840673 (0.323276 s / iter per device, on 1 devices)
[09/19 07:27:05 d2.evaluation.evaluator]: Total inference pure compute time: 0:03:58 (0.319753 s / iter per device, on 1 devices)
[09/19 07:27:05 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[09/19 07:27:05 d2.evaluation.coco_evaluation]: Saving results to coco_eval/coco_instances_results.json
[09/19 07:27:05 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.31s)
creating index...
index created!
[09/19 07:27:06 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*
[09/19 07:27:07 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.75 seconds.
[09/19 07:27:07 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/19 07:27:07 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.21 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.069
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.141
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.064
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.050
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.163
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.135
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.239
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.347
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.362
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.290
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.586
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.541
[09/19 07:27:07 d2.evaluation.coco_evaluation]: Evaluation results for bbox: 
|  AP   |  AP50  |  AP75  |  APs  |  APm   |  APl   |
|:-----:|:------:|:------:|:-----:|:------:|:------:|
| 6.862 | 14.110 | 6.409  | 5.020 | 16.318 | 13.508 |
[09/19 07:27:07 d2.evaluation.coco_evaluation]: Per-category bbox AP: 
| category   | AP    | category   | AP     | category      | AP     |
|:-----------|:------|:-----------|:-------|:--------------|:-------|
| bicycle    | 2.253 | motorcycle | 10.203 | passenger_car | 10.434 |
| person     | 4.559 |            |        |               |        |
[09/19 07:27:07 d2.engine.defaults]: Evaluation results for val in csv format:
[09/19 07:27:07 d2.evaluation.testing]: copypaste: Task: bbox
[09/19 07:27:07 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[09/19 07:27:07 d2.evaluation.testing]: copypaste: 6.8623,14.1101,6.4087,5.0204,16.3184,13.5081
[09/19 07:27:07 d2.utils.events]:  eta: 0:50:38  iter: 699  total_loss: 0.8982  loss_cls: 0.2861  loss_box_reg: 0.4624  loss_rpn_cls: 0.02889  loss_rpn_loc: 0.1321  time: 2.2510  data_time: 0.0340  lr: 0.0006993  max_mem: 4113M
[09/19 07:27:52 d2.utils.events]:  eta: 0:49:51  iter: 719  total_loss: 0.9771  loss_cls: 0.299  loss_box_reg: 0.5477  loss_rpn_cls: 0.022  loss_rpn_loc: 0.09163  time: 2.2508  data_time: 0.0339  lr: 0.00071928  max_mem: 4113M
[09/19 07:28:35 d2.utils.events]:  eta: 0:49:03  iter: 739  total_loss: 0.8836  loss_cls: 0.2652  loss_box_reg: 0.4746  loss_rpn_cls: 0.02626  loss_rpn_loc: 0.1057  time: 2.2489  data_time: 0.0304  lr: 0.00073926  max_mem: 4113M
[09/19 07:29:19 d2.utils.events]:  eta: 0:48:12  iter: 759  total_loss: 0.8514  loss_cls: 0.2595  loss_box_reg: 0.4789  loss_rpn_cls: 0.02284  loss_rpn_loc: 0.0924  time: 2.2466  data_time: 0.0311  lr: 0.00075924  max_mem: 4113M
[09/19 07:30:03 d2.utils.events]:  eta: 0:47:24  iter: 779  total_loss: 0.8993  loss_cls: 0.2736  loss_box_reg: 0.4777  loss_rpn_cls: 0.02602  loss_rpn_loc: 0.1253  time: 2.2460  data_time: 0.0333  lr: 0.00077922  max_mem: 4113M
WARNING [09/19 07:30:47 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

WARNING [09/19 07:30:47 d2.data.datasets.coco]: val_data.json contains 2188 annotations, but only 501 of them match to images in the file.
[09/19 07:30:47 d2.data.datasets.coco]: Loaded 750 images in COCO format from val_data.json
[09/19 07:30:48 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[09/19 07:30:48 d2.data.common]: Serializing 750 elements to byte tensors and concatenating them all ...
[09/19 07:30:48 d2.data.common]: Serialized dataset takes 0.12 MiB
WARNING [09/19 07:30:48 d2.evaluation.coco_evaluation]: COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
[09/19 07:30:48 d2.evaluation.evaluator]: Start inference on 750 batches
[09/19 07:30:51 d2.evaluation.evaluator]: Inference done 11/750. Dataloading: 0.0018 s/iter. Inference: 0.3161 s/iter. Eval: 0.0004 s/iter. Total: 0.3184 s/iter. ETA=0:03:55
[09/19 07:30:57 d2.evaluation.evaluator]: Inference done 27/750. Dataloading: 0.0026 s/iter. Inference: 0.3199 s/iter. Eval: 0.0004 s/iter. Total: 0.3231 s/iter. ETA=0:03:53
[09/19 07:31:02 d2.evaluation.evaluator]: Inference done 43/750. Dataloading: 0.0028 s/iter. Inference: 0.3193 s/iter. Eval: 0.0004 s/iter. Total: 0.3227 s/iter. ETA=0:03:48
[09/19 07:31:07 d2.evaluation.evaluator]: Inference done 59/750. Dataloading: 0.0028 s/iter. Inference: 0.3192 s/iter. Eval: 0.0004 s/iter. Total: 0.3226 s/iter. ETA=0:03:42
[09/19 07:31:12 d2.evaluation.evaluator]: Inference done 75/750. Dataloading: 0.0028 s/iter. Inference: 0.3190 s/iter. Eval: 0.0004 s/iter. Total: 0.3224 s/iter. ETA=0:03:37
[09/19 07:31:17 d2.evaluation.evaluator]: Inference done 91/750. Dataloading: 0.0027 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3219 s/iter. ETA=0:03:32
[09/19 07:31:22 d2.evaluation.evaluator]: Inference done 107/750. Dataloading: 0.0026 s/iter. Inference: 0.3186 s/iter. Eval: 0.0004 s/iter. Total: 0.3218 s/iter. ETA=0:03:26
[09/19 07:31:27 d2.evaluation.evaluator]: Inference done 123/750. Dataloading: 0.0027 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:03:21
[09/19 07:31:33 d2.evaluation.evaluator]: Inference done 139/750. Dataloading: 0.0027 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:03:16
[09/19 07:31:38 d2.evaluation.evaluator]: Inference done 155/750. Dataloading: 0.0027 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:03:11
[09/19 07:31:43 d2.evaluation.evaluator]: Inference done 171/750. Dataloading: 0.0027 s/iter. Inference: 0.3184 s/iter. Eval: 0.0004 s/iter. Total: 0.3217 s/iter. ETA=0:03:06
[09/19 07:31:48 d2.evaluation.evaluator]: Inference done 187/750. Dataloading: 0.0027 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3216 s/iter. ETA=0:03:01
[09/19 07:31:53 d2.evaluation.evaluator]: Inference done 203/750. Dataloading: 0.0027 s/iter. Inference: 0.3182 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:55
[09/19 07:31:58 d2.evaluation.evaluator]: Inference done 219/750. Dataloading: 0.0027 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:50
[09/19 07:32:03 d2.evaluation.evaluator]: Inference done 235/750. Dataloading: 0.0026 s/iter. Inference: 0.3183 s/iter. Eval: 0.0004 s/iter. Total: 0.3215 s/iter. ETA=0:02:45
[09/19 07:32:08 d2.evaluation.evaluator]: Inference done 251/750. Dataloading: 0.0026 s/iter. Inference: 0.3182 s/iter. Eval: 0.0004 s/iter. Total: 0.3214 s/iter. ETA=0:02:40
[09/19 07:32:14 d2.evaluation.evaluator]: Inference done 267/750. Dataloading: 0.0026 s/iter. Inference: 0.3181 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:35
[09/19 07:32:19 d2.evaluation.evaluator]: Inference done 283/750. Dataloading: 0.0026 s/iter. Inference: 0.3181 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:30
[09/19 07:32:24 d2.evaluation.evaluator]: Inference done 299/750. Dataloading: 0.0026 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3212 s/iter. ETA=0:02:24
[09/19 07:32:29 d2.evaluation.evaluator]: Inference done 315/750. Dataloading: 0.0026 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:19
[09/19 07:32:34 d2.evaluation.evaluator]: Inference done 331/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:14
[09/19 07:32:39 d2.evaluation.evaluator]: Inference done 347/750. Dataloading: 0.0026 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:09
[09/19 07:32:44 d2.evaluation.evaluator]: Inference done 363/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:02:04
[09/19 07:32:50 d2.evaluation.evaluator]: Inference done 379/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:59
[09/19 07:32:55 d2.evaluation.evaluator]: Inference done 395/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:54
[09/19 07:33:00 d2.evaluation.evaluator]: Inference done 411/750. Dataloading: 0.0027 s/iter. Inference: 0.3179 s/iter. Eval: 0.0004 s/iter. Total: 0.3212 s/iter. ETA=0:01:48
[09/19 07:33:05 d2.evaluation.evaluator]: Inference done 427/750. Dataloading: 0.0027 s/iter. Inference: 0.3179 s/iter. Eval: 0.0004 s/iter. Total: 0.3212 s/iter. ETA=0:01:43
[09/19 07:33:10 d2.evaluation.evaluator]: Inference done 443/750. Dataloading: 0.0027 s/iter. Inference: 0.3179 s/iter. Eval: 0.0004 s/iter. Total: 0.3212 s/iter. ETA=0:01:38
[09/19 07:33:15 d2.evaluation.evaluator]: Inference done 459/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:33
[09/19 07:33:20 d2.evaluation.evaluator]: Inference done 475/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:28
[09/19 07:33:26 d2.evaluation.evaluator]: Inference done 491/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:23
[09/19 07:33:31 d2.evaluation.evaluator]: Inference done 507/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:18
[09/19 07:33:36 d2.evaluation.evaluator]: Inference done 523/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:12
[09/19 07:33:41 d2.evaluation.evaluator]: Inference done 539/750. Dataloading: 0.0027 s/iter. Inference: 0.3179 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:07
[09/19 07:33:46 d2.evaluation.evaluator]: Inference done 555/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:01:02
[09/19 07:33:51 d2.evaluation.evaluator]: Inference done 571/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:00:57
[09/19 07:33:56 d2.evaluation.evaluator]: Inference done 587/750. Dataloading: 0.0027 s/iter. Inference: 0.3180 s/iter. Eval: 0.0004 s/iter. Total: 0.3213 s/iter. ETA=0:00:52

Predictions

Generating sample predictions

In [ ]:
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")

# Setting up threshold to filter out some low score predictions
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.1

predictor = DefaultPredictor(cfg)
In [ ]:
from detectron2.utils.visualizer import ColorMode

# Showing some predictions
for d in random.sample(dataset_dicts, 3):    
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)
    v = Visualizer(im[:, :, ::-1],
                   metadata=vehicle_metadata, 
                   scale=0.8, 
    )
    v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(v.get_image()[:, :, ::-1])
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-14-6a421b43f2bc> in <module>()
      2 
      3 # Showing some predictions
----> 4 for d in random.sample(dataset_dicts, 3):
      5     im = cv2.imread(d["file_name"])
      6     outputs = predictor(im)

NameError: name 'dataset_dicts' is not defined

Generating Predictions

In this section, we will be generating the prediction of test dataset for submission

In [ ]:
test_images_list = natsorted(glob("data/test/*"))
test_images_list[0]
Out[ ]:
'data/test/0.jpg'
In [ ]:
# Generating the predictions

pred = []

# Doing though each image
for file_path in tqdm(test_images_list):

  # Reading the image
  img = cv2.imread(file_path)

  # Generating the predictions
  outputs = predictor(img)


  image_path, image_file_name = os.path.split(file_path)
  
  # Getting the image_id of the predictions 
  # ( The image_id in the predictions is the file_id  + 1 )
  image_id = int(image_file_name.split(".")[0])+1


  # Adding the predictions
  for n, boxes in enumerate(outputs['instances'].pred_boxes.tensor.cpu().numpy().tolist()):

    # Converting thr bounding boxes from (x1, y1, x2, y2) to (x, y, w, h)
    preprocessed_box = [boxes[0], boxes[1], abs(boxes[0] - boxes[2]), abs(boxes[1] - boxes[3])]

    pred.append({
        "image_id": image_id,
        "category_id": outputs['instances'].pred_classes[n].cpu().numpy().tolist(),
        "bbox": preprocessed_box,
        "score":outputs['instances'].scores[n].cpu().numpy().tolist()
    })
100%|██████████| 1000/1000 [05:46<00:00,  2.89it/s]
In [3]:
# Saving the predictions
!rm -rf assets
!mkdir assets

with open('assets/predictions.json', 'w') as f:
    json.dump(pred, f)

Uploading the Results 🧪

In [4]:

/usr/local/lib/python3.7/dist-packages/aicrowd/notebook/helpers.py:361: UserWarning: `%aicrowd` magic command can be used to save the notebook inside jupyter notebook/jupyterLab environment and also to get the notebook directly from the frontend without mounting the drive in colab environment. You can use magic command to skip mounting the drive and submit using the code below:
 %load_ext aicrowd.magic
%aicrowd notebook submit -c object-detection -a assets --no-verify
  warnings.warn(description + code)
Mounting Google Drive 💾
Your Google Drive will be mounted to access the colab notebook
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.activity.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fexperimentsandconfigs%20https%3a%2f%2fwww.googleapis.com%2fauth%2fphotos.native&response_type=code

Enter your authorization code:
4/1AX4XfWirUTiqDsObrHMstx9DCRozDMCEqaLcfEXB56fiV648J1seGVT-qYc
Mounted at /content/drive
Using notebook: Object Detection for submission...
Scrubbing API keys from the notebook...
Collecting notebook...
submission.zip ━━━━━━━━━━━━━━━━━━━━━━ 100.0%1.7/1.7 MB898.1 kB/s0:00:00
                                                  ╭─────────────────────────╮                                                  
                                                  │ Successfully submitted! │                                                  
                                                  ╰─────────────────────────╯                                                  
                                                        Important links                                                        
┌──────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/ai-blitz-xi/problems/object-detection/submissions/157265              │
│                  │                                                                                                          │
│  All submissions │ https://www.aicrowd.com/challenges/ai-blitz-xi/problems/object-detection/submissions?my_submissions=true │
│                  │                                                                                                          │
│      Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-xi/problems/object-detection/leaderboards                    │
│                  │                                                                                                          │
│ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-xi                                                              │
│                  │                                                                                                          │
│   Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-xi/problems/object-detection                                 │
└──────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┘
In [ ]:


Comments

You must login before you can post a comment.

Execute