Loading

Data Purchasing Challenge 2022

[First Baseline + Explainer] Getting Started With ROUND 2

Boiler Plate Code With Abiding (COMPUTE AND PURCHASE BUDGET) For Getting Started With Round 2

gaurav_singhal

Hey Guys

 

First of all, I would like to thank you for taking the time to check out this notebook.

A lot of things have changed in round 2, what exactly??? Check the changelog. In short, the focus of this round is strictly on purchase, and not on training bulky networks (you can try but you'll be defeated by new time constraints).

 

I understand how tedious setting up the boilerplate code can be. Almost all the participants here have a full-time job and TRUST ME, you don't want to waste your precious time with the basic details and just want to focus on the main problem i.e. PURCHASE.

 

I have put together the boilerplate code that will help you get started with the competition in no time 😎😎😎😎. This notebook is a stand-alone solution that will run on Google colab and set up all the basic things, you can also get it from the official notebook but let's face it, it's just a Skeleton.

 

NOTE: The code uses the PyTorch and NumPy seed which means you can have somewhat reproducible results. For better reproducibility, you must use deterministic algorithms.

With this notebook you will be able to do the following things:

1. Download the dataset and helper code

2. Boilerplate code for training, validation, purchase, and prediction. The code is implemented with EfficientNet-B0

3. Evaluation with respect to time constraints and budget, therefore you can test your pipeline in real-time.

 

Credits:

1. [AICrowd Official Notebook](https://colab.research.google.com/drive/1ZJQBK9DKus1zSjm97aEc6bQ2mSS3vSTD)

2. [Offical Starter Kit](https://gitlab.aicrowd.com/zew/data-purchasing-challenge-2022-starter-kit)

 

What more???

Well, you can use the same code from `ZEWDPCBaseRun` and make some minor changes and you would then be able to make a submission.

 

Let's begin


 

1) Login to AIcrowd 🤩

In [ ]:
#@title Login to AIcrowd
!pip install -U aicrowd-cli > /dev/null
!aicrowd login 2> /dev/null
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.27.1 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
Please login here: https://api.aicrowd.com/auth/n0wE6PBiVYoeu6FoLMzZAsBz1YBGGCckj-qbPFGU8kc
API Key valid
Gitlab access token valid
Saved details successfully!

2) Setup magically, run the below cell 😉

In [ ]:
#@title Magic Box ⬛ { vertical-output: true, display-mode: "form" }
try:
  import os
  if first_run and os.path.exists("/content/data-purchasing-challenge-2022-starter-kit/data/public_training"):
    first_run = False
except:
  first_run = True

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
purchased_labels = None

if first_run:
  %cd /content/
  !git clone http://gitlab.aicrowd.com/zew/data-purchasing-challenge-2022-starter-kit.git > /dev/null
  %cd data-purchasing-challenge-2022-starter-kit
  !aicrowd dataset list -c data-purchasing-challenge-2022
  !aicrowd dataset download -c data-purchasing-challenge-2022 *-v0.2-rc4.zip
  !mkdir -p data/
  !mv *.zip data/ && cd data && echo "Extracting dataset..." && ls *.zip | xargs -n1 -I{} bash -c "unzip -q {}"


def run_pre_training_phase():
  from run import ZEWDPCBaseRun
  run = ZEWDPCBaseRun()
  run.pre_training_phase = pre_training_phase
  run.pre_training_phase(self=run, training_dataset=training_dataset)
  # NOTE:It is critical that the checkpointing works in a self-contained way
  #      As, the evaluators might choose to run the different phases separately.
  run.save_checkpoint("/tmp/pretrainig_phase_checkpoint.pickle")

def run_purchase_phase():
  from run import ZEWDPCBaseRun
  run = ZEWDPCBaseRun()
  run.pre_training_phase = pre_training_phase
  run.purchase_phase = purchase_phase
  run.load_checkpoint("/tmp/pretrainig_phase_checkpoint.pickle")
  # Hacky way to make it work in notebook
  unlabelled_dataset.purchases = set()

  global purchased_labels
  purchased_labels = run.purchase_phase(self=run, unlabelled_dataset=unlabelled_dataset, training_dataset=training_dataset, purchase_budget=1500, compute_budget=51*60)
  
  run.save_checkpoint("/tmp/purchase_phase_checkpoint.pickle")

  del run

def run_prediction_phase():
  from run import ZEWDPCBaseRun
  run = ZEWDPCBaseRun()
  run.pre_training_phase = pre_training_phase
  run.purchase_phase = purchase_phase
  run.prediction_phase = prediction_phase
  run.load_checkpoint("/tmp/purchase_phase_checkpoint.pickle")
  run.prediction_phase(self=run, test_dataset=val_dataset)
  del run

def run_post_purchase_training_phase():
  import torch
  from evaluator.evaluation_metrics import get_zew_dpc_metrics
  from evaluator.utils import instantiate_purchased_dataset
  from evaluator.trainer import ZEWDPCTrainer

  purchased_dataset = instantiate_purchased_dataset(unlabelled_dataset, purchased_labels)
  aggregated_dataset = torch.utils.data.ConcatDataset(
      [training_dataset, purchased_dataset]
  )
  print("Training Dataset Size: ", len(training_dataset))
  print("Purchased Dataset Size: ", len(purchased_dataset))
  print("Aggregataed Dataset Size: ", len(aggregated_dataset))

  trainer = ZEWDPCTrainer(num_classes=6, use_pretrained=True)
  trainer.train(
      training_dataset, num_epochs=10, validation_percentage=0.1, batch_size=5
  )

  y_pred = trainer.predict(val_dataset)
  y_true = val_dataset_gt._get_all_labels()

  metrics = get_zew_dpc_metrics(y_true, y_pred)

  f1_score = metrics["F1_score_macro"]
  accuracy_score = metrics["accuracy_score"]
  hamming_loss_score = metrics["hamming_loss"]

  print("\n\n==================")
  print("F1 Score: ", f1_score)
  print("Accuracy Score: ", accuracy_score)
  print("Hamming Loss: ", hamming_loss_score)
/content
Cloning into 'data-purchasing-challenge-2022-starter-kit'...
remote: Enumerating objects: 191, done.
remote: Counting objects: 100% (191/191), done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 290 (delta 117), reused 164 (delta 106), pack-reused 99
Receiving objects: 100% (290/290), 77.99 KiB | 3.25 MiB/s, done.
Resolving deltas: 100% (171/171), done.
/content/data-purchasing-challenge-2022-starter-kit
                          Datasets for challenge #1024                          
┌───┬─────────────────────────┬──────────────────────────────────────┬─────────┐
│ #  Title                    Description                              Size │
├───┼─────────────────────────┼──────────────────────────────────────┼─────────┤
│ 0 │ training-v0.2-rc4.zip   │ Training data for round 2            │  97 MiB │
│ 1 │ debug-v0.2-rc4.zip      │ Debug data for round 2               │   6 MiB │
│ 2 │ validation-v0.2-rc4.zip │ Validation dataset for round 2       │ 292 MiB │
│ 3 │ unlabelled-v0.2-rc4.zip │ Unlabelled image dataset for round 2 │ 973 MiB │
│ 4 │ debug-v0.1.tar.gz       │ Debug dataset                        │ 6.1 MiB │
│ 5 │ unlabelled-v0.1.tar.gz  │ Unlabelled image dataset             │ 609 MiB │
│ 6 │ validation-v0.1.tar.gz  │ Validation dataset                   │ 182 MiB │
│ 7 │ training-v0.1.tar.gz    │ Training data                        │ 304 MiB │
└───┴─────────────────────────┴──────────────────────────────────────┴─────────┘
training-v0.2-rc4.zip: 100% 102M/102M [00:06<00:00, 15.8MB/s]
debug-v0.2-rc4.zip: 100% 6.39M/6.39M [00:01<00:00, 4.89MB/s]
validation-v0.2-rc4.zip: 100% 306M/306M [00:17<00:00, 17.3MB/s]
unlabelled-v0.2-rc4.zip: 100% 1.02G/1.02G [01:05<00:00, 15.7MB/s]
Extracting dataset...

3) Writing your code implementation! ✍️

a) Runtime Packages

In [ ]:
#@title a) Runtime Packages<br/><small>Important: Add the packages required by your code here. (space separated)</small> { run: "auto", display-mode: "form" }
apt_packages = "build-essential vim" #@param {type:"string"}
pip_packages = "scikit-image pandas timeout-decorator==0.5.0 numpy torchmetrics" #@param {type:"string"}

!apt install -y $apt_packages git-lfs
!pip install $pip_packages
Reading package lists... Done
Building dependency tree       
Reading state information... Done
build-essential is already the newest version (12.4ubuntu1).
The following package was automatically installed and is no longer required:
  libnvidia-common-470
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  libgpm2 vim-common vim-runtime xxd
Suggested packages:
  gpm ctags vim-doc vim-scripts
The following NEW packages will be installed:
  git-lfs libgpm2 vim vim-common vim-runtime xxd
0 upgraded, 6 newly installed, 0 to remove and 39 not upgraded.
Need to get 8,854 kB of archives.
After this operation, 40.2 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 xxd amd64 2:8.0.1453-1ubuntu1.8 [49.9 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 vim-common all 2:8.0.1453-1ubuntu1.8 [71.1 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic/universe amd64 git-lfs amd64 2.3.4-1 [2,129 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 libgpm2 amd64 1.20.7-5 [15.1 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 vim-runtime all 2:8.0.1453-1ubuntu1.8 [5,435 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 vim amd64 2:8.0.1453-1ubuntu1.8 [1,154 kB]
Fetched 8,854 kB in 0s (39.2 MB/s)
Selecting previously unselected package xxd.
(Reading database ... 155320 files and directories currently installed.)
Preparing to unpack .../0-xxd_2%3a8.0.1453-1ubuntu1.8_amd64.deb ...
Unpacking xxd (2:8.0.1453-1ubuntu1.8) ...
Selecting previously unselected package vim-common.
Preparing to unpack .../1-vim-common_2%3a8.0.1453-1ubuntu1.8_all.deb ...
Unpacking vim-common (2:8.0.1453-1ubuntu1.8) ...
Selecting previously unselected package git-lfs.
Preparing to unpack .../2-git-lfs_2.3.4-1_amd64.deb ...
Unpacking git-lfs (2.3.4-1) ...
Selecting previously unselected package libgpm2:amd64.
Preparing to unpack .../3-libgpm2_1.20.7-5_amd64.deb ...
Unpacking libgpm2:amd64 (1.20.7-5) ...
Selecting previously unselected package vim-runtime.
Preparing to unpack .../4-vim-runtime_2%3a8.0.1453-1ubuntu1.8_all.deb ...
Adding 'diversion of /usr/share/vim/vim80/doc/help.txt to /usr/share/vim/vim80/doc/help.txt.vim-tiny by vim-runtime'
Adding 'diversion of /usr/share/vim/vim80/doc/tags to /usr/share/vim/vim80/doc/tags.vim-tiny by vim-runtime'
Unpacking vim-runtime (2:8.0.1453-1ubuntu1.8) ...
Selecting previously unselected package vim.
Preparing to unpack .../5-vim_2%3a8.0.1453-1ubuntu1.8_amd64.deb ...
Unpacking vim (2:8.0.1453-1ubuntu1.8) ...
Setting up git-lfs (2.3.4-1) ...
Setting up xxd (2:8.0.1453-1ubuntu1.8) ...
Setting up libgpm2:amd64 (1.20.7-5) ...
Setting up vim-common (2:8.0.1453-1ubuntu1.8) ...
Setting up vim-runtime (2:8.0.1453-1ubuntu1.8) ...
Setting up vim (2:8.0.1453-1ubuntu1.8) ...
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in auto mode
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.3) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Requirement already satisfied: scikit-image in /usr/local/lib/python3.7/dist-packages (0.18.3)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (1.3.5)
Collecting timeout-decorator==0.5.0
  Downloading timeout-decorator-0.5.0.tar.gz (4.8 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (1.21.5)
Collecting torchmetrics
  Downloading torchmetrics-0.7.2-py3-none-any.whl (397 kB)
     |████████████████████████████████| 397 kB 13.4 MB/s 
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (3.2.2)
Requirement already satisfied: scipy>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (1.4.1)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (7.1.2)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (2.4.1)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (1.2.0)
Requirement already satisfied: tifffile>=2019.7.26 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (2021.11.2)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image) (2.6.3)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.3.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (0.11.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (3.0.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.15.0)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2018.9)
Requirement already satisfied: torch>=1.3.1 in /usr/local/lib/python3.7/dist-packages (from torchmetrics) (1.10.0+cu111)
Collecting pyDeprecate==0.3.*
  Downloading pyDeprecate-0.3.2-py3-none-any.whl (10 kB)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from torchmetrics) (21.3)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.3.1->torchmetrics) (3.10.0.2)
Building wheels for collected packages: timeout-decorator
  Building wheel for timeout-decorator (setup.py) ... done
  Created wheel for timeout-decorator: filename=timeout_decorator-0.5.0-py3-none-any.whl size=5028 sha256=3892964be0032b85625f95b41517c62ae5f8880940962a03bb3109b4a6a690c2
  Stored in directory: /root/.cache/pip/wheels/7d/64/ac/de1dd54f9a6e48b846e9cb5e4176d6f063380e7f83d69807ad
Successfully built timeout-decorator
Installing collected packages: pyDeprecate, torchmetrics, timeout-decorator
Successfully installed pyDeprecate-0.3.2 timeout-decorator-0.5.0 torchmetrics-0.7.2

b) Load Dataset

In [ ]:
from evaluator.dataset import ZEWDPCBaseDataset, ZEWDPCProtectedDataset
DATASET_SHUFFLE_SEED = 1022022

# Instantiate Training Dataset
training_dataset = ZEWDPCBaseDataset(
    images_dir="./data/training/images",
    labels_path="./data/training/labels.csv",
    shuffle_seed=DATASET_SHUFFLE_SEED,
)
# Instantiate Unlabelled Dataset
unlabelled_dataset = ZEWDPCProtectedDataset(
    images_dir="./data/unlabelled/images",
    labels_path="./data/unlabelled/labels.csv",
    purchase_budget=1500,  # Configurable Parameter
    shuffle_seed=DATASET_SHUFFLE_SEED,
)
# Instantiate Validation Dataset
val_dataset = ZEWDPCBaseDataset(
    images_dir="./data/validation/images",
    labels_path="./data/validation/labels.csv",
    drop_labels=True,
    shuffle_seed=DATASET_SHUFFLE_SEED,
)
# A second instantiation of the validation test with the labels present
#       - helpful later, when computing the scores.
val_dataset_gt = ZEWDPCBaseDataset(
    images_dir="./data/validation/images",
    labels_path="./data/validation/labels.csv",
    drop_labels=False,
    shuffle_seed=DATASET_SHUFFLE_SEED,
)

Training

In [ ]:
import torch
from torch import nn
from torchvision import models
from torch.optim import Adam, SGD, lr_scheduler
from torchvision import transforms as T
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import abc
import datetime
from tqdm import tqdm
import copy

from evaluator.exceptions import OutOfBudetException
from evaluator.evaluation_metrics import get_zew_dpc_metrics
from evaluator.dataset import ZEWDPCBaseDataset, ZEWDPCProtectedDataset, ZEWDPCRuntimeDataset
from evaluator.utils import (
    instantiate_purchased_dataset,
    AverageMeter,
)
import torchmetrics

torch.manual_seed(17)
np.random.seed(17)
Out[ ]:
<torch._C.Generator at 0x7f67ac1fd6b0>

Base Code

In [ ]:
class ZEWDPCBaseRun:
    def __init__(self):
        self.evaluation_state = {}
        self.BATCH_SIZE = 32
        self.NUM_WORKERS = 2
        self.LEARNING_RATE = 0.00009
        self.NUM_CLASSES = 6
        self.THRESHOLD = 0.5
        self.NUM_EPOCS = 2
        self.CHECKPOINT_FREQUENCY = 10
        self.EVAL_FREQ = 1
        self.validation_percentage = 0.1
        self.seed = 42

        # Use any torchvision model you like here
        self.model = models.efficientnet_b0(pretrained=True)
        # Change last layer if using pretrained model
        self.model.classifier = torch.nn.Sequential(
            torch.nn.Dropout(p=self.model.classifier[0].p),
            torch.nn.Linear(self.model.classifier[1].in_features, out_features=self.NUM_CLASSES)
        )
        self.model.cuda()
        self.device = "cuda:0"
        self.activation = torch.nn.Sigmoid()
        self.optimizer = Adam(self.model.parameters(), lr=self.LEARNING_RATE, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.0009, amsgrad=False)
        self.lr_sched = lr_scheduler.ReduceLROnPlateau(
            self.optimizer, mode='max', patience=2, verbose=True
        )
        self.criterion = nn.BCEWithLogitsLoss()

    def pre_training_phase(
        self,
        training_dataset: ZEWDPCBaseDataset,
        compute_budget=10**10,
        register_progress=lambda x: False,
    ):
        print("\n================> Pre-Training Phase\n")
        # Creating transformations
        
        # Setup Transforms
        self.setup_transforms(training_dataset)

        # Prepare Validation Set
        training_dataset, validation_dataset = self.setup_validation_set(
            training_dataset, validation_percentage=self.validation_percentage
        )

        # Setup Dataloaders
        train_dataloader, val_dataloader = self.setup_dataloaders(
            training_dataset, validation_dataset, batch_size=self.BATCH_SIZE
        )

        # Setup Metric Meters
        val_loss_avg_meter = AverageMeter()
        train_loss_avg_meter = AverageMeter()
        val_f1 = torchmetrics.F1Score(num_classes=self.NUM_CLASSES, average="macro")
        train_f1 = torchmetrics.F1Score(num_classes=self.NUM_CLASSES, average="macro")

        # Setup Metric Meters
        val_loss_avg_meter = AverageMeter()
        train_loss_avg_meter = AverageMeter()
        val_f1 = torchmetrics.F1Score(num_classes=self.NUM_CLASSES, average="macro")
        train_f1 = torchmetrics.F1Score(num_classes=self.NUM_CLASSES, average="macro")

        ########################################################################
        ########################################################################
        #
        # Iterate over Epochs
        ########################################################################
        for epoch in range(self.NUM_EPOCS):
            self.epoch = epoch
            self.model.train()
            train_loss_avg_meter.reset()
            train_f1.reset()

            tqdm_iter = tqdm(train_dataloader, total=len(train_dataloader))
            tqdm_iter.set_description(f"Epoch {epoch}")
            
            for sample in tqdm_iter:
                # Reset Optimizer Gradients
                self.optimizer.zero_grad()

                # Gather Data Sample
                idx = sample["idx"].to(self.device)
                image = sample["image"].to(self.device)
                label = torch.vstack(sample["label"]).T

                # Forward Pass
                output = self.model(image)
                # Compute Loss
                loss = self.criterion(output, label.to(self.device).float())

                # Update Metric Meters
                train_loss_avg_meter.update(loss.item(), image.shape[0])
                output_with_activation = self.activation(output.detach()).cpu()
                train_f1.update(output_with_activation, label)
                tqdm_iter.set_postfix(
                    iter_train_loss=loss.item(), avg_train_loss=train_loss_avg_meter.avg
                )

                # Backpropagate
                loss.backward()
                self.optimizer.step()

            print(
                "Epoch %d - Average Train Loss: %.5f \t Train F1: %.5f"
                % (epoch, train_loss_avg_meter.avg, train_f1.compute().item())
            )

            # Checkpointing policy
            if (self.epoch+1)%self.CHECKPOINT_FREQUENCY == 0:
                pass
                # self.save_checkpoint("./")

            ####################################################################################
            ####################################################################################
            #
            # Validation
            ####################################################################################
            VALIDATION_INTERVAL = self.EVAL_FREQ
            if (
                validation_dataset is not None
                and (epoch + 1) % VALIDATION_INTERVAL == 0
            ):
                self.model.eval()
                val_loss_avg_meter.reset()
                val_f1.reset()

                for sample in val_dataloader:
                    with torch.no_grad():
                        idx = sample["idx"].to(self.device)
                        image = sample["image"].to(self.device)
                        label = torch.vstack(sample["label"]).T

                        output = self.model(image)
                        loss = self.criterion(output, label.to(self.device).float())
                        output_with_activation = self.activation(
                            output.detach()
                        ).cpu()
                        val_f1.update(output_with_activation, label)

                        val_loss_avg_meter.update(loss.item(), image.shape[0])

                self.lr_sched.step(val_loss_avg_meter.avg)
                print(
                    "Epoch %d - Average Val Loss: %.5f \t Val F1: %.5f \t Learning Rate %0.5f"
                    % (
                        epoch,
                        val_loss_avg_meter.avg,
                        val_f1.compute().item(),
                        self.optimizer.param_groups[0]["lr"],
                    )
                )
                print()

            train_metrics = {"f1": train_f1.compute().item()}
            val_metrics = {"f1": val_f1.compute().item()}
            info = {"learning_rate": self.optimizer.param_groups[0]["lr"]}
        print("Execution Complete of Training Phase.")

    def purchase_phase(
        self,
        unlabelled_dataset: ZEWDPCProtectedDataset,
        training_dataset: ZEWDPCBaseDataset,
        purchase_budget=1000,
        compute_budget=10**10,
        register_progress=lambda x: False,
    ):
        """
        # Purchase Phase
        -------------------------
        In this phase of the competition, you have access to
        the unlabelled_dataset (an instance of `ZEWDPCProtectedDataset`)
        and the training_dataset (an instance of `ZEWDPCBaseDataset`)
        {see datasets.py for more details}, a purchase budget, and a compute budget.

        You can iterate over both the datasets and access the images without restrictions.
        However, you can probe the labels of the unlabelled_dataset only until you
        run out of the label purchasing budget.

        The `compute_budget` argument holds a floating point number representing
        the time available (in seconds) for **BOTH** the pre_training_phase and
        the `purchase_phase`.
        Exceeding the time will lead to a TimeOut error.

        PARTICIPANT_TODO: Add your code here
        """
        print("\n================> Purchase Phase | Budget = {}\n".format(purchase_budget))

        register_progress(0.0)  # Register Progress

        purchased_labels = {}
        for sample in tqdm(unlabelled_dataset):
            idx = sample["idx"]

            # Budgeting & Purchasing Labels
            if purchase_budget > 0:
                label = unlabelled_dataset.purchase_label(idx)
                purchased_labels[idx] = label
                purchase_budget -= 1

        register_progress(1.0)  # Register Progress
        print("Execution Complete of Purchase Phase.")
        return purchased_labels

    def prediction_phase(
        self,
        test_dataset: ZEWDPCBaseDataset,
        register_progress=lambda x: False,
    ):
        """
        # Prediction Phase
        -------------------------
        In this phase of the competition, you have access to the test dataset, and you
        are supposed to make predictions using your trained models.

        Returns:
            np.ndarray of shape (n, 6)
                where n is the number of samples in the test set
                and 6 refers to the 6 labels to be predicted for each sample
                for the multi-label classification problem.

        PARTICIPANT_TODO: Add your code here
        """
        print(
            "\n================> Prediction Phase : - on {} images\n".format(
                len(test_dataset)
            )
        )

        test_transform = T.Compose([
            T.ToTensor(),
        ])
        test_dataset.set_transform(test_transform)
        test_loader = DataLoader(
            dataset=test_dataset,
            batch_size=self.BATCH_SIZE,
            shuffle=False,
            num_workers=self.NUM_WORKERS,
        )

        self.model.eval()
        predictions = []
        with torch.no_grad():
            for _, batch in enumerate(test_loader):
                X = batch['image'].cuda()
                output = self.model(X)
                output_with_activation = self.activation(
                            output.detach()
                        ).cpu()
                predictions.extend(output_with_activation)

        register_progress(1.0)
        predictions = np.array(predictions) # random predictions
        print("Execution Complete of Purchase Phase.")
        return predictions

    def save_checkpoint(self, checkpoint_folder):
        """
        Self-contained checkpoint code to be included here,
        which can capture the state of your run (including any trained models, etc)
        at the provided folder path.

        This is critical to implement, as the execution of the different phases can
        happen using different instances of the BaseRun. See below for examples.

        PARTICIPANT_TODO: Add your code here
        """
        # checkpoint_path = os.path.join(checkpoint_folder, "model.pth")
        save_dict = {
            'model_state_dict': self.model.state_dict(),
            'optim_state_dict': self.optimizer.state_dict(),
        }
        torch.save(save_dict, checkpoint_folder)
        print(f"Saving checkpont at {checkpoint_folder}")

    def load_checkpoint(self, checkpoint_folder):
        """
        Self-contained checkpoint code to be included here,
        which can load the state of your run (including any trained models, etc)
        from a provided checkpoint_folder path 
        (previously saved using `self.save_checkpoint`)

        This is critical to implement, as the execution of the different phases can
        happen using different instances of the BaseRun. See below for examples.

        PARTICIPANT_TODO: Add your code here
        """
        # checkpoint_path = os.path.join(checkpoint_folder, "model.pth")
        checkpoint_model = torch.load(checkpoint_folder, map_location=self.device)
        self.model.load_state_dict(checkpoint_model['model_state_dict'])
        self.optimizer.load_state_dict(checkpoint_model['optim_state_dict'])
        print('Loading checkpoint success')

    def setup_validation_set(self, training_dataset, validation_percentage=0.05):
        """
        Creates a Validation Set from the Training Dataset
        """
        assert (
            0 < validation_percentage < 1
        ), "Expected : validation_percentage ∈ [0, 1]. Received validataion_percentage = {}".format(
            validation_percentage
        )

        validation_size = int(validation_percentage * len(training_dataset))
        training_dataset, validation_dataset = torch.utils.data.random_split(
            training_dataset,
            [
                len(training_dataset) - validation_size,
                validation_size,
            ],
            generator=torch.Generator().manual_seed(self.seed),
        )
        return training_dataset, validation_dataset

    def setup_dataloaders(self, training_dataset, validation_dataset, batch_size=32):
        """
        Sets up necessary dataloader
        """
        train_dataloader = torch.utils.data.DataLoader(
            training_dataset, batch_size=batch_size, shuffle=True
        )
        val_dataloader = torch.utils.data.DataLoader(
            validation_dataset, batch_size=batch_size, shuffle=True
        )

        return train_dataloader, val_dataloader

    def setup_transforms(self, training_dataset):
        """
        Sets up the necessary transforms for the training_dataset
        """
        ## Setup necessary Transformations
        train_transform = T.Compose(
            [
                T.ToTensor(),  # Converts image to [0, 1]
                T.RandomVerticalFlip(p=0.5),
                T.RandomHorizontalFlip(p=0.5),
                T.GaussianBlur(kernel_size=3),
                T.ColorJitter(brightness=0.2, contrast=0.2),
                # *self.model.required_transforms,
            ]
        )

        if isinstance(training_dataset, ZEWDPCBaseDataset):
            training_dataset.set_transform(train_transform)
        elif isinstance(training_dataset, torch.utils.data.ConcatDataset):
            for dataset in training_dataset.datasets:
                if isinstance(dataset, ZEWDPCRuntimeDataset):
                    dataset.set_transform(train_transform)
                elif isinstance(dataset, ZEWDPCBaseDataset):
                    dataset.set_transform(train_transform)
                else:
                    raise NotImplementedError()
        else:
            raise NotImplementedError()

Evaluator

In [ ]:
import os
import tempfile
import time
from evaluator.trainer import ZEWDPCTrainer, ZEWDPCDebugTrainer

# Location to save your checkpoint
checkpoint_folder_path = tempfile.TemporaryDirectory().name
### NOTE: This folder doesnot clean up itself.
###       You are responsible for cleaning up the contents of this folder after
##        the desired usage.


####################################################################################
####################################################################################
##
## Setup Compute & Purchase Budgets
####################################################################################
time_started = time.time()

PURCHASE_BUDGET = 500
COMPUTE_BUDGET = 60 * 60 # 1 hour

####################################################################################
####################################################################################
##
## Phase 1 : Pre-Training Phase
####################################################################################
run = ZEWDPCBaseRun()
run.pre_training_phase(training_dataset, compute_budget=COMPUTE_BUDGET)
run.save_checkpoint(checkpoint_folder_path)
# NOTE:It is critical that the checkpointing works in a self-contained way
#       As, the evaluators might choose to run the different phases separately.

del run
time_available = COMPUTE_BUDGET - (time_started - time.time())
print("Time remaining: ", time_available)

####################################################################################
####################################################################################
##
## Phase 2 : Purchase Phase
####################################################################################
run = ZEWDPCBaseRun()
run.load_checkpoint(checkpoint_folder_path)
purchased_labels = run.purchase_phase(
    unlabelled_dataset, training_dataset, purchase_budget=PURCHASE_BUDGET, compute_budget=time_available
)

run.save_checkpoint(checkpoint_folder_path)
del run

####################################################################################
####################################################################################
##
## Phase 3 : Post Purchase Training Phase
####################################################################################

# Create a runtime instance of the purchased dataset with the right labels
purchased_dataset = instantiate_purchased_dataset(unlabelled_dataset, purchased_labels)
aggregated_dataset = torch.utils.data.ConcatDataset(
    [training_dataset, purchased_dataset]
)
print("Training Dataset Size : ", len(training_dataset))
print("Purchased Dataset Size : ", len(purchased_dataset))
print("Aggregataed Dataset Size : ", len(aggregated_dataset))

DEBUG_MODE = os.getenv("AICROWD_DEBUG_MODE", False)
if DEBUG_MODE:
    TRAINER_CLASS = ZEWDPCDebugTrainer
else:
    TRAINER_CLASS = ZEWDPCTrainer

trainer = ZEWDPCTrainer(num_classes=6, use_pretrained=True)
trainer.train(
    training_dataset, num_epochs=10, validation_percentage=0.1, batch_size=5
)

y_pred = trainer.predict(val_dataset)
y_true = val_dataset_gt._get_all_labels()

####################################################################################
####################################################################################
##
## Phase 4 : Evaluation Phase
####################################################################################
metrics = get_zew_dpc_metrics(y_true, y_pred)

f1_score = metrics["F1_score_macro"]
accuracy_score = metrics["accuracy_score"]
hamming_loss_score = metrics["hamming_loss"]
print()
print("F1 Score : ", f1_score)
print("Accuracy Score : ", accuracy_score)
print("Hamming Loss : ", hamming_loss_score)
================> Pre-Training Phase

Epoch 0: 100%|██████████| 29/29 [00:12<00:00,  2.34it/s, avg_train_loss=0.594, iter_train_loss=0.529]
Epoch 0 - Average Train Loss: 0.59386 	 Train F1: 0.24379
Epoch 0 - Average Val Loss: 0.72379 	 Val F1: 0.19298 	 Learning Rate 0.00009

Epoch 1: 100%|██████████| 29/29 [00:11<00:00,  2.52it/s, avg_train_loss=0.417, iter_train_loss=0.309]
Epoch 1 - Average Train Loss: 0.41743 	 Train F1: 0.29024
Epoch 1 - Average Val Loss: 0.35622 	 Val F1: 0.42857 	 Learning Rate 0.00009

Execution Complete of Training Phase.
Saving checkpont at /tmp/tmpcayhpl4p
Time remaining:  3626.0609192848206
Loading checkpoint success

================> Purchase Phase | Budget = 500

100%|██████████| 10000/10000 [00:23<00:00, 417.21it/s]
Execution Complete of Purchase Phase.
Saving checkpont at /tmp/tmpcayhpl4p
Training Dataset Size :  1000
Purchased Dataset Size :  500
Aggregataed Dataset Size :  1500
Downloading: "https://download.pytorch.org/models/efficientnet_b4_rwightman-7eb33cd5.pth" to /root/.cache/torch/hub/checkpoints/efficientnet_b4_rwightman-7eb33cd5.pth
Epoch 0: 100%|██████████| 180/180 [00:14<00:00, 12.69it/s, avg_train_loss=0.443, iter_train_loss=0.186]
Epoch 0 - Average Train Loss: 0.44349 	 Train F1: 0.15893
Validation at Epoch 0: 100%|██████████| 20/20 [00:01<00:00, 14.11it/s, avg_val_loss=0.388]
Epoch 0 - Average Val Loss: 0.38835 	 Val F1: 0.14765 	 Learning Rate 0.00100
Epoch 1: 100%|██████████| 180/180 [00:14<00:00, 12.69it/s, avg_train_loss=0.384, iter_train_loss=0.296]
Epoch 1 - Average Train Loss: 0.38433 	 Train F1: 0.15641
Validation at Epoch 1: 100%|██████████| 20/20 [00:01<00:00, 13.71it/s, avg_val_loss=1.52]
Epoch 1 - Average Val Loss: 1.52412 	 Val F1: 0.15258 	 Learning Rate 0.00100
Epoch 2: 100%|██████████| 180/180 [00:14<00:00, 12.66it/s, avg_train_loss=0.363, iter_train_loss=1.01]
Epoch 2 - Average Train Loss: 0.36296 	 Train F1: 0.19413
Validation at Epoch 2: 100%|██████████| 20/20 [00:01<00:00, 12.60it/s, avg_val_loss=0.344]
Epoch 2 - Average Val Loss: 0.34436 	 Val F1: 0.18248 	 Learning Rate 0.00100
Epoch 3: 100%|██████████| 180/180 [00:14<00:00, 12.76it/s, avg_train_loss=0.346, iter_train_loss=0.215]
Epoch 3 - Average Train Loss: 0.34583 	 Train F1: 0.24055
Validation at Epoch 3: 100%|██████████| 20/20 [00:01<00:00, 13.87it/s, avg_val_loss=0.334]
Epoch 3 - Average Val Loss: 0.33411 	 Val F1: 0.22175 	 Learning Rate 0.00100
Epoch 4: 100%|██████████| 180/180 [00:14<00:00, 12.71it/s, avg_train_loss=0.33, iter_train_loss=0.424]
Epoch 4 - Average Train Loss: 0.33000 	 Train F1: 0.24934
Validation at Epoch 4: 100%|██████████| 20/20 [00:01<00:00, 14.21it/s, avg_val_loss=0.328]
Epoch 4 - Average Val Loss: 0.32759 	 Val F1: 0.30794 	 Learning Rate 0.00100
Epoch 5: 100%|██████████| 180/180 [00:14<00:00, 12.74it/s, avg_train_loss=0.338, iter_train_loss=0.457]
Epoch 5 - Average Train Loss: 0.33813 	 Train F1: 0.27530
Validation at Epoch 5: 100%|██████████| 20/20 [00:01<00:00, 14.18it/s, avg_val_loss=0.338]
Epoch 5 - Average Val Loss: 0.33842 	 Val F1: 0.34472 	 Learning Rate 0.00100
Epoch 6: 100%|██████████| 180/180 [00:14<00:00, 12.79it/s, avg_train_loss=0.324, iter_train_loss=0.277]
Epoch 6 - Average Train Loss: 0.32367 	 Train F1: 0.30203
Validation at Epoch 6: 100%|██████████| 20/20 [00:01<00:00, 13.76it/s, avg_val_loss=0.313]
Epoch 6 - Average Val Loss: 0.31250 	 Val F1: 0.26483 	 Learning Rate 0.00100
Epoch 7: 100%|██████████| 180/180 [00:14<00:00, 12.65it/s, avg_train_loss=0.323, iter_train_loss=0.178]
Epoch 7 - Average Train Loss: 0.32280 	 Train F1: 0.29083
Validation at Epoch 7: 100%|██████████| 20/20 [00:01<00:00, 13.90it/s, avg_val_loss=17.8]
Epoch 7 - Average Val Loss: 17.76347 	 Val F1: 0.21406 	 Learning Rate 0.00100
Epoch 8: 100%|██████████| 180/180 [00:14<00:00, 12.58it/s, avg_train_loss=0.321, iter_train_loss=0.214]
Epoch 8 - Average Train Loss: 0.32075 	 Train F1: 0.29460
Validation at Epoch 8: 100%|██████████| 20/20 [00:01<00:00, 13.60it/s, avg_val_loss=0.313]
Epoch 8 - Average Val Loss: 0.31335 	 Val F1: 0.32415 	 Learning Rate 0.00100
Epoch 9: 100%|██████████| 180/180 [00:15<00:00, 11.93it/s, avg_train_loss=0.301, iter_train_loss=0.159]
Epoch 9 - Average Train Loss: 0.30057 	 Train F1: 0.33211
Validation at Epoch 9: 100%|██████████| 20/20 [00:01<00:00, 13.59it/s, avg_val_loss=9.21]
Epoch 9 - Average Val Loss: 9.21223 	 Val F1: 0.34220 	 Learning Rate 0.00100
100%|██████████| 94/94 [00:18<00:00,  5.02it/s]
F1 Score :  0.11252397129776552
Accuracy Score :  0.4816666666666667
Hamming Loss :  0.14122222222222222

I hope this notebook succeeds in helping you "Getting Started", if it does how about leaving some love 🤎🤎🤎🤎🤎...


Comments

vrv
Over 2 years ago

Comment deleted by vrv.

You must login before you can post a comment.

Execute