
Seismic Facies Identification Challenge

[Explainer] Introduction and General Approach Final Pack!

An introduction to this challenge, the general approach, my approach, and what I learned from the others

leocd

This is my final explainer, trying to summarize what I know and what I learned from this challenge, especially from the other wonderful participants.

Notebook list:

I'll try to add more notebooks in the comments about my Round 1 approach and the new things I learned (post-processing methods, etc.).

I hope you guys enjoy it and learn something from this explainer.

 

---

 


This has been a wonderful experience, and I'm glad I spent my time on this challenge. I checked the forum and leaderboard every time I got back from work.

I learned a lot from this community and was surprised that I could push my score to F1: 0.901, Acc: 0.941 (Round 1, unweighted) and F1: 0.770, Acc: 0.737 (Round 2, weighted). I'm pretty much a noob at this.



Shout-out

First of all, thank you to SEAM AI and Aicrowd for organizing this event.

Also, a shout-out to the other contestants' explainers (I suggest you read them too) :


Also check out my previous explainers here :P

PS: for geoscientists, what I mean by "processing" here is transforming the input data provided by this challenge.

Introduction

Please watch :)


In [ ]:
#@title
from IPython.display import HTML,clear_output
from base64 import b64encode
!gdown "https://drive.google.com/uc?id=1PuQU_NZzKYAhXMYLMBU1Ff3VQ02PuOs7"
mp4 = open('render.mp4','rb').read()
clear_output()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=480 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)
Output hidden; open in https://colab.research.google.com to view.

Sorry for the tone difference lol, I only had time to do this while my family was asleep.

First, we download the training data for the challenge :

In [ ]:
!gdown "https://drive.google.com/uc?id=14u7fkARS8WRJUdhvU79kDxg8EKTqg606"
!gdown "https://drive.google.com/uc?id=1--tADAa10l2M1iaSEslGXK-RaBv8UbMf"
Downloading...
From: https://drive.google.com/uc?id=14u7fkARS8WRJUdhvU79kDxg8EKTqg606
To: /content/data_train.npz
1.72GB [00:18, 94.5MB/s]
Downloading...
From: https://drive.google.com/uc?id=1--tADAa10l2M1iaSEslGXK-RaBv8UbMf
To: /content/labels_train.npz
7.16MB [00:00, 63.3MB/s]

Let's install some packages that'll make our job easier :

In [ ]:
!pip install segmentation-models-pytorch==0.1.2  # easy to use some famous model architecture. visit https://github.com/qubvel/segmentation_models.pytorch/
!pip install albumentations               # easy image manipulation for data augmentation
Collecting segmentation-models-pytorch==0.1.2
  Downloading https://files.pythonhosted.org/packages/03/36/37b6b0e54a98ff15eb36ce36c9181fdb627b3e789e23fc764f9e5f01dc68/segmentation_models_pytorch-0.1.2-py3-none-any.whl (53kB)
     |████████████████████████████████| 61kB 8.1MB/s 
Requirement already satisfied: torchvision>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from segmentation-models-pytorch==0.1.2) (0.8.1+cu101)
Collecting pretrainedmodels==0.7.4
  Downloading https://files.pythonhosted.org/packages/84/0e/be6a0e58447ac16c938799d49bfb5fb7a80ac35e137547fc6cee2c08c4cf/pretrainedmodels-0.7.4.tar.gz (58kB)
     |████████████████████████████████| 61kB 8.9MB/s 
Collecting timm==0.1.20
  Downloading https://files.pythonhosted.org/packages/89/26/ba294669cc5cc4d09efd1964c8df752dc0955ac26f86bdeec582aed77d1d/timm-0.1.20-py3-none-any.whl (161kB)
     |████████████████████████████████| 163kB 36.5MB/s 
Collecting efficientnet-pytorch==0.6.3
  Downloading https://files.pythonhosted.org/packages/b8/cb/0309a6e3d404862ae4bc017f89645cf150ac94c14c88ef81d215c8e52925/efficientnet_pytorch-0.6.3.tar.gz
Requirement already satisfied: torch==1.7.0 in /usr/local/lib/python3.6/dist-packages (from torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (1.7.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (1.19.5)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (7.0.0)
Collecting munch
  Downloading https://files.pythonhosted.org/packages/cc/ab/85d8da5c9a45e072301beb37ad7f833cd344e04c817d97e0cc75681d248f/munch-2.5.0-py2.py3-none-any.whl
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch==0.1.2) (4.41.1)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from torch==1.7.0->torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (0.8)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.7.0->torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (0.16.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch==1.7.0->torchvision>=0.3.0->segmentation-models-pytorch==0.1.2) (3.7.4.3)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from munch->pretrainedmodels==0.7.4->segmentation-models-pytorch==0.1.2) (1.15.0)
Building wheels for collected packages: pretrainedmodels, efficientnet-pytorch
  Building wheel for pretrainedmodels (setup.py) ... done
  Created wheel for pretrainedmodels: filename=pretrainedmodels-0.7.4-cp36-none-any.whl size=60963 sha256=15ecf5d77b10b7173e744ea8e29e484fd320dd124df60adf1fb7da5e67054433
  Stored in directory: /root/.cache/pip/wheels/69/df/63/62583c096289713f22db605aa2334de5b591d59861a02c2ecd
  Building wheel for efficientnet-pytorch (setup.py) ... done
  Created wheel for efficientnet-pytorch: filename=efficientnet_pytorch-0.6.3-cp36-none-any.whl size=12421 sha256=d62b69be61ee765ae545f905231aa4a6576329cedad28cb4234bf47486d6586b
  Stored in directory: /root/.cache/pip/wheels/42/1e/a9/2a578ba9ad04e776e80bf0f70d8a7f4c29ec0718b92d8f6ccd
Successfully built pretrainedmodels efficientnet-pytorch
Installing collected packages: munch, pretrainedmodels, timm, efficientnet-pytorch, segmentation-models-pytorch
Successfully installed efficientnet-pytorch-0.6.3 munch-2.5.0 pretrainedmodels-0.7.4 segmentation-models-pytorch-0.1.2 timm-0.1.20
Requirement already satisfied: albumentations in /usr/local/lib/python3.6/dist-packages (0.1.12)
Requirement already satisfied: numpy>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from albumentations) (1.19.5)
Collecting imgaug<0.2.7,>=0.2.5
  Downloading https://files.pythonhosted.org/packages/ad/2e/748dbb7bb52ec8667098bae9b585f448569ae520031932687761165419a2/imgaug-0.2.6.tar.gz (631kB)
     |████████████████████████████████| 634kB 16.2MB/s 
Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from albumentations) (4.1.2.30)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from albumentations) (1.4.1)
Requirement already satisfied: scikit-image>=0.11.0 in /usr/local/lib/python3.6/dist-packages (from imgaug<0.2.7,>=0.2.5->albumentations) (0.16.2)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from imgaug<0.2.7,>=0.2.5->albumentations) (1.15.0)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (3.2.2)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (1.1.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (2.5)
Requirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (7.0.0)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (2.4.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (1.3.1)
Requirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image>=0.11.0->imgaug<0.2.7,>=0.2.5->albumentations) (4.4.2)
Building wheels for collected packages: imgaug
  Building wheel for imgaug (setup.py) ... done
  Created wheel for imgaug: filename=imgaug-0.2.6-cp36-none-any.whl size=654020 sha256=2bd0fd1798120cc3cc19e0c808f3723d34695735e6d8b8883223305a26ebfe2f
  Stored in directory: /root/.cache/pip/wheels/97/ec/48/0d25896c417b715af6236dbcef8f0bed136a1a5e52972fc6d0
Successfully built imgaug
Installing collected packages: imgaug
  Found existing installation: imgaug 0.2.9
    Uninstalling imgaug-0.2.9:
      Successfully uninstalled imgaug-0.2.9
Successfully installed imgaug-0.2.6

Import the packages :

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
import segmentation_models_pytorch as smp
import albumentations as A
import os
from ipywidgets import IntProgress
from IPython.display import display
import time
import cv2
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"

Load the data and see the shape:

In [ ]:
train_data_full = np.load('data_train.npz', allow_pickle=True, mmap_mode='r')['data']
train_label_full = np.load('labels_train.npz', allow_pickle=True, mmap_mode='r')['labels']
In [ ]:
print('shape            :',train_data_full.shape)
print('min-max amplitude:',train_data_full.min(),'&',train_data_full.max())
shape            : (1006, 782, 590)
min-max amplitude: -5195.5234 & 5151.7188

If you look at the EDA by sergeytsimfer, especially the amplitude distribution, you'll see you can improve your score by applying a quantile transform to the data; for me, gain+RMS is what performed best.

You can also add it as an extra channel (so the dimensions become (vanilla+processed, size x, size y)), but for me the improvement was really small and not worth the extra computation time. A small sketch of this follows the gain demo below.

In [ ]:
from scipy.signal.windows import triang
from scipy.signal import convolve2d as conv2
def gain(data,dt,parameters):
    # Automatic gain control: scale each trace by a smoothed RMS envelope
    # (triangular window), then balance every trace to unit RMS.
    nt,nx = data.shape
    dout = np.zeros(data.shape)
    L = parameters/dt+1
    L = np.floor(L/2)
    h = triang(2*L+1)                 # triangular smoothing window
    shaped_h  = h.reshape(len(h),1)
    for k in range(nx):
        aux = data[:,k]
        e = aux**2
        shaped_e = e.reshape(len(e),1)
        rms = np.sqrt(conv2(shaped_e,shaped_h,"same"))   # windowed RMS envelope
        epsi = 1e-10*max(rms)         # small stabilizer against division by zero
        op = rms/(rms**2+epsi)        # inverse-envelope gain operator
        op = op.reshape(len(op),)
        dout[:,k] = data[:,k]*op
    for k in range(nx):
        aux = dout[:,k]
        amax = np.sqrt(sum(aux**2)/nt)
        dout[:,k] = dout[:,k]/amax    # trace balancing: unit RMS per trace
    return dout

Let's test it for one slice :

In [ ]:
test_proc=train_data_full[:,0,:]
dat_proc=gain(test_proc,3e-3,0.8)
In [ ]:
fig, ax = plt.subplots(1, 2, figsize=(10,8))
ax[0].imshow(test_proc,interpolation='none',cmap='seismic')
ax[1].imshow(dat_proc,interpolation='none',cmap='seismic')

ax[0].set_title("Vanilla")
ax[1].set_title("Gain+RMS")
plt.show()
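About the extra-channel idea mentioned above, here's a minimal sketch (just using the test_proc and dat_proc slices from the cells above) of how the input would be stacked :

In [ ]:
# stack the vanilla and the gain+RMS slice as two input channels
two_channel = np.stack([test_proc, dat_proc], axis=0)
print(two_channel.shape)   # (2, 1006, 590); the model would then need in_channels=2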

Now you can run these to process all the data, but it'll take some time to finish!

In [ ]:
print('preprocessing the data :')
f = IntProgress(min=0, max=train_data_full.shape[1])
display(f)
for i in range(0,train_data_full.shape[1]):
    #print('reprocess : ',i+1,'of',train_data_full.shape[1])
    train_data_full[:,i,:]=gain(train_data_full[:,i,:],3e-3,0.8)
    f.value += 1

If you don't want to wait, then you can use this instead.

In [ ]:
!gdown "https://drive.google.com/uc?id=1JZ5LZz_f2Vfg9BxuGGBY9LliJQAAHi_H"
train_data_full=np.load('data_train_processed.npz')['data']
Downloading...
From: https://drive.google.com/uc?id=1JZ5LZz_f2Vfg9BxuGGBY9LliJQAAHi_H
To: /content/data_train_processed.npz
1.73GB [00:13, 132MB/s] 

Then we rescale the data to the [0, 1] range :

In [ ]:
train_data_full = (train_data_full - train_data_full.min()) / (train_data_full.max() - train_data_full.min())


And let's see the label distribution :

In [ ]:
fig = plt.figure(figsize=(10,5))
labels    = ["1", "2", "3", "4", "5", "6"]
colors = ['hotpink', 'lightskyblue', 'mediumpurple','cornsilk', 'pink', 'lightgrey']
N, bins, patches = plt.hist(train_label_full.flatten(),6,density=True, edgecolor='gray', linewidth=1)
for i in range(6):
    patches[i].set_facecolor(colors[i])
    patches[i].set_label(labels[i])
plt.gca().axes.xaxis.set_ticklabels([])
plt.title('Full Train Data Dist.')
plt.show()

Now, about the test cube: it's better to pick a part that also contains label 5.

[Figure: picking a test region that still contains label 5]

Don't do this :

[Figure: picking a test region with no label 5]

As you can see, if we pick this area as the test set, label 5 won't be represented at all.

In [ ]:
from matplotlib.patches import Rectangle
fig, ax = plt.subplots(figsize=(5,10))
im = ax.imshow(train_label_full[:,-1,:],interpolation='none',cmap='Pastel1')
rect = plt.Rectangle((420,0),300,1500,facecolor='red',alpha=0.5)
ax.add_patch(rect)
plt.show()
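As a quick check (just a sketch; the column range 420:720 matches the rectangle drawn above, clipped to the slice width), we can count the label-5 pixels inside that region :

In [ ]:
region = train_label_full[:, -1, 420:720]   # the highlighted columns on the last inline
print('label 5 pixels in the red region:', np.count_nonzero(region == 5))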

Now let's split the train and test data :

In [ ]:
split_sample = 0.7
# hold out the first ~30% of the inlines (axis 1) as the test set
test_data = train_data_full[:,:train_data_full.shape[1]-int(train_data_full.shape[1]*split_sample),:]
train_data = train_data_full[:,train_data_full.shape[1]-int(train_data_full.shape[1]*split_sample):,:]
test_label = train_label_full[:,:train_label_full.shape[1]-int(train_label_full.shape[1]*split_sample),:]
train_label = train_label_full[:,train_label_full.shape[1]-int(train_label_full.shape[1]*split_sample):,:]
In [ ]:
print('train cube shape :',train_data.shape)
print('test cube shape :',test_data.shape)
train cube shape : (1006, 547, 590)
test cube shape : (1006, 235, 590)
In [ ]:
fig = plt.figure(figsize=(10,10))
plt.imshow(train_label[:,-1,:],interpolation='none',cmap='Pastel1')
plt.colorbar()
plt.show()

And see the label distribution :

In [ ]:
fig, ax = plt.subplots(1, 2, figsize=(10,5))
labels    = ["1", "2", "3", "4", "5", "6"]
colors = ['hotpink', 'lightskyblue', 'mediumpurple','cornsilk', 'pink', 'lightgrey']
N, bins, patches = ax[0].hist(train_label.flatten(),[1, 2, 3, 4, 5, 6, 7],density=True, edgecolor='gray', linewidth=1)
for i in range(6):
    patches[i].set_facecolor(colors[i])
N2, bins2, patches2 = ax[1].hist(test_label.flatten(),[1, 2, 3, 4, 5, 6, 7],density=True, edgecolor='gray', linewidth=1)
for i in range(6):
    patches2[i].set_facecolor(colors[i])
    patches2[i].set_label(labels[i])
ax[0].get_xaxis().set_visible(False)
ax[1].get_xaxis().set_visible(False)
ax[0].set_title("Training Set Dist.")
ax[1].set_title("Testing Set Dist.")
plt.legend(title="Label")
plt.show()

Yikes, that's still quite an imbalance we've got here.

In [ ]:
print('label 5 count of train data : ',np.count_nonzero(train_label.flatten() == 5))
print('label 5 count of test data : ',np.count_nonzero(test_label.flatten() == 5))
label 5 count of train data :  4190167
label 5 count of test data :  428833

Still, label 5 isn't empty if we slice the cube the other way.
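For example, a quick count (just a sketch) for a ~30% hold-out along the other horizontal axis :

In [ ]:
# label-5 voxels if we held out the first ~30% of axis 2 instead
alt_split = train_label_full.shape[2] - int(train_label_full.shape[2]*split_sample)
print('label 5 count (axis-2 split):', np.count_nonzero(train_label_full[:, :, :alt_split] == 5))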

Setting up the Model, Hyperparameters, Dataloader, etc.

Let's set up some stuff for training the models.

First, the scoring metric from Aicrowd that we'll use.

In [ ]:
from sklearn.metrics import multilabel_confusion_matrix
from sklearn.metrics import f1_score,accuracy_score  #for score metric calculation
def _prf_divide(numerator, denominator):
    """Performs division and handles divide-by-zero.
    On zero-division, the corresponding result elements are set to 0.
    """
    mask = denominator == 0.0
    denominator = denominator.copy()
    denominator[mask] = 1  # avoid infs/nans
    result = numerator / denominator

    return result

def compute_scores(y_true, y_pred, class_weights=[1, 1, 1, 1, 20, 20]):
    """
    Computes the weighted & unweighted f1_score and accuracy
    Using the standard F1-Score and class-wise accuracy computations were quite 
    slow as we were doing a lot of redundant work across all score computations,
    hence we have implemented this from the base principles.
    Please refer to the inline comments.
    """

    # Initial housekeeping tasks
    y_true = np.array(y_true).flatten()
    y_pred = np.array(y_pred).flatten()
    class_weights = np.array(class_weights)
    # print(np.max(y_true))
    # print(np.max(y_pred))
    # print(np.min(y_true))
    # print(np.min(y_pred))
    # Computing Multilabel Confusion Matrix
    #print("--------- Computing MCM... ")
    #begin_time = time.time()
    MCM = multilabel_confusion_matrix(y_true, y_pred,labels=[1,2,3,4,5,6])
    #print("MCM computation time  : ", time.time() - begin_time)
    
    """
    Gather True Positives, True Negatives, False Positives, False Negatives
    """
    tp_sum = MCM[:, 1, 1]
    tn_sum = MCM[:, 0, 0]
    fn_sum = MCM[:, 1, 0]
    fp_sum = MCM[:, 0, 1]
    
    #print("--------- Computing per class instances... ")
    per_class_instances = np.bincount(y_true) # Helps keep a track of total number of instances per class
    per_class_instances = per_class_instances[1:] # as the class names in the dataset are NOT zero-indexed
    
    assert class_weights.shape == per_class_instances.shape
    
    #print("--------- Computing precision... ")
    # precision : tp / (tp + fp)
    precision = _prf_divide(
                    tp_sum,
                    (tp_sum + fp_sum)
                )
    #print("--------- Computing recall... ")                        
    # recall : tp / (tp + fn)
    recall = _prf_divide(
                    tp_sum,
                    (tp_sum + fn_sum)
                )

    #print("--------- Computing F1 score... ")
    # f1 : 2 * (recall * precision) / (recall + precision)
    f1_score = _prf_divide(
                    2 * precision * recall,
                    precision + recall
                )
    #print("--------- Computing Accuracy... ")
    # accuracy = tp_sum / instances_per_class
    # NOTE: we are computing the accuracy independently for all the class specific subgroups
    # accuracy = _prf_divide(
    #                 tp_sum,
    #                 per_class_instances
    #             )
    # print(class_weights)
    # print(f1_score)
    f1_score_weighted = np.dot(class_weights, f1_score) / np.sum(class_weights)
    f1_score_unweighted = f1_score.mean()

    # accuracy_weighted = np.dot(class_weights, accuracy) / np.sum(class_weights)
    # accuracy_unweighted = accuracy.mean()

    return f1_score_weighted, f1_score_unweighted#, accuracy_weighted, f1_score_unweighted, accuracy_unweighted
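A quick smoke test of the metric on dummy labels (just a sketch; it assumes all six classes appear in y_t, otherwise the shape assert fires) :

In [ ]:
y_t = np.random.randint(1, 7, size=10000)        # ground truth, labels 1..6
y_p = y_t.copy()
y_p[:1000] = np.random.randint(1, 7, size=1000)  # corrupt 10% of the predictions
print(compute_scores(y_t, y_p))                  # (weighted F1, unweighted F1)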

Then the training parameters:

In [ ]:
batch_size = 8      
num_epochs = 40      
num_classes = 6       
learning_rate = 0.00085

Set up the architecture. Pretty simple using smp, right?

Also, we don't use pretrained weights here (after some experiments, pretrained weights gave a worse score).

In [ ]:
model = smp.PSPNet(
    encoder_name="efficientnet-b3",        
    encoder_weights=None,     
    in_channels=1,                  
    classes=num_classes,                    
)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print("")

Now let's feed it some random values to test that it works:

In [ ]:
test = torch.rand(1, 1, 320, 320).cuda()
out = model(test)
out.shape
Out[ ]:
torch.Size([1, 6, 320, 320])

And set up the optimizer and loss function:

In [ ]:
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(),lr = learning_rate)

Next, let's make the train dataloader:

In [ ]:
class seisdataset_train(Dataset):
    def __init__(self, x_set, y_set):
        self.x, self.y = x_set, y_set
        # one pass over the inlines (resize only) + one pass over the crosslines
        # (heavy augmentation) + a second pass over the inlines (heavy augmentation)
        self.n_sample = self.x.shape[1]+self.x.shape[2]+self.x.shape[1]
        self.aug = A.Compose([
            A.Resize(p=1, height=640, width=320, interpolation=1)
        ]) 
        self.aug2 = A.Compose([
            #A.RandomSizedCrop(p=1.0, min_max_height=(1006, 1006), height=1006, width=256, w2h_ratio=1.0, interpolation=0),
            #A.GridDistortion(p=0.3, num_steps=6, distort_limit=(-0.2, -0.05), border_mode=1),
            A.ElasticTransform(p=0.2,alpha=100, sigma=8, alpha_affine=0, border_mode=1),
            A.ShiftScaleRotate(p=0.5, shift_limit=(0.0, 0.0), scale_limit=(0.01, 0.25), rotate_limit=(-15, 15), interpolation=0, border_mode=1),
            A.RandomCrop(900, 250, p=0.3),
            A.Resize(p=1, height=640, width=320, interpolation=0)
            
        ])  

    def __len__(self):
        return self.n_sample
    
    def __getitem__(self, index):
        # labels are shifted to 0-based with -1 for CrossEntropyLoss
        if index < self.x.shape[1]:
          # inline slice, resize only
          idx = index
          batch_x = self.x[:,idx,:]
          batch_y = self.y[:,idx,:]-1
          augmented = self.aug(image=batch_x, mask=batch_y)
        elif self.x.shape[1] <= index < (self.x.shape[1]+self.x.shape[2]):
          # crossline slice, heavy augmentation
          idx = index-self.x.shape[1]
          batch_x = self.x[:,:,idx]
          batch_y = self.y[:,:,idx]-1
          augmented = self.aug2(image=batch_x, mask=batch_y)
        elif index >= self.x.shape[1]+self.x.shape[2]:
          # inline slice again, heavy augmentation
          idx = index-self.x.shape[1]-self.x.shape[2]
          batch_x = self.x[:,idx,:]
          batch_y = self.y[:,idx,:]-1
          augmented = self.aug2(image=batch_x, mask=batch_y)          
        image, mask = augmented['image'], augmented['mask']
        return image[None,:,:], mask

Wait, what are you doing with albumentations there?

We're trying to augment some new data.

Well, augmentation is... making new data from the available dataset by applying image manipulation algorithms like:

  • flip
  • rotation
  • scale
  • elastic transform
  • adding noise, etc

Just like those Indonesian & Indian TV dramas, where they've only got like 5 seconds of footage but they add multiple effects and merge them so they get a minute of unnecessary dramatic moments.


So basically we try to introduce new data so our model generalizes better. It's a way to overcome the problem of limited data.
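To see what this actually does to a slice, here's a minimal sketch reusing the same transforms as aug2 above (with p=1.0 so they always fire) :

In [ ]:
demo_aug = A.Compose([
    A.ElasticTransform(p=1.0, alpha=100, sigma=8, alpha_affine=0, border_mode=1),
    A.ShiftScaleRotate(p=1.0, shift_limit=(0.0, 0.0), scale_limit=(0.01, 0.25),
                       rotate_limit=(-15, 15), interpolation=0, border_mode=1),
])
augmented = demo_aug(image=train_data[:, 0, :], mask=train_label[:, 0, :] - 1)
fig, ax = plt.subplots(1, 2, figsize=(10, 8))
ax[0].imshow(train_data[:, 0, :], cmap='gray'); ax[0].set_title('Original')
ax[1].imshow(augmented['image'], cmap='gray'); ax[1].set_title('Augmented')
plt.show()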

Min Jun Park also described it nicely at our 1st Townhall: the advantages of data augmentation and its risks.

[Slides: Min Jun Park's 1st Townhall talk on data augmentation]

And now let's set up the test dataset :

In [ ]:
class seisdataset_test(Dataset):
    def __init__(self, x_set, y_set):
        self.x, self.y = x_set, y_set
        self.n_sample = self.x.shape[1]
        self.aug2 = A.Compose([
            A.Resize(p=1, height=640, width=320, interpolation=0)
        ])  

    def __len__(self):
        return self.n_sample
    
    def __getitem__(self, index):
        batch_x = self.x[:,index,:]
        batch_y = self.y[:,index,:]-1
        augmented = self.aug2(image=batch_x, mask=batch_y)
        image, mask = augmented['image'], augmented['mask']
        return image[None,:,:], mask

Let's test the dataset to see if it can fetch anything :

In [ ]:
train_dataset = seisdataset_train(train_data, train_label)
In [ ]:
random_id = np.random.randint(0,len(train_dataset))
data_pick = train_dataset[random_id]
img = data_pick[0][0,:,:]
lbl = data_pick[1]
fig, ax = plt.subplots(1, 2, figsize=(10,8))
ax[0].imshow(img,interpolation='none',cmap='gray')
ax[1].imshow(lbl,interpolation='none',cmap='Pastel1')
plt.show()

Let's put it into a loader:

In [ ]:
def get_data_loaders(batch_size):
    train_dataset = seisdataset_train(train_data, train_label)
    test_dataset = seisdataset_test(test_data, test_label)
    
    train_loader = DataLoader(dataset = train_dataset, batch_size = batch_size, shuffle=True, drop_last=True)
    test_loader = DataLoader(dataset = test_dataset, batch_size = batch_size, shuffle=False, drop_last=True)
    return train_loader,test_loader
In [ ]:
train_loader,test_loader=get_data_loaders(batch_size)

Now set up some variables for the training log :

In [ ]:
train_losses = []
valid_losses = []
train_F1 = []
test_F1 = []
test_F1_uw = []
train_acc = []
test_acc = []
F1_old = 0.0
F1uw_old = 0.0

Training

... and now train the model!

It's gonna take some time. Better go watch some YouTube videos.

See you in half an hour-ish!

In [ ]:
for epoch in range(1, num_epochs + 1):
    train_loss = 0.0
    valid_loss = 0.0
    F1_train = 0.0
    acc_train = 0.0
    F1_test = 0.0
    F1uw_test = 0.0
    acc_test = 0.0    
    model.train()
    for data, label in train_loader:
        data = data.to(device)
        label = label.to(device)
        optimizer.zero_grad()
        output = model(data.float())
        pred = output.data.max(1)[1].cpu().numpy()[:, :, :]
        loss = criterion(output, label.long())
        acc = accuracy_score(label.cpu().numpy().flatten()+1, pred.flatten()+1)
        f1s, f1uw = compute_scores(label.cpu().numpy().flatten()+1, pred.flatten()+1)   
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * data.size(0)
        F1_train += f1s
        acc_train += acc
    model.eval()
    # validation pass (wrapping it in torch.no_grad() would also save memory)
    for data, label in test_loader:        
        data = data.to(device)
        label = label.to(device)
        output = model(data.float())
        pred = output.data.max(1)[1].cpu().numpy()[:, :, :]
        loss = criterion(output, label.long())
        acc = accuracy_score(label.cpu().numpy().flatten()+1, pred.flatten()+1)
        f1s, f1uw = compute_scores(label.cpu().numpy().flatten()+1, pred.flatten()+1)  
        valid_loss += loss.item() * data.size(0)
        F1_test += f1s
        F1uw_test += f1uw
        acc_test += acc

    train_loss = train_loss/len(train_loader.sampler)
    F1_train = F1_train/len(train_loader.sampler)*batch_size
    acc_train = acc_train/len(train_loader.sampler)*batch_size    
    valid_loss = valid_loss/len(test_loader.sampler)*batch_size   # note: the extra *batch_size inflates Val. Loss relative to Train Loss
    F1_test = F1_test/len(test_loader.sampler)*batch_size
    F1uw_test = F1uw_test /len(test_loader.sampler)*batch_size
    acc_test = acc_test/len(test_loader.sampler)*batch_size
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)
    train_F1.append(F1_train)
    test_F1.append(F1_test)
    test_F1_uw.append(F1uw_test)
    train_acc.append(acc_train)
    test_acc.append(acc_test)      

    # checkpoint whenever the validation F1 (weighted or unweighted) improves
    if F1_old < F1_test:
      F1_old = F1_test
      torch.save(model.state_dict(), 'modelbestf1_weighted_run1.ckpt')
    if F1uw_old < F1uw_test:
      F1uw_old = F1uw_test
      torch.save(model.state_dict(), 'modelbestf1_unweighted_run1.ckpt')  

    # print
    print('Epoch: {} \tTrain Loss: {:.3f} \tVal. Loss: {:.3f} \tF1_train: {:.3f} \tF1_test: {:.3f} \tF1u_test: {:.3f}'.format(
        epoch, train_loss, valid_loss, F1_train, F1_test, F1uw_test))
Epoch: 1 	Train Loss: 0.492 	Val. Loss: 23.184 	F1_train: 0.533 	F1_test: 0.043 	F1u_test: 0.054
Epoch: 2 	Train Loss: 0.207 	Val. Loss: 48.560 	F1_train: 0.752 	F1_test: 0.076 	F1u_test: 0.184
Epoch: 3 	Train Loss: 0.161 	Val. Loss: 9.070 	F1_train: 0.804 	F1_test: 0.328 	F1u_test: 0.447
Epoch: 4 	Train Loss: 0.132 	Val. Loss: 6.158 	F1_train: 0.835 	F1_test: 0.337 	F1u_test: 0.500
Epoch: 5 	Train Loss: 0.124 	Val. Loss: 2.940 	F1_train: 0.847 	F1_test: 0.446 	F1u_test: 0.651
Epoch: 6 	Train Loss: 0.107 	Val. Loss: 2.990 	F1_train: 0.863 	F1_test: 0.490 	F1u_test: 0.676
Epoch: 7 	Train Loss: 0.102 	Val. Loss: 2.228 	F1_train: 0.868 	F1_test: 0.575 	F1u_test: 0.740
Epoch: 8 	Train Loss: 0.097 	Val. Loss: 2.047 	F1_train: 0.876 	F1_test: 0.580 	F1u_test: 0.753
Epoch: 9 	Train Loss: 0.090 	Val. Loss: 1.442 	F1_train: 0.881 	F1_test: 0.632 	F1u_test: 0.803
Epoch: 10 	Train Loss: 0.085 	Val. Loss: 1.792 	F1_train: 0.885 	F1_test: 0.613 	F1u_test: 0.781
Epoch: 11 	Train Loss: 0.084 	Val. Loss: 1.865 	F1_train: 0.886 	F1_test: 0.627 	F1u_test: 0.793
Epoch: 12 	Train Loss: 0.081 	Val. Loss: 1.443 	F1_train: 0.891 	F1_test: 0.578 	F1u_test: 0.780
Epoch: 13 	Train Loss: 0.076 	Val. Loss: 1.535 	F1_train: 0.896 	F1_test: 0.643 	F1u_test: 0.802
Epoch: 14 	Train Loss: 0.077 	Val. Loss: 1.391 	F1_train: 0.895 	F1_test: 0.612 	F1u_test: 0.796
Epoch: 15 	Train Loss: 0.072 	Val. Loss: 1.364 	F1_train: 0.899 	F1_test: 0.628 	F1u_test: 0.805
Epoch: 16 	Train Loss: 0.073 	Val. Loss: 1.916 	F1_train: 0.899 	F1_test: 0.471 	F1u_test: 0.696
Epoch: 17 	Train Loss: 0.074 	Val. Loss: 1.552 	F1_train: 0.899 	F1_test: 0.643 	F1u_test: 0.808
Epoch: 18 	Train Loss: 0.068 	Val. Loss: 1.582 	F1_train: 0.905 	F1_test: 0.532 	F1u_test: 0.759
Epoch: 19 	Train Loss: 0.066 	Val. Loss: 1.924 	F1_train: 0.905 	F1_test: 0.581 	F1u_test: 0.782
Epoch: 20 	Train Loss: 0.066 	Val. Loss: 1.845 	F1_train: 0.906 	F1_test: 0.616 	F1u_test: 0.796
Epoch: 21 	Train Loss: 0.066 	Val. Loss: 1.764 	F1_train: 0.908 	F1_test: 0.579 	F1u_test: 0.781
Epoch: 22 	Train Loss: 0.063 	Val. Loss: 1.388 	F1_train: 0.910 	F1_test: 0.554 	F1u_test: 0.782
Epoch: 23 	Train Loss: 0.065 	Val. Loss: 1.554 	F1_train: 0.908 	F1_test: 0.594 	F1u_test: 0.788
Epoch: 24 	Train Loss: 0.061 	Val. Loss: 2.568 	F1_train: 0.911 	F1_test: 0.590 	F1u_test: 0.757
Epoch: 25 	Train Loss: 0.064 	Val. Loss: 1.995 	F1_train: 0.910 	F1_test: 0.554 	F1u_test: 0.770
Epoch: 26 	Train Loss: 0.064 	Val. Loss: 1.466 	F1_train: 0.910 	F1_test: 0.632 	F1u_test: 0.809
Epoch: 27 	Train Loss: 0.059 	Val. Loss: 1.451 	F1_train: 0.915 	F1_test: 0.601 	F1u_test: 0.798
Epoch: 28 	Train Loss: 0.060 	Val. Loss: 1.701 	F1_train: 0.914 	F1_test: 0.595 	F1u_test: 0.788
Epoch: 29 	Train Loss: 0.058 	Val. Loss: 1.380 	F1_train: 0.914 	F1_test: 0.636 	F1u_test: 0.813
Epoch: 30 	Train Loss: 0.057 	Val. Loss: 1.467 	F1_train: 0.916 	F1_test: 0.589 	F1u_test: 0.795
Epoch: 31 	Train Loss: 0.058 	Val. Loss: 2.117 	F1_train: 0.916 	F1_test: 0.584 	F1u_test: 0.767
Epoch: 32 	Train Loss: 0.059 	Val. Loss: 1.892 	F1_train: 0.914 	F1_test: 0.639 	F1u_test: 0.808
Epoch: 33 	Train Loss: 0.057 	Val. Loss: 1.437 	F1_train: 0.917 	F1_test: 0.623 	F1u_test: 0.806
Epoch: 34 	Train Loss: 0.055 	Val. Loss: 1.810 	F1_train: 0.920 	F1_test: 0.608 	F1u_test: 0.780
Epoch: 35 	Train Loss: 0.053 	Val. Loss: 1.745 	F1_train: 0.921 	F1_test: 0.617 	F1u_test: 0.804
Epoch: 36 	Train Loss: 0.055 	Val. Loss: 1.607 	F1_train: 0.919 	F1_test: 0.622 	F1u_test: 0.805
Epoch: 37 	Train Loss: 0.056 	Val. Loss: 2.632 	F1_train: 0.919 	F1_test: 0.533 	F1u_test: 0.749
Epoch: 38 	Train Loss: 0.054 	Val. Loss: 1.458 	F1_train: 0.919 	F1_test: 0.610 	F1u_test: 0.803
Epoch: 39 	Train Loss: 0.054 	Val. Loss: 2.133 	F1_train: 0.920 	F1_test: 0.596 	F1u_test: 0.792
Epoch: 40 	Train Loss: 0.053 	Val. Loss: 1.449 	F1_train: 0.921 	F1_test: 0.619 	F1u_test: 0.810

Performance

Let's see the training performance :

In [ ]:
fig2, ax = plt.subplots(1,3,  figsize=(15,5))
ax[0].plot(train_losses)
ax[1].plot(train_acc)
ax[2].plot(train_F1)
ax[0].plot(valid_losses,'-r')
ax[1].plot(test_acc,'-r')
ax[2].plot(test_F1_uw,'--r')
ax[2].plot(test_F1,'-r')
ax[0].set_title('Loss')
ax[1].set_title('Accuracy')
ax[2].set_title('F1')
ax[2].legend(("Train", "Test"))
plt.show()

Now let's see our best score.

In [ ]:
print("Best F1            :",F1_old)
print("Best F1 unweighted :",F1uw_old)
Best F1            : 0.6433850880601024
Best F1 unweighted : 0.81251770782969

So, that's it.

It's certainly not the best; if it were, I'd have won some money by now. (Still won a drone, though. Thanks Aicrowd and SEAM AI!)

But I hope you learned some new stuff and at least won't get errors when submitting to this challenge.

If you guys have any questions, just comment on my submission here. Also, please hit that love button! (Me want cool VR gear.)

Thanks and see ya!




Credits

Image and glitter text (pretty important) from :

Music :

  • Benjamin Tissot - Jazzy Frenchie www.bensound.com
  • Benjamin Tissot - The Elevator Bossa www.bensound.com

Some packages :

Making a Submission

Download the test data:

In [ ]:
!gdown "https://drive.google.com/uc?id=1-GZiUbyzmTK-nR9AZC9t1Q8ZF2iC3sH1"
Downloading...
From: https://drive.google.com/uc?id=1-GZiUbyzmTK-nR9AZC9t1Q8ZF2iC3sH1
To: /content/data_test_2.npz
1.04GB [00:10, 103MB/s]

Read it :

In [ ]:
test2_data = np.load('data_test_2.npz')['data']
In [ ]:
test2_data.shape
Out[ ]:
(1006, 334, 841)

Preprocess the data :

In [ ]:
from ipywidgets import IntProgress
from IPython.display import display
import time
print('preprocessing the data :')
f = IntProgress(min=0, max=test2_data.shape[1])
display(f)
for i in range(0,test2_data.shape[1]):
    #print('reprocess : ',i+1,'of',train_data_full.shape[1])
    test2_data[:,i,:]=gain(test2_data[:,i,:],3e-3,0.8)
    f.value += 1
preprocessing the data :
In [ ]:
test2_data = (test2_data - test2_data.min()) / (test2_data.max() - test2_data.min())

Load the best-scoring model :

In [ ]:
model.load_state_dict(torch.load('modelbestf1_unweighted_run1.ckpt'))
Out[ ]:
<All keys matched successfully>

Predict it :

In [ ]:
pred=np.zeros([1006,334,841],dtype='int32')
model.eval()
for i in range(334):
    print('predicting...',i+1,'of','334')
    tempcek=test2_data[:,i,:]

    # downscale the slice to the model's input size
    temp=cv2.resize(tempcek, (320,640), interpolation=cv2.INTER_NEAREST)
    
    temp=temp[None,None,:,:]
    score = model(torch.from_numpy(temp).float().cuda())
    temppred = score.max(1)[1].cpu().numpy()[0, :, :]+1    # back to 1-based labels
    
    # upscale the prediction back to the original slice size
    temppred=cv2.resize(temppred, (841,1006), interpolation=cv2.INTER_NEAREST)

    pred[:,i,:]=temppred
predicting... 1 of 334
predicting... 2 of 334
...
predicting... 334 of 334

See a sample result :

In [ ]:
plt.imshow(pred[:,1,:],cmap='Pastel1')
plt.title('Result test-2 slice#1')
plt.colorbar()
plt.show()
In [ ]:
np.savez_compressed('submission_pspnet_run1.npz',prediction=pred)
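Before uploading, a quick sanity check (just a sketch) that the file round-trips with the expected shape and label range :

In [ ]:
sub = np.load('submission_pspnet_run1.npz')['prediction']
print(sub.shape, sub.min(), sub.max())   # expect (1006, 334, 841) with labels in 1..6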

(Optional) A small helper to reset the model weights, in case you want to rerun training from scratch:

In [ ]:
def weight_reset(m):
    # use as model.apply(weight_reset) to re-initialize every layer that supports it
    reset_parameters = getattr(m, "reset_parameters", None)
    if callable(reset_parameters):
        m.reset_parameters()

Worth a Try

What we did above is just a simple tile predict. You can use pytorch-toolbelt, just like in ivan_romanov's explainer.

or..

use https://github.com/the-lay/tiler, which is based on this interesting paper :

Introducing Hann windows for reducing edge-effects in patch-based image segmentation, Pielawski and Wählby, March 2020
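If you'd rather not pull in another dependency, here's a minimal numpy sketch of the idea (hypothetical helpers, not the tiler API): predict overlapping tiles, weight each tile's class scores with a 2-D Hann window, and normalize by the accumulated weights. predict_fn would wrap the model forward pass plus a softmax.

In [ ]:
# Hypothetical sketch of Hann-weighted tiled prediction (not the tiler API).
# Assumes the tile grid covers the whole image (H, W compatible with stride).
def hann2d(h, w):
    return np.hanning(h)[:, None] * np.hanning(w)[None, :]

def predict_tiled(image, predict_fn, tile=320, stride=160, n_classes=6):
    H, W = image.shape
    acc = np.zeros((n_classes, H, W))
    weight = np.zeros((H, W))
    win = hann2d(tile, tile)
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            probs = predict_fn(image[y:y+tile, x:x+tile])  # (n_classes, tile, tile)
            acc[:, y:y+tile, x:x+tile] += probs * win
            weight[y:y+tile, x:x+tile] += win
    return acc / np.maximum(weight, 1e-8)  # blended per-class scores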


Comments

leocd
About 3 years ago

If you have anything to ask, just hit me up or comment here!

Also, the video can only be played in Colab.

