Seismic Facies Identification Challenge

[Explainer] Detectron2 & COCO Dataset 🔥 • Web Application & Visualizations • End-to-End Baseline & Tensorflow

Shubhamai

So, it's me, Shubhamai, and I have come up with these 3 things -

COCO Dataset & using Detectron2, MMDetection

YES! I have converted this dataset into the COCO format, on which we can train Mask R-CNN using Detectron2.

There we go boys - Colab Link
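If you are curious what the conversion involves, here is a minimal, simplified sketch of building one COCO-style annotation record from a binary class mask with NumPy. The actual notebook uses imantics to extract polygon segmentations; `mask_to_coco_ann` below is a hypothetical helper for illustration only, covering just the bbox/area fields:

```python
import numpy as np

def mask_to_coco_ann(binary_mask, image_id, ann_id, category_id):
    """Build a COCO-style annotation dict (bbox + area) from a binary mask."""
    ys, xs = np.nonzero(binary_mask)
    x0, y0 = int(xs.min()), int(ys.min())
    w = int(xs.max()) - x0 + 1
    h = int(ys.max()) - y0 + 1
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [x0, y0, w, h],          # COCO bboxes are [x, y, width, height]
        "area": int(binary_mask.sum()),
        "iscrowd": 0,
    }

# One annotation per (image slice, facies class), where the class mask
# would be something like: Y_slice == class_id
```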

More things will be added so like this post RIGHT NOW :smile:

Web Application & Visualisation

https://seismic-facies-identification.herokuapp.com/

This time, I found that a great preprocessing pipeline can help the model find accurate features and increase overall accuracy. But it isn't as easy as it looks —

So I made a Web Application that lets you play/experiment with many of the image preprocessing functions/methods, change their parameters, or write custom image preprocessing functions of your own.

And it also contains all the visualizations from the colab notebook.

I hope that it will help you in making the perfect preprocessing pipelines :grin:.

End-to-End Baseline & Tensorflow

https://colab.research.google.com/drive/1t1hF_Vs4xIyLGMw_B9l1G6qzLBxLB5eG?usp=sharing

I have made a complete colab notebook, from Data Exploration to Submitting Predictions. Here is a glimpse of the image visualization section!

And this 3D Plot!

Table of Contents -

  1. Setting our Workspace :briefcase:
  2. Data Exploration :face_with_monocle:
  3. Image Preprocessing Techniques :broom:
  4. Creating our Dataset :hammer:
  5. Creating our Model :factory:
  6. Training the Model :steam_locomotive:
  7. Evaluating the model :test_tube:
  8. Testing on test Data :100:
  9. Generate More Data + Some tips & tricks :bulb:

The main libraries covered in this notebook are —

  • Tensorflow 2.0 & Keras
  • Plotly
  • cv2
    and much more…

The model that I am using is UNet, pretty much the standard in image segmentation. More is in the colab notebook!
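For reference, the overall shape of a U-Net can be sketched in a few lines of Keras. This toy version (one downsampling level, one skip connection, a 6-class softmax head) is only an illustration of the architecture, not the notebook's actual model:

```python
import tensorflow as tf

def tiny_unet(input_shape=(128, 128, 1), n_classes=6):
    """A minimal U-Net-style sketch: encoder, bottleneck, decoder with a skip connection."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: convolve, then downsample
    c1 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = tf.keras.layers.MaxPooling2D()(c1)

    # Bottleneck
    c2 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

    # Decoder: upsample, then concatenate the skip connection from the encoder
    u1 = tf.keras.layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c2)
    m1 = tf.keras.layers.Concatenate()([u1, c1])

    # Per-pixel class probabilities
    outputs = tf.keras.layers.Conv2D(n_classes, 1, activation="softmax")(m1)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
```

A real U-Net stacks several of these encoder/decoder levels; the skip connections are what let the decoder recover fine spatial detail.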

I hope the colab notebook will help you get started in this competition or learn something new :slightly_smiling_face:. If the notebook did help you, make sure to like the post. lol.

https://colab.research.google.com/drive/1t1hF_Vs4xIyLGMw_B9l1G6qzLBxLB5eG?usp=sharing

:red_circle: Please like the topic if this helps in any way possible :slight_smile: . I really appreciate that :smiley:

🌎 Facies Identification Challenge: 3D image interpretation by Machine Learning

In this challenge we need to identify facies in a 3D seismic image using Deep Learning, with various tools like tensorflow, keras, numpy, pandas, matplotlib, plotly and much much more..

Problem

Segmenting the 3D seismic image so that each pixel is classified into one of 6 labels, based on patterns in the image.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#introduction

Dataset

We have two 3D arrays ( features X and labels Y ), both of shape 1006 × 782 × 590, with the axes corresponding to Z, X, Y.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge/dataset_files

We can say that we have a total of 1006 + 782 + 590 = 2,378 training images (2D cross-sections along the three axes) with their corresponding labels, and the test volume is sliced the same way to produce the images we will predict labels for.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#dataset
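The 2,378 figure comes from slicing the volume into 2D cross-sections along each of the three axes (1006 + 782 + 590). A rough sketch of that slicing, shown on a small stand-in volume (`volume_to_slices` is an illustrative helper, not the notebook's code):

```python
import numpy as np

def volume_to_slices(vol):
    """Split a Z × X × Y volume into 2D cross-sections along all three axes."""
    z, x, y = vol.shape
    return ([vol[k, :, :] for k in range(z)] +   # Z slices, each x × y
            [vol[:, i, :] for i in range(x)] +   # X slices, each z × y
            [vol[:, :, j] for j in range(y)])    # Y slices, each z × x

# On the real 1006 × 782 × 590 data this yields 2378 images
vol = np.zeros((10, 7, 5))
slices = volume_to_slices(vol)
print(len(slices))  # 22
```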

Evaluation

The evaluation metrics are the F1 score and accuracy.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#evaluation-criteria
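As a rough sketch of what these metrics compute — plain pixel accuracy and a macro-averaged per-class F1 in NumPy (the competition's exact averaging/weighting may differ, so treat this as an approximation):

```python
import numpy as np

def accuracy_and_macro_f1(y_true, y_pred, n_classes=6):
    """Pixel accuracy and macro-averaged per-class F1 over flattened label maps."""
    y_true, y_pred = np.ravel(y_true), np.ravel(y_pred)
    acc = float(np.mean(y_true == y_pred))
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)  # F1 = 2TP / (2TP + FP + FN)
    return acc, float(np.mean(f1s))
```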

Table of Contents

  1. Setting our Workspace 💼
    • Downloading our Dataset
    • Importing Necessary Libraries
  2. Data Exploration 🧐
    • Reading our Dataset
    • Image Visualisations
  3. Image Preprocessing Techniques 🧹
    • Image preprocessing
  4. Creating our Dataset 🔨
    • Loading data into memory
    • Making 2D Images
  5. Creating our Model 🏭
    • Creating Unet Model
    • Setting up hyperparameters
  6. Training the Model 🚂
    • Setting up Tensorboard
    • Start Training!
  7. Evaluating the model 🧪
    • Evaluating our Model
  8. Testing on test Data 💯
  9. Generate More Data + Some tips & tricks 💡

Setting our Workspace 💼

In this section we are going to download our dataset, install some libraries, and then import everything to get ready!

Downloading our Dataset

In [ ]:
# Downloading training data ( Seismic Images | X )
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_train.npz

# Downloading training data ( Labels | Y )
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/labels_train.npz

# Downloading Testing Dataset 
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_test_1.npz
2020-10-17 12:32:17 (18.5 MB/s) - ‘data_train.npz’ saved [1715555445/1715555445]
2020-10-17 12:33:08 (5.49 MB/s) - ‘labels_train.npz’ saved [7160425/7160425]
2020-10-17 12:33:56 (17.9 MB/s) - ‘data_test_1.npz’ saved [731382806/731382806]

Importing Necessary Libraries

In [ ]:
!pip install git+https://github.com/tensorflow/examples.git
!pip install git+https://github.com/karolzak/keras-unet

# install dependencies (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5 torchvision==0.6 -f https://download.pytorch.org/whl/cu101/torch_stable.html 
!pip install cython pyyaml==5.1
!pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!gcc --version

# install detectron2:
!pip install detectron2==0.1.2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html

!pip install imantics
Successfully installed tensorflow-examples-35f4ae1e805c97aa63da565f61e4b81f66da1422-
Successfully installed keras-unet-0.1.2
Successfully installed torch-1.5.0+cu101 torchvision-0.6.0+cu101
Successfully installed pyyaml-5.1
Successfully installed pycocotools-2.0
1.5.0+cu101 True
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Successfully installed detectron2-0.1.2+cu101 fvcore-0.1.2.post20201016 mock-4.0.2 portalocker-2.0.0 yacs-0.1.8
Successfully installed imantics-0.1.12 xmljson-0.2.1
In [ ]:
# For data preprocessing & manipulation
import numpy as np
import pandas as pd

# For data visualisations & graphs
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots

# Utilities
import os
import random
import datetime
from tqdm.notebook import tqdm
from IPython.display import HTML

# For Deep Learning
import tensorflow as tf
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
import tensorflow_addons as tfa

# For Image Preprocessing
import cv2
from skimage import measure

# Detectron2 & COCO utilities
import detectron2
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor, DefaultTrainer
from detectron2.config import get_cfg
from detectron2.utils.logger import setup_logger
from detectron2.utils.visualizer import Visualizer
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.structures import BoxMode
from pycocotools import mask
from imantics import Polygons, Mask

setup_logger()

# Setting a bigger figure size
plt.rcParams["figure.figsize"] = (20, 15)

Data Exploration 🧐

In this section we are going to explore our dataset: first load it, look at some of the arrays and categories, and then move on to image visualisations.

Reading Our Dataset

In [ ]:
# Reading our Training dataset ( Seismic Images | X )
data = np.load("/content/data_train.npz", 
               allow_pickle=True, mmap_mode = 'r')

# Reading our Training dataset ( Labels | Y )
labels = np.load("/content/labels_train.npz", 
                 allow_pickle=True, mmap_mode = 'r')

# Picking the actual data
X = data['data']
Y = labels['labels']
In [ ]:
# Dimensions of features & labels 

X.shape, Y.shape
In [ ]:
# Showing the data

X[:, 6, :], Y[:, 6, :]

Here we are making a 2D image array: we pick the 6th index along the X axis and look at the Z and Y axis values!

Also, it looks like we have some negative values in X, but Y looks good!
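Those negative amplitudes matter later, because many cv2 functions expect non-negative 8-bit input. One common way to handle this (a sketch, not the notebook's pipeline; `to_uint8` is a hypothetical helper) is min-max scaling into [0, 255]:

```python
import numpy as np

def to_uint8(img):
    """Min-max scale a float image (possibly negative) into the uint8 range [0, 255]."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min())  # assumes img is not constant
    return (img * 255).round().astype(np.uint8)
```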

In [ ]:
np.unique(Y)

There are 6 different unique values in the labels; as said before, each pixel can be classified into one of 6 labels.
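It can also be worth checking how balanced those 6 classes are (handy later if you want to weight the loss). A small sketch of counting label frequencies with `np.unique` (`class_distribution` is an illustrative helper):

```python
import numpy as np

def class_distribution(labels):
    """Return {class_value: fraction_of_pixels} for a label array."""
    vals, counts = np.unique(labels, return_counts=True)
    return dict(zip(vals.tolist(), (counts / counts.sum()).tolist()))

# e.g. class_distribution(Y) on the full label volume
```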

Image Visualisations

In [ ]:
# Making a subplot with 1 row and 2 columns
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"))

# Visualising a section of the 3D array
fig.add_trace(go.Heatmap(z=X[:, :, 70][:300, :300]), 1, 1)

fig.add_trace(go.Heatmap(z=Y[:, :, 70][:300, :300]), 1, 2)

fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 1 row and 2 columns
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"), specs=[[{"type": "Surface"}, {"type": "Surface"}]])

# Making a 3D Surface graph with image and corresponding label
fig.add_trace(go.Surface(z=X[:,75, :][:300, :300]), 1, 1)
fig.add_trace(go.Surface(z=Y[:,75, :][:300, :300]), 1, 2)

fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label in 3D!")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 1 row and 2 columns
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"))

# Making a contour graph
fig.add_trace(go.Contour(
        z=X[:,34, :][:300, :300]), 1, 1)

fig.add_trace(go.Contour(
        z=Y[:,34, :][:300, :300]
    ), 1, 2)


fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label with contours")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 2 rows and 2 columns
fig = make_subplots(2, 2, subplot_titles=("Image", "Label", "Label Histogram"))

# Making a contour graph
fig.add_trace(go.Contour(
        z=X[:,34, :][:300, :300], contours_coloring='lines',
        line_width=2,), 1, 1)

# Showing the label ( also the contour )
fig.add_trace(go.Contour(
        z=Y[:,34, :][:300, :300]
    ), 1, 2)

# Showing histogram for the label column
fig.add_trace(go.Histogram(x=Y[:,34, :][:300, :300].ravel()), 2, 1)


fig.update_layout(height=800, width=1100, title_text="Seismic Image & Label with contours ( lines only )")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 2 rows and 1 column
fig = make_subplots(2, 1, subplot_titles=("Image", "Label"))

# Making a contour graph
fig.add_trace(
    go.Contour(
        z=X[:,:, 56][:200, :200]
    ), 1, 1)

fig.add_trace(go.Contour(
        z=Y[:,:, 56][:200, :200]
    ), 2, 1)

fig.update_layout(height=1000, width=1100, title_text="Seismic Image & Label with contours ( a closer look )")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.

Image Preprocessing Techniqes 🧹

In this section we are going to take a look at some image preprocessing techniques, to see how we can improve the features so that our model gives better accuracy!

In [ ]:
# Reading a sample seismic image with label
img = X[:,:, 56]
label = Y[:, :, 56]

plt.imshow(img, cmap='gray')
plt.show()
plt.imshow(label)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fa9b9630128>
In [ ]:
# Image Thresholding
ret,thresh1 = cv2.threshold(img,0,255,cv2.THRESH_TOZERO)
plt.imshow(thresh1, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be3d7de10>
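Another preprocessing idea worth experimenting with is histogram equalization, which spreads the intensity distribution across the full range; cv2 offers this as `cv2.equalizeHist` for uint8 images. Here is a NumPy sketch of the same idea so the logic is visible (an illustration, not the cv2 implementation):

```python
import numpy as np

def equalize_hist(img_u8):
    """Histogram-equalize a uint8 image: spread the intensity CDF over [0, 255]."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF value
    # assumes the image is not a single flat value (cdf[-1] > cdf_min)
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img_u8]
```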