Clouds Removal

Cloud Removal using OpenCV

Removal of the white foreground using the HSV range of the color white and OpenCV's fast denoising techniques.

Random Submission for Clouds Removal

Note: Create a copy of the notebook and use the copy for submission. Go to File > Save a Copy in Drive to create a new copy.

Setting up Environment

Downloading Dataset

We will first need to install the Python library by AIcrowd that allows us to download the dataset by simply entering the API key.

In [ ]:
!pip install aicrowd-cli

%load_ext aicrowd.magic
Collecting aicrowd-cli
  Downloading aicrowd_cli-0.1.8-py3-none-any.whl (43 kB)
     |████████████████████████████████| 43 kB 790 kB/s 
Collecting tqdm<5,>=4.56.0
  Downloading tqdm-4.61.2-py2.py3-none-any.whl (76 kB)
     |████████████████████████████████| 76 kB 2.7 MB/s 
Collecting rich<11,>=10.0.0
  Downloading rich-10.6.0-py3-none-any.whl (208 kB)
     |████████████████████████████████| 208 kB 8.8 MB/s 
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Collecting GitPython==3.1.18
  Downloading GitPython-3.1.18-py3-none-any.whl (170 kB)
     |████████████████████████████████| 170 kB 10.4 MB/s 
Collecting requests<3,>=2.25.1
  Downloading requests-2.26.0-py2.py3-none-any.whl (62 kB)
     |████████████████████████████████| 62 kB 713 kB/s 
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Collecting requests-toolbelt<1,>=0.9.1
  Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
     |████████████████████████████████| 54 kB 2.1 MB/s 
Collecting gitdb<5,>=4.0.1
  Downloading gitdb-4.0.7-py3-none-any.whl (63 kB)
     |████████████████████████████████| 63 kB 1.4 MB/s 
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from GitPython==3.1.18->aicrowd-cli)
Collecting smmap<5,>=3.0.1
  Downloading smmap-4.0.0-py2.py3-none-any.whl (24 kB)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.0.2)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2021.5.30)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Collecting commonmark<0.10.0,>=0.9.0
  Downloading commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
     |████████████████████████████████| 51 kB 4.7 MB/s 
Collecting colorama<0.5.0,>=0.4.0
  Downloading colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Installing collected packages: smmap, requests, gitdb, commonmark, colorama, tqdm, rich, requests-toolbelt, GitPython, aicrowd-cli
  Attempting uninstall: requests
    Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
  Attempting uninstall: tqdm
    Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.26.0 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
Successfully installed GitPython-3.1.18 aicrowd-cli-0.1.8 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 requests-2.26.0 requests-toolbelt-0.9.1 rich-10.6.0 smmap-4.0.0 tqdm-4.61.2
In [ ]:
%aicrowd login
Please login here: https://api.aicrowd.com/auth/vanSC8We2GovNv_dkfqh2GtdnuhWhborjIF0hyBE-HQ
API Key valid
Saved API Key successfully!
In [ ]:
# Downloading the Dataset
!rm -rf data
!mkdir data
!aicrowd dataset download -c clouds-removal "*Partial*" -o data
test.zip: 100% 601M/601M [00:36<00:00, 16.6MB/s]
train.zip: 100% 1.62G/1.62G [01:54<00:00, 14.1MB/s]
In [ ]:
# Unzipping the dataset
!unzip data/train.zip -d data/train >> /dev/null
!unzip data/test.zip -d data/test >> /dev/null

Importing Libraries

In [ ]:
# Importing Libraries
import os
from natsort import natsorted
from glob import glob
import cv2
from tqdm.notebook import tqdm

Generate Random Submission

In this section we will be generating a random submission. We will read all of the files from the testing directory and save the same video in the clear directory for submission.
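As a minimal sketch of the "save the same video" idea (the `passthrough_submission` helper and the `clear_{idx}.mp4` naming are assumptions, mirroring the function further down), the simplest possible submission just copies each test video unchanged:

```python
import os
import shutil
from glob import glob

def passthrough_submission(src_pattern, dst_dir="clear"):
    """Copy every matching test video unchanged into dst_dir.

    A sketch of the simplest possible submission: the "cleared"
    video is just the original, untouched.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for idx, path in enumerate(sorted(glob(src_pattern))):
        # Name the outputs clear_0.mp4, clear_1.mp4, ...
        shutil.copy(path, os.path.join(dst_dir, f"clear_{idx}.mp4"))
```

This is only a baseline to check the submission pipeline end to end; the function below replaces the plain copy with actual frame processing.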

In [ ]:
# Creating a clear directory
!rm -rf clear
!mkdir clear
In [ ]:
import matplotlib.pyplot as plt
In [ ]:
# Submission function: mask the white clouds in HSV and write the result
import numpy as np

def random_submission(data_directory):

  # List of all videos
  video_files = natsorted(glob(data_directory))

  last_frame = None

  # Going through each video
  for idx, img_file in enumerate(tqdm(video_files)):
    # Creating a new video file for the output
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(os.path.join("clear", f"clear_{idx}.mp4"), fourcc, 24.0, (512, 512))
    # Reading the video
    img_video = cv2.VideoCapture(img_file)
    # Going through each frame
    while True:
      # Reading the frame
      ret, frame = img_video.read()
      if not ret:
        break
      last_frame = frame

      # Convert BGR to grayscale:
      grayscaleImage = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      # Convert the BGR image to HSV:
      hsvImage = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

      # HSV range for white (low saturation, high value):
      lowerValues = np.array([0, 0, 231])
      upperValues = np.array([180, 18, 255])

      # Get a binary mask of the white clouds:
      cloudMask = cv2.inRange(hsvImage, lowerValues, upperValues)

      # Use a little bit of morphology to clean the mask:
      kernelSize = 3     # structuring element size
      opIterations = 1   # morph operation iterations
      morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
      # Perform closing:
      cloudMask = cv2.morphologyEx(cloudMask, cv2.MORPH_CLOSE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)

      # Add the white mask to the grayscale image:
      colorMask = cv2.add(grayscaleImage, cloudMask)
      _, binaryImage = cv2.threshold(colorMask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      _, im_bw = cv2.threshold(binaryImage, 210, 230, cv2.THRESH_BINARY)
      kernel = np.ones((1, 1), np.uint8)
      imgfinal = cv2.dilate(im_bw, kernel=kernel, iterations=1)

      # Adding the processed frame to the video writer (the writer expects BGR):
      out.write(cv2.cvtColor(imgfinal, cv2.COLOR_GRAY2BGR))

    img_video.release()
    out.release()

  return last_frame
In [ ]:
# Running the function
frame = random_submission("data/test/cloud*")

Submitting Results 📄

Uploading the Results

In [ ]:
!aicrowd notebook submit -c clouds-removal -a clear --no-verify
Mounting Google Drive 💾
Your Google Drive will be mounted to access the colab notebook
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.activity.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fexperimentsandconfigs%20https%3a%2f%2fwww.googleapis.com%2fauth%2fphotos.native&response_type=code

Enter your authorization code:
Mounted at /content/drive
Using notebook: /content/drive/MyDrive/Colab Notebooks/Cloud Removal for submission...
Scrubbing API keys from the notebook...
Collecting notebook...
submission.zip ━━━━━━━━━━━━━━━━━━━━━━ 100.0% • 56.3/56.3 MB • 2.3 MB/s • 0:00:00

Successfully submitted!

Important links

This submission : https://www.aicrowd.com/challenges/ai-blitz-x/problems/clouds-removal/submissions/150788
All submissions : https://www.aicrowd.com/challenges/ai-blitz-x/problems/clouds-removal/submissions?my_submissions=true
Leaderboard     : https://www.aicrowd.com/challenges/ai-blitz-x/problems/clouds-removal/leaderboards
Discussion forum: https://discourse.aicrowd.com/c/ai-blitz-x
Challenge page  : https://www.aicrowd.com/challenges/ai-blitz-x/problems/clouds-removal

Don't be shy to ask questions related to any errors you are getting, or doubts about any part of this notebook, in the discussion forum or in the AIcrowd Discord server; the AIcrew will be happy to help you :)

Also, want to give us your valuable feedback for the next blitz, or want to work with us creating blitz challenges? Let us know!

In [ ]:

The HSV technique and OpenCV's built-in denoising techniques were not that successful for cloud removal. The next approach I am thinking of using is a CNN + GAN; I will explain the concept in a new notebook.

