ashivani

Name

Ayush Shivani

Location

Hyderabad, IN


Challenges Entered

Sample-efficient reinforcement learning in Minecraft

10 Travel Grants
Misc Prizes : 1x Titan RTX GPU

Latest submissions

graded 10553

Multi Agent Reinforcement Learning on Trains.

30'000 Prize Money
5 Travel Grants
Misc Prizes : To Be Announced

Latest submissions

graded 26932
failed 26931
failed 26777

A benchmark for image-based food recognition

1 Travel Grants
1 Authorship/Co-Authorship
Misc Prizes : Various Prizes

Latest submissions

failed 59927

Classify images of snake species from around the world

1 Travel Grants
1 Authorship/Co-Authorship

Latest submissions

failed 31887
graded 27332
failed 27331
ashivani has not joined any teams yet...

ICMPR - Income Prediction

Baseline INCPR Educational Challenge

2 days ago

Baseline for INCPR Educational Challenge on AIcrowd

Author : Ayush Shivani

Download Necessary Packages

In [1]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
Requirement already satisfied: numpy in /home/ayush/.local/lib/python3.7/site-packages (1.18.1)
Requirement already satisfied: pandas in /home/ayush/.local/lib/python3.7/site-packages (0.25.0)
Requirement already satisfied: pytz>=2017.2 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (2019.3)
Requirement already satisfied: python-dateutil>=2.6.1 in /home/ayush/anaconda3/lib/python3.7/site-packages (from pandas) (2.8.0)
Requirement already satisfied: numpy>=1.13.3 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (1.18.1)
Requirement already satisfied: six>=1.5 in /home/ayush/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas) (1.12.0)
Requirement already satisfied: scikit-learn in /home/ayush/.local/lib/python3.7/site-packages (0.21.3)
Requirement already satisfied: scipy>=0.17.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.4.1)
Requirement already satisfied: numpy>=1.11.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.18.1)
Requirement already satisfied: joblib>=0.11 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (0.14.0)

Import packages

In [2]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load the data

In [5]:
train_data_path = "../data/public/train.csv" #path where data is stored
In [6]:
train_data = pd.read_csv(train_data_path) #load data in dataframe using pandas
In [16]:
train_data # Visualize the data
Out[16]:
age workclass fnlwgt education education num marital status occupation relationship race sex capital gain capital liss working hours per weel native country income
0 50 Self-emp-not-inc 83311 Bachelors 13 Married-civ-spouse Exec-managerial Husband White Male 0 0 13 United-States 1
1 38 Private 215646 HS-grad 9 Divorced Handlers-cleaners Not-in-family White Male 0 0 40 United-States 1
2 53 Private 234721 11th 7 Married-civ-spouse Handlers-cleaners Husband Black Male 0 0 40 United-States 1
3 28 Private 338409 Bachelors 13 Married-civ-spouse Prof-specialty Wife Black Female 0 0 40 Cuba 1
4 37 Private 284582 Masters 14 Married-civ-spouse Exec-managerial Wife White Female 0 0 40 United-States 1
5 49 Private 160187 9th 5 Married-spouse-absent Other-service Not-in-family Black Female 0 0 16 Jamaica 1
6 52 Self-emp-not-inc 209642 HS-grad 9 Married-civ-spouse Exec-managerial Husband White Male 0 0 45 United-States 0
7 31 Private 45781 Masters 14 Never-married Prof-specialty Not-in-family White Female 14084 0 50 United-States 0
8 42 Private 159449 Bachelors 13 Married-civ-spouse Exec-managerial Husband White Male 5178 0 40 United-States 0
9 37 Private 280464 Some-college 10 Married-civ-spouse Exec-managerial Husband Black Male 0 0 80 United-States 0
10 30 State-gov 141297 Bachelors 13 Married-civ-spouse Prof-specialty Husband Asian-Pac-Islander Male 0 0 40 India 0
11 23 Private 122272 Bachelors 13 Never-married Adm-clerical Own-child White Female 0 0 30 United-States 1
12 32 Private 205019 Assoc-acdm 12 Never-married Sales Not-in-family Black Male 0 0 50 United-States 1
13 40 Private 121772 Assoc-voc 11 Married-civ-spouse Craft-repair Husband Asian-Pac-Islander Male 0 0 40 ? 0
14 34 Private 245487 7th-8th 4 Married-civ-spouse Transport-moving Husband Amer-Indian-Eskimo Male 0 0 45 Mexico 1
15 25 Self-emp-not-inc 176756 HS-grad 9 Never-married Farming-fishing Own-child White Male 0 0 35 United-States 1
16 32 Private 186824 HS-grad 9 Never-married Machine-op-inspct Unmarried White Male 0 0 40 United-States 1
17 38 Private 28887 11th 7 Married-civ-spouse Sales Husband White Male 0 0 50 United-States 1
18 43 Self-emp-not-inc 292175 Masters 14 Divorced Exec-managerial Unmarried White Female 0 0 45 United-States 0
19 40 Private 193524 Doctorate 16 Married-civ-spouse Prof-specialty Husband White Male 0 0 60 United-States 0
20 54 Private 302146 HS-grad 9 Separated Other-service Unmarried Black Female 0 0 20 United-States 1
21 35 Federal-gov 76845 9th 5 Married-civ-spouse Farming-fishing Husband Black Male 0 0 40 United-States 1
22 43 Private 117037 11th 7 Married-civ-spouse Transport-moving Husband White Male 0 2042 40 United-States 1
23 59 Private 109015 HS-grad 9 Divorced Tech-support Unmarried White Female 0 0 40 United-States 1
24 56 Local-gov 216851 Bachelors 13 Married-civ-spouse Tech-support Husband White Male 0 0 40 United-States 0
25 19 Private 168294 HS-grad 9 Never-married Craft-repair Own-child White Male 0 0 40 United-States 1
26 54 ? 180211 Some-college 10 Married-civ-spouse ? Husband Asian-Pac-Islander Male 0 0 60 South 0
27 39 Private 367260 HS-grad 9 Divorced Exec-managerial Not-in-family White Male 0 0 80 United-States 1
28 49 Private 193366 HS-grad 9 Married-civ-spouse Craft-repair Husband White Male 0 0 40 United-States 1
29 23 Local-gov 190709 Assoc-acdm 12 Never-married Protective-serv Not-in-family White Male 0 0 52 United-States 1
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
32530 30 ? 33811 Bachelors 13 Never-married ? Not-in-family Asian-Pac-Islander Female 0 0 99 United-States 1
32531 34 Private 204461 Doctorate 16 Married-civ-spouse Prof-specialty Husband White Male 0 0 60 United-States 0
32532 54 Private 337992 Bachelors 13 Married-civ-spouse Exec-managerial Husband Asian-Pac-Islander Male 0 0 50 Japan 0
32533 37 Private 179137 Some-college 10 Divorced Adm-clerical Unmarried White Female 0 0 39 United-States 1
32534 22 Private 325033 12th 8 Never-married Protective-serv Own-child Black Male 0 0 35 United-States 1
32535 34 Private 160216 Bachelors 13 Never-married Exec-managerial Not-in-family White Female 0 0 55 United-States 0
32536 30 Private 345898 HS-grad 9 Never-married Craft-repair Not-in-family Black Male 0 0 46 United-States 1
32537 38 Private 139180 Bachelors 13 Divorced Prof-specialty Unmarried Black Female 15020 0 45 United-States 0
32538 71 ? 287372 Doctorate 16 Married-civ-spouse ? Husband White Male 0 0 10 United-States 0
32539 45 State-gov 252208 HS-grad 9 Separated Adm-clerical Own-child White Female 0 0 40 United-States 1
32540 41 ? 202822 HS-grad 9 Separated ? Not-in-family Black Female 0 0 32 United-States 1
32541 72 ? 129912 HS-grad 9 Married-civ-spouse ? Husband White Male 0 0 25 United-States 1
32542 45 Local-gov 119199 Assoc-acdm 12 Divorced Prof-specialty Unmarried White Female 0 0 48 United-States 1
32543 31 Private 199655 Masters 14 Divorced Other-service Not-in-family Other Female 0 0 30 United-States 1
32544 39 Local-gov 111499 Assoc-acdm 12 Married-civ-spouse Adm-clerical Wife White Female 0 0 20 United-States 0
32545 37 Private 198216 Assoc-acdm 12 Divorced Tech-support Not-in-family White Female 0 0 40 United-States 1
32546 43 Private 260761 HS-grad 9 Married-civ-spouse Machine-op-inspct Husband White Male 0 0 40 Mexico 1
32547 65 Self-emp-not-inc 99359 Prof-school 15 Never-married Prof-specialty Not-in-family White Male 1086 0 60 United-States 1
32548 43 State-gov 255835 Some-college 10 Divorced Adm-clerical Other-relative White Female 0 0 40 United-States 1
32549 43 Self-emp-not-inc 27242 Some-college 10 Married-civ-spouse Craft-repair Husband White Male 0 0 50 United-States 1
32550 32 Private 34066 10th 6 Married-civ-spouse Handlers-cleaners Husband Amer-Indian-Eskimo Male 0 0 40 United-States 1
32551 43 Private 84661 Assoc-voc 11 Married-civ-spouse Sales Husband White Male 0 0 45 United-States 1
32552 32 Private 116138 Masters 14 Never-married Tech-support Not-in-family Asian-Pac-Islander Male 0 0 11 Taiwan 1
32553 53 Private 321865 Masters 14 Married-civ-spouse Exec-managerial Husband White Male 0 0 40 United-States 0
32554 22 Private 310152 Some-college 10 Never-married Protective-serv Not-in-family White Male 0 0 40 United-States 1
32555 27 Private 257302 Assoc-acdm 12 Married-civ-spouse Tech-support Wife White Female 0 0 38 United-States 1
32556 40 Private 154374 HS-grad 9 Married-civ-spouse Machine-op-inspct Husband White Male 0 0 40 United-States 0
32557 58 Private 151910 HS-grad 9 Widowed Adm-clerical Unmarried White Female 0 0 40 United-States 1
32558 22 Private 201490 HS-grad 9 Never-married Adm-clerical Own-child White Male 0 0 20 United-States 1
32559 52 Self-emp-inc 287927 HS-grad 9 Married-civ-spouse Exec-managerial Wife White Female 15024 0 40 United-States 0

32560 rows × 15 columns

Select the columns you want to train on. We could also select columns whose data type is text, but we would then need to map that text to numbers before selecting them. (Note that some column names in this dataset are misspelled, e.g. 'capital liss' and 'working hours per weel'; the code uses them verbatim.)

In [20]:
train_data = train_data[['age','education num','capital gain','capital liss','working hours per weel','income']]
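
As a minimal sketch of the text-to-number mapping mentioned above (toy data, not the actual challenge CSV), a text column such as workclass can be converted to integer codes with pandas:

```python
import pandas as pd

# Toy frame standing in for one of the text-valued columns of the dataset
df = pd.DataFrame({"workclass": ["Private", "State-gov", "Private"]})

# Categories are sorted alphabetically, then each value is replaced by its code
df["workclass_code"] = df["workclass"].astype("category").cat.codes
print(df["workclass_code"].tolist())  # [0, 1, 0]
```

One-hot encoding (pd.get_dummies) is an alternative when the categories have no natural order.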

Split the data into train/validation sets

In [53]:
X_train, X_test= train_test_split(train_data, test_size=0.2, random_state=42)

Check which column contains the variable that needs to be predicted. Here it is the last column.

In [54]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_test,y_test = X_test.iloc[:,:-1],X_test.iloc[:,-1]

Define the classifier

In [56]:
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto', max_iter=1000)

One can set more parameters; see the scikit-learn documentation for LogisticRegression for the full list.

We can also use other classifiers; the scikit-learn documentation covers the available options.
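
For instance, the SVC imported earlier could be swapped in for LogisticRegression; it exposes the same fit/predict interface. A sketch on toy data (the kernel and C values are illustrative, not tuned for this dataset):

```python
import numpy as np
from sklearn.svm import SVC

# Toy 1-D data just to demonstrate the shared fit/predict interface
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

classifier = SVC(kernel="rbf", C=1.0)  # hyperparameters are illustrative
classifier.fit(X, y)
print(classifier.predict([[0.1], [2.9]]))
```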

Train the classifier

In [57]:
classifier.fit(X_train, y_train)
Out[57]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=1000,
                   multi_class='auto', n_jobs=None, penalty='l2',
                   random_state=None, solver='lbfgs', tol=0.0001, verbose=0,
                   warm_start=False)

Predict on test set

In [58]:
y_pred = classifier.predict(X_test)

Find the scores

In [60]:
precision = precision_score(y_test,y_pred,average='micro')
recall = recall_score(y_test,y_pred,average='micro')
accuracy = accuracy_score(y_test,y_pred)
f1 = f1_score(y_test,y_pred,average='macro')
In [61]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.8143427518427518
Recall of the model is : 0.8143427518427518
Precision of the model is : 0.8143427518427518
F1 score of the model is : 0.7010332966849817
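
Accuracy, micro-averaged recall, and micro-averaged precision print the same number above, and that is expected: with average='micro', per-class true positives, false positives, and false negatives are pooled, every misclassification counts once as a false positive (for the predicted class) and once as a false negative (for the true class), so all three ratios reduce to correct/total. A small pure-Python check:

```python
def micro_scores(y_true, y_pred):
    """Return (micro precision, micro recall) by pooling per-class counts."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)  # pooled true positives
    # Pooled TP+FP = total predictions; pooled TP+FN = total true labels
    predicted = sum(sum(1 for p in y_pred if p == c) for c in classes)
    actual = sum(sum(1 for t in y_true if t == c) for c in classes)
    return tp / predicted, tp / actual

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
precision, recall = micro_scores(y_true, y_pred)
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(precision, recall, accuracy)  # 0.8 0.8 0.8
```

The macro-averaged F1 differs because it averages per-class F1 scores with equal weight, so the minority class pulls it down.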

Prediction on Evaluation Set

Load the evaluation data

In [62]:
final_test_path = "../data/public/test.csv"
final_test = pd.read_csv(final_test_path)
final_test = final_test[['age','education num','capital gain','capital liss','working hours per weel']]

Predict on evaluation set

In [63]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [64]:
submission = pd.DataFrame(submission)
submission.to_csv('../data/public/submission.csv',header=['income'],index=False)

Go to the challenge page on the platform and submit the submission.csv generated in the /data/public/ folder.

MNIST - Recognise Handwritten Digits

Baseline - MNIST

2 days ago

Baseline for MNIST Educational Challenge on AIcrowd

Author : Ayush Shivani

Download Necessary Packages

In [76]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
Requirement already satisfied: numpy in /home/ayush/.local/lib/python3.7/site-packages (1.18.1)
Requirement already satisfied: pandas in /home/ayush/.local/lib/python3.7/site-packages (0.25.0)
Requirement already satisfied: pytz>=2017.2 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (2019.3)
Requirement already satisfied: python-dateutil>=2.6.1 in /home/ayush/anaconda3/lib/python3.7/site-packages (from pandas) (2.8.0)
Requirement already satisfied: numpy>=1.13.3 in /home/ayush/.local/lib/python3.7/site-packages (from pandas) (1.18.1)
Requirement already satisfied: six>=1.5 in /home/ayush/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas) (1.12.0)
Requirement already satisfied: scikit-learn in /home/ayush/.local/lib/python3.7/site-packages (0.21.3)
Requirement already satisfied: joblib>=0.11 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (0.14.0)
Requirement already satisfied: numpy>=1.11.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.18.1)
Requirement already satisfied: scipy>=0.17.0 in /home/ayush/.local/lib/python3.7/site-packages (from scikit-learn) (1.4.1)

Import packages

In [77]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load the data

In [80]:
train_data_path = "data/public/train.csv" #path where data is stored
In [79]:
train_data = pd.read_csv(train_data_path) #load data in dataframe using pandas

Split the data into train/test sets

In [65]:
X_train, X_test= train_test_split(train_data, test_size=0.2, random_state=42)
In [66]:
X_train,y_train = X_train.iloc[:,1:],X_train.iloc[:,0]
X_test,y_test = X_test.iloc[:,1:],X_test.iloc[:,0]
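
In this CSV the label sits in the first column and the pixel intensities follow, which is why iloc[:, 0] and iloc[:, 1:] are used here. A toy frame in the same layout (the column names are made up for illustration):

```python
import pandas as pd

# Toy frame mimicking the MNIST csv layout: label first, then pixel columns
df = pd.DataFrame({"label": [7, 2], "px0": [0, 255], "px1": [128, 64]})

X, y = df.iloc[:, 1:], df.iloc[:, 0]
print(list(X.columns))  # ['px0', 'px1']
print(y.tolist())       # [7, 2]
```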

Define the classifier

In [81]:
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto',max_iter=100)
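
One optional tweak, not part of this baseline, is to scale the 0-255 pixel values into [0, 1] before fitting; the lbfgs solver often converges in fewer iterations when features share a comparable scale. A sketch on toy data:

```python
import numpy as np

# Toy "images": raw pixel intensities in the range 0-255
X = np.array([[0, 128, 255],
              [255, 64, 0]], dtype=float)

X_scaled = X / 255.0  # every feature now lies in [0, 1]
print(X_scaled.min(), X_scaled.max())  # 0.0 1.0
```

If you scale the training data this way, remember to apply the same scaling to the evaluation set before predicting.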

One can set more parameters; see the scikit-learn documentation for LogisticRegression for the full list.

We can also use other classifiers; the scikit-learn documentation covers the available options.

Train the classifier

In [ ]:
classifier.fit(X_train, y_train)

Predict on test set

In [69]:
y_pred = classifier.predict(X_test)

Find the scores

In [70]:
precision = precision_score(y_test,y_pred,average='micro')
recall = recall_score(y_test,y_pred,average='micro')
accuracy = accuracy_score(y_test,y_pred)
f1 = f1_score(y_test,y_pred,average='macro')
In [71]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.92225
Recall of the model is : 0.92225
Precision of the model is : 0.92225
F1 score of the model is : 0.9213314758432045

Prediction on Evaluation Set

Load the evaluation data

In [72]:
final_test_path = "data/public/test.csv"
final_test = pd.read_csv(final_test_path)

Predict on evaluation set

In [73]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [74]:
submission = pd.DataFrame(submission)
submission.to_csv('./data/public/submission.csv',header=['label'],index=False)

Go to the challenge page on the platform and submit the submission.csv generated in the /data/public/ folder.

Baseline for MNIST

3 days ago

Open In Colab

This dataset and notebook correspond to the Food Recognition Challenge being held on AICrowd.

In this notebook, we will first analyse the Food Recognition dataset and then use Mask R-CNN for training on it.

The Challenge

  • Given images of food, we are asked to provide instance segmentation of the food items in the images.
  • The training data is provided in the COCO format, making it simple to load with the COCO data processors available in popular libraries.
  • The test set provided in the public dataset is similar to the validation set, but with no annotations.
  • The test set used after submission is much larger and contains private images against which every submission is evaluated.
  • Participants have to submit their trained model along with its weights. Immediately after submission, the AIcrowd grader picks up the submitted model and runs inference on the private test set using cloud GPUs.
  • This requires users to structure their repositories and follow a provided paradigm for submission.
  • The AIcrowd AutoGrader picks up the Dockerfile provided with the repository, builds it, and then mounts the tests folder in the container. Once inference is made, the final results are checked against the ground truth.

For more submission related information, please check the AIcrowd Challenge page and the starter kit.
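
The COCO format mentioned above is a single JSON file with images, annotations, and categories lists, cross-referenced by ids. A toy example of that layout (the file name, ids, and category are made up) showing how annotations can be grouped by image:

```python
# Toy COCO-style annotation dict (the real file is far larger)
coco = {
    "images": [{"id": 1, "file_name": "0001.jpg", "width": 480, "height": 480}],
    "annotations": [{"id": 7, "image_id": 1, "category_id": 3,
                     "segmentation": [[10, 10, 60, 10, 60, 60]], "area": 1250.0}],
    "categories": [{"id": 3, "name": "bread"}],
}

# Index annotations by the image they belong to
anns_by_image = {}
for ann in coco["annotations"]:
    anns_by_image.setdefault(ann["image_id"], []).append(ann)

cat_names = {c["id"]: c["name"] for c in coco["categories"]}
print(len(anns_by_image[1]), cat_names[3])  # 1 bread
```

In practice the pycocotools library handles this indexing (and mask decoding) for you; the sketch only shows the schema.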

The Notebook

  • Installation of Mask R-CNN
  • Using the Matterport Mask R-CNN library and making local inference with it
  • Local evaluation using Matterport Mask R-CNN

A bonus section on other resources to read is also added!

Dataset Download

In [0]:
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/myfoodrepo/round-2/train.tar.gz
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/myfoodrepo/round-2/val.tar.gz
--2020-03-23 19:42:31--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/myfoodrepo/round-2/train.tar.gz
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.10, 130.117.252.12, 130.117.252.11
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.10|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 418258711 (399M) [application/x-gzip]
Saving to: ‘train.tar.gz’

train.tar.gz        100%[===================>] 398.88M  21.9MB/s    in 19s     

2020-03-23 19:42:51 (20.8 MB/s) - ‘train.tar.gz’ saved [418258711/418258711]

--2020-03-23 19:42:53--  https://s3.eu-central-1.wasabisys.com/aicrowd-public-datasets/myfoodrepo/round-2/val.tar.gz
Resolving s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)... 130.117.252.11, 130.117.252.12, 130.117.252.10
Connecting to s3.eu-central-1.wasabisys.com (s3.eu-central-1.wasabisys.com)|130.117.252.11|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33512285 (32M) [application/x-gzip]
Saving to: ‘val.tar.gz’

val.tar.gz          100%[===================>]  31.96M  12.9MB/s    in 2.5s    

2020-03-23 19:42:56 (12.9 MB/s) - ‘val.tar.gz’ saved [33512285/33512285]

In [0]:
!mkdir data
!mkdir data/val
!mkdir data/train
!tar -xf train.tar.gz -C data/train
!tar -xf val.tar.gz -C data/val

Installation

In [0]:
#Directories present
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk('data/'):
    print(dirname)
data/
data/val
data/val/test_images
data/val/test_images/images
data/val/images
data/train
data/train/images
In [0]:
pip install -U numpy==1.17.0
Collecting numpy==1.17.0
  Downloading https://files.pythonhosted.org/packages/19/b9/bda9781f0a74b90ebd2e046fde1196182900bd4a8e1ea503d3ffebc50e7c/numpy-1.17.0-cp36-cp36m-manylinux1_x86_64.whl (20.4MB)
     |████████████████████████████████| 20.4MB 145kB/s 
ERROR: tensorflow-model-optimization 0.2.1 requires enum34~=1.1, which is not installed.
ERROR: tensorflow-federated 0.12.0 has requirement tensorflow~=2.1.0, but you'll have tensorflow 1.15.0 which is incompatible.
ERROR: tensorflow-federated 0.12.0 has requirement tensorflow-addons~=0.7.0, but you'll have tensorflow-addons 0.8.3 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.
Installing collected packages: numpy
  Found existing installation: numpy 1.18.2
    Uninstalling numpy-1.18.2:
      Successfully uninstalled numpy-1.18.2
Successfully installed numpy-1.17.0
In [0]:
import os 
import sys
import random
import math
import numpy as np
import cv2
import matplotlib.pyplot as plt
import json
from imgaug import augmenters as iaa
from tqdm import tqdm
import pandas as pd 
import glob
In [0]:
!pip install tensorflow-gpu==1.13.1
Collecting tensorflow-gpu==1.13.1
  Downloading https://files.pythonhosted.org/packages/7b/b1/0ad4ae02e17ddd62109cd54c291e311c4b5fd09b4d0678d3d6ce4159b0f0/tensorflow_gpu-1.13.1-cp36-cp36m-manylinux1_x86_64.whl (345.2MB)
     |████████████████████████████████| 345.2MB 48kB/s 
Collecting tensorflow-estimator<1.14.0rc0,>=1.13.0
  Downloading https://files.pythonhosted.org/packages/bb/48/13f49fc3fa0fdf916aa1419013bb8f2ad09674c275b4046d5ee669a46873/tensorflow_estimator-1.13.0-py2.py3-none-any.whl (367kB)
     |████████████████████████████████| 368kB 42.0MB/s 
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.1.0)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.8.1)
Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.0.8)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.9.0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.2.2)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.1.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.24.3)
Collecting tensorboard<1.14.0,>=1.13.0
  Downloading https://files.pythonhosted.org/packages/0f/39/bdd75b08a6fba41f098b6cb091b9e8c7a80e1b4d679a581a0ccd17b10373/tensorboard-1.13.1-py3-none-any.whl (3.2MB)
     |████████████████████████████████| 3.2MB 75.3MB/s 
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.12.0)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (3.10.0)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.17.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.34.2)
Collecting mock>=2.0.0
  Downloading https://files.pythonhosted.org/packages/cd/74/d72daf8dff5b6566db857cfd088907bb0355f5dd2914c4b3ef065c790735/mock-4.0.2-py3-none-any.whl
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow-gpu==1.13.1) (2.8.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (3.2.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (1.0.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu==1.13.1) (46.0.0)
ERROR: tensorflow 1.15.0 has requirement tensorboard<1.16.0,>=1.15.0, but you'll have tensorboard 1.13.1 which is incompatible.
ERROR: tensorflow 1.15.0 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 1.13.0 which is incompatible.
ERROR: tensorflow-federated 0.12.0 has requirement tensorflow~=2.1.0, but you'll have tensorflow 1.15.0 which is incompatible.
ERROR: tensorflow-federated 0.12.0 has requirement tensorflow-addons~=0.7.0, but you'll have tensorflow-addons 0.8.3 which is incompatible.
Installing collected packages: mock, tensorflow-estimator, tensorboard, tensorflow-gpu
  Found existing installation: tensorflow-estimator 1.15.1
    Uninstalling tensorflow-estimator-1.15.1:
      Successfully uninstalled tensorflow-estimator-1.15.1
  Found existing installation: tensorboard 1.15.0
    Uninstalling tensorboard-1.15.0:
      Successfully uninstalled tensorboard-1.15.0
Successfully installed mock-4.0.2 tensorboard-2.1.1 tensorflow-estimator-2.1.0 tensorflow-gpu-1.13.1
In [0]:
import tensorflow as tf
tf.__version__
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])

The default version of TensorFlow in Colab will soon switch to TensorFlow 2.x.
We recommend you upgrade now or ensure your notebook will continue to use TensorFlow 1.x via the %tensorflow_version 1.x magic: more info.

Out[0]:
'1.13.1'
In [0]:
DATA_DIR = 'data'
# Directory to save logs and trained model
ROOT_DIR = 'working'
In [0]:
!git clone https://www.github.com/matterport/Mask_RCNN.git
os.chdir('Mask_RCNN')
!pip install -r requirements.txt
!python setup.py -q install
Cloning into 'Mask_RCNN'...
warning: redirecting to https://github.com/matterport/Mask_RCNN.git/
remote: Enumerating objects: 956, done.
remote: Total 956 (delta 0), reused 0 (delta 0), pack-reused 956
Receiving objects: 100% (956/956), 111.84 MiB | 33.57 MiB/s, done.
Resolving deltas: 100% (570/570), done.
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 1)) (1.17.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 2)) (1.4.1)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 3)) (7.0.0)
Requirement already satisfied: cython in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 4)) (0.29.15)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 5)) (3.2.0)
Requirement already satisfied: scikit-image in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 6)) (0.16.2)
Requirement already satisfied: tensorflow>=1.3.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 7)) (2.1.0)
Requirement already satisfied: keras>=2.0.8 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 8)) (2.2.5)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 9)) (4.1.2.30)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 10)) (2.8.0)
Requirement already satisfied: imgaug in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 11)) (0.2.9)
Requirement already satisfied: IPython[all] in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 12)) (5.5.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->-r requirements.txt (line 5)) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->-r requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->-r requirements.txt (line 5)) (2.4.6)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->-r requirements.txt (line 5)) (1.1.0)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r requirements.txt (line 6)) (2.4.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r requirements.txt (line 6)) (2.4)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->-r requirements.txt (line 6)) (1.1.1)
Installing collected packages: ipyparallel, nose
Successfully installed ipyparallel-6.2.4 nose-1.3.7
In [0]:
# Import Mask RCNN
import os
import sys
sys.path.append(os.path.join('.', 'Mask_RCNN'))  # To find the local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
Using TensorFlow backend.
In [0]:
!pip uninstall pycocotools -y
!pip install git+https://github.com/waleedka/coco.git#subdirectory=PythonAPI
Uninstalling pycocotools-2.0.0:
  Successfully uninstalled pycocotools-2.0.0
Collecting git+https://github.com/waleedka/coco.git#subdirectory=PythonAPI
  Cloning https://github.com/waleedka/coco.git to /tmp/pip-req-build-mu3pyhni
  Running command git clone -q https://github.com/waleedka/coco.git /tmp/pip-req-build-mu3pyhni
Building wheels for collected packages: pycocotools
  Building wheel for pycocotools (setup.py) ... done
  Created wheel for pycocotools: filename=pycocotools-2.0-cp36-cp36m-linux_x86_64.whl size=275072 sha256=bf935a58655dcbda6dde5838c1a8adc208c3f9c194437471c43d254f8f392d11
  Stored in directory: /tmp/pip-ephem-wheel-cache-dga9a0n9/wheels/b4/64/d2/36f24ec8ae3838ab50b0f8979fbf579ea02b78de923785d2ae
Successfully built pycocotools
Installing collected packages: pycocotools
Successfully installed pycocotools-2.0
In [0]:
from mrcnn import utils
import numpy as np

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from pycocotools import mask as maskUtils

MaskRCNN

To train Mask R-CNN, we have to define two things: a FoodChallengeDataset that implements the Dataset class of Mask R-CNN, and a FoodChallengeConfig that extends its Config class.

The FoodChallengeDataset defines the functions that load the images and their annotations.

The FoodChallengeConfig provides settings such as NUM_CLASSES, BACKBONE, etc.

In [0]:
class FoodChallengeDataset(utils.Dataset):
    def load_dataset(self, dataset_dir, load_small=False, return_coco=True):
        """ Loads dataset released for the AICrowd Food Challenge
            Params:
                - dataset_dir : root directory of the dataset (can point to the train/val folder)
                - load_small : Boolean value which signals if the annotations for all the images need to be loaded into the memory,
                               or if only a small subset of the same should be loaded into memory
        """
        self.load_small = load_small
        if self.load_small:
            annotation_path = os.path.join(dataset_dir, "annotation-small.json")
        else:
            annotation_path = os.path.join(dataset_dir, "annotations.json")

        image_dir = os.path.join(dataset_dir, "images")
        print("Annotation Path ", annotation_path)
        print("Image Dir ", image_dir)
        assert os.path.exists(annotation_path) and os.path.exists(image_dir)

        self.coco = COCO(annotation_path)
        self.image_dir = image_dir

        # Load all classes
        classIds = self.coco.getCatIds()

        # Load all images
        image_ids = list(self.coco.imgs.keys())

        # register classes
        for _class_id in classIds:
            self.add_class("crowdai-food-challenge", _class_id, self.coco.loadCats(_class_id)[0]["name"])

        # Register Images
        for _img_id in image_ids:
            assert(os.path.exists(os.path.join(image_dir, self.coco.imgs[_img_id]['file_name'])))
            self.add_image(
                "crowdai-food-challenge", image_id=_img_id,
                path=os.path.join(image_dir, self.coco.imgs[_img_id]['file_name']),
                width=self.coco.imgs[_img_id]["width"],
                height=self.coco.imgs[_img_id]["height"],
                annotations=self.coco.loadAnns(self.coco.getAnnIds(
                                            imgIds=[_img_id],
                                            catIds=classIds,
                                            iscrowd=None)))

        if return_coco:
            return self.coco

    def load_mask(self, image_id):
        """ Loads instance mask for a given image
              This function converts mask from the coco format to a
              a bitmap [height, width, instance]
            Params:
                - image_id : reference id for a given image

            Returns:
                masks : A bool array of shape [height, width, instances] with
                    one mask per instance
                class_ids : a 1D array of classIds of the corresponding instance masks
                    (of shape [instances])
        """

        image_info = self.image_info[image_id]
        assert image_info["source"] == "crowdai-food-challenge"

        instance_masks = []
        class_ids = []
        annotations = self.image_info[image_id]["annotations"]
        # Build mask of shape [height, width, instance_count] and list
        # of class IDs that correspond to each channel of the mask.
        for annotation in annotations:
            class_id = self.map_source_class_id(
                "crowdai-food-challenge.{}".format(annotation['category_id']))
            if class_id:
                m = self.annToMask(annotation,  image_info["height"],
                                                image_info["width"])
                # Some objects are so small that they're less than 1 pixel area
                # and end up rounded out. Skip those objects.
                if m.max() < 1:
                    continue

                # Ignore the notion of "is_crowd" as specified in the coco format
                # as we donot have the said annotation in the current version of the dataset

                instance_masks.append(m)
                class_ids.append(class_id)
        # Pack instance masks into an array
        if class_ids:
            mask = np.stack(instance_masks, axis=2)
            class_ids = np.array(class_ids, dtype=np.int32)
            return mask, class_ids
        else:
            # Call super class to return an empty mask
            return super(FoodChallengeDataset, self).load_mask(image_id)


    def image_reference(self, image_id):
        """Return a reference for a particular image

            Ideally you this function is supposed to return a URL
            but in this case, we will simply return the image_id
        """
        return "crowdai-food-challenge::{}".format(image_id)
    # The following two functions are from pycocotools with a few changes.

    def annToRLE(self, ann, height, width):
        """
        Convert annotation which can be polygons, uncompressed RLE to RLE.
        :return: binary mask (numpy 2D array)
        """
        segm = ann['segmentation']
        if isinstance(segm, list):
            # polygon -- a single object might consist of multiple parts
            # we merge all parts into one mask rle code
            rles = maskUtils.frPyObjects(segm, height, width)
            rle = maskUtils.merge(rles)
        elif isinstance(segm['counts'], list):
            # uncompressed RLE
            rle = maskUtils.frPyObjects(segm, height, width)
        else:
            # rle
            rle = ann['segmentation']
        return rle

    def annToMask(self, ann, height, width):
        """
        Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.
        :return: binary mask (numpy 2D array)
        """
        rle = self.annToRLE(ann, height, width)
        m = maskUtils.decode(rle)
        return m
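The annToRLE/annToMask pair above ultimately turns a COCO annotation into a binary bitmap. As a conceptual illustration only (pycocotools performs this in optimized C, and this helper is not part of the library), here is a minimal pure-Python sketch of decoding an uncompressed COCO-style RLE, where counts lists alternating runs of 0s and 1s in column-major order:

```python
def decode_uncompressed_rle(counts, height, width):
    """Decode a COCO-style uncompressed RLE into a binary mask.

    counts alternates run lengths of 0s and 1s (always starting with 0s),
    laid out in column-major (Fortran) order, as in the COCO format.
    """
    flat = []
    val = 0  # COCO RLE always starts with a run of zeros
    for run in counts:
        flat.extend([val] * run)
        val = 1 - val
    # Column-major layout: element (r, c) lives at flat[c * height + r]
    return [[flat[c * height + r] for c in range(width)] for r in range(height)]

# Toy 2x2 example: runs [1, 2, 1] give the column-major sequence [0, 1, 1, 0]
mask = decode_uncompressed_rle([1, 2, 1], 2, 2)
print(mask)  # [[0, 1], [1, 0]]
```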
In [0]:
class FoodChallengeConfig(Config):
    """Configuration for training on data in MS COCO format.
    Derives from the base Config class and overrides values specific
    to the COCO dataset.
    """
    # Give the configuration a recognizable name
    NAME = "crowdai-food-challenge"

    # Number of images to train with on each GPU; a 12GB GPU can typically
    # fit four 256x256 images. Adjust down if you use a smaller GPU.
    IMAGES_PER_GPU = 4

    # Number of GPUs to train on (default is 1)
    GPU_COUNT = 1
    BACKBONE = 'resnet50'
    # Number of classes (including background)
    NUM_CLASSES = 62  # 1 Background + 61 classes

    STEPS_PER_EPOCH=150
    VALIDATION_STEPS=50

    LEARNING_RATE=0.001
    IMAGE_MAX_DIM=256
    IMAGE_MIN_DIM=256
In [0]:
config = FoodChallengeConfig()
config.display()
Configurations:
BACKBONE                       resnet50
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     4
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        0.3
FPN_CLASSIF_FC_LAYERS_SIZE     1024
GPU_COUNT                      1
GRADIENT_CLIP_NORM             5.0
IMAGES_PER_GPU                 4
IMAGE_CHANNEL_COUNT            3
IMAGE_MAX_DIM                  256
IMAGE_META_SIZE                74
IMAGE_MIN_DIM                  256
IMAGE_MIN_SCALE                0
IMAGE_RESIZE_MODE              square
IMAGE_SHAPE                    [256 256   3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.001
LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           crowdai-food-challenge
NUM_CLASSES                    62
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
PRE_NMS_LIMIT                  6000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE              1
RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD              0.7
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                150
TOP_DOWN_PYRAMID_SIZE          256
TRAIN_BN                       False
TRAIN_ROIS_PER_IMAGE           200
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               50
WEIGHT_DECAY                   0.0001


You can change other values in the FoodChallengeConfig as well and try out different combinations for best results!
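Under the hood, Mask R-CNN's Config works by plain class-attribute overriding: a subclass redefines only the attributes it cares about, and derived values such as BATCH_SIZE are recomputed from them at construction time. A minimal sketch of this pattern (the class names here are illustrative, not part of the library):

```python
class BaseConfig:
    """Mimics how Mask R-CNN's Config turns class attributes into settings."""
    GPU_COUNT = 1
    IMAGES_PER_GPU = 4

    def __init__(self):
        # Mirrors how the real Config derives the effective batch size
        self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT


class SmallGPUConfig(BaseConfig):
    # Override just one attribute to fit a smaller GPU
    IMAGES_PER_GPU = 1


print(BaseConfig().BATCH_SIZE)      # 4
print(SmallGPUConfig().BATCH_SIZE)  # 1
```

This is why `config.display()` above shows `BATCH_SIZE 4`: it is not set directly, but computed from `IMAGES_PER_GPU` and `GPU_COUNT`.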

In [0]:
!mkdir pretrained
In [0]:
PRETRAINED_MODEL_PATH = os.path.join("pretrained", "mask_rcnn_coco.h5")
LOGS_DIRECTORY = os.path.join(ROOT_DIR, "logs")
In [0]:
if not os.path.exists(PRETRAINED_MODEL_PATH):
    utils.download_trained_weights(PRETRAINED_MODEL_PATH)
Downloading pretrained model to pretrained/mask_rcnn_coco.h5 ...
... done downloading pretrained model!
In [0]:
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
Out[0]:
['/job:localhost/replica:0/task:0/device:GPU:0']
In [0]:
import keras.backend as K
# Sanity-check that Keras is running on the TensorFlow backend
assert K.backend() == 'tensorflow'
model = modellib.MaskRCNN(mode="training", config=config, model_dir=LOGS_DIRECTORY)
model_path = PRETRAINED_MODEL_PATH
model.load_weights(model_path, by_name=True, exclude=[
        "mrcnn_class_logits", "mrcnn_bbox_fc",
        "mrcnn_bbox", "mrcnn_mask"])
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
In [0]:
dataset_train = FoodChallengeDataset()
dataset_train.load_dataset('/content/data/train', load_small=False)
dataset_train.prepare()
Annotation Path  /content/data/train/annotations.json
Image Dir  /content/data/train/images
loading annotations into memory...
Done (t=0.63s)
creating index...
index created!
In [0]:
dataset_val = FoodChallengeDataset()
val_coco = dataset_val.load_dataset(dataset_dir='/content/data/val', load_small=False, return_coco=True)
dataset_val.prepare()
Annotation Path  /content/data/val/annotations.json
Image Dir  /content/data/val/images
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
In [0]:
class_names = dataset_train.class_names
# If you don't have the correct classes here, there must be some error in your DatasetConfig
assert len(class_names)==62, "Please check DatasetConfig"
class_names
Out[0]:
['BG',
 'water',
 'pizza-margherita-baked',
 'broccoli',
 'salad-leaf-salad-green',
 'zucchini',
 'egg',
 'butter',
 'bread-white',
 'apple',
 'dark-chocolate',
 'white-coffee-with-caffeine',
 'sweet-pepper',
 'mixed-salad-chopped-without-sauce',
 'tomato-sauce',
 'bread-wholemeal',
 'coffee-with-caffeine',
 'cucumber',
 'cheese',
 'pasta-spaghetti',
 'rice',
 'salmon',
 'carrot',
 'onion',
 'mixed-vegetables',
 'espresso-with-caffeine',
 'banana',
 'strawberries',
 'mayonnaise',
 'almonds',
 'wine-white',
 'hard-cheese',
 'ham-raw',
 'tomato',
 'french-beans',
 'mandarine',
 'wine-red',
 'potatoes-steamed',
 'croissant',
 'salami',
 'boisson-au-glucose-50g',
 'biscuits',
 'corn',
 'leaf-spinach',
 'jam',
 'tea-green',
 'chips-french-fries',
 'parmesan',
 'beer',
 'avocado',
 'bread-french-white-flour',
 'chicken',
 'soft-cheese',
 'tea',
 'sauce-savoury',
 'honey',
 'bread-whole-wheat',
 'bread-sourdough',
 'gruyere',
 'pickle',
 'mixed-nuts',
 'water-mineral']

Let's start training!

In [0]:
print("Training network")
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=15,
            layers='heads')
Training network

Starting at epoch 0. LR=0.001

Checkpoint Path: working/logs/crowdai-food-challenge20200321T0657/mask_rcnn_crowdai-food-challenge_{epoch:04d}.h5
Selecting layers to train
fpn_c5p5               (Conv2D)
fpn_c4p4               (Conv2D)
fpn_c3p3               (Conv2D)
fpn_c2p2               (Conv2D)
fpn_p5                 (Conv2D)
fpn_p2                 (Conv2D)
fpn_p3                 (Conv2D)
fpn_p4                 (Conv2D)
In model:  rpn_model
    rpn_conv_shared        (Conv2D)
    rpn_class_raw          (Conv2D)
    rpn_bbox_pred          (Conv2D)
mrcnn_mask_conv1       (TimeDistributed)
mrcnn_mask_bn1         (TimeDistributed)
mrcnn_mask_conv2       (TimeDistributed)
mrcnn_mask_bn2         (TimeDistributed)
mrcnn_class_conv1      (TimeDistributed)
mrcnn_class_bn1        (TimeDistributed)
mrcnn_mask_conv3       (TimeDistributed)
mrcnn_mask_bn3         (TimeDistributed)
mrcnn_class_conv2      (TimeDistributed)
mrcnn_class_bn2        (TimeDistributed)
mrcnn_mask_conv4       (TimeDistributed)
mrcnn_mask_bn4         (TimeDistributed)
mrcnn_bbox_fc          (TimeDistributed)
mrcnn_mask_deconv      (TimeDistributed)
mrcnn_class_logits     (TimeDistributed)
mrcnn_mask             (TimeDistributed)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_impl.py:110: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py:49: UserWarning: Using a generator with `use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the `keras.utils.Sequence class.
  UserWarning('Using a generator with `use_multiprocessing=True`'
Epoch 1/15
150/150 [==============================] - 156s 1s/step - loss: 2.5231 - rpn_class_loss: 0.0425 - rpn_bbox_loss: 0.5922 - mrcnn_class_loss: 0.5262 - mrcnn_bbox_loss: 0.6937 - mrcnn_mask_loss: 0.6684 - val_loss: 2.2519 - val_rpn_class_loss: 0.0395 - val_rpn_bbox_loss: 0.7131 - val_mrcnn_class_loss: 0.2958 - val_mrcnn_bbox_loss: 0.6037 - val_mrcnn_mask_loss: 0.5997
Epoch 2/15
150/150 [==============================] - 91s 604ms/step - loss: 2.1715 - rpn_class_loss: 0.0407 - rpn_bbox_loss: 0.6212 - mrcnn_class_loss: 0.3700 - mrcnn_bbox_loss: 0.5465 - mrcnn_mask_loss: 0.5930 - val_loss: 1.9491 - val_rpn_class_loss: 0.0270 - val_rpn_bbox_loss: 0.3237 - val_mrcnn_class_loss: 0.4762 - val_mrcnn_bbox_loss: 0.5715 - val_mrcnn_mask_loss: 0.5507
Epoch 3/15
150/150 [==============================] - 86s 574ms/step - loss: 1.9140 - rpn_class_loss: 0.0289 - rpn_bbox_loss: 0.5086 - mrcnn_class_loss: 0.3484 - mrcnn_bbox_loss: 0.5002 - mrcnn_mask_loss: 0.5280 - val_loss: 1.9657 - val_rpn_class_loss: 0.0317 - val_rpn_bbox_loss: 0.5260 - val_mrcnn_class_loss: 0.3910 - val_mrcnn_bbox_loss: 0.4806 - val_mrcnn_mask_loss: 0.5364
Epoch 4/15
150/150 [==============================] - 83s 556ms/step - loss: 1.8447 - rpn_class_loss: 0.0275 - rpn_bbox_loss: 0.4840 - mrcnn_class_loss: 0.3293 - mrcnn_bbox_loss: 0.4670 - mrcnn_mask_loss: 0.5370 - val_loss: 1.8504 - val_rpn_class_loss: 0.0271 - val_rpn_bbox_loss: 0.4564 - val_mrcnn_class_loss: 0.3506 - val_mrcnn_bbox_loss: 0.5124 - val_mrcnn_mask_loss: 0.5038
Epoch 5/15
150/150 [==============================] - 87s 581ms/step - loss: 1.8583 - rpn_class_loss: 0.0303 - rpn_bbox_loss: 0.5397 - mrcnn_class_loss: 0.3582 - mrcnn_bbox_loss: 0.4367 - mrcnn_mask_loss: 0.4934 - val_loss: 1.8301 - val_rpn_class_loss: 0.0298 - val_rpn_bbox_loss: 0.3861 - val_mrcnn_class_loss: 0.4897 - val_mrcnn_bbox_loss: 0.4917 - val_mrcnn_mask_loss: 0.4328
Epoch 6/15
150/150 [==============================] - 87s 577ms/step - loss: 1.6971 - rpn_class_loss: 0.0263 - rpn_bbox_loss: 0.4711 - mrcnn_class_loss: 0.3138 - mrcnn_bbox_loss: 0.4347 - mrcnn_mask_loss: 0.4512 - val_loss: 1.7609 - val_rpn_class_loss: 0.0322 - val_rpn_bbox_loss: 0.3882 - val_mrcnn_class_loss: 0.4509 - val_mrcnn_bbox_loss: 0.4555 - val_mrcnn_mask_loss: 0.4342
Epoch 7/15
150/150 [==============================] - 88s 589ms/step - loss: 1.6252 - rpn_class_loss: 0.0274 - rpn_bbox_loss: 0.4094 - mrcnn_class_loss: 0.3531 - mrcnn_bbox_loss: 0.4070 - mrcnn_mask_loss: 0.4283 - val_loss: 1.7031 - val_rpn_class_loss: 0.0241 - val_rpn_bbox_loss: 0.3748 - val_mrcnn_class_loss: 0.4289 - val_mrcnn_bbox_loss: 0.4620 - val_mrcnn_mask_loss: 0.4133
Epoch 8/15
150/150 [==============================] - 83s 554ms/step - loss: 1.4838 - rpn_class_loss: 0.0214 - rpn_bbox_loss: 0.3707 - mrcnn_class_loss: 0.3168 - mrcnn_bbox_loss: 0.3845 - mrcnn_mask_loss: 0.3903 - val_loss: 1.6012 - val_rpn_class_loss: 0.0207 - val_rpn_bbox_loss: 0.4417 - val_mrcnn_class_loss: 0.2683 - val_mrcnn_bbox_loss: 0.4142 - val_mrcnn_mask_loss: 0.4563
Epoch 9/15
150/150 [==============================] - 83s 556ms/step - loss: 1.5479 - rpn_class_loss: 0.0225 - rpn_bbox_loss: 0.3999 - mrcnn_class_loss: 0.3387 - mrcnn_bbox_loss: 0.3771 - mrcnn_mask_loss: 0.4097 - val_loss: 1.5015 - val_rpn_class_loss: 0.0215 - val_rpn_bbox_loss: 0.3295 - val_mrcnn_class_loss: 0.3370 - val_mrcnn_bbox_loss: 0.4090 - val_mrcnn_mask_loss: 0.4045
Epoch 10/15
150/150 [==============================] - 85s 567ms/step - loss: 1.4963 - rpn_class_loss: 0.0243 - rpn_bbox_loss: 0.3742 - mrcnn_class_loss: 0.3449 - mrcnn_bbox_loss: 0.3648 - mrcnn_mask_loss: 0.3881 - val_loss: 1.8873 - val_rpn_class_loss: 0.0290 - val_rpn_bbox_loss: 0.6538 - val_mrcnn_class_loss: 0.3800 - val_mrcnn_bbox_loss: 0.4104 - val_mrcnn_mask_loss: 0.4140
Epoch 11/15
150/150 [==============================] - 84s 562ms/step - loss: 1.5555 - rpn_class_loss: 0.0254 - rpn_bbox_loss: 0.3608 - mrcnn_class_loss: 0.3863 - mrcnn_bbox_loss: 0.3737 - mrcnn_mask_loss: 0.4093 - val_loss: 1.3825 - val_rpn_class_loss: 0.0203 - val_rpn_bbox_loss: 0.2715 - val_mrcnn_class_loss: 0.3360 - val_mrcnn_bbox_loss: 0.3696 - val_mrcnn_mask_loss: 0.3852
Epoch 12/15
150/150 [==============================] - 86s 573ms/step - loss: 1.4167 - rpn_class_loss: 0.0243 - rpn_bbox_loss: 0.3356 - mrcnn_class_loss: 0.3274 - mrcnn_bbox_loss: 0.3434 - mrcnn_mask_loss: 0.3860 - val_loss: 1.4704 - val_rpn_class_loss: 0.0265 - val_rpn_bbox_loss: 0.3477 - val_mrcnn_class_loss: 0.3498 - val_mrcnn_bbox_loss: 0.3704 - val_mrcnn_mask_loss: 0.3760
Epoch 13/15
150/150 [==============================] - 84s 561ms/step - loss: 1.4757 - rpn_class_loss: 0.0263 - rpn_bbox_loss: 0.4176 - mrcnn_class_loss: 0.3210 - mrcnn_bbox_loss: 0.3575 - mrcnn_mask_loss: 0.3533 - val_loss: 1.5298 - val_rpn_class_loss: 0.0231 - val_rpn_bbox_loss: 0.3967 - val_mrcnn_class_loss: 0.3761 - val_mrcnn_bbox_loss: 0.3562 - val_mrcnn_mask_loss: 0.3778
Epoch 14/15
150/150 [==============================] - 84s 561ms/step - loss: 1.3434 - rpn_class_loss: 0.0213 - rpn_bbox_loss: 0.3112 - mrcnn_class_loss: 0.3349 - mrcnn_bbox_loss: 0.3242 - mrcnn_mask_loss: 0.3518 - val_loss: 1.2733 - val_rpn_class_loss: 0.0169 - val_rpn_bbox_loss: 0.2953 - val_mrcnn_class_loss: 0.2724 - val_mrcnn_bbox_loss: 0.3376 - val_mrcnn_mask_loss: 0.3511
Epoch 15/15
150/150 [==============================] - 82s 549ms/step - loss: 1.3229 - rpn_class_loss: 0.0232 - rpn_bbox_loss: 0.3210 - mrcnn_class_loss: 0.3072 - mrcnn_bbox_loss: 0.3171 - mrcnn_mask_loss: 0.3543 - val_loss: 1.4025 - val_rpn_class_loss: 0.0231 - val_rpn_bbox_loss: 0.4472 - val_mrcnn_class_loss: 0.2449 - val_mrcnn_bbox_loss: 0.3571 - val_mrcnn_mask_loss: 0.3302
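The per-epoch losses printed above can be pulled out of the log text for plotting. A minimal sketch, assuming the Keras-style `val_loss: X.XXXX` format shown in the output (the `parse_val_losses` helper is illustrative, not part of the baseline):

```python
import re

def parse_val_losses(log_text):
    """Extract the val_loss value from each epoch line of a Keras-style log."""
    return [float(m) for m in re.findall(r"val_loss: ([0-9.]+)", log_text)]

# Two shortened lines in the same shape as the training output above
log = (
    "150/150 [==] - 83s - loss: 1.4838 - val_loss: 1.6012 - val_rpn_class_loss: 0.0207\n"
    "150/150 [==] - 83s - loss: 1.5479 - val_loss: 1.5015 - val_rpn_class_loss: 0.0215\n"
)
print(parse_val_losses(log))  # [1.6012, 1.5015]
```

The extracted list can be handed straight to `plt.plot` to eyeball whether validation loss is still decreasing at epoch 15.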
In [0]:
model_path = model.find_last()
model_path
Out[0]:
'working/logs/crowdai-food-challenge20200321T0657/mask_rcnn_crowdai-food-challenge_0015.h5'
In [0]:
class InferenceConfig(FoodChallengeConfig):
    # Run inference on one image at a time
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    NUM_CLASSES = 62  # 1 background + 61 food classes
    IMAGE_MAX_DIM = 256
    IMAGE_MIN_DIM = 256
    NAME = "food"
    DETECTION_MIN_CONFIDENCE = 0  # keep all detections; filter by score later if needed

inference_config = InferenceConfig()
inference_config.display()
Configurations:
BACKBONE                       resnet50
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     1
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0
DETECTION_NMS_THRESHOLD        0.3
FPN_CLASSIF_FC_LAYERS_SIZE     1024
GPU_COUNT                      1
GRADIENT_CLIP_NORM             5.0
IMAGES_PER_GPU                 1
IMAGE_CHANNEL_COUNT            3
IMAGE_MAX_DIM                  256
IMAGE_META_SIZE                74
IMAGE_MIN_DIM                  256
IMAGE_MIN_SCALE                0
IMAGE_RESIZE_MODE              square
IMAGE_SHAPE                    [256 256   3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.001
LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           food
NUM_CLASSES                    62
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
PRE_NMS_LIMIT                  6000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE              1
RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD              0.7
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                150
TOP_DOWN_PYRAMID_SIZE          256
TRAIN_BN                       False
TRAIN_ROIS_PER_IMAGE           200
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               50
WEIGHT_DECAY                   0.0001


In [0]:
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode='inference', 
                          config=inference_config,
                          model_dir=ROOT_DIR)

# Load trained weights (fill in path to trained weights here)
assert model_path != "", "Provide path to trained weights"
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
WARNING:tensorflow:From /content/Mask_RCNN/mrcnn/model.py:772: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Loading weights from  working/logs/crowdai-food-challenge20200321T0657/mask_rcnn_crowdai-food-challenge_0015.h5
Re-starting from epoch 15
In [0]:
# Show a few examples of ground truth vs. predictions on the validation dataset
dataset = dataset_val
fig = plt.figure(figsize=(10, 30))

for i in range(4):

    image_id = random.choice(dataset.image_ids)
    
    original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        modellib.load_image_gt(dataset_val, inference_config, 
                               image_id, use_mini_mask=False)
    
    print(original_image.shape)
    plt.subplot(6, 2, 2*i + 1)
    visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, 
                                dataset.class_names, ax=fig.axes[-1])
    
    plt.subplot(6, 2, 2*i + 2)
    results = model.detect([original_image])
    r = results[0]
    visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], 
                                dataset.class_names, r['scores'], ax=fig.axes[-1])
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
(256, 256, 3)
In [0]:
import json
with open('/content/data/val/annotations.json') as json_file:
    data = json.load(json_file)
In [0]:
# Map each category name to its COCO category id
d = {}
for x in data["categories"]:
    d[x["name"]] = x["id"]
In [0]:
# Build a lookup from model class index to COCO category id (index 0 = background)
id_category = [0]
for x in dataset.class_names[1:]:
    id_category.append(d[x])
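The cells above build a lookup from the model's class index to the dataset's COCO category id in two steps. A toy sketch of the same mapping (the category names and ids below are illustrative, not the real dataset's):

```python
# Toy annotation categories and model class names (illustrative values only)
categories = [{"name": "water", "id": 2578}, {"name": "bread", "id": 1566}]
class_names = ["BG", "water", "bread"]  # index 0 is background, as in Mask R-CNN

# Step 1: category name -> COCO id; Step 2: model class index -> COCO id
name_to_id = {c["name"]: c["id"] for c in categories}
id_category = [0] + [name_to_id[n] for n in class_names[1:]]
print(id_category)  # [0, 2578, 1566]
```

This indirection is needed because Mask R-CNN class indices are contiguous from 0, while COCO category ids in the annotation file are arbitrary integers.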
In [0]:
import tqdm
import skimage
In [0]:
files = glob.glob(os.path.join('/content/data/val/test_images/images', "*.jpg"))
_final_object = []
for file in tqdm.tqdm(files):
    images = [skimage.io.imread(file)]
    predictions = model.detect(images, verbose=0)
    for r in predictions:
        image_id = int(file.split("/")[-1].replace(".jpg", ""))
        for _idx, class_id in enumerate(r["class_ids"]):
            if class_id > 0:
                mask = r["masks"].astype(np.uint8)[:, :, _idx]
                bbox = np.around(r["rois"][_idx], 1)
                bbox = [float(x) for x in bbox]
                _result = {}
                _result["image_id"] = image_id
                _result["category_id"] = id_category[class_id]
                _result["score"] = float(r["scores"][_idx])
                # Encode the binary mask as COCO run-length encoding (RLE)
                _mask = maskUtils.encode(np.asfortranarray(mask))
                _mask["counts"] = _mask["counts"].decode("UTF-8")
                _result["segmentation"] = _mask
                # Convert [y1, x1, y2, x2] to COCO [x, y, width, height]
                _result["bbox"] = [bbox[1], bbox[0], bbox[3] - bbox[1], bbox[2] - bbox[0]]
                _final_object.append(_result)

print("Writing JSON...")
with open('/content/output.json', "w") as fp:
    json.dump(_final_object, fp)
100%|██████████| 418/418 [00:57<00:00,  7.33it/s]
Writing JSON...
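The loop above reorders each ROI from Mask R-CNN's `[y1, x1, y2, x2]` convention into COCO's `[x, y, width, height]` bbox format. A small sketch of just that conversion (the `roi_to_coco_bbox` helper is hypothetical, introduced here for illustration):

```python
def roi_to_coco_bbox(roi):
    """Convert a Mask R-CNN ROI [y1, x1, y2, x2] to a COCO bbox [x, y, w, h]."""
    y1, x1, y2, x2 = [float(v) for v in roi]
    return [x1, y1, x2 - x1, y2 - y1]

# A 100-pixel-tall, 200-pixel-wide box whose top-left corner is at (x=20, y=10)
print(roi_to_coco_bbox([10, 20, 110, 220]))  # [20.0, 10.0, 200.0, 100.0]
```

Getting this reordering wrong is a common source of near-zero AP scores, since the evaluator silently measures IoU against malformed boxes.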

In [0]:
submission_file = json.loads(open("/content/output.json").read())
len(submission_file)
Out[0]:
693
In [0]:
type(submission_file)
Out[0]:
list
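Each entry in the submission list is one detection, so several entries can share an `image_id` (693 detections over 418 images here). A quick sketch of counting detections per image (the entries below are toy data, not real predictions):

```python
from collections import Counter

# Toy entries shaped like the submission file above (image_id values are illustrative)
submission = [
    {"image_id": 1, "category_id": 2578, "score": 0.9},
    {"image_id": 1, "category_id": 1566, "score": 0.7},
    {"image_id": 2, "category_id": 2578, "score": 0.8},
]
per_image = Counter(r["image_id"] for r in submission)
print(per_image)  # Counter({1: 2, 2: 1})
```

This is a cheap sanity check before submitting: images with zero detections or an implausibly large count often point at a preprocessing or confidence-threshold problem.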
In [0]:
import json
import numpy as np

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

GROUND_TRUTH_ANNOTATION_PATH = "/content/data/val/annotations.json"
ground_truth_annotations = COCO(GROUND_TRUTH_ANNOTATION_PATH)
submission_file = json.loads(open("/content/output.json").read())
results = ground_truth_annotations.loadRes(submission_file)
cocoEval = COCOeval(ground_truth_annotations, results, 'segm')
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.38s).
Accumulating evaluation results...
DONE (t=0.20s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.050
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.090
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.054
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.017
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.053
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.086
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.087
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.087
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.026
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.092
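After `summarize()`, pycocotools stores the twelve printed numbers in `cocoEval.stats` in the same order as the summary above. A sketch of naming them for programmatic access (the `STAT_NAMES` labels are this notebook's shorthand, and the values are copied from the summary above, not recomputed):

```python
# Labels for the 12 entries of cocoEval.stats, in pycocotools' printed order
STAT_NAMES = [
    "AP IoU=0.50:0.95 all",   "AP IoU=0.50 all",         "AP IoU=0.75 all",
    "AP IoU=0.50:0.95 small", "AP IoU=0.50:0.95 medium", "AP IoU=0.50:0.95 large",
    "AR maxDets=1",           "AR maxDets=10",           "AR maxDets=100",
    "AR small",               "AR medium",               "AR large",
]
stats = [0.050, 0.090, 0.054, 0.000, 0.017, 0.053,
         0.086, 0.087, 0.087, 0.000, 0.026, 0.092]  # values from the summary above
metrics = dict(zip(STAT_NAMES, stats))
print(metrics["AP IoU=0.50:0.95 all"])  # primary metric: 0.05
```

In a live session you would use `dict(zip(STAT_NAMES, cocoEval.stats))` instead of the hard-coded list.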