
ADDI Alzheimers Detection Challenge

F1 0.376: Image-pixel representation + CNN model baseline

Create image-pixel-like representation features and a convolutional neural network baseline

nilabha

Convert Clock features to Image Pixel features

Motivation

The digit features are encoded as numbers, distances, and angles, but they do not capture the actual positions or spatial layout of the digits in a clock. Representing them on a 2-d plane is a more natural way to express the digits and the deviations of a drawing from the ideal positions. One option could be to use the hand-drawn clocks themselves (these are not directly provided, but could be derived from the numerical features). However, the purpose of the numerical features is to represent the data in a uniform manner, since the drawings can be quite imperfect and the differences hard to see. The pixel-based representation aims to get the best of both attributes:

  • Having a uniform, measurable numerical representation
  • Keeping the spatial positions in 2-d space intact

Feature Engineering Approach

A clock can be represented as a 7×7 grid of features by mapping the digit positions as follows:

************************************
*     *     *     12    *     *    *
************************************
*     *     11     *    1     *    *
************************************
*     10     *     *    *     2    *
************************************
9     *     *     *    *     *    3
************************************
*     8     *     *    *     4    *
************************************
*     *     7     *    5     *    *
************************************
*     *     *     6    *     *    *
************************************
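This layout corresponds to a fixed mapping from each digit to a (column, row) cell. A minimal sketch of that mapping, matching the translator2d dictionary defined in the setup cells further below:

# (x, y) cell of each clock digit in the 7x7 grid (x = column, y = row)
translator2d = {1: [4, 1], 2: [5, 2], 3: [6, 3], 4: [5, 4], 5: [4, 5], 6: [3, 6],
                7: [2, 5], 8: [1, 4], 9: [0, 3], 10: [1, 2], 11: [2, 1], 12: [3, 0]}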

We can fill this grid using the missing_digit_* features, one-hot encoding the digits flagged as missing (a 1 marks a missing digit).

For example, the features

missing_digit_1 = 0
missing_digit_2 = 0
missing_digit_3 = 1
missing_digit_4 = 1
missing_digit_5 = 1
missing_digit_6 = 0
missing_digit_7 = 0
missing_digit_8 = 0
missing_digit_9 = 0
missing_digit_10 = 0
missing_digit_11 = 0
missing_digit_12 = 0

can be represented as

************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    1
************************************
0     0     0     0    0     1    0
************************************
0     0     0     0    1     0    0
************************************
0     0     0     0    0     0    0
************************************

Further, if the feature sequence_flag_ccw is 1, meaning the clock has been drawn counter-clockwise, then the same clock is represented as

************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
1     0     0     0    0     0    0
************************************
0     1     0     0    0     0    0
************************************
0     0     1     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
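In code, the counter-clockwise case is just a relabelling of the digits; the sketch below matches the ccw_translate mapping defined in the setup cells further below:

# counter-clockwise relabelling: digit i is drawn where digit 12 - i normally sits
ccw_translate = {i: 12 - i for i in range(1, 13)}
ccw_translate[12] = 12  # 12 stays at the top

assert ccw_translate[3] == 9  # digit 3 lands at the 9 o'clock position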

Now the feature final_rotation_angle can further shift the clock positions. For example, if final_rotation_angle = 60°, the clock shifts by two positions (30° per hour position) and becomes

************************************
0     0     0     1    0     0    0
************************************
0     0     1     0    1     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
0     0     0     0    0     0    0
************************************
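The baseline below leaves this rotation channel commented out, but one way to apply such a shift would be to relabel each digit by final_rotation_angle / 30 hour positions. A hypothetical sketch (rotate_digit and its direction convention are assumptions, not part of the notebook's code):

def rotate_digit(digit, angle_deg):
    # hypothetical helper: 30 degrees per hour position; direction convention assumed
    shift = int(round(angle_deg / 30.0))
    return (digit - 1 + shift) % 12 + 1

assert rotate_digit(12, 60) == 2  # a 60-degree rotation shifts by two positions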

We take this 7×7 representation and upscale it by a factor of 4 to obtain a 28×28 image.

Note that this image-like pixel representation matches the size of the MNIST grayscale images (1×28×28). We can therefore apply a standard convolutional model to this representation to obtain a CNN baseline.
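The upscaling itself is a Kronecker product with a 4×4 block of ones, exactly as in the feature-building loop later in the notebook:

import numpy as np

grid = np.zeros((7, 7))
grid[3, 6] = 1                          # digit 3 flagged as missing
image = np.kron(grid, np.ones((4, 4)))  # each cell becomes a 4x4 block
assert image.shape == (28, 28)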

We show below that this simple missing-digit representation, with the counter-clockwise and rotation adjustments described above, learns enough to achieve a baseline F1 score of 0.376.

Future Ideas

Additional digit features, such as the Euclidean distance of each digit, distance from the centre, and bounding-box area, width, and height, could be concatenated on the z-axis to obtain a 3-d image-like representation, similar to RGB images (see the sketch below). Along the same lines:

  • Hour- and minute-hand features could be hot-encoded onto the correct pixels based on their orientation and distance.
  • A thinly drawn item could be represented by a transparency value from 0 to 1 (similar to A in RGBA).
  • The centre-dot feature could simply mark the pixel at the centre of the clock.
  • The ellipse-to-circle ratio features could be bucketed and used to shift digit positions to nearby pixels.
  • Similar techniques could be applied to count_defects, the percentage inside the ellipse, and the top, left, right, and bottom area features.
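As a sketch of the multi-channel idea (the extra grids here are hypothetical placeholders, stacked the way RGB channels are):

import numpy as np

digit_grid = np.zeros((7, 7))  # missing-digit channel, as above
dist_grid = np.zeros((7, 7))   # hypothetical distance-from-centre channel
hand_grid = np.zeros((7, 7))   # hypothetical hour/minute-hand channel
image_3d = np.stack([digit_grid, dist_grid, hand_grid])  # shape (3, 7, 7), like an RGB image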


What is the notebook about?

The challenge is to use the features extracted from the Clock Drawing Test to build an automated algorithm to predict which of three phases each participant is in:

1) Pre-Alzheimer’s (Early Warning)
2) Post-Alzheimer’s (Detection)
3) Normal (Not an Alzheimer’s patient)

In machine learning terms: this is a 3-class classification task.

How to use this notebook? 📝


  • Update the config parameters. You can define the common variables here.

AICROWD_DATASET_PATH: Path to the file containing test data (the data will be available at /ds_shared_drive/ on the Aridhia workspace). This should be an absolute path.
AICROWD_PREDICTIONS_PATH: Path to write the output to.
AICROWD_ASSETS_DIR: In case your notebook needs additional files (like model weights, etc.), you can add them to a directory and specify the path to the directory here (please specify a relative path). The contents of this directory will be sent to AIcrowd for evaluation.
AICROWD_API_KEY: In order to submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me
  • Installing packages. Please use the Install packages 🗃 section to install the packages.
  • Training your models. All the code within the Training phase ⚙️ section will be skipped during evaluation. Please make sure to save your model weights in the assets directory and load them in the Prediction phase section.

Setup AIcrowd Utilities 🛠

We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.

In [3]:
!pip install -q -U aicrowd-cli
In [1]:
%load_ext aicrowd.magic
In [16]:
!pip install sweetviz
!pip install -U jupyter
In [2]:
import sweetviz as sv
In [3]:
import os

# Please use an absolute path for the location of the dataset,
# or a relative path such as `os.getcwd() + "/test_data/validation.csv"`
AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", "/ds_shared_drive/validation.csv")
AICROWD_PREDICTIONS_PATH = os.getenv("PREDICTIONS_PATH", "predictions.csv")
AICROWD_ASSETS_DIR = "assets"
In [85]:
#!pip install ipywidgets
#!jupyter nbextension enable --py widgetsnbextension
#!conda install -y jupyterlab_widgets
#!pip install aquirdturtle_collapsible_headings

Install packages 🗃

Please add all package installations in this section

In [86]:
!pip install numpy pandas
!pip install -U imbalanced-learn
!pip install xgboost
!pip install lightgbm
!pip install catboost
!pip install tensorflow
!pip install shap
!pip install torch torchvision torchaudio

Define preprocessing code 💻

The code that is common between the training and the prediction sections should be defined here. During evaluation, we completely skip the training section. Please make sure to add any common logic between the training and prediction sections here.

Import common packages

Please import packages that are common for training and prediction phases here.

In [424]:
import numpy as np
import pandas as pd
import joblib
import matplotlib.pyplot as plt
from collections import Counter
import torch
from tqdm.notebook import tqdm
%matplotlib inline
In [5]:
target_col = "diagnosis"
key_col = "row_id"
cat_cols = ['intersection_pos_rel_centre']
seed = 2021

target_values = ["normal", "post_alzheimer", "pre_alzheimer"]
In [394]:
scale = 4  # upscaling factor: 7x7 grid -> 28x28 image
# (x, y) cell of each clock digit in the 7x7 grid (x = column, y = row)
translator2d = {1: [4, 1], 2: [5, 2], 3: [6, 3], 4: [5, 4], 5: [4, 5], 6: [3, 6],
                7: [2, 5], 8: [1, 4], 9: [0, 3], 10: [1, 2], 11: [2, 1], 12: [3, 0]}
# counter-clockwise relabelling: digit i is drawn where digit 12 - i normally sits
ccw_translate = {i: 12 - i for i in range(1, 13)}
ccw_translate[12] = 12  # 12 stays at the top
translator2d_ccw = {ccw_translate[k]: v for k, v in translator2d.items()}
In [395]:
import torchvision
import torch, torch.nn as nn
import torchvision.models as models
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
np.random.seed(0)
torch.manual_seed(0)
Out[395]:
<torch._C.Generator at 0x7fdcf9c01150>
In [421]:
z_dim = 1 # image_repr_features.shape[1]
n_classes = 3
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(z_dim, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)  # 20 channels * 4 * 4 after two conv + pool blocks
        self.fc2 = nn.Linear(50, n_classes)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                    # 28x28 -> 24x24 -> 12x12
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))   # 12x12 -> 8x8 -> 4x4
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.softmax(x, dim=1)
model = Net()
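As a quick sanity check of the shape flow (28 → 24 → 12 after the first conv and pool, then 12 → 8 → 4, so the flattened size is 20 * 4 * 4 = 320), a dummy forward pass can be run:

with torch.no_grad():
    dummy = torch.zeros(2, z_dim, 28, 28)  # batch of two blank 28x28 images
    out = Net().eval()(dummy)              # eval() disables the dropout layers
assert out.shape == (2, n_classes)         # one probability triple per image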

Training phase ⚙️

You can define your training code here. This section will be skipped during evaluation.

In [10]:
train = pd.read_csv('/ds_shared_drive/train.csv')
In [11]:
# valid = pd.read_csv('/ds_shared_drive/validation.csv')
# valid_truth = pd.read_csv('/ds_shared_drive/validation_ground_truth.csv')
# valid_all = valid.merge(valid_truth,how='left')
# train = pd.concat([train, valid_all],axis = 0)
In [12]:
train = train[train[target_col].isin(target_values)].copy().reset_index(drop=True)

# Remove Constant Columns
train = train.loc[:, (train != train.iloc[0]).any()]
features = train.columns[1:-1].to_list()

numeric_features = [c for c in features if c not in cat_cols]
In [13]:
for c in numeric_features:
    train[c] = train[c].astype(float)

print(train[target_col].value_counts())
print(train.shape)
normal            31208
post_alzheimer     1149
pre_alzheimer       420
Name: diagnosis, dtype: int64
(32777, 120)
In [42]:
df_pos = train[train[target_col].isin(target_values[1:])]
nb_pos = df_pos.shape[0]
# undersample the majority "normal" class to twice the number of positive samples
nb_neg = nb_pos * 2
df_neg = train[train[target_col] == "normal"].sample(n=nb_neg, random_state=seed)
df_samples = pd.concat([df_pos, df_neg]).sample(frac=1).reset_index(drop=True)
df_samples.shape
Out[42]:
(4707, 120)
In [326]:
print(cat_cols)
for c in cat_cols:
    df_samples[c].fillna("NA", inplace=True)
    
df_dummies = pd.get_dummies(df_samples[cat_cols], columns=cat_cols, dummy_na=True).add_prefix('CAT_')
dummy_cols = df_dummies.columns.to_list()
print(dummy_cols)

df_samples = pd.concat([df_samples, df_dummies], axis=1)
df_samples['cnt_NaN'] = df_samples[numeric_features].isna().sum(axis=1)
# df_samples.fillna(-1, inplace=True)
model_features = df_samples.columns.to_list()
model_features = [c for c in model_features if c not in [key_col, target_col] + cat_cols]
print(len(model_features))
X_train = df_samples[model_features]
y_train_all = df_samples[target_col].map(dict(zip(target_values, list(range(len(target_values))))))
['intersection_pos_rel_centre']
['CAT_intersection_pos_rel_centre_BL', 'CAT_intersection_pos_rel_centre_BR', 'CAT_intersection_pos_rel_centre_NA', 'CAT_intersection_pos_rel_centre_TL', 'CAT_intersection_pos_rel_centre_TR', 'CAT_intersection_pos_rel_centre_nan']
130
In [45]:
df_samples[target_col].value_counts()
Out[45]:
normal            3138
post_alzheimer    1149
pre_alzheimer      420
Name: diagnosis, dtype: int64
In [352]:
image_repr_features = None
for n, row in tqdm(X_train.iterrows()):
    # one-hot 7x7 grid marking the position of each digit flagged as missing
    image_repr = np.zeros((1, 7, 7))
    centre_repr = np.zeros((1, 7, 7))
    for i in range(1, 13):
        present = row[f'missing_digit_{i}']
        if present:
            # use the counter-clockwise mapping when the clock was drawn CCW
            translator = translator2d_ccw if row["sequence_flag_ccw"] == 1 else translator2d
            pos = translator[i]
            image_repr[0, pos[1], pos[0]] = 1

    # upscale the 7x7 grid to 28x28 (each cell becomes a 4x4 block)
    image_repr = np.kron(image_repr, np.ones((scale, scale)))
#     rot_angle_z = image_repr * row["final_rotation_angle"]/360
#     centre_dot = row["centre_dot_detect"]
#     if centre_dot == 1:
#         centre_repr[0,3,3] = 1
#     centre_repr = np.kron(centre_repr, np.ones((scale,scale)))
#     image_repr = np.vstack([image_repr,rot_angle_z,centre_repr])
    image_repr = np.expand_dims(image_repr, axis=0)
    if n > 0:
        image_repr_features = np.vstack([image_repr_features, image_repr])
    else:
        image_repr_features = image_repr

image_repr_features_no_nan = image_repr_features  # np.nan_to_num(image_repr_features)
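Growing the array with repeated np.vstack calls is quadratic in the number of rows; an equivalent and faster variant collects the per-row grids in a list and stacks once. A sketch, assuming the same loop body as above:

grids = []
for _, row in X_train.iterrows():
    image_repr = np.zeros((1, 7, 7))
    # ... fill image_repr from the missing_digit_* flags exactly as above ...
    grids.append(np.kron(image_repr, np.ones((scale, scale))))
image_repr_features = np.stack(grids)  # shape (n_rows, 1, 28, 28)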
In [912]:
# load your data

Train your model

In [918]:
train_x, val_x, train_y, val_y = train_test_split(image_repr_features_no_nan, y_train_all, test_size=0.1)
train_x = torch.from_numpy(train_x).float()
val_x = torch.from_numpy(val_x).float()
train_y = torch.from_numpy(train_y.values).long()
val_y = torch.from_numpy(val_y.values).long()

def train(epoch):
    # full-batch training: one optimizer step over the whole training set per epoch
    optimizer.zero_grad()

    # predictions for the training and validation sets
    output_train = model(train_x)
    output_val = model(val_x)

    # training and validation loss
    loss_train = criterion(output_train, train_y)
    loss_val = criterion(output_val, val_y)
    train_losses.append(loss_train.item())
    val_losses.append(loss_val.item())

    # backpropagate and update the model parameters
    loss_train.backward()
    optimizer.step()
    if epoch % 2 == 0:
        # print the validation loss
        print('Epoch :', epoch + 1, '\t', 'loss :', loss_val.item())

model = Net()
# defining the optimizer and the loss function (note: Net returns softmax
# probabilities while CrossEntropyLoss expects raw logits; kept as in the original run)
optimizer = optim.Adam(model.parameters(), lr=0.005)
criterion = nn.CrossEntropyLoss()
# number of epochs
n_epochs = 15
# lists to store the training and validation losses
train_losses = []
val_losses = []
# training the model
for epoch in range(n_epochs):
    train(epoch)

# predictions on the training set
with torch.no_grad():
    output = model(train_x)

prob = output.cpu().numpy()
predictions = np.argmax(prob, axis=1)
print(predictions.sum())
# weighted F1 score on the training set
f1_score(train_y.numpy(), predictions, average='weighted')
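The F1 reported above is computed on the training split; the same check on the held-out split uses the val_x/val_y tensors created above:

# weighted F1 score on the held-out validation split
with torch.no_grad():
    val_pred = model(val_x).argmax(dim=1)
print(f1_score(val_y.numpy(), val_pred.numpy(), average='weighted'))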

Save your trained model

In [396]:
filename = f'{AICROWD_ASSETS_DIR}/model_checkpoint'

check_point = {'params': model.state_dict(),
              'optimizer': optimizer.state_dict()}

torch.save(check_point, filename)

Prediction phase 🔎

Please make sure to save the weights from the training section in your assets directory and load them in this section

In [397]:
file = f'{AICROWD_ASSETS_DIR}/model_checkpoint'
check_point = torch.load(file)
model.load_state_dict(check_point['params'])
Out[397]:
<All keys matched successfully>

Load test data

In [404]:
test_data = pd.read_csv(AICROWD_DATASET_PATH)
test_data.head()
Out[404]:
row_id number_of_digits missing_digit_1 missing_digit_2 missing_digit_3 missing_digit_4 missing_digit_5 missing_digit_6 missing_digit_7 missing_digit_8 ... top_area_perc bottom_area_perc left_area_perc right_area_perc hor_count vert_count eleven_ten_error other_error time_diff centre_dot_detect
0 LA9JQ1JZMJ9D2MBZV 11.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.500272 0.499368 0.553194 0.446447 0 0 0 1 NaN NaN
1 PSSRCWAPTAG72A1NT 6.0 1.0 1.0 0.0 1.0 1.0 0.0 0.0 0.0 ... 0.572472 0.427196 0.496352 0.503273 0 1 0 1 NaN NaN
2 GCTODIZJB42VCBZRZ 11.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 0.494076 0.505583 0.503047 0.496615 1 0 0 0 0.0 0.0
3 7YMVQGV1CDB1WZFNE 3.0 1.0 0.0 1.0 0.0 1.0 1.0 1.0 1.0 ... 0.555033 0.444633 0.580023 0.419575 0 1 0 1 NaN NaN
4 PHEQC6DV3LTFJYIJU 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 0.0 ... 0.603666 0.395976 0.494990 0.504604 0 0 0 1 150.0 0.0

5 rows × 121 columns

In [406]:
image_repr_features_test = None
for n, row in tqdm(test_data.iterrows()):
    # same image construction as in the training phase
    image_repr = np.zeros((1, 7, 7))
    centre_repr = np.zeros((1, 7, 7))
    for i in range(1, 13):
        present = row[f'missing_digit_{i}']
        if present:
            translator = translator2d_ccw if row["sequence_flag_ccw"] == 1 else translator2d
            pos = translator[i]
            image_repr[0, pos[1], pos[0]] = 1

    image_repr = np.kron(image_repr, np.ones((scale, scale)))
    image_repr = np.expand_dims(image_repr, axis=0)
    if n > 0:
        image_repr_features_test = np.vstack([image_repr_features_test, image_repr])
    else:
        image_repr_features_test = image_repr

image_repr_features_test_no_nan = image_repr_features_test  # np.nan_to_num(image_repr_features_test)
In [407]:
# predictions for the test set
# (note: model.eval() is not called, so dropout remains active here, as in the original run)
test_x = torch.from_numpy(image_repr_features_test_no_nan).float()
with torch.no_grad():
    output = model(test_x)

prob = output.cpu().numpy()
predictions = np.argmax(prob, axis=1)
print(predictions.sum())
33

Generate predictions

In [418]:
predictions = {
    "row_id": test_data["row_id"].values,
    "normal_diagnosis_probability": [x[0] for x in prob],
    "post_alzheimer_diagnosis_probability": [x[1] for x in prob],
    "pre_alzheimer_diagnosis_probability": [x[2] for x in prob],
}

predictions_df = pd.DataFrame.from_dict(predictions)
In [419]:
predictions_df.head()
Out[419]:
row_id normal_diagnosis_probability post_alzheimer_diagnosis_probability pre_alzheimer_diagnosis_probability
0 LA9JQ1JZMJ9D2MBZV 0.166939 0.833061 5.098431e-13
1 PSSRCWAPTAG72A1NT 0.999258 0.000743 6.307389e-11
2 GCTODIZJB42VCBZRZ 0.999851 0.000149 1.460324e-11
3 7YMVQGV1CDB1WZFNE 0.107968 0.892032 4.458191e-25
4 PHEQC6DV3LTFJYIJU 0.920741 0.079259 1.108694e-18

Save predictions 📨

In [420]:
predictions_df.to_csv(AICROWD_PREDICTIONS_PATH, index=False)

Submit to AIcrowd 🚀

NOTE: PLEASE SAVE THE NOTEBOOK BEFORE SUBMITTING IT (Ctrl + S)

In [423]:
!DATASET_PATH=$AICROWD_DATASET_PATH \
aicrowd notebook submit \
    --assets-dir $AICROWD_ASSETS_DIR \
    --challenge addi-alzheimers-detection-challenge
API Key valid
Saved API Key successfully!
Using notebook: /home/desktop0/ClockFeatures.ipynb for submission...
Removing existing files from submission directory...
Scrubbing API keys from the notebook...
Collecting notebook...
Validating the submission...
Executing install.ipynb...
[NbConvertApp] Converting notebook /home/desktop0/submission/install.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python
[NbConvertApp] Writing 15301 bytes to /home/desktop0/submission/install.nbconvert.ipynb
Executing predict.ipynb...
[NbConvertApp] Converting notebook /home/desktop0/submission/predict.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python
Traceback (most recent call last):
  File "/home/desktop0/conda/bin/jupyter-nbconvert", line 11, in <module>
    sys.exit(main())
  File "/home/desktop0/conda/lib/python3.8/site-packages/jupyter_core/application.py", line 254, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/home/desktop0/conda/lib/python3.8/site-packages/traitlets/config/application.py", line 845, in launch_instance
    app.start()
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 350, in start
    self.convert_notebooks()
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 524, in convert_notebooks
    self.convert_single_notebook(notebook_filename)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 489, in convert_single_notebook
    output, resources = self.export_single_notebook(notebook_filename, resources, input_buffer=input_buffer)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/nbconvertapp.py", line 418, in export_single_notebook
    output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 181, in from_filename
    return self.from_file(f, resources=resources, **kw)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 199, in from_file
    return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
    nb_copy, resources = super().from_notebook_node(nb, resources, **kw)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 143, in from_notebook_node
    nb_copy, resources = self._preprocess(nb_copy, resources)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 318, in _preprocess
    nbc, resc = preprocessor(nbc, resc)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
    return self.preprocess(nb, resources)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 79, in preprocess
    self.execute()
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/util.py", line 74, in wrapped
    return just_run(coro(*args, **kwargs))
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/util.py", line 53, in just_run
    return loop.run_until_complete(coro)
  File "/home/desktop0/conda/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/client.py", line 553, in async_execute
    await self.async_execute_cell(
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 123, in async_execute_cell
    cell, resources = self.preprocess_cell(cell, self.resources, cell_index)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbconvert/preprocessors/execute.py", line 146, in preprocess_cell
    cell = run_sync(NotebookClient.async_execute_cell)(self, cell, index, store_history=self.store_history)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/util.py", line 74, in wrapped
    return just_run(coro(*args, **kwargs))
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/util.py", line 53, in just_run
    return loop.run_until_complete(coro)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nest_asyncio.py", line 70, in run_until_complete
    return f.result()
  File "/home/desktop0/conda/lib/python3.8/asyncio/futures.py", line 178, in result
    raise self._exception
  File "/home/desktop0/conda/lib/python3.8/asyncio/tasks.py", line 280, in __step
    result = coro.send(None)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/client.py", line 857, in async_execute_cell
    self._check_raise_for_error(cell, exec_reply)
  File "/home/desktop0/conda/lib/python3.8/site-packages/nbclient/client.py", line 760, in _check_raise_for_error
    raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
------------------
image_repr_features_test = None
for n,row in tqdm(test_data.iterrows()):
    image_repr = np.zeros((1,7,7))
    centre_repr = np.zeros((1,7,7))
    for i in range(1,13):
        col = f'missing_digit_{i}'
        present = row[col]

        if present:
            ccw_flag = row["sequence_flag_ccw"] == 1
            translator = translator2d
            if ccw_flag:
                translator = translator2d_ccw
            pos = translator[i]
            image_repr[0,pos[1],pos[0]] = 1
    
    image_repr = np.kron(image_repr, np.ones((scale,scale)))
#     rot_angle_z = image_repr * row["final_rotation_angle"]/360
#     centre_dot = row["centre_dot_detect"]
#     if centre_dot == 1:
#         centre_repr[0,3,3] = 1
#     centre_repr = np.kron(centre_repr, np.ones((scale,scale))) 
#     image_repr = np.vstack([image_repr,rot_angle_z,centre_repr])
    image_repr = np.expand_dims(image_repr, axis = 0)
    if n > 0:
        image_repr_features_test = np.vstack([image_repr_features_test,image_repr])
    else:
        image_repr_features_test = image_repr

image_repr_features_test_no_nan = image_repr_features_test # np.nan_to_num(image_repr_features)
------------------


NameError: name 'tqdm' is not defined

LocalEvaluationError: predict.ipynb failed to execute