
NLP Feature Engineering

Solution for submission 148532

A detailed solution for submission 148532 submitted for challenge NLP Feature Engineering

AkashPB


Starter Code for Feature Engineering

What we are going to Learn

  • How to convert your text into numbers?
  • How Bag of Words, TF-IDF, and Word2Vec work (a quick sketch follows right after this list).
  • Testing and submitting the results to the challenge.
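Before we start, here is a tiny sketch (using scikit-learn, which we upgrade in the install section below, on made-up sentences) of what Bag of Words and TF-IDF actually produce:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["deep learning for plant disease detection",
        "plant disease images and deep neural networks"]

# Bag of Words: one column per word, values are raw counts
bow = CountVectorizer()
print(bow.fit_transform(docs).toarray())
print(bow.vocabulary_)  # which column corresponds to which word

# TF-IDF: counts are re-weighted so that words shared by every document matter less
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray().round(2))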

About this Challenge

Now, this challenge is very different from what we usually do in AIcrowd Blitz. In this challenge, the task is to generate features from text data. So, what do I mean by features? It simply means extracting meaningful information about a text. Let's take an example.

Crop diseases are a major threat to food security. The combination of increasing global smartphone
penetration and recent advances in computer vision made possible by deep
learning has paved the way for smartphone-assisted disease diagnosis. Using a
public dataset of 54,306 images of diseased and healthy plant leaves collected
under controlled conditions, we train a deep convolutional neural network to
identify 14 crop species and 26 diseases (or absence thereof). The trained
model achieves an accuracy of 99.35% on a held-out test set, demonstrating the
feasibility of this approach. Overall, the approach of training deep learning models on
increasingly large and publicly available image datasets presents a clear path
towards smartphone-assisted crop disease diagnosis on a massive global scale.

Here we can see that the paragraph contains words like images, neural network, etc. With features like these, we can quickly figure out that this seems to be a research paper on Deep Learning & Computer Vision. Extracting such features helps us generate text embeddings that contain more useful information about the text.

Setup AIcrowd Utilities 🛠

We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.

In [1]:
try:
    from google.colab import drive
    drive.mount('/content/drive') 
except:
    pass
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [2]:
!pip install -q -U aicrowd-cli

How to use this notebook? 📝

notebook overview

  • Update the config parameters. You can define the common variables here:
AICROWD_DATASET_PATH: Path to the file containing the test data (the data will be available at /data/ on the Aridhia workspace). This should be an absolute path.
AICROWD_OUTPUTS_PATH: Path to write the output to.
AICROWD_ASSETS_DIR: In case your notebook needs additional files (like model weights, etc.), you can add them to a directory and specify the (relative) path to the directory here. The contents of this directory will be sent to AIcrowd for evaluation.
AICROWD_API_KEY: In order to submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me
  • Installing packages. Please use the Install packages 🗃 section to install the packages
  • Training your models. All the code within the Training phase ⚙️ section will be skipped during evaluation. Please make sure to save your model weights in the assets directory and load them in the predictions phase section

AIcrowd Runtime Configuration 🧷

Define configuration parameters. Please include any files needed for the notebook to run under ASSETS_DIR. We will copy the contents of this directory to your final submission file 🙂

The dataset is available under /data on the workspace.

In [3]:
import os

# Please use the absolute path for the location of the dataset.
# Or you can use relative path with `os.getcwd() + "test_data/test.csv"`
AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", os.getcwd()+"/data/data.csv")
AICROWD_OUTPUTS_PATH = os.getenv("OUTPUTS_DIR", "")
AICROWD_ASSETS_DIR = os.getenv("ASSETS_DIR", "assets")

Install packages 🗃

We are going to use several different libraries to demonstrate different techniques for converting text into numbers (or, more specifically, vectors).

In [4]:
!pip install contractions
!pip install --upgrade spacy rich gensim tensorflow scikit-learn
!python -m spacy download en_core_web_sm # Downloading the English language model, which contains many pretrained preprocessing pipelines
Requirement already satisfied: contractions in /usr/local/lib/python3.7/dist-packages (0.0.52)
Requirement already up-to-date: spacy in /usr/local/lib/python3.7/dist-packages (3.0.6)
Requirement already up-to-date: rich in /usr/local/lib/python3.7/dist-packages (10.4.0)
Requirement already up-to-date: gensim in /usr/local/lib/python3.7/dist-packages (4.0.1)
Requirement already up-to-date: tensorflow in /usr/local/lib/python3.7/dist-packages (2.5.0)
Requirement already up-to-date: scikit-learn in /usr/local/lib/python3.7/dist-packages (0.24.2)
✔ Download and installation successful
You can now load the package via spacy.load('en_core_web_sm')

Define preprocessing code 💻

The code that is common between the training and the prediction sections should be defined here. During evaluation, we completely skip the training section. Please make sure to add any common logic between the training and prediction sections here.

In [5]:
# Importing Libraries
import pandas as pd
import numpy as np
import pickle

np.warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
import random
import contractions
import re
from tqdm.notebook import tqdm

# Tensorflow 
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer

# Word2vec Implementation
import spacy
nlp = spacy.load('en_core_web_sm', exclude=['tagger', 'ner', 'attribute_ruler', 'lemmatizer'])

from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

# To make things more beautiful! 
from rich.console import Console
from rich.table import Table
from rich.segment import Segment
from rich import pretty
pretty.install()

# Seeding everything for getting same results 
random.seed(42)
np.random.seed(42)

# function to display YouTube videos
from IPython.display import YouTubeVideo
/usr/local/lib/python3.7/dist-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.
  warnings.warn(msg)
In [6]:
# Latest version of gensim
import gensim
gensim.__version__
Out[6]:
'4.0.1'
In [7]:
# Defining the preprocessing functions for the test dataset, which will also run after the notebook is submitted

def preprocess_pipeline(text):
  '''
      Step 1:- Replace contractions.
      Step 2:- Remove all punctuation.
      Step 3:- Remove all numbers.
      Step 4:- Remove all emoticons.
  '''

  ### Remove contractions

  text = contractions.fix(text)

  ### Remove punctuation

  text = re.sub(r'[^\w\s]', '', text)

  ### Remove all numbers

  text = re.sub(r'[0-9]+', '', text)

  ### Remove all emoticons and smileys
  regex_pattern = re.compile(pattern = "["
                              u"\U0001F600-\U0001F64F"  # emoticons
                              u"\U0001F300-\U0001F5FF"  # symbols & pictographs
                              u"\U0001F680-\U0001F6FF"  # transport & map symbols
                              u"\U0001F1E0-\U0001F1FF"  # flags (iOS)
                                                "]+", flags = re.UNICODE)
  text = regex_pattern.sub(r'', text)

  # Keep only words longer than one character
  consider = []
  for word in text.split():
    if len(word) > 1:
      consider.append(word)

  return ' '.join(consider)

def tokenize_sentence(sentences):
  # TF-IDF over unigrams and bigrams, capped at 512 features to match the required feature length
  tf_model  = TfidfVectorizer(max_features=512, ngram_range=(1,2),
                              lowercase=True, stop_words='english')

  prep_sentences = [preprocess_pipeline(sent) for sent in sentences]

  X         = tf_model.fit_transform(prep_sentences)

  X         = np.array(X.todense())

  # Scale the TF-IDF weights by 5 and round, so every feature becomes a small integer (0-5)
  X         = np.round(X*5).astype(int)

  return X, tf_model
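As a quick illustration of what this pipeline does, here is a small, made-up example; the exact output depends on the contractions package version, so treat it as approximate:

sample = "I can't wait to try 2 new models 🙂!!"
print(preprocess_pipeline(sample))
# prints roughly: cannot wait to try new models
# ("can't" is expanded, punctuation/numbers/emoji are dropped, and one-character words like "I" are filtered out)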

Training phase ⚙️

You can define your training code here. This section will be skipped during evaluation.

Downloading Dataset

Must be a pretty familiar thing by now :) Just in case: here we are downloading the challenge dataset using the AIcrowd CLI.

In [8]:

API Key valid
Saved API Key successfully!
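The actual login command is not visible in the cell above (API keys are scrubbed from the notebook at submission time, as the submission log later shows). With the AIcrowd CLI it would look roughly like the sketch below; YOUR_API_KEY is a placeholder.

API_KEY = "YOUR_API_KEY"  # placeholder - copy yours from https://www.aicrowd.com/participants/me
!aicrowd login --api-key $API_KEY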
In [9]:
# Downloading the Dataset
!mkdir data

# Downloading the emotion classification dataset for testing purposes
!mkdir emotion-detection-data
mkdir: cannot create directory ‘data’: File exists
data.csv: 100% 110k/110k [00:00<00:00, 1.10MB/s]
mkdir: cannot create directory ‘emotion-detection-data’: File exists
train.csv:   0% 0.00/2.30M [00:00<?, ?B/s]
val.csv:   0% 0.00/262k [00:00<?, ?B/s]

test.csv:   0% 0.00/642k [00:00<?, ?B/s]
val.csv: 100% 262k/262k [00:00<00:00, 1.75MB/s]


test.csv: 100% 642k/642k [00:00<00:00, 3.19MB/s]
train.csv: 100% 2.30M/2.30M [00:00<00:00, 7.27MB/s]
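The download commands themselves are also not shown in the cell above. With the AIcrowd CLI, the step would look roughly like the sketch below; the emotion-detection challenge slug is an assumption, while nlp-feature-engineering is the slug used in the submit command at the end of this notebook.

# Sketch only: download the challenge data and the emotion detection data used for testing
!aicrowd dataset download --challenge nlp-feature-engineering
!cd emotion-detection-data && aicrowd dataset download --challenge emotion-detection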

Reading Dataset

Reading the necessary files for training, validation, and submitting our results!

We are also using the Emotion Detection Challenge dataset for testing purposes.

In [10]:
dataset = pd.read_csv("/content/data.csv")
train_data1 = pd.read_csv("emotion-detection-data/train.csv")
train_dataset = train_data1.copy()
### traindata2 is data for problem 2 
train_data2 = pd.read_csv('/content/train.csv')

test_data1  = pd.read_csv("emotion-detection-data/test.csv")
test_data2  = pd.read_csv('/content/test.csv')

val_data1   = pd.read_csv("emotion-detection-data/val.csv")
val_data2   = pd.read_csv('/content/val.csv')

dataset
Out[10]:
id text feature
0 0 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0.3745401188473625, 0.9507143064099162, 0.731...
1 1 This paper is an exposition of the so-called i... [0.9327284833540133, 0.8660638895004084, 0.045...
2 2 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0.9442664891134339, 0.47421421665746377, 0.86...
3 3 We calculate the equation of state of dense hy... [0.18114934953468032, 0.6811178539649828, 0.18...
4 4 The Donald-Flanigan conjecture asserts that fo... [0.5435382173426461, 0.08172534574677826, 0.45...
5 5 Let $E$ be a primarily quasilocal field, $M/E$... [0.7945155444907487, 0.7070864772666982, 0.050...
6 6 The paper deals with the study of labor market... [0.3129073942136482, 0.27109625376406576, 0.59...
7 7 Axisymmetric equilibria with incompressible fl... [0.40680480095172356, 0.3282331056783394, 0.45...
8 8 This paper analyses the possibilities of perfo... [0.013682414760681105, 0.08159872000483837, 0....
9 9 I show that an (n+2)-dimensional n-Lie algebra... [0.9562918815133613, 0.37667644042946247, 0.33...

Creating our Template

So, with this train_model function we are going to test the various different techniques and compare them to see which works best!

In [11]:
def train_model(X, y):

  # Splitting the dataset into training and testing,  also by using stratify, we are making sure to use the same class balance between training and testing. 
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

  # Creating and training sklearn's Decision Tree Classifier model
  clf = DecisionTreeClassifier(random_state=42)
  clf.fit(X_train, y_train)

  # Getting the predictions from the unseen (testing) dataset
  predictions = clf.predict(X_test)

  # Calculating the metrics
  f1 = f1_score(y_test, predictions, average='weighted')
  accuracy = accuracy_score(y_test, predictions)

  # Creating the table
  console = Console()
  result_table = Table(show_header=False, header_style="bold magenta")

  result_table.add_row("F1 Score", str(f1))
  result_table.add_row("Accuracy Score", str(accuracy))

  # Showing the table
  console.print(result_table)

  return f1, accuracy

Simple Tokenization 🪙

Here, all we are doing is splitting the sentences into tokens/words and assigning a unique ID to each token; with that, we have converted the text into a vector. We also use padding so that all vectors have the same maxlen of 512. A short, self-contained sketch of this idea follows; the original starter implementation is kept, commented out, in the cell after it.
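Here is a minimal sketch of simple tokenization with the Keras Tokenizer (the sentences are made up for illustration):

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = ["deep learning for plants", "plants and deep neural networks"]

tokenizer = Tokenizer(num_words=512, oov_token="<OOV>")  # <OOV> covers words not seen during fitting
tokenizer.fit_on_texts(sentences)

sequences = tokenizer.texts_to_sequences(sentences)          # words -> integer IDs
padded = pad_sequences(sequences, padding='post', maxlen=8)  # pad/truncate so every vector has the same length

print(tokenizer.word_index)
print(padded)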

In [12]:
# def tokenize_sentence(sentences, num_words=512, maxlen=512, show=False): 

#   # Creating the tokenizer. num_words caps the vocabulary size, and the OOV token (out of vocabulary) is assigned to unknown tokens,
#   # which can arise if we input a sentence containing words that the tokenizer does not have in its vocabulary

#   tokenizer = Tokenizer(num_words=num_words, oov_token="<OOV>")


#   tokenizer.fit_on_texts(sentences)
  
#   # Getting the unique ID for each token
#   word_index = tokenizer.word_index

#   # Convert the sentences into vectors
#   sequences = tokenizer.texts_to_sequences(sentences)

#   # Padding the vectors so that all vectors have the same length
#   padded_sequences = pad_sequences(sequences, padding='post', truncating='pre', maxlen=maxlen)


#   word_index = np.asarray(word_index)
#   sequences = np.asarray(sequences)
#   padded_sequences = np.asarray(padded_sequences)

#   if show==True:
#     console = Console()

#     console.log("Word Index. A unique ID is assigned to each token.")
#     console.log(word_index)
#     console.log("---"*10)

#     console.log("Sequences. senteces converted into vector.")
#     console.log(np.array(sequences[0]))
#     console.log("---"*10)

#     console.log("Padded Sequences. Adding,( 0 in this case ) or removing elements to make all vectors in the samples same.")
#     console.log(np.array(padded_sequences[0]))
#     console.log("---"*10)



#   return tokenizer, word_index, sequences, padded_sequences


In [13]:
# Sample sentences
sample_sentences = dataset.iloc[0, 1].split(".")
sample_sentences
[
    'Zero-divisors (ZDs) derived by Cayley-Dickson Process (CDP) from\nN-dimensional hypercomplex numbers (N a power of 2, at least 4) can represent\nsingularities and, as N approaches infinite, fractals -- and thereby,scale-free\nnetworks',
    ' Any integer greater than 8 and not a power of 2 generates a\nmeta-fractal or "Sky" when it is interpreted as the "strut constant" (S) of an\nensemble of octahedral vertex figures called "Box-Kites" (the fundamental\nbuilding blocks of ZDs)',
    ' Remarkably simple bit-manipulation rules or "recipes"\nprovide tools for transforming one fractal genus into others within the context\nof Wolfram\'s Class 4 complexity',
    ''
]
In [14]:
_, _ = tokenize_sentence(sample_sentences)
In [15]:
# Collecting the raw text from the training, test and validation splits
train1 = train_data1['text'].values.tolist()
# train2 = train_data2['text'].values.tolist()
test1  = test_data1['text'].values.tolist()
# test2  = test_data2['text'].values.tolist()
val1   = val_data1['text'].values.tolist()
# val2   = val_data2['text'].values.tolist()

# train1.extend(train2)
# test1.extend(test2)
# val1.extend(val2)

train1.extend(test1)
train1.extend(val1)

to_use = train1.copy()
# X,model2use = tokenize_sentence(to_use)
# tokenizer, _, _, _ = tokenize_sentence(to_use,num_words=512)

# X = tokenizer.texts_to_matrix(train_data1['text'].values.tolist())
# y = train_data1['label'].values
# print(X.shape,y.shape)
In [16]:
# !mkdir assets
# import pickle
# filename = '/content/assets/finalized_model.sav'
# pickle.dump(model2use, open(filename, 'wb'))
In [17]:
# print("Sentence : ", train_data1['text'][2])
# print("Simple Tokenizer : ", X[2])
# import gc
# gc.collect()

X,_ = tokenize_sentence(train_data1['text'].values.tolist())
y = train_data1['label'].values
X.shape,y.shape
((31255, 512), (31255,))
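Note that tokenize_sentence scales the TF-IDF weights by 5 and rounds them, so every feature ends up as a small integer between 0 and 5, as the next cell confirms: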
In [18]:
np.unique(X)
array([0, 1, 2, 3, 4, 5])
In [ ]:
token_id_f1, token_id_accuracy = train_model(X, y)

Prediction phase 🔎

Generating the features for the test dataset.

In [20]:
test_dataset = pd.read_csv(AICROWD_DATASET_PATH)

test_dataset
Out[20]:
id text feature
0 0 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0.3745401188473625, 0.9507143064099162, 0.731...
1 1 This paper is an exposition of the so-called i... [0.9327284833540133, 0.8660638895004084, 0.045...
2 2 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0.9442664891134339, 0.47421421665746377, 0.86...
3 3 We calculate the equation of state of dense hy... [0.18114934953468032, 0.6811178539649828, 0.18...
4 4 The Donald-Flanigan conjecture asserts that fo... [0.5435382173426461, 0.08172534574677826, 0.45...
5 5 Let $E$ be a primarily quasilocal field, $M/E$... [0.7945155444907487, 0.7070864772666982, 0.050...
6 6 The paper deals with the study of labor market... [0.3129073942136482, 0.27109625376406576, 0.59...
7 7 Axisymmetric equilibria with incompressible fl... [0.40680480095172356, 0.3282331056783394, 0.45...
8 8 This paper analyses the possibilities of perfo... [0.013682414760681105, 0.08159872000483837, 0....
9 9 I show that an (n+2)-dimensional n-Lie algebra... [0.9562918815133613, 0.37667644042946247, 0.33...
In [23]:
# Generating the TF-IDF features for the test dataset!
X,_ = tokenize_sentence(test_dataset['text'].values)



# X = tokenizer.texts_to_matrix(test_dataset['text'].values, mode='tfidf')

# (A Word2Vec model could also be created here; size would be the output vector size of each word)

for index, row in tqdm(test_dataset.iterrows()):
  test_dataset.iloc[index, 2] = str(X[index].tolist())

test_dataset
Out[23]:
id text feature
0 0 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
1 1 This paper is an exposition of the so-called i... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
2 2 Zero-divisors (ZDs) derived by Cayley-Dickson ... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
3 3 We calculate the equation of state of dense hy... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
4 4 The Donald-Flanigan conjecture asserts that fo... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
5 5 Let $E$ be a primarily quasilocal field, $M/E$... [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, ...
6 6 The paper deals with the study of labor market... [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ...
7 7 Axisymmetric equilibria with incompressible fl... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
8 8 This paper analyses the possibilities of perfo... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
9 9 I show that an (n+2)-dimensional n-Lie algebra... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
In [24]:
len(eval(test_dataset['feature'][0]))
512
In [25]:
# Saving the sample submission
test_dataset.to_csv(os.path.join(AICROWD_OUTPUTS_PATH,'submission.csv'), index=False)
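As an optional sanity check (not part of the original notebook), we can verify that every row of the saved file parses back into exactly 512 features:

check = pd.read_csv(os.path.join(AICROWD_OUTPUTS_PATH, 'submission.csv'))
assert all(len(eval(f)) == 512 for f in check['feature']), "every row should expose 512 features"
print("submission shape:", check.shape)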

Submit to AIcrowd 🚀

Note : Please save the notebook before submitting it (Ctrl + S)

In [26]:
!DATASET_PATH=$AICROWD_DATASET_PATH \
aicrowd -v notebook submit \
    --assets-dir $AICROWD_ASSETS_DIR \
    --challenge nlp-feature-engineering
Using notebook: /content/drive/MyDrive/Colab Notebooks/trial_feat_eng_roughwork.ipynb for submission...
Removing existing files from submission directory...
Scrubbing API keys from the notebook...
Collecting notebook...
Validating the submission...
Executing install.ipynb...
[NbConvertApp] Converting notebook /content/submission/install.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python3
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] Writing 28731 bytes to /content/submission/install.nbconvert.ipynb
Executing predict.ipynb...
[NbConvertApp] Converting notebook /content/submission/predict.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python3
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
[NbConvertApp] ERROR | unhandled iopub msg: colab_request
2021-06-26 20:01:08.171759: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[NbConvertApp] ERROR | Error while converting '/content/submission/predict.ipynb'
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/nbconvertapp.py", line 408, in export_single_notebook
    output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/exporters/exporter.py", line 179, in from_filename
    return self.from_file(f, resources=resources, **kw)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/exporters/exporter.py", line 197, in from_file
    return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
    nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/exporters/exporter.py", line 139, in from_notebook_node
    nb_copy, resources = self._preprocess(nb_copy, resources)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/exporters/exporter.py", line 316, in _preprocess
    nbc, resc = preprocessor(nbc, resc)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/preprocessors/base.py", line 47, in __call__
    return self.preprocess(nb, resources)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/preprocessors/execute.py", line 381, in preprocess
    nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/preprocessors/base.py", line 69, in preprocess
    nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
  File "/usr/local/lib/python2.7/dist-packages/nbconvert/preprocessors/execute.py", line 424, in preprocess_cell
    raise CellExecutionError.from_cell_and_msg(cell, out)
CellExecutionError: An error occurred while executing the following cell:
------------------
# # So, let's do a simple tokenization and generate the features!
X,_ = tokenize_sentence(dataset['text'].values)

X     = np.array(X.todense())

# X = tokenizer.texts_to_matrix(test_dataset['text'].values, mode='tfidf')

# Creating the wor2vec model, size is the output vector size of each word

for index, row in tqdm(test_dataset.iterrows()):
  test_dataset.iloc[index, 2] = str(X[index].tolist())

test_dataset
------------------

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-8-087e4bfdce0f> in <module>()
      1 # # So, let's do a simple tokenization and generate the features!
----> 2 X,_ = tokenize_sentence(dataset['text'].values)
      3 
      4 X     = np.array(X.todense())
      5 

NameError: name 'dataset' is not defined
NameError: name 'dataset' is not defined

Local Evaluation Error Error: predict.ipynb failed to execute
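The failure above comes from the copy of the prediction cell that was executed during evaluation: it refers to dataset, which is only defined in the training phase (and the training phase is skipped during evaluation), and it also calls .todense() on what tokenize_sentence already returns as a dense NumPy array. A corrected prediction cell, in line with the one shown earlier in this notebook, would only touch test_dataset:

test_dataset = pd.read_csv(AICROWD_DATASET_PATH)

# Generate the TF-IDF features from the test data only (no training-phase variables)
X, _ = tokenize_sentence(test_dataset['text'].values)

for index, row in tqdm(test_dataset.iterrows()):
  test_dataset.iloc[index, 2] = str(X[index].tolist())

test_dataset.to_csv(os.path.join(AICROWD_OUTPUTS_PATH, 'submission.csv'), index=False)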

Congratulations 🎉 you did it! But there is still a lot of improvement that can be made; this is a feature engineering challenge after all, which means we have to fit as much information as we can about the text into 512 numbers. We only covered converting texts into vectors, but there are many more things you can try, for example unsupervised classification; who knows, maybe it can help :)

And btw -

Don't be shy to ask questions related to any errors you are getting, or doubts about any part of this notebook, in the discussion forum or on the AIcrowd Discord server. The AIcrew will be happy to help you :)

Also, want to give us your valuable feedback for the next Blitz, or want to work with us on creating Blitz challenges? Let us know!

In [ ]:

