DLNLP-note
[Starter Code] DL-NLP-1 Sentiment Analysis
This is the starter code for the DL-NLP Sentiment Analysis assignment.
It contains basic instructions on using the notebook to make submissions, plus pre-defined functions for participants to fill in. You will make a copy of this notebook and work in that copy.
Assignment 1, Sentiment Analysis
In this assignment, you're tasked with identifying the rating of a review using sentiment analysis.
The training dataset consists of {sentence id, review, rating}. Given a review, your neural network should be able to assign a rating score between 1 and 5 (low to high, respectively).
Do note:
- Perform sentiment analysis using a neural network with just one input and one output layer (with softmax as the activation function).
- You're not allowed to have hidden layers.
How to use this notebook?
This is a shared template and any edits you make here will not be saved. You should make a copy in your own drive: click the "File" menu (top-left), then "Save a Copy in Drive". You can then work in your copy however you like.
- Update the config parameters. You can define the common variables here
Variable | Description |
---|---|
TRAIN_DATA_PATH | Path to the file containing the training data (the data will be available at /data/). |
TEST_DATA_PATH | Path to the file containing the test data (the data will be available at /data/). |
PREDICTIONS_PATH | Path to write the output to. |
ASSETS_DIR | In case your notebook needs additional files (like model weights, etc.), add them to a directory and specify the relative path to that directory here. The contents of this directory will be sent to AIcrowd for evaluation. |
API_KEY | In order to submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me |
- Installing packages. Please use the Install packages section to install any packages you need.
- Training your models. All the code within the Training phase section will be skipped during evaluation. Please make sure to save your model weights in the assets directory and load them in the Prediction phase section.
Dataset Specifications
- train.csv: has 3 columns, with the latter two being 'reviews' and the corresponding 'ratings'.
- test.csv: has 2 columns, with the latter being 'reviews'. You will have to predict the corresponding 'ratings'.
- 'ratings' in your predictions should be integers in the range [1, 5], i.e. {1, 2, 3, 4, 5} (an optional sanity-check sketch follows the download step below).
Setup AIcrowd Utilities
We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.
!pip install -U git+https://gitlab.aicrowd.com/aicrowd/aicrowd-cli.git@notebook-submission-v2 > /dev/null
%load_ext aicrowd.magic
Install packages
Please add all package installations in this section.
!pip install numpy pandas
Import necessary modules and packages
import os
import pandas as pd
import numpy as np
# Add your necessary modules & packages here
AIcrowd Runtime Configuration
Define the configuration parameters. Please include any files needed for the notebook to run under ASSETS_DIR; we will copy the contents of this directory into your final submission file. The dataset is available under /data on the workspace.
class AIcrowdConfig:
    DATASET_DIR = "data"
    TEST_DATA_PATH = os.path.join(DATASET_DIR, "test.csv")
    TRAIN_DATA_PATH = os.path.join(DATASET_DIR, "train.csv")
    PREDICTIONS_PATH = "predictions.csv"
    ASSETS_DIR = "assets"
    API_KEY = ""  # Get your key from https://www.aicrowd.com/participants/me (ctrl + click the link)
Download the dataset
AIcrowd magic functions will download the dataset after authenticating your API key.
%aicrowd login --api-key "$AIcrowdConfig.API_KEY"
%aicrowd dataset download -c dlnlp-note
Move the downloaded dataset files into the data directory.
!mkdir $AIcrowdConfig.DATASET_DIR
!mv train.csv $AIcrowdConfig.DATASET_DIR
!mv test.csv $AIcrowdConfig.DATASET_DIR
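As a quick sanity check against the Dataset Specifications above, this optional sketch takes a first look at the two files; it assumes the reviews and ratings column names used later in this notebook.
# Optional: quick look at the downloaded data
train_df = pd.read_csv(AIcrowdConfig.TRAIN_DATA_PATH)
test_df = pd.read_csv(AIcrowdConfig.TEST_DATA_PATH)
print(train_df.columns.tolist())
print(test_df.columns.tolist())

# Ratings should be integers in {1, 2, 3, 4, 5}
assert train_df["ratings"].between(1, 5).all()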
Define preprocessing code
Any code that is shared between the training and prediction sections should be defined here. During evaluation, the training section is skipped entirely, so make sure all common logic lives in this section.
'''
About the task:
You are provided with a code flow which consists of functions to be implemented (MANDATORY).
You need to implement each of the functions mentioned below; you may add your own function parameters if needed.
'''
def encode_data(text):
    # This function will be used to encode the reviews using a dictionary (created from the corpus vocabulary).
    # Example of encoding: "The food was fabulous but pricey" has a vocabulary of 6 words, each of which has to be
    # mapped to an integer, e.g. {'The': 1, 'food': 2, 'was': 3, 'fabulous': 4, 'but': 5, 'pricey': 6}.
    # This vocabulary has to be created for the entire corpus and then used to encode the words into integers.
    # return the encoded examples
    pass

def convert_to_lower(text):
    # return the reviews after converting them to lowercase
    pass

def remove_punctuation(text):
    # return the reviews after removing punctuation
    pass

def remove_stopwords(text):
    # return the reviews after removing the stopwords
    pass

def perform_tokenization(text):
    # return the reviews after performing tokenization
    pass

def perform_padding(data):
    # return the reviews after padding them to the maximum length
    pass

def preprocess_data(data):
    # make all of the following function calls on your data
    review = data["reviews"]
    review = convert_to_lower(review)
    review = remove_punctuation(review)
    review = remove_stopwords(review)
    review = perform_tokenization(review)
    review = encode_data(review)
    processed_data = perform_padding(review)
    # return processed_data  # Uncomment this once the functions above are implemented
    # Remove the dummy return below
    return np.zeros((len(data["reviews"]), 100))
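To make the vocabulary-encoding and padding ideas above concrete, here is a toy sketch on a two-review corpus (illustrative only; it is not a drop-in implementation of the functions above):
# Toy illustration of vocabulary encoding and padding (not part of the required solution)
toy_reviews = [["the", "food", "was", "fabulous"], ["pricey", "but", "fabulous"]]

# Build a word -> integer dictionary over the whole (toy) corpus, reserving 0 for padding
vocab = {}
for toy_review in toy_reviews:
    for word in toy_review:
        if word not in vocab:
            vocab[word] = len(vocab) + 1

# Encode each tokenized review, then pad all of them to the maximum length with zeros
encoded = [[vocab[word] for word in toy_review] for toy_review in toy_reviews]
max_len = max(len(toy_review) for toy_review in encoded)
padded = np.array([toy_review + [0] * (max_len - len(toy_review)) for toy_review in encoded])
print(vocab)
print(padded)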
Define your Softmax function
You have to write your own implementation from scratch and return the softmax values (using a predefined softmax is prohibited).
def softmax_activation(x):
    # write your implementation here
    pass
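For reference, softmax maps a score vector x to exp(x_i) / Σ_j exp(x_j). Below is a minimal, numerically stable numpy sketch (illustrative only; the name _softmax_example is hypothetical and your own implementation may differ):
# Illustrative numpy softmax; subtracting the row-wise max avoids overflow in exp
def _softmax_example(x):
    x = np.asarray(x, dtype=float)
    shifted = x - np.max(x, axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)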
Training phase
You can define your training code here. This section will be skipped during evaluation.
Define your model
You should define your model-related methods here using the given template.
# Example with TensorFlow, but you can replace it with PyTorch
# For cleaner code, add all imports to the imports cell above
import tensorflow
from tensorflow import keras
class NeuralNet:
    def __init__(self, reviews, ratings):
        self.reviews = reviews
        self.ratings = ratings

    def build_nn(self):
        # Add the input and output layer here; you can use either TensorFlow or PyTorch
        model = keras.models.Sequential()
        model.add(keras.layers.Input((100,)))
        model.add(keras.layers.Dense(np.max(self.ratings) + 1, activation='softmax'))
        ####### Use the softmax activation that you wrote code for above #####
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
        self.model = model

    def train_nn(self, batch_size, epochs):
        # Write the training loop here; you can use either TensorFlow or PyTorch
        # Print the validation accuracy
        self.model.fit(x=self.reviews, y=self.ratings, batch_size=batch_size, epochs=epochs)

    def predict(self, reviews):
        # Return a list containing all the ratings predicted by the trained model
        return self.model.predict(reviews)
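Keras accepts a plain Python callable as a layer activation, so once softmax_activation is written with TensorFlow ops (so it can run inside the graph) it can be plugged into the Dense layer directly. A hedged sketch, with the hypothetical tf_softmax_example standing in for your implementation:
import tensorflow as tf
from tensorflow import keras

# Illustrative TensorFlow-ops softmax; replace with your own softmax_activation
def tf_softmax_example(x):
    shifted = x - tf.reduce_max(x, axis=-1, keepdims=True)
    exps = tf.exp(shifted)
    return exps / tf.reduce_sum(exps, axis=-1, keepdims=True)

# A callable can be passed directly as a layer's activation
example_output_layer = keras.layers.Dense(5, activation=tf_softmax_example)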
Load training data
train_data = pd.read_csv(AIcrowdConfig.TRAIN_DATA_PATH)
train_data.head()
Initialize & Train your model
batch_size, epochs = 1000, 3
train_reviews = preprocess_data(train_data)
train_ratings = train_data['ratings'].values - 1
model = NeuralNet(train_reviews, train_ratings)
model.build_nn()
model.train_nn(batch_size, epochs)
Save your trained model
if not os.path.isdir(AIcrowdConfig.ASSETS_DIR):
    os.mkdir(AIcrowdConfig.ASSETS_DIR)

# This is an example for a Keras model; save your model according to your framework
model.model.save(os.path.join(AIcrowdConfig.ASSETS_DIR, "dummy_model.h5"))
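If you work in PyTorch instead, a common pattern is to save the model's state dict into the same assets directory. A minimal sketch (illustrative only; example_torch_model is a placeholder and this assumes torch is installed):
import torch

# Placeholder model purely for illustrating save/load of weights
example_torch_model = torch.nn.Linear(100, 5)
torch.save(example_torch_model.state_dict(),
           os.path.join(AIcrowdConfig.ASSETS_DIR, "example_model.pt"))
example_torch_model.load_state_dict(
    torch.load(os.path.join(AIcrowdConfig.ASSETS_DIR, "example_model.pt")))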
Prediction phase
Please make sure to save the weights from the training section in your assets directory and load them in this section.
from tensorflow import keras
model = keras.models.load_model(os.path.join(AIcrowdConfig.ASSETS_DIR, "dummy_model.h5"))
Load test data
test_data = pd.read_csv(AIcrowdConfig.TEST_DATA_PATH)
test_data.head()
Read and preprocess the data
test_reviews = preprocess_data(test_data)
Generate predictions
# Make your predictions here based on your model
raw_predictions = model.predict(test_reviews)
# argmax returns 0-based class indices; shift back to the 1-5 rating scale used during training
predictions = np.argmax(raw_predictions, axis=-1) + 1
Save predictions
pd.DataFrame(predictions, columns=["ratings"]).to_csv(AIcrowdConfig.PREDICTIONS_PATH)
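Optionally, as a final check before submitting (a small sketch, reading back the predictions file written above), verify that every predicted rating falls in the allowed 1-5 range:
# Optional sanity check on the saved predictions file
saved_predictions = pd.read_csv(AIcrowdConfig.PREDICTIONS_PATH)
assert saved_predictions["ratings"].between(1, 5).all()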
Submit to AIcrowd
NOTE: PLEASE SAVE THE NOTEBOOK BEFORE SUBMITTING IT (Ctrl + S)
%aicrowd submission create --jupyter -c dlnlp-note