ashivani

Name

Ayush Shivani

Location

Hyderabad, IN

Activity

[Activity heatmap omitted]

Ratings Progression

[Chart omitted]

Challenge Categories

[Chart omitted]

Challenges Entered

Sample-efficient reinforcement learning in Minecraft

Latest submissions

graded 10553

Classify images of snake species from around the world

Latest submissions

failed 31887
graded 27332
failed 27331

Multi Agent Reinforcement Learning on Trains.

Latest submissions

graded 26932
failed 26931
failed 26777

A benchmark for image-based food recognition

Latest submissions

failed 59927

Recognise Handwritten Digits

Latest submissions

graded 67441
graded 63132
graded 60159

Online News Prediction

Latest submissions

graded 67445
failed 67442
failed 67440

Crowdsourced Map Land Cover Prediction

Latest submissions

graded 67452
graded 60242

Predict Power Consumption

Latest submissions

graded 67457
failed 67453

Predict Wine Quality

Latest submissions

graded 67444

Student Evaluation

Latest submissions

graded 67454

Predict if an AD will be clicked

Latest submissions

graded 67446
ashivani has not joined any teams yet...

CHESS

Baseline - CHESS

About 1 hour ago

Getting Started Code for Chess Educational Challenge

Author : Faizan Farooq Khan

To open this notebook on Google Colab, click below!

Open In Colab

Download Necessary Packages 📚

In [ ]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn

Download Data

The first step is to download the train and test data. We will train a model on the train data, make predictions on the test data, and submit those predictions.

In [ ]:
# Download the datasets
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/chess/v0.1/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/chess/v0.1/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [ ]:
all_data_path = "data/train.csv" #path where data is stored
In [ ]:
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas

Visualize the data 👀

In [ ]:
all_data.head()

We can see the dataset contains 7 columns, where columns 1–6 denote the positions of the white king, white rook, and black king respectively, and the last column gives the optimal depth-of-win for White in 0 to 16 moves, or -1 for a draw.

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels needed to verify our predictions.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unforeseen data: we hold back a chunk of data while training and then use it purely for testing. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold and leave-one-out. 🧐
  • Validation sets also help keep your model from overfitting the train dataset.
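The k-fold scheme mentioned above can be sketched with scikit-learn's KFold; this is an illustrative alternative to the single split used in this notebook, run on toy arrays rather than the chess dataframe:

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy feature matrix standing in for the chess data
X = np.arange(20).reshape(10, 2)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_sizes = []
for train_idx, val_idx in kf.split(X):
    # Each fold holds out a different 1/5 of the rows for validation
    fold_sizes.append((len(train_idx), len(val_idx)))

print(fold_sizes)  # five folds, each with 8 train rows and 2 validation rows
```

Averaging a metric over the five folds gives a more stable estimate than a single random split.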
In [ ]:
X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% as validation and 80% as training.
  • To learn more about the train_test_split function, click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that we have our data split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

TRAINING PHASE 🏋️

Define the Model

  • We have prepared our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard-laid rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [ ]:
classifier = SVC(gamma='auto')

#from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • You can tune its parameters to improve performance. To see the list of parameters, visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as noted earlier, is to seek them out and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try and use other classifiers to see how the performance of your model changes. Try using Logistic Regression or MLP and compare how the performance changes.
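Swapping classifiers as suggested can be done in a small loop; a sketch on synthetic data (the real notebook would fit on X_train and score on the held-out validation set rather than the training data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy dataset standing in for the chess features
X, y = make_classification(n_samples=200, n_features=6, random_state=42)

models = {
    "svc": SVC(gamma="auto"),
    "logreg": LogisticRegression(max_iter=500),
    "mlp": MLPClassifier(max_iter=500, random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X, y)
    scores[name] = model.score(X, y)  # training accuracy, just for a rough comparison

print(scores)
```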

Train the Model

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is small (set in the model in the cell above). Increase the number of iterations to see whether the warning vanishes and how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [ ]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We use basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist more metrics and measures, and you should learn to use them correctly with respect to the solution, dataset, and other factors.
  • F1 score and Log Loss are the metrics for this challenge.
In [ ]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
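Log Loss, listed above as one of the challenge metrics, needs class probabilities rather than hard labels; the SVC above does not expose them unless probability=True is set, so here is a sketch with LogisticRegression on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
clf = LogisticRegression(max_iter=500).fit(X, y)

# log_loss consumes per-class probabilities, not predicted labels
proba = clf.predict_proba(X)
ll = log_loss(y, proba)
print(ll)  # lower is better; 0 would be a perfect probabilistic prediction
```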

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [ ]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['depth'],index=False)

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.
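A quick sanity check of the generated file can catch format problems before uploading; a sketch using a toy three-row submission (the real file would have one row per test sample, with the header named per the guidelines):

```python
import pandas as pd

# Hypothetical mini-submission; in the notebook this would be the real predictions
pd.DataFrame({"depth": [3, -1, 16]}).to_csv("submission.csv", index=False)

check = pd.read_csv("submission.csv")
# The header must survive the round trip and every test row needs a prediction
assert list(check.columns) == ["depth"]
assert len(check) == 3
print("submission looks well-formed")
```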

To download the generated CSV in Colab, run the command below.

In [ ]:
try:
  from google.colab import files
  files.download('submission.csv')
except ImportError as e:
  print("Only available in Colab")

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.


LABOR

Baseline - LABOR

About 1 hour ago

Baseline for LABOR Challenge on AIcrowd

Author : Faizan Farooq Khan

Download Necessary Packages 📚

In [ ]:
!pip install numpy
!pip install pandas
!pip install scikit-learn

Download Data

The first step is to download the train and test data. We will train a model on the train data, make predictions on the test data, and submit those predictions.

In [ ]:
# Download the datasets
!rm -rf data
!mkdir data 
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/labor/v0.1/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/labor/v0.1/train.csv
!mv test.csv data/test.csv
!mv train.csv data/train.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [ ]:
all_data_path = "data/train.csv" #path where data is stored
In [ ]:
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas

Visualize the data 👀

In [ ]:
all_data.head()

We can see the dataset contains 17 columns, where columns 1–16 contain information about the labor negotiation and the last column tells whether the outcome is good (1) or bad (0).

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels needed to verify our predictions.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unforeseen data: we hold back a chunk of data while training and then use it purely for testing. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold and leave-one-out. 🧐
  • Validation sets also help keep your model from overfitting the train dataset.
In [ ]:
X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% as validation and 80% as training.
  • To learn more about the train_test_split function, click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that we have our data split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

TRAINING PHASE 🏋️

Define the Model

  • We have prepared our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard-laid rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [ ]:
classifier = SVC(gamma='auto')

#from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • You can tune its parameters to improve performance. To see the list of parameters, visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as noted earlier, is to seek them out and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try and use other classifiers to see how the performance of your model changes. Try using Logistic Regression or MLP and compare how the performance changes.
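Parameter tuning, as suggested above, can be sketched with GridSearchCV; the grid values here are illustrative, not tuned for this dataset, and the example runs on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in for the labor features
X, y = make_classification(n_samples=150, n_features=8, random_state=1)

# Search a small illustrative grid with 3-fold cross-validation
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": ["scale", "auto"]},
    cv=3,
)
grid.fit(X, y)

print(grid.best_params_)  # the combination with the best cross-validated score
```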

Train the classifier

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is small (set in the classifier in the cell above). Increase the number of iterations to see whether the warning vanishes and how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [ ]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We use basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist more metrics and measures, and you should learn to use them correctly with respect to the solution, dataset, and other factors.
  • F1 score is the metric for this challenge.
In [ ]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [ ]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Time for the moment of truth! Predict on the test set and make the submission.

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
# Change the header according to the submission guidelines
In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('/tmp/submission.csv',header=['class'],index=False)

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated CSV in Colab, run the command below.

In [ ]:
try:
  from google.colab import files
  files.download('/tmp/submission.csv')
except ImportError as e:
  print("Only available in Colab")

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.

DIBRD

Evaluation Failing

19 days ago

Hey,
The file you submitted has 230 rows with first row as header “label” followed by 229 predictions. There should be 230 predictions.
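One way to catch this kind of mismatch before submitting is to compare the prediction count against the test set; a sketch with toy files reproducing the reported 230-vs-229 discrepancy (file names are stand-ins for data/test.csv and submission.csv):

```python
import pandas as pd

# Toy stand-in files; in practice these would be data/test.csv and submission.csv
pd.DataFrame({"f": range(230)}).to_csv("test_toy.csv", index=False)
pd.DataFrame({"label": [0] * 229}).to_csv("submission_toy.csv", index=False)

n_test = len(pd.read_csv("test_toy.csv"))
n_pred = len(pd.read_csv("submission_toy.csv"))
# 229 predictions for 230 test rows reproduces the reported failure
print(n_test, n_pred, n_test == n_pred)
```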

Baseline - DIBRD

20 days ago

Thanks for pointing out. Have updated it.

Baseline - DIBRD

23 days ago

Baseline for DIBRD Challenge on AIcrowd

Author : Shubham Sharma

To open this notebook on Google Colab, click below!

Open In Colab

Download Necessary Packages

In [ ]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn

Download dataset

The first step is to download the train and test data. We will train a classifier on the train data, make predictions on the test data, and submit those predictions.

In [ ]:
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/dibrd/v0.1/train.csv -O data/train.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/dibrd/v0.1/test.csv -O data/test.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

We use pandas library to load our data. Pandas loads them into dataframes which helps us analyze our data easily. Learn more about it here

In [ ]:
train_data_path = "data/train.csv" #path where data is stored
In [ ]:
train_data = pd.read_csv(train_data_path,header=None) #load data in dataframe using pandas

Visualise the Dataset

In [ ]:
train_data.head()

You can see the columns go from 0 to 19, where columns 0 to 18 represent features extracted from the image set and the last column represents the patient label: 1 if signs of Diabetic Retinopathy are present, else 0.

Split Data into Train and Validation

Now we want to see how well our classifier is performing, but we don't have the test data labels to check against. What do we do? We split our dataset into train and validation sets. The idea is that we test our classifier on the validation set to get an idea of how well it works. This way we can also ensure that we don't overfit on the train dataset. There are many ways to do validation, like k-fold, leave-one-out, etc.
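Of the validation schemes mentioned, leave-one-out is the extreme case of k-fold where each fold holds out a single row; a sketch on toy data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

# Toy feature matrix with 6 rows
X = np.arange(12).reshape(6, 2)
loo = LeaveOneOut()

n_splits = 0
for train_idx, val_idx in loo.split(X):
    # Every split validates on exactly one held-out row
    assert len(val_idx) == 1
    n_splits += 1

print(n_splits)  # one split per row, so 6
```

It gives a near-unbiased estimate but is expensive, since the model is refit once per row.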

In [ ]:
X_train, X_val= train_test_split(train_data, test_size=0.2, random_state=42)

Here we have selected the size of the validation data to be 20% of the total data. You can change it and see what effect it has on the accuracies. To learn more about the train_test_split function, click here.

Now that we have our data split into train and validation sets, we need to separate the labels from the features.

In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

Define the Classifier

Now we come to the juicy part. We have prepared our data and now we train a classifier. The classifier will learn a function by looking at the inputs and corresponding outputs. There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc.
Tip: A good model doesn't depend solely on the classifier but on the features (columns) you choose. So make sure to play with your data and keep only what's important.

In [ ]:
classifier = LogisticRegression(solver = 'lbfgs',multi_class='auto',max_iter=10)

We have used Logistic Regression as the classifier here and set a few of its parameters. You can set more parameters to increase performance. To see the list of parameters, visit here.

We can also use other classifiers. To read more about sklearn classifiers, visit here. Try other classifiers and see how the performance of your model changes.

Train the classifier

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is small (set in the classifier in the cell above). Increase the number of iterations to see whether the warning vanishes and how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Predict on Validation

Now we use our trained classifier to predict on the validation set and evaluate our model.

In [ ]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

We use the same metrics that will be used on the test set.
F1 score is the metric for this challenge.

In [ ]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)

Prediction on Evaluation Set

Load Test Set

Load the test data now.

In [ ]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path,header=None)

Predict Test Set

Time for the moment of truth! Predict on the test set and make the submission.

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('/tmp/submission.csv',header=['label'],index=False)

Note: Do take a look at the submission format. The submission file should contain a header; here it is "label".

To download the generated CSV in Colab, run the command below.

In [ ]:
from google.colab import files
files.download('/tmp/submission.csv')

Go to the platform, participate in the challenge, and submit the generated submission.csv.

WINEQ

Baseline - WINEQ

7 days ago

Baseline for WINEQ Educational Challenge on AIcrowd

Author : Faizan Farooq Khan

Download Necessary Packages

In [ ]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn

Download Data

The first step is to download the train and test data. We will train a model on the train data, make predictions on the test data, and submit those predictions.

In [ ]:
# Download the datasets
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/wineq/v0.1/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/wineq/v0.1/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [ ]:
all_data_path = "data/train.csv" #path where data is stored
In [ ]:
all_data = pd.read_csv(all_data_path,header=None) #load data in dataframe using pandas

Visualize the data 👀

In [ ]:
all_data.head()

We can see the dataset contains 12 columns, where columns 0–10 denote different attributes of the wine and the last column gives the quality of the wine on a scale from 1 to 10.

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels needed to verify our predictions.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unforeseen data: we hold back a chunk of data while training and then use it purely for testing. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold and leave-one-out. 🧐
  • Validation sets also help keep your model from overfitting the train dataset.
In [ ]:
X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% as validation and 80% as training.
  • To learn more about the train_test_split function, click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that we have our data split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]
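The k-fold strategy mentioned above can be sketched as follows. This is a hedged, self-contained example: it uses a synthetic dataset from `make_classification` as a stand-in, since the real `all_data` frame only exists after the download cells run.

```python
# Sketch of 5-fold cross-validation as an alternative to a single
# train/validation split (synthetic stand-in data, not challenge data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=11, random_state=42)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in kf.split(X):
    model = SVC(gamma='auto')
    model.fit(X[train_idx], y[train_idx])
    # score each fold on the held-out chunk
    scores.append(model.score(X[val_idx], y[val_idx]))

print("mean 5-fold accuracy:", np.mean(scores))
```

Averaging over folds gives a more stable performance estimate than a single 80/20 split, at the cost of training the model k times.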

TRAINING PHASE 🏋️

Define the Model

  • We have fixed our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard-and-fast rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [ ]:
classifier = SVC(gamma='auto')

#from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • But you can tune its parameters and increase the performance. To see the list of parameters visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as quoted earlier, is to search for them and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try and use other classifiers to see how the performance of your model changes. Try using Logistic Regression or MLP and compare how the performance changes.
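The comparison suggested above can be sketched as a small loop over candidate models. This is a self-contained illustration using synthetic data (the real `X_train`/`y_train`/`X_val`/`y_val` come from earlier cells); the model choices are examples, not the challenge's required setup.

```python
# Sketch: fit several sklearn classifiers on the same split and compare
# their validation macro-F1 scores (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=300, n_features=11, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

candidates = {
    "svc": SVC(gamma='auto'),
    "logreg": LogisticRegression(max_iter=500),
    "mlp": MLPClassifier(max_iter=500, random_state=42),
}

results = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    results[name] = f1_score(y_val, model.predict(X_val), average='macro')
    print(name, results[name])
```

Keeping the split fixed (same `random_state`) makes the scores directly comparable across models.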

Train the classifier

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is very low (set in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes, and also how the performance changes. Do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check it.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [ ]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist many more metrics and measures; you should learn to use them correctly with respect to the solution, the dataset, and other factors.
  • F1 score is the metric for this challenge.
In [ ]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.

Load Test Set

Load the test data now

In [ ]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Time for the moment of truth! Predict on the test set and make the submission.

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['quality'],index=False)
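A quick sanity check before uploading is to read the file back and confirm the header and row count match the guidelines. This is a suggested habit, not part of the official pipeline; the predictions here are stand-in values.

```python
# Write a stand-in submission, then read it back to verify the format.
import pandas as pd

submission = pd.DataFrame([5, 6, 5])  # stand-in predictions
submission.to_csv('submission.csv', header=['quality'], index=False)

check = pd.read_csv('submission.csv')
print(list(check.columns), len(check))  # expect ['quality'] and the row count
```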

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated csv in colab run the below command

In [ ]:
try:
    from google.colab import files
    files.download('submission.csv')
except:
    print('only in colab')

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.

TMPMN

Baseline - TMPMN

7 days ago

Baseline for TMPMN Educational Challenge on AIcrowd

Author : Faizan Farooq Khan

Download Necessary Packages

In [1]:
import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn

Download data

The first step is to download our train and test data. We will train a model on the train data and make predictions on the test data, then submit those predictions.

In [1]:
#Download the datasets
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/tmpmn/v0.1/test.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/tmpmn/v0.1/train.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv

Import packages

In [1]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
from sklearn.metrics import mean_absolute_error,mean_squared_error

Load Data

  • We use pandas 🐼 library to load our data.
  • Pandas loads them into dataframes which helps us analyze our data easily.
  • Learn more about it here
In [2]:
all_data_path = "data/train.csv" #path where data is stored
In [3]:
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas

Visualize the data

In [4]:
all_data.head()
Out[4]:
|   | Max_temperature | Min_temperature | Dewpoint | Precipitation | Sea_level_pressure | Standard_pressure | Visibility | Wind_speed | Max_wind_speed | Mean_temperature |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 86.5 | 57.6 | 56.5 | 0.0 | 29.93 | 7.4 | 7.48 | 13.8 | 34.28 | 72.4 |
| 1 | 55.6 | 37.4 | 36.1 | 0.0 | 30.30 | 7.5 | 12.70 | 20.8 | 34.28 | 46.6 |
| 2 | 85.6 | 62.4 | 52.8 | 0.0 | 29.94 | 7.4 | 10.40 | 16.1 | 34.28 | 74.3 |
| 3 | 75.2 | 53.6 | 46.9 | 0.0 | 29.93 | 7.3 | 19.70 | 25.3 | 34.28 | 62.8 |
| 4 | 60.8 | 34.0 | 41.9 | 0.0 | 30.04 | 6.4 | 9.09 | 16.1 | 34.28 | 49.9 |

We can see the dataset contains 10 columns, where columns 1-9 give information about the current conditions at the location and the last column gives the mean temperature.

Split Data into Train and Validation

  • The next step is to decide how to test how well our model is performing. We cannot use the given test data, as it does not contain labels for us to verify against.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unseen data: we hold back a chunk of the data while training and then use it purely for testing. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold and leave-one-out cross-validation. 🧐
  • Validation sets also help you detect when your model is overfitting the training data.
In [6]:
X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% as validation and 80% as training.
  • To learn more about the train_test_split function click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that we have our data split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [8]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

TRAINING PHASE 🏋️

Define the Model

  • We have fixed our data and now we are ready to train our model.
  • There are a ton of regressors to choose from, like LinearRegression, etc. 🧐
  • Remember that there are no hard-and-fast rules here. You can mix and match regressors; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.
  • A good model does not depend solely on the regressor but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐
In [9]:
regressor = LinearRegression()
  • To start you off, we have used a basic Linear Regression here.
  • But you can tune its parameters and increase the performance. To see the list of parameters visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as quoted earlier, is to search for them and experiment to fit your implementation.
  • To read more about other sklearn classifiers visit here 🧐.
  • Try and use other regressors to see how the performance of your model changes.
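The suggestion to try other regressors can be sketched as a small comparison loop. This is a hedged example on synthetic regression data (the real weather features are loaded in earlier cells); Ridge and a random forest are illustrative choices, not prescribed ones.

```python
# Sketch: compare validation MAE for a few sklearn regressors
# (synthetic stand-in data, not the challenge dataset).
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=300, n_features=9, noise=5.0, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

maes = {}
for name, model in [("linear", LinearRegression()),
                    ("ridge", Ridge(alpha=1.0)),
                    ("forest", RandomForestRegressor(n_estimators=50,
                                                     random_state=42))]:
    model.fit(X_train, y_train)
    maes[name] = mean_absolute_error(y_val, model.predict(X_val))
    print(name, maes[name])
```

Lower MAE on the validation split is better; rerun with a different `random_state` to check the ranking is stable before committing to a model.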

Train the Model

In [12]:
regressor.fit(X_train, y_train)
Out[12]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)

Validation Phase 🤔

Wondering how well your model learned? Let's check it.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [13]:
y_pred = regressor.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist many more metrics and measures; you should learn to use them correctly with respect to the solution, the dataset, and other factors.
  • Mean Squared Error and Mean Absolute Error are the metrics for this challenge.
In [14]:
mse = mean_squared_error(y_val,y_pred)
mae = mean_absolute_error(y_val,y_pred)
In [15]:
print("MSE of the model is :" ,mse)
print("MAE of the model is :" ,mae)
MSE of the model is : 1.7495436632693695
MAE of the model is : 0.9798772295819201
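To make the two challenge metrics concrete, they can be written out by hand with numpy on a tiny made-up sample; the results match sklearn's functions exactly.

```python
# MSE and MAE computed from their definitions, checked against sklearn.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

y_val  = np.array([70.0, 50.0, 65.0])   # made-up targets
y_pred = np.array([72.0, 49.0, 64.0])   # made-up predictions

mse_manual = np.mean((y_val - y_pred) ** 2)    # mean of squared errors
mae_manual = np.mean(np.abs(y_val - y_pred))   # mean of absolute errors

assert np.isclose(mse_manual, mean_squared_error(y_val, y_pred))
assert np.isclose(mae_manual, mean_absolute_error(y_val, y_pred))
print(mse_manual, mae_manual)  # 2.0 and 4/3
```

MSE penalises large errors quadratically, so a single bad prediction hurts it much more than it hurts MAE.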

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [16]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [17]:
submission = regressor.predict(final_test)

Save the prediction to csv

In [19]:
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['mean_temp'],index=False)

🚧 Note:

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated csv in colab run the below command

In [1]:
try:
    from google.colab import files
    files.download('submission.csv') 
except:
    print("only in Colab")
only in Colab

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.

STDEV

Baseline - STDEV

7 days ago

Getting Started Code for STDEV Educational Challenge

Author : Faizan Farooq Khan

Download Necessary Packages

In [ ]:
import sys
!pip install numpy
!pip install pandas
!pip install scikit-learn

Download data

The first step is to download our train and test data. We will train a classifier on the train data and make predictions on the test data, then submit those predictions.

In [ ]:
#Download the datasets
!rm -rf data
!mkdir data
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/stdev/v0.1/train.csv
!wget https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/stdev/v0.1/test.csv
!mv train.csv data/train.csv
!mv test.csv data/test.csv

Import packages

In [ ]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [ ]:
all_data_path = "data/train.csv" #path where data is stored
In [ ]:
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas

Visualize the data 👀

In [ ]:
all_data.head()

We can see the dataset contains 33 columns, where columns 1-22 denote various attributes of the course and the last column gives the final evaluation score for that course.

Split Data into Train and Validation 🔪

  • The next step is to decide how to test how well our model is performing. We cannot use the given test data, as it does not contain labels for us to verify against.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unseen data: we hold back a chunk of the data while training and then use it purely for testing. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold and leave-one-out cross-validation. 🧐
  • Validation sets also help you detect when your model is overfitting the training data.
In [ ]:
X_train, X_val= train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% as validation and 80% as training.
  • To learn more about the train_test_split function click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that we have our data split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [ ]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

TRAINING PHASE 🏋️

Define the Model

  • We have fixed our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard-and-fast rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [ ]:
classifier = SVC(gamma='auto')

#from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • But you can tune its parameters and increase the performance. To see the list of parameters visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as quoted earlier, is to search for them and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try and use other classifiers to see how the performance of your model changes. Try using Logistic Regression or MLP and compare how the performance changes.
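Rather than hand-picking SVC parameters, the tuning suggested above can be automated with `GridSearchCV`. This is a minimal, self-contained sketch on synthetic data; the parameter grid is purely illustrative.

```python
# Sketch: grid-search SVC hyperparameters with cross-validated macro F1
# (synthetic stand-in data, illustrative parameter grid).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=42)

grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", "auto"]},
    cv=3,                 # 3-fold cross-validation per candidate
    scoring="f1_macro",   # optimise the challenge metric
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

After fitting, `grid.best_estimator_` is already refit on all the data and can be used directly for prediction.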

Train the Model

In [ ]:
classifier.fit(X_train, y_train)

Got a warning? Don't worry, it is just because the number of iterations is very low (set in the classifier in the cell above). Increase the number of iterations and see if the warning vanishes, and also how the performance changes. Do remember that increasing the iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model learned? Let's check it.

Predict on Validation

Now we predict using our trained model on the validation set we created and evaluate our model on unforeseen data.

In [ ]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist many more metrics and measures; you should learn to use them correctly with respect to the solution, the dataset, and other factors.
  • F1 score and Log Loss are the metrics for this challenge.
In [ ]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [ ]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it is time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [ ]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [ ]:
submission = classifier.predict(final_test)

Save the prediction to csv

In [ ]:
#change the header according to the submission guidelines
In [ ]:
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['rating'],index=False)

🚧 Note :

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

To download the generated csv in colab run the below command

In [ ]:
try:
    from google.colab import files
    files.download('submission.csv') 
except:
    print('only on colab')

Well done! 👍 We are all set to make a submission and see your name on the leaderboard. Let's navigate to the challenge page and make one.

POWER

Baseline - Power

7 days ago