AkashPB

Name: Akash PB
Organization: The Smart Cube
Location: IN
Badges: Gold 0 · Silver 0 · Bronze 0
Connect: LinkedIn

Activity

(contribution calendar, Oct–Oct, not shown)

Ratings Progression

(chart not loaded)

Challenge Categories

(chart not loaded)

Challenges Entered

5 Puzzles 21 Days. Can you solve it all?
Latest submissions: graded 155613, graded 155612, failed 155611

5 Puzzles 21 Days. Can you solve it all?
Latest submissions: submitted 152642, graded 152641, graded 152640

5 Puzzles 21 Days. Can you solve it all?
Latest submissions: failed 148986, failed 148980, failed 148979

Deshuffle the Shuffled Text
Latest submissions: graded 147159, graded 147137, graded 147132

AI Blitz #9

Submission format for the NLP feature engineering challenge

4 months ago

(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

Submission format for the NLP feature engineering challenge

4 months ago

Now things are clear. I think out of all the problems given, this one is the most challenging of all. :+1: :+1:

Submission format for the NLP feature engineering challenge

4 months ago

Hi Shubham,
Thanks! Now I get it … Damn, the code structure is too rigid :sweat_smile:. Also, I am trying to understand the problem statement:

  1. We have training data whose input feature is text and whose output feature is a label. In your starter notebook, you treat the emotion detection dataset as the training data and its labels as the target. My question is: is the emotion detection dataset the training dataset for our problem?
     I am getting confused because the description section says:
     "Working on the same Research Paper Dataset you used in the multi-class problem, you will be building a model using the word2vec approach using Tensorflow."
     To be honest, the train, test, and validation datasets are not clearly defined for this problem.

  2. We need to produce embeddings in such a way that the F1 score increases on the test dataset. I can see that datasets.csv is the test data, with just 10 observations. Is this the complete data, or is there hidden data against which our solution must generalize?

  3. I can also see "Each vector should only contain 512 elements" in the description. Does this mean we cannot use embeddings from SOTA models (like SBERT) here, which may have more than 512 elements?

  4. Are you going to use any other models, like a Decision Tree Classifier, in the "train_model" function? I mean, how exactly is the F1 score computed for the leaderboard?
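For what it's worth, one common way to satisfy a fixed 512-element constraint is to mean-pool per-word vectors into a single text embedding. This is only a minimal sketch of that idea; the `embed_text` function and the toy vocabulary below are hypothetical illustrations, not part of the starter notebook or the challenge's actual pipeline:

```python
import numpy as np

EMBED_DIM = 512  # the description requires exactly 512 elements per vector

def embed_text(text, vocab_vectors):
    """Mean-pool per-word vectors into one fixed-size text embedding.

    vocab_vectors maps each known word to a 512-dim vector (e.g. trained
    with a word2vec model); words outside the vocabulary are skipped.
    """
    words = text.lower().split()
    hits = [vocab_vectors[w] for w in words if w in vocab_vectors]
    if not hits:
        # no known words: fall back to the zero vector, still 512-dim
        return np.zeros(EMBED_DIM)
    return np.mean(hits, axis=0)

# toy vocabulary with random 512-dim vectors (a stand-in for real word2vec output)
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=EMBED_DIM) for w in ["research", "paper", "dataset"]}

vec = embed_text("Research paper dataset", vocab)
print(vec.shape)  # (512,)
```

Mean-pooling keeps the output dimensionality equal to the word-vector dimensionality regardless of text length, which is why it fits a hard 512-element limit.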

Submission format for the NLP feature engineering challenge

4 months ago

Hi Shubham,
I am still getting the DockerBuildError message. I am running your starter notebook as-is, but I still get the error mentioned above.
Thanks,
Akash

Evaluation Timed out message

4 months ago

Hi Team,
There seems to be a bug in the system. I can see a lot of "evaluation timed out" errors while submitting. One of my own submissions showed this, and worse, I did not even save that submission, because I was submitting via a Colab notebook and did a factory reset after the submission was made :sweat_smile:.

Can anyone fix the bug so that we can make our submissions? Also, can the timed-out submissions be rerun from your backend, so that we know what we could have scored on them?

Thanks in advance!
Akash PB

