
Challenge Unboxed📦: AI Blitz⚡9

By aryankargwal

👋🏼Welcome readers to Challenge Unboxed: AI Blitz⚡9!

Through this series, we dive deep into some of the heuristics and methods used by the winners of a challenge in the spotlight. The series seeks to introduce you to new tools and methods that will help you expand your Machine Learning horizons.

In this segment, we’re shining a spotlight on AI Blitz⚡9: Hello NLP. The platform’s first all-NLP Blitz challenge nudged participants to bring in some fresh approaches. Keep reading to learn more.


👀About the Challenge

The triumph of the human race can be greatly credited to our ability to communicate and the connections we form because of it. The same holds for the success of AI. Through AI Blitz⚡9: Hello NLP, our first all-NLP Blitz, we wanted to explore and expand this territory of AI.

The challenge consisted of five meticulously designed NLP puzzles, aimed at taking participants on a learning journey through some of the essential problems faced in the industry.

Let us look at the puzzles that the participants tackled like champions! 

  1. Emotion Detection
  2. Research Paper Classification
  3. De-shuffling Text
  4. NLP Feature Engineering
  5. Sound Prediction

🏆Winning Heuristics

The challenge saw some very interesting approaches from participants. In this blog, we break down the approach of community contributor Falak Shah for the Sound Prediction puzzle, and explore the solution of the other community contributor winner, Sean Ben Hur, for the NLP Feature Engineering puzzle.


🗣Autocorrect with DeepSpeech for Sound Prediction

Speech-to-text is a vital technology that benefits the differently-abled, and owing to its utility, this assistive tool is constantly being improved and refined. Blitz⚡9 took a different approach to the classic speech-to-text problem in its Sound Prediction puzzle: given a sound clip as input, participants had to output only the numbers spoken in it, as text.

Keeping the Starter Kit for the puzzle in mind, Falak Shah came up with an interesting solution that placed him high on the leaderboard. The starter kit introduced a baseline that used Mozilla’s DeepSpeech, a speech-to-text engine whose model is based on a paper from Andrew Ng’s Baidu research lab.
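For reference, here is a minimal sketch of what DeepSpeech inference looks like in Python. The model and audio file names are placeholders, and the starter kit’s actual preprocessing may differ:

```python
import wave
import numpy as np
import deepspeech

# Load a pretrained DeepSpeech acoustic model (file name is a placeholder)
model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")

# DeepSpeech expects 16-bit, 16 kHz mono PCM audio
with wave.open("clip.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# Run speech-to-text on the raw samples
print(model.stt(audio))  # e.g. "sevan tree nine" -- near-misses autocorrect can fix
```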

Falak’s solution introduced a simple trick: running autocorrect on the text predictions produced by the DeepSpeech model. Autocorrection provides a cheap and effective way of snapping near-miss predicted words to the nearest number word. This automatically increases the number of correctly predicted numbers, giving Falak a better chance of scoring higher on the leaderboard.

For the autocorrect module, Falak utilizes the Context Spell Checker provided by John Snow Labs. Context Spell Checker is a module that not only computes the most likely candidates for the correct spelling of a word but also takes the word’s context into account by judging the preceding and following words.
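To make the idea concrete, here is a dependency-free stand-in for the trick, using Python’s standard-library difflib instead of the Context Spell Checker. The cutoff value and number-word list are illustrative, not Falak’s exact setup:

```python
import difflib

NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]

def snap_to_numbers(transcript: str) -> str:
    """Keep only tokens that are close to a number word, snapped to it."""
    corrected = []
    for token in transcript.split():
        # get_close_matches returns the best fuzzy match above the cutoff
        match = difflib.get_close_matches(token, NUMBER_WORDS, n=1, cutoff=0.6)
        if match:
            corrected.append(match[0])
    return " ".join(corrected)

print(snap_to_numbers("sevan tree nine"))  # -> "seven three nine"
```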

The solution helped Falak climb to 4th place in the puzzle, with the fewest lines added to the starter kit! Can you think of a similar way to build upon our Starter Kits?


📊TF-IDF for Feature Engineering

Feature engineering is an integral part of training NLP models. The process involves using domain knowledge of the data to identify relevant features and derive more informative ones from them that work in the model’s favour. It stems from the hypothesis that a data-driven approach will fetch better results than a model-driven approach.

Sean Ben Hur, in his community-contribution-winning solution, shows us some incredible ways of tackling feature engineering. He uses TF-IDF as the vectorizer instead of one-hot encoding. The vectorizer plays an integral role in NLP: it maps words or phrases from the vocabulary to corresponding vectors of real numbers, which are then used for tasks such as word prediction and measuring word similarity/semantics.

TF-IDF, or Term Frequency-Inverse Document Frequency, is a statistical measure of a word’s relevance to a document in a collection of documents. It is calculated by multiplying the number of times a word appears in a document by the inverse document frequency of that word across the set of documents. You can check out this video to learn more about inverse document frequency.
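As a rough illustration of swapping one-hot features for TF-IDF (a toy corpus, not Sean’s exact notebook), here is how this looks with scikit-learn’s TfidfVectorizer:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "neural networks for text classification",
    "bayesian methods for text classification",
    "graph neural networks",
]

# Each document becomes a vector of tf(t, d) * idf(t) weights,
# L2-normalised per document
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)  # sparse matrix: (n_docs, n_vocab_terms)

# Terms shared by many documents (e.g. "text") get a low idf weight,
# rarer ones (e.g. "bayesian") get a high weight
print(vectorizer.get_feature_names_out())
print(X.toarray().round(2))
```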

The vectorizer proved successful, landing a top-3 score on the NLP Feature Engineering leaderboard with an F1 score of 0.803! The solution made Sean’s notebook stand out. See the implementation over here.


Feeling motivated to put these methods into practice? How about checking out the other beginner-friendly NLP challenges available on our platform?

Let us know what you want to read next down in the comments, or tweet us at AIcrowdHQ!🐥
