Evolution of Natural Language Processing (NLP)

Harshit Mishra
2 min read · Feb 17, 2021

Natural Language Processing is arguably one of the most fascinating areas of machine learning, with vast applications in almost every industry.

A few of the most common applications of NLP are:

  • Chatbots
  • Autocorrection and Autocomplete
  • Language translation
  • Social Media analysis

And the list goes on.

In fact, the graph below shows the popularity of chatbots over the years.

Source: https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/

But the journey of the development of NLP as a widely used tool has its own vast history.

Its origins can be traced back to 1948, with a dictionary look-up system.

Traditional NLP Models


Building such a model took a lot of time. The preprocessing phase alone involved many steps, such as tokenization, stemming, and parsing.
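To make the preprocessing steps concrete, here is a minimal sketch of tokenization and stemming using only the standard library. The `crude_stem` function is a deliberately simplified stand-in for a real stemmer such as Porter's algorithm; its suffix list is an illustrative assumption, not a production rule set.

```python
import re

def tokenize(text):
    # Lowercase the text and split it into alphabetic word tokens.
    return re.findall(r"[a-z]+", text.lower())

def crude_stem(word):
    # Strip a few common English suffixes (a toy stand-in for a real
    # stemmer such as the Porter algorithm).
    for suffix in ("ing", "ies", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = [crude_stem(t) for t in tokenize("The cats were running and jumping")]
print(tokens)  # ['the', 'cat', 'were', 'runn', 'and', 'jump']
```

Note that the toy stemmer happily produces non-words like `runn` — real stemmers have many special-case rules, which is part of why this pipeline consumed so much development time.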

The bag-of-words technique was very common in traditional NLP models.
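A minimal bag-of-words sketch in plain Python: each document is reduced to a vector of word counts over a shared vocabulary, and word order is thrown away entirely.

```python
from collections import Counter

def bag_of_words(tokens, vocabulary):
    # Count how often each vocabulary word occurs; word order is
    # discarded, which is exactly why bag-of-words loses context.
    counts = Counter(tokens)
    return [counts[word] for word in vocabulary]

docs = [["the", "dog", "bit", "the", "man"],
        ["the", "man", "bit", "the", "dog"]]
vocab = sorted(set(w for d in docs for w in d))  # ['bit', 'dog', 'man', 'the']
vectors = [bag_of_words(d, vocab) for d in docs]
print(vectors)  # [[1, 1, 1, 2], [1, 1, 1, 2]]
```

The two sentences mean opposite things, yet both map to the same vector — a concrete instance of the lost-context problem described below.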

These models lacked both efficiency and accuracy: only 10% of the time was spent on training the actual model, with the rest going into computing the features, which created unnecessary strain. The context of the language was also usually lost.

Embeddings in NLP


Word embedding converts text into vectors, which are in turn fed to ML models to produce results.

Word embedding preserves the context of the data while providing it to the model.

Word2Vec is one of the most popular algorithms for learning word embeddings.
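Once words are embedded as dense vectors, semantic relatedness reduces to geometry: similar words have a high cosine similarity. The sketch below uses tiny hand-crafted 3-d vectors purely for illustration — a real Word2Vec model would learn vectors of 100–300 dimensions from a large corpus.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product normalised by the vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors, hand-crafted for illustration only (not trained).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [-0.1, 0.2, 0.9],
}
print(cosine(vectors["king"], vectors["queen"]))  # high: related words
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated words
```

With a trained model (for example via the gensim library), the same cosine comparison is what powers "find similar words" queries.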

Deep NLP

Pretrained Models

The latest big contributions, such as Google’s BERT, have changed the entire arena of NLP. There is no longer a need to build an entire model from scratch; instead, it is more common to take an already excellent model from the open-source ecosystem, fine-tune it to our needs, and use it as required.
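The fine-tuning pattern can be sketched in miniature with pure Python: a frozen "pretrained" feature extractor stays fixed while only a small task-specific head is trained. This is a conceptual sketch under heavy simplification — the feature extractor here is a hard-coded function standing in for something like BERT, and the head is plain logistic regression, not the actual Hugging Face or BERT API.

```python
import math, random

random.seed(0)

def pretrained_features(x):
    # Stand-in for a frozen pretrained model; in practice this would
    # be e.g. BERT producing contextual embeddings.
    return [x, x * x]

# Toy binary task: label is 1 when x > 0.5.
data = [(i / 10, 1.0 if i > 5 else 0.0) for i in range(11)]

# Fine-tuning: only the small task-specific head (w, b) is updated.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    x, y = random.choice(data)
    f = pretrained_features(x)           # frozen: never updated
    z = w[0] * f[0] + w[1] * f[1] + b    # trainable linear head
    p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
    w = [wi - lr * (p - y) * fi for wi, fi in zip(w, f)]
    b -= lr * (p - y)

def predict(x):
    f = pretrained_features(x)
    return 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))

print(predict(0.9) > 0.5, predict(0.1) < 0.5)  # True True
```

The key design point mirrors real fine-tuning: most of the knowledge lives in the frozen base, so only a handful of head parameters need training data.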
