Beginner's Guide to Text Classification | Machine Learning | NLP | Part 8

In this post, we will develop a classification model that classifies movie reviews into positive and negative classes. I have used different machine learning algorithms to train the model and compared their accuracy at the end. You can keep this post as a template for using various machine learning algorithms in Python for text classification.

At the end, we will validate the model by passing it a random review and examining the class it predicts. You will also learn how to create and use a pipeline that combines numerical feature extraction and model training into a single step.
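As a rough sketch of what such a pipeline can look like (assuming scikit-learn; the tiny sample reviews and the LinearSVC classifier below are illustrative choices, not necessarily the ones used in the post):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Illustrative placeholder data, not the post's movie-review dataset.
reviews = ["A wonderful, moving film.", "Dull plot and terrible acting."]
labels = ["positive", "negative"]

# The pipeline bundles numerical feature extraction and model training
# so that both run together as a single step.
clf = Pipeline([
    ("tfidf", TfidfVectorizer()),  # convert raw text to TF-IDF features
    ("model", LinearSVC()),        # train a linear classifier on those features
])
clf.fit(reviews, labels)

# Validate by passing a random review and checking the predicted class.
print(clf.predict(["The movie was fantastic!"]))
```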


Numerical Feature Extraction from Text | NLP series | Part 6

Machine learning algorithms don't understand textual data; they understand only numerical data. So the problem is how to convert textual data into numerical features and then pass those features to the machine learning algorithms.

Raw text sitting in a data repository contains a lot of meaningful information, and in today's fast-changing world it is essential to make data-driven decisions rather than rely entirely on experience-driven ones.
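A minimal sketch of this text-to-numbers conversion, assuming scikit-learn's CountVectorizer and a couple of made-up sentences:

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["the movie was good", "the movie was bad"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)  # sparse document-term matrix

# get_feature_names_out() is the scikit-learn >= 1.0 name;
# older versions expose get_feature_names() instead.
print(vectorizer.get_feature_names_out())  # vocabulary learned from the corpus
print(X.toarray())                         # per-document word counts
```

Each row of the resulting matrix is a numerical representation of one document, which is what a machine learning algorithm can actually consume.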

How to Perform Sentence Segmentation or Sentence Tokenization using spaCy | NLP Series | Part 5

Sentence segmentation, or sentence tokenization, is the process of identifying individual sentences within a group of words. spaCy, a library designed for Natural Language Processing, performs sentence segmentation with high accuracy and provides different models for different languages. In this post we'll learn how sentence segmentation works and how to set user-defined segmentation rules.
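A minimal sketch of both the default behaviour and a user-defined rule, assuming spaCy 3.x with the en_core_web_sm model installed (the semicolon rule below is just an illustrative custom boundary):

```python
import spacy
from spacy.language import Language

nlp = spacy.load("en_core_web_sm")

# Default sentence segmentation.
doc = nlp("This is the first sentence. This is the second one.")
for sent in doc.sents:
    print(sent.text)

# A user-defined rule: start a new sentence after every semicolon.
@Language.component("semicolon_boundary")
def semicolon_boundary(doc):
    for token in doc[:-1]:
        if token.text == ";":
            doc[token.i + 1].is_sent_start = True
    return doc

# The custom component must run before the parser so its boundaries are respected.
nlp.add_pipe("semicolon_boundary", before="parser")
doc = nlp("First clause; second clause.")
for sent in doc.sents:
    print(sent.text)
```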

Parts of Speech Tagging and Dependency Parsing using spaCy | NLP | Part 3

Parts of speech tagging is the next step after tokenization. Once we have done tokenization, spaCy can parse and tag a given Doc. spaCy's pre-trained statistical models consist of binary data trained on enough examples to make predictions that generalize across the language; for example, a word following "the" in English is most likely a noun.
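A short sketch of what tagging and dependency parsing look like in code, assuming spaCy with the en_core_web_sm model installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # pre-trained statistical model for English
doc = nlp("The quick brown fox jumps over the lazy dog.")

for token in doc:
    # pos_: coarse part-of-speech tag, tag_: fine-grained tag,
    # dep_: dependency relation to the token's syntactic head.
    print(token.text, token.pos_, token.tag_, token.dep_, token.head.text)
```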
