Parts of Speech Tagging and Dependency Parsing using spaCy | NLP | Part 3

Part-of-speech tagging is the next step after tokenization. Once a text has been tokenized, spaCy can parse and tag the resulting Doc. spaCy ships with pretrained statistical models: binary data trained on enough examples to make predictions that generalize across the language. For example, a word following “the” in English is most likely a noun.
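
As a quick illustration of what that part covers, here is a minimal sketch of tagging and parsing a Doc with spaCy, assuming the small English model en_core_web_sm has been downloaded; the sample sentence is illustrative only.

```python
# Minimal sketch: POS tagging and dependency parsing with spaCy,
# assuming en_core_web_sm is installed (python -m spacy download en_core_web_sm)
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumped over the lazy dog.")

for token in doc:
    # pos_ = coarse-grained part of speech, tag_ = fine-grained tag,
    # dep_ = dependency relation, head = the token this word attaches to
    print(f"{token.text:<10} {token.pos_:<6} {token.tag_:<6} {token.dep_:<10} {token.head.text}")
```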


A Quick Guide to Tokenization, Lemmatization, Stop Words, and Phrase Matching using spaCy | NLP | Part 2

spaCy is designed specifically for production use. It helps you build applications that process and “understand” large volumes of text. It can be used to build information extraction or natural language understanding systems, or to pre-process text for deep learning. In this article you will learn about Tokenization, Lemmatization, Stop Words and Phrase Matching operations using spaCy.
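
A minimal sketch of those four operations, again assuming spaCy v3 and the en_core_web_sm model; the example text and the pattern name are illustrative only.

```python
# Minimal sketch: tokenization, lemmatization, stop words and phrase matching,
# assuming spaCy v3 and the en_core_web_sm model (example text is illustrative)
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Each token carries its lemma and a stop-word flag out of the box
for token in doc:
    print(token.text, token.lemma_, token.is_stop)

# PhraseMatcher finds literal multi-word patterns in a Doc
matcher = PhraseMatcher(nlp.vocab)
matcher.add("UK_STARTUP", [nlp("U.K. startup")])  # pattern name is arbitrary
for match_id, start, end in matcher(doc):
    print("Matched:", doc[start:end].text)
```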

spaCy Installation and Basic Operations | NLP Text Processing Library | Part 1

spaCy is an open-source Python library that parses and “understands” large volumes of text.
spaCy is the best way to prepare text for deep learning.
It interoperates seamlessly with TensorFlow, PyTorch, scikit-learn, Gensim and the rest of Python’s awesome AI ecosystem.
With spaCy, you can easily construct linguistically sophisticated statistical models for a variety of NLP problems.
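
A minimal sketch of installation and a first pipeline run, assuming a standard pip-based environment; the sample sentence is illustrative only.

```python
# Minimal sketch of installation and a first run, assuming a pip environment:
#   pip install spacy
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")   # load a pretrained English pipeline
doc = nlp("spaCy interoperates with TensorFlow, PyTorch and scikit-learn.")

print([token.text for token in doc])                  # tokens
print([(ent.text, ent.label_) for ent in doc.ents])   # named entities, if any
```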

What is Bagging in Ensemble Learning

Ensemble learning asks: if we can build multiple models, why select only the best one? Why not the top 2, the top 3, or even the top 10? If you find 10 good models, deploy all 10. When new data arrives, get a prediction from each of the 10 models, combine those predictions, and make a joint prediction. This is the key idea of ensemble learning.
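
A minimal bagging sketch along these lines, using scikit-learn's BaggingClassifier (an assumption here, not something taken from the article) with its default decision-tree base learner and the Iris dataset purely for illustration:

```python
# Minimal bagging sketch with scikit-learn (an assumption, not from the article);
# BaggingClassifier's default base learner is a decision tree, and the Iris
# dataset is used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train 10 models on bootstrap samples of the training data, then let them vote
bagging = BaggingClassifier(n_estimators=10, bootstrap=True, random_state=42)
bagging.fit(X_train, y_train)

# Each new example is scored by all 10 models and the predictions are combined
print("Joint prediction accuracy:", bagging.score(X_test, y_test))
```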
