In the field of machine learning, logistic regression is still a top choice for classification problems. It is a simple yet efficient algorithm that produces accurate models in most cases. In its basic form, it applies the logistic function to a linear combination of the inputs to compute a probability score, which is then used to assign the binary dependent variable to its respective class; in that sense it can be seen as a transformation of linear regression. In this post I have explained the end-to-end steps involved in a classification machine learning problem using logistic regression, and also performed a detailed analysis of the model output with various performance metrics.
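The probability score mentioned above comes from applying the logistic (sigmoid) function to the same weighted sum that linear regression would output. A minimal sketch of that transformation, using made-up weights rather than fitted ones:

```python
import math

def predict_proba(features, weights, bias):
    # the linear-regression part: a weighted sum of the inputs
    z = sum(w * x for w, x in zip(weights, features)) + bias
    # the logistic (sigmoid) transform squashes z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def classify(features, weights, bias, threshold=0.5):
    # assign the positive class when the probability crosses the threshold
    return 1 if predict_proba(features, weights, bias) >= threshold else 0

# illustrative weights only; in practice they are learned from the data
print(predict_proba([0.0, 0.0], [1.5, -2.0], 0.0))  # z = 0 -> probability 0.5
print(classify([2.0, 0.5], [1.5, -2.0], 0.0))
```

In a real model the weights and bias are fitted by maximizing the likelihood of the training labels; only the prediction step is shown here.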
In this post, we will develop a classification model that classifies movie reviews into positive and negative classes. I have used different machine learning algorithms to train the model and compared the accuracy of those models at the end. You can keep this post as a template for applying various machine learning algorithms to text classification in Python.
At the end we will validate the model by passing a random review to the trained model and interpreting the class it predicts. You will also learn how to create and use a pipeline that combines numerical feature extraction and model training into a single step.
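To make the pipeline idea concrete, here is a minimal pure-Python sketch: a toy bag-of-words vectorizer and a toy nearest-centroid classifier chained behind one fit/predict interface. The class names, the tiny reviews, and the classifier choice are all illustrative stand-ins, not the actual code from the post (which uses scikit-learn-style components):

```python
class BagOfWords:
    """Toy feature extractor: count how often each vocabulary word appears."""
    def fit(self, docs):
        self.vocab = sorted({w for d in docs for w in d.lower().split()})
        return self
    def transform(self, docs):
        return [[d.lower().split().count(w) for w in self.vocab] for d in docs]

class NearestCentroid:
    """Toy classifier: pick the class whose average feature vector is closest."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self
    def predict(self, X):
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return [min(self.centroids, key=lambda c: dist(x, self.centroids[c]))
                for x in X]

class Pipeline:
    """Chain feature extraction and model training behind one fit/predict API."""
    def __init__(self, vectorizer, model):
        self.vectorizer, self.model = vectorizer, model
    def fit(self, docs, y):
        X = self.vectorizer.fit(docs).transform(docs)
        self.model.fit(X, y)
        return self
    def predict(self, docs):
        return self.model.predict(self.vectorizer.transform(docs))

reviews = ["great movie loved it", "terrible movie hated it",
           "loved the acting", "hated the plot"]
labels = ["positive", "negative", "positive", "negative"]
pipe = Pipeline(BagOfWords(), NearestCentroid()).fit(reviews, labels)
print(pipe.predict(["loved this great movie"]))  # -> ['positive']
```

The point of the pattern is that the caller only ever sees `fit` and `predict`; the feature extraction travels with the model, so a new review is vectorized exactly the way the training data was.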
Data visualization is one of the most important activities we perform during exploratory data analysis. It supports tasks such as preparing business reports, building visual dashboards, and storytelling. In this post I have explained how to ask questions of the data and get self-explanatory graphs in return. You will learn how to use various Python libraries such as plotly, matplotlib, seaborn, and squarify to plot those graphs.
Data Science and Machine Learning Articles | Yearly round-up 2019
Boosting helps to improve the accuracy of any given machine learning algorithm. It is algorithm independent, so we can apply it with any learning algorithm. Unlike bagging, it is aimed at reducing bias rather than variance. Boosting involves many sequential iterations to strengthen the model's accuracy, which makes it computationally costly.
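To see why boosting is inherently sequential, here is a minimal AdaBoost-style sketch over one-dimensional data, with decision stumps as the weak learner. Each round fits a stump against the current sample weights, then increases the weights of the points that stump misclassified, so the next round concentrates on them. All names and data are illustrative:

```python
import math

def stump_predict(x, threshold, polarity):
    # a decision stump: +polarity above the threshold, -polarity below
    return polarity if x > threshold else -polarity

def best_stump(X, y, w):
    # weak learner: pick the stump with the lowest *weighted* error
    best, best_err = None, float("inf")
    for t in sorted(set(X)):
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(xi, t, polarity) != yi)
            if err < best_err:
                best_err, best = err, (t, polarity)
    return best, best_err

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n                # start with uniform sample weights
    ensemble = []                    # (alpha, threshold, polarity) per round
    for _ in range(rounds):
        (t, p), err = best_stump(X, y, w)
        err = max(err, 1e-10)        # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, p))
        # reweight: misclassified points grow, correct ones shrink
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, t, p))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # final prediction: sign of the alpha-weighted vote of all stumps
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

X = [1, 2, 3, 4, 5, 6]
y = [1, 1, -1, -1, 1, 1]             # no single stump can separate this
ens = adaboost(X, y, rounds=3)
print([predict(ens, x) for x in X])  # matches y after three rounds
```

Each round depends on the weights produced by the previous one, which is exactly why boosting cannot be parallelized the way bagging can.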
Ensemble learning asks: if we can build multiple models, why select only the best one? Why not the top 2, the top 3, or even the top 10? If you find a strong top 10, deploy all ten models. Then, when new data arrives, get a prediction from each of the ten models, combine those predictions, and make a joint prediction. This is the key idea of ensemble learning.
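For classification, the "combine the predictions" step above is often just a majority vote. A minimal sketch (the ten model predictions are made up):

```python
from collections import Counter

def majority_vote(predictions):
    # joint prediction = the class that most of the models agreed on
    return Counter(predictions).most_common(1)[0][0]

# hypothetical predictions from ten deployed models for one movie review
votes = ["positive"] * 6 + ["negative"] * 4
print(majority_vote(votes))  # -> positive
```

For regression the analogous step is averaging the models' numeric outputs, and weighted variants give more say to the models with better validation accuracy.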