Bayes' theorem is an extension of conditional probability. Conditional probability helps us determine the probability of A given B, denoted P(A|B). Bayes' theorem says that if we know P(A|B), then we can determine P(B|A), provided P(A) and P(B) are known to us.

Continue reading “Bayes’ Theorem with Example for Data Science Professionals”
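As a quick sketch of the formula P(B|A) = P(A|B) · P(B) / P(A), here is a minimal Python example. The disease-testing numbers (sensitivity, prevalence, false-positive rate) are purely hypothetical, chosen only to illustrate the calculation.

```python
# Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A)
def bayes(p_a_given_b, p_b, p_a):
    """Return the posterior P(B|A) from P(A|B), P(B), and P(A)."""
    return p_a_given_b * p_b / p_a

# Hypothetical example: a medical test comes back positive (event A)
# and we want the probability the person has the disease (event B).
p_positive_given_disease = 0.99          # P(A|B): test sensitivity (assumed)
p_disease = 0.01                         # P(B): prior prevalence (assumed)
# P(A) by total probability: true positives + false positives,
# assuming a 5% false-positive rate among the 99% healthy people.
p_positive = 0.99 * 0.01 + 0.05 * 0.99

posterior = bayes(p_positive_given_disease, p_disease, p_positive)
print(round(posterior, 3))  # ≈ 0.167
```

Even with a 99%-sensitive test, the posterior is only about 17% because the prior P(B) is so small, which is exactly the kind of reversal Bayes' theorem captures.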
As the name suggests, conditional probability is the probability of an event under some given condition. Given the condition, the sample space is reduced to the outcomes that satisfy it.
For example, consider finding the probability that a person subscribes to insurance given that they have taken a house loan. Here the sample space is restricted to the people who have taken a house loan.

Continue reading “Conditional Probability with examples For Data Science”
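The insurance-and-loan example can be sketched with the counting definition P(A|B) = P(A and B) / P(B). The survey counts below are made up purely for illustration.

```python
# Conditional probability from counts: P(A|B) = P(A and B) / P(B)
# Hypothetical survey of 1000 people (numbers are illustrative only).
total = 1000
house_loan = 400          # people with a house loan (event B)
loan_and_insurance = 120  # people with both a loan and insurance (A and B)

p_b = house_loan / total
p_a_and_b = loan_and_insurance / total

# Conditioning on B shrinks the sample space to the 400 loan holders.
p_a_given_b = p_a_and_b / p_b
print(round(p_a_given_b, 4))  # 0.3
```

Note that the same answer comes from counting directly inside the reduced sample space: 120 / 400 = 0.3.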
Probability is in itself a huge topic of study. Applications of probability are found everywhere, whether in medical science, stock market trading, sports, the gaming industry, or elsewhere. In this post, however, my focus is on explaining the topics needed to understand data science and machine learning concepts.

Continue reading “Probability Basics for Data Science”
Variance and standard deviation are the most commonly used measures of variability and spread, which describe how much the data varies from the mean. Variance tells us the average squared distance of the data points from the mean, and standard deviation is simply the square root of the variance. Because variance is calculated in squared units (explained below in the post), we take its square root to obtain a value in the same units as the data, and this quantity is called the standard deviation.

Continue reading “Variance, Standard Deviation and Other Measures of Variability and Spread”
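The two definitions above can be written directly in a few lines of Python. This is a minimal sketch using the population formulas (dividing by n, not n − 1), with a small made-up data set:

```python
import math

def variance(data):
    """Population variance: average squared distance from the mean."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data) / len(data)

def std_dev(data):
    """Standard deviation: square root of the variance,
    which puts the result back in the same units as the data."""
    return math.sqrt(variance(data))

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # mean is 5
print(variance(scores))  # 4.0
print(std_dev(scores))   # 2.0
```

If the scores were, say, heights in centimetres, the variance would be in cm² while the standard deviation would be back in cm, which is why the square root is taken.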
Principal Component Analysis, or PCA, is used for dimensionality reduction of large data sets. In my previous post, A Complete Guide to Principal Component Analysis – PCA in Machine Learning, I explained what PCA is and the complete concept behind the technique. This post is a continuation of that one; if you have a basic understanding of how PCA works you may continue, otherwise it is highly recommended that you go through the post mentioned above first.

Continue reading “Step by Step Approach to Principal Component Analysis using Python”
Principal Component Analysis, or PCA, is a widely used technique for dimensionality reduction of large data sets. Reducing the number of components or features costs some accuracy, but in return it makes a large data set simpler and easier to explore and visualize. It also reduces the computational complexity of the model, which makes machine learning algorithms run faster. How much accuracy to sacrifice for a less complex, lower-dimensional data set is always debatable; there is no fixed answer, but we try to retain most of the variance when choosing the final set of components.

Continue reading “A Complete Guide to Principal Component Analysis – PCA in Machine Learning”
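To make the variance-retention trade-off concrete, here is a minimal sketch using scikit-learn's `PCA` (this assumes scikit-learn and NumPy are installed; the data is synthetic and built so that most of its variance lives in two directions):

```python
# Minimal PCA sketch: reduce 5 features to 2 components and check
# how much of the total variance the 2 components retain.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))                      # 2 underlying factors
X = np.hstack([base, base @ rng.normal(size=(2, 3))]) # 5 correlated columns

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                       # (100, 2)
# Fraction of total variance retained by the 2 components
print(pca.explained_variance_ratio_.sum())   # close to 1.0 for this data
```

Because the 5 columns here are exact linear combinations of 2 factors, almost no variance is lost; on real data you would inspect `explained_variance_ratio_` to decide how many components to keep.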
Statistics is the science of collecting, organizing, presenting, analyzing, and interpreting data. It is one of the most important disciplines for getting deeper insight into data. Statistical analysis is used to manipulate, summarize, and investigate data so that useful information can be obtained.
Takeaways from this post:
- Types of Statistics: Descriptive vs Inferential
- Basic terminology like Population vs Sample
- Types of Variables: Numerical vs Categorical
- Measures of central tendency: Mean, Median and Mode and their specific use cases
- Measures of dispersion/spread: Variance, standard deviation etc.
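The measures listed above can all be computed with Python's built-in `statistics` module; here is a small sketch on a made-up data set:

```python
# Measures of central tendency and spread, using the stdlib.
import statistics

data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))    # 5    (sum 30 over 6 values)
print(statistics.median(data))  # 4.0  (average of the middle pair 3 and 5)
print(statistics.mode(data))    # 3    (most frequent value)

print(statistics.pvariance(data))  # population variance
print(statistics.pstdev(data))     # population standard deviation
```

Note that `pvariance`/`pstdev` use the population formulas (dividing by n); use `variance`/`stdev` instead for the sample versions that divide by n − 1.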