Let me start with a simple question: can we compare a mango and an apple? They differ in taste, sweetness, health benefits, and so on, so a fair comparison is only possible between entities on a common footing; otherwise it will be biased. The same logic applies to Machine Learning. Feature Scaling brings features to the same scale before we apply any comparison or model building. Normalization and Standardization are the two most frequently used Feature Scaling techniques in Machine Learning.
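As a minimal sketch of the two techniques (using a small, made-up feature column), normalization rescales values to [0, 1], while standardization rescales them to zero mean and unit standard deviation:

```python
import numpy as np

# Hypothetical feature column (illustrative values only)
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Min-max normalization: rescale values to the [0, 1] range
x_norm = (x - x.min()) / (x.max() - x.min())

# Standardization (z-score): subtract the mean, divide by the standard deviation
x_std = (x - x.mean()) / x.std()

print(x_norm)  # all values between 0 and 1
print(x_std)   # mean ~0, standard deviation ~1
```

Which technique to use depends on the model: min-max normalization bounds the range, while standardization is often preferred when the algorithm assumes roughly centered data.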

# Category: Statistics

## Bayes’ Theorem with Example for Data Science Professionals

Bayes' Theorem is an extension of conditional probability. Conditional probability gives us the probability of A given B, denoted P(A|B). Bayes' theorem says that if we know P(A|B), we can determine P(B|A), provided P(A) and P(B) are also known.
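A short worked example, with hypothetical numbers for a medical test: given the base rate of a disease and the test's true- and false-positive rates, Bayes' theorem recovers the probability of disease given a positive result.

```python
# Hypothetical numbers: 1% base rate, 95% sensitivity, 5% false-positive rate
p_d = 0.01              # P(disease)
p_pos_given_d = 0.95    # P(positive | disease)
p_pos_given_not_d = 0.05  # P(positive | no disease)

# Law of total probability: P(positive)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 3))  # ~0.161
```

Note how a positive result from a fairly accurate test still yields only about a 16% chance of disease, because the base rate is low; this is exactly the kind of reasoning Bayes' theorem makes explicit.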

## Conditional Probability with examples For Data Science

Conditional probability helps Data Scientists draw better conclusions from a given data set, and it helps Machine Learning Engineers build more accurate models for prediction.
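A tiny illustration on made-up counts: from a hypothetical collection of 100 emails, the conditional probability P(spam | contains "offer") is the fraction of "offer" emails that are spam.

```python
# Hypothetical counts (illustrative only)
total = 100
offer = 20            # emails containing the word "offer"
spam_and_offer = 15   # emails that are spam AND contain "offer"

# P(spam | offer) = P(spam and offer) / P(offer); the totals cancel,
# so this is simply the joint count divided by the conditioning count
p_spam_given_offer = spam_and_offer / offer
print(p_spam_given_offer)  # 0.75
```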

## Variance, Standard Deviation and Other Measures of Variability and Spread

Variance and standard deviation are the most commonly used measures of variability and spread. Variability and spread describe how far the data points deviate from the mean.
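A minimal sketch of the calculation on a made-up sample: variance is the average squared deviation from the mean, and standard deviation is its square root, expressed in the data's own units.

```python
import math

# Hypothetical sample (illustrative values only)
data = [4, 8, 6, 5, 3, 2, 8, 9, 2, 5]
n = len(data)

# Mean: the central point the spread is measured against
mean = sum(data) / n  # 5.2

# Population variance: average squared deviation from the mean
variance = sum((x - mean) ** 2 for x in data) / n  # 5.76

# Standard deviation: square root of the variance
std_dev = math.sqrt(variance)  # ~2.4
```

For a sample (rather than the full population), the variance is usually divided by n - 1 instead of n.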

## Basic Statistics for Data Science – Part 1

- Types of statistics: descriptive vs. inferential
- Basic terminology: population vs. sample
- Types of variables: numerical vs. categorical
- Measures of central tendency: mean, median, and mode, and their specific use cases
- Measures of dispersion/spread: variance, standard deviation, etc.
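The measures listed above can be computed directly with Python's standard library, shown here on a hypothetical sample of exam scores:

```python
import statistics

# Hypothetical sample of exam scores (illustrative only)
scores = [70, 85, 90, 85, 60, 85, 75]

print(statistics.mean(scores))    # arithmetic average
print(statistics.median(scores))  # middle value of the sorted data: 85
print(statistics.mode(scores))    # most frequent value: 85
print(statistics.pstdev(scores))  # population standard deviation
```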