
Statistics Bootcamp 2: Center, Variation, and Position | by Adrienne Kline | Aug, 2022



Learn the math and methods behind the libraries you use daily as a data scientist

Image by Author

To more formally address the need for a statistics lecture series on Medium, I have started to create a series of Statistics Bootcamps, as seen in the title above. These will build on one another and are numbered accordingly. The motivation is to democratize the knowledge of statistics from the ground up, addressing the need for more formal statistics training in the data science community. The series begins simply and expands upwards and outwards, with exercises and worked examples along the way. My personal philosophy when it comes to engineering, coding, and statistics is that once you understand the math and the methods, the abstraction provided by the multitude of libraries falls away, and you become a producer of information, not only a consumer. Much of this will be review for many learners/readers, but having a comprehensive understanding and a resource to refer back to is important. Happy reading/learning!

In this second article, we will cover descriptive statistics: namely, data grouping and distributions.

Quantitative Data

Let’s say we have the following data set:

34 78 98 56 74 74 93
88 67 89 91 95 73 70
49 56 87 97 76 85 71
66 78 90 84 58 76 73
81 90 93 89 77 79 84
73 67 74 89 90 50 76
98 46 88 78 89 56 69
  • What questions might we ask about this dataset?
  • How might we start to answer some of these questions?

Let’s first organize the dataset from its smallest value to its largest value:

34, 46, 49, 50, 56, 56, 56, 58, 66, 67, 67, 69, 70, 71, 73, 73, 73, 74, 74, 74, 76, 76, 76, 77, 78, 78, 78, 79, 81, 84, 84, 85, 87, 88, 88, 89, 89, 89, 89, 90, 90, 90, 91, 93, 93, 95, 97, 98, 98

Hint: you can use list.sort() in Python or sort(list) in R

From here, let’s create a frequency table. Frequency is found by counting the number of observations that fall within a particular class, where a class is our grouping category. All classes should have the same width (e.g. 10, as shown below). Our list of frequencies gives rise to our frequency distribution, which is the list of all classes and their frequencies. Compare this with relative frequency, which is the ratio of the frequency of a class to the total # of observations (the proportion of observations that fall in the class). Lastly, we have our relative frequency distribution, which lists all classes and their relative frequencies. Some general guidelines when creating or using frequency tables:

  1. # of classes should be a balance between providing an effective summary vs. displaying relevant characteristics of data
  2. Each observation must belong to only ONE class (mutually exclusive)
  3. Each class should have the same width
Class  |  Frequency  |  Relative Frequency
_____________________________________________
30-39  |      1      |        0.02
40-49  |      2      |        0.04
50-59  |      5      |        0.10
60-69  |      4      |        0.08
70-79  |     16      |        0.33
80-89  |     11      |        0.22
90-99  |     10      |        0.20
Total  |     49      |        1.00
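To make this concrete, here is a minimal Python sketch that builds the frequency and relative frequency columns from the raw scores (the binning by integer division is just one way to form width-10 classes):

```python
from collections import Counter

# The 49 scores from the sorted list above
data = [34, 46, 49, 50, 56, 56, 56, 58, 66, 67, 67, 69, 70, 71,
        73, 73, 73, 74, 74, 74, 76, 76, 76, 77, 78, 78, 78, 79,
        81, 84, 84, 85, 87, 88, 88, 89, 89, 89, 89, 90, 90, 90,
        91, 93, 93, 95, 97, 98, 98]

# Bin each observation into a class of width 10 (30-39, 40-49, ...)
freq = Counter((x // 10) * 10 for x in data)

n = len(data)
for lower in sorted(freq):
    f = freq[lower]
    print(f"{lower}-{lower + 9}  {f:2d}  {f / n:.2f}")
```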

Cumulative Frequency

We can extend our relative frequency table with cumulative frequency and cumulative relative frequency. To compute cumulative frequency, we progress through the classes in ascending order and add up the frequencies of all classes up to and including the class we are ‘in’. Analogously to relative frequency, the cumulative relative frequency is the proportion of the data encompassed so far, relative to the whole.

Class  | Freq. | Relative Freq. | Cumulative Freq. | Cumulative Rel. Freq.
____________________________________________________________________
30-39  |   1   |      0.02      |        1         |        0.02
40-49  |   2   |      0.04      |        3         |        0.06
50-59  |   5   |      0.10      |        8         |        0.16
60-69  |   4   |      0.08      |       12         |        0.24
70-79  |  16   |      0.33      |       28         |        0.57
80-89  |  11   |      0.22      |       39         |        0.80
90-99  |  10   |      0.20      |       49         |        1.00
Total  |  49   |      1.00      |                  |
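The running totals in the cumulative columns can be reproduced with itertools.accumulate; a short sketch:

```python
from itertools import accumulate

freqs = [1, 2, 5, 4, 16, 11, 10]      # class frequencies from the table
n = sum(freqs)                         # 49 observations

cum = list(accumulate(freqs))          # running totals: 1, 3, 8, 12, 28, 39, 49
cum_rel = [c / n for c in cum]         # proportions, ending at 1.00

for f, c, cr in zip(freqs, cum, cum_rel):
    print(f"{f:2d}  {c:2d}  {cr:.2f}")
```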

A distribution can take many forms (table, graph, or formula). Whatever its form, it always provides the values of the observations and how often they occur (remember our frequency table above!). The shape of a distribution plays a very important role in determining the appropriate statistical method to use.

Here are some distribution shapes you may run into throughout your career:

Image by author

The modality of a distribution is the number of peaks: unimodal has one, bimodal has two, and multimodal has more than two. Symmetry exists in a distribution if it can be divided into two pieces that are mirror images of one another. In the image above, the bell-shaped, triangular, and uniform distributions would all be considered symmetrical. Skewness exists in a UNIMODAL distribution that is not symmetric. A right-skewed distribution, counterintuitively perhaps, has a right tail that is longer than the left, so the peak appears on the left; vice versa for left-skewed.

A gentle reminder, the distribution of population data is called the population distribution and a distribution of sample data is called a sample distribution!

As we covered in the first bootcamp, a parameter is a descriptive measure for a population and a statistic is a descriptive measure for a sample.

We are going to cover 3 descriptive measures:

  1. Measures of central tendency
  2. Measures of variation
  3. Measures of position

The arithmetic mean (average) is the sum of the values divided by the total number of values. Although the math is the same whether calculating for a sample or a population, the notation differs: x̄ = Σx/n for a sample of size n, and μ = Σx/N for a population of size N.

The median is the midpoint in the data array and requires your data to be arranged in order.

  • If the data array size is even, choose the two data points in the middle and find their average
  • If the data array size is odd, choose the middle
  • THE MEDIAN IS IMPORTANT TO USE WHEN YOU HAVE OUTLIERS

The midrange of the data is the sum of the lowest and highest values in the dataset divided by 2.

The mode is the value that occurs most often in the dataset. If you think back to our frequency table above, this would constitute the class/value with the highest frequency.
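The four measures of center can be computed with Python’s standard statistics module; a quick sketch using the score data from earlier:

```python
import statistics

data = [34, 46, 49, 50, 56, 56, 56, 58, 66, 67, 67, 69, 70, 71,
        73, 73, 73, 74, 74, 74, 76, 76, 76, 77, 78, 78, 78, 79,
        81, 84, 84, 85, 87, 88, 88, 89, 89, 89, 89, 90, 90, 90,
        91, 93, 93, 95, 97, 98, 98]

mean = statistics.mean(data)               # sum of values / number of values
median = statistics.median(data)           # middle of the sorted array
midrange = (min(data) + max(data)) / 2     # (lowest + highest) / 2
mode = statistics.mode(data)               # most frequently occurring value

print(mean, median, midrange, mode)
```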

For purposes of review, try to answer these:

Example. What is the distribution called that has a single class with the highest frequency?
Example. What about two classes that have the highest frequency?

Image by author

To illustrate how you can be led astray by measures of center, let’s use an example.

Example. Heights of basketball players on two opposing teams in inches:
Data set for team 1: 72, 73, 76, 76, 78
Data set for team 2: 67, 72, 76, 76, 84
Here the two data sets have the same mean, median, and mode! Let’s follow this up with a look into measures of variation/spread…
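You can verify the claim directly; note that the spreads nevertheless differ, which motivates the measures that follow:

```python
import statistics

team1 = [72, 73, 76, 76, 78]   # heights in inches
team2 = [67, 72, 76, 76, 84]

for team in (team1, team2):
    center = (statistics.mean(team), statistics.median(team), statistics.mode(team))
    # Same center for both teams, but the sample standard deviations differ
    print("mean/median/mode:", center, " std dev:", round(statistics.stdev(team), 2))
```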

There are different ways to think about the span of a data set. For example, the range of a dataset is the highest value minus the lowest value. The most common measure, however, is the standard deviation: how far, on average, the observations are from the mean. Take note here of the difference in calculation between a sample and a population: the sample standard deviation s divides the sum of squared deviations from x̄ by n - 1, while the population standard deviation σ divides the sum of squared deviations from μ by N:

The coefficient of variation (CVar) is used to compare the standard deviations (variation) of two variables when their units differ; it expresses the standard deviation as a percentage of the mean (CVar = s/x̄ · 100%). The larger the CVar, the greater the variation.

Example. Which data set has more variation?
Data set 1: 41, 44, 45, 47, 47, 48, 51, 53, 58, 66
Data set 2: 20, 37, 48, 48, 49, 50, 53, 61, 64, 70
set 1 mean: 50
set 1 std. dev.: 7.4
set 2 mean: 50
set 2 std. dev.: 14.2
We can see that data set 2 has the larger standard deviation, and therefore more variation.
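A sketch that reproduces these numbers with statistics.stdev (the sample formula, dividing by n - 1) and also computes the coefficient of variation:

```python
import statistics

set1 = [41, 44, 45, 47, 47, 48, 51, 53, 58, 66]
set2 = [20, 37, 48, 48, 49, 50, 53, 61, 64, 70]

for data in (set1, set2):
    mean = statistics.mean(data)
    s = statistics.stdev(data)     # sample standard deviation (divides by n - 1)
    cvar = s / mean * 100          # coefficient of variation, as a percent
    print(f"mean={mean}  s={s:.1f}  CVar={cvar:.1f}%")
```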

Chebyshev’s Theorem

Chebyshev’s theorem states that, for ANY data set, the proportion of values that fall within ‘k’ standard deviations of the mean is at least 1 - (1/k²), where ‘k’ is a number greater than 1.

  • At least 75% of the data will fall within 2 standard deviations:
    1 - (1/k²) = 1 - (1/2²) = 1 - 1/4 = 0.75
  • At least 88.89% of the data will fall within 3 standard deviations:
    1 - (1/k²) = 1 - (1/3²) = 1 - 1/9 = 0.8889
  • If we have mean=50, s=7.4, the interval within 2 standard deviations is
    (50 - 2*7.4, 50 + 2*7.4) = (35.2, 64.8)
    Chebyshev guarantees that at least 75% of the observations fall in this range; in data set 1 above, 9 out of 10 observations (~90%) actually do

Example. Given the mean and standard deviation of a data set, how would you determine the values within which at least 75% of the data fall?
Let’s work backwards:
75% = 0.75
0.75 = 1 - (1/k²)
solving for k:
1/k² = 0.25, so k² = 4
k = 2
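Both directions of this calculation fit in a couple of small helper functions (the function names are my own):

```python
import math

def chebyshev_bound(k):
    """Minimum proportion of ANY data set within k standard deviations (k > 1)."""
    return 1 - 1 / k**2

def k_for_proportion(p):
    """Invert the bound: solve p = 1 - 1/k**2 for k."""
    return math.sqrt(1 / (1 - p))

print(chebyshev_bound(2))              # 0.75
print(round(chebyshev_bound(3), 4))    # 0.8889
print(k_for_proportion(0.75))          # 2.0
```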

Empirical Normal Rule

The empirical normal rule applies when a dataset’s distribution is bell shaped (normally distributed). It consists of three statements:

  1. Approximately 68.3% of the data values will fall within 1 standard deviation of the mean: x̄±1s for samples, with endpoints μ±1σ for populations
  2. Approximately 95.4% of the data values will fall within 2 standard deviations of the mean: x̄±2s for samples, with endpoints μ±2σ for populations
  3. Approximately 99.7% of the data values will fall within 3 standard deviations of the mean: x̄±3s for samples, with endpoints μ±3σ for populations
Image by Author
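A quick simulation illustrates the rule: draw a large standard-normal sample and count the proportion within k standard deviations (the seed is arbitrary, for reproducibility):

```python
import random

random.seed(42)
n = 100_000
sample = [random.gauss(0, 1) for _ in range(n)]   # standard normal draws

for k, expected in [(1, 68.3), (2, 95.4), (3, 99.7)]:
    pct = 100 * sum(1 for x in sample if abs(x) <= k) / n
    print(f"within {k} sd: {pct:.1f}%  (rule: ~{expected}%)")
```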

The Normal Distribution

The normal distribution, also called the Gaussian distribution or bell curve, is characterized by the figure above. Its notation is ’N’ and it takes into account the mean and standard deviation; it is commonly written N(μ, σ²), where σ² is the variance (the squared standard deviation).

The standard normal distribution is a normal distribution whose mean is centered at 0 and whose standard deviation is 1, written N(0, 1):

Image by author

We can convert any normal distribution to the standard normal distribution by performing standardization: subtract the mean and divide by the standard deviation. This gives rise to our Z score (statistic), which we will see in a subsequent bootcamp.
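Standardization in code: after the transformation z = (x - mean)/s, the z scores have mean 0 and standard deviation 1 (data set 1 from earlier is reused for illustration):

```python
import statistics

data = [41, 44, 45, 47, 47, 48, 51, 53, 58, 66]   # data set 1 from above
mean = statistics.mean(data)
s = statistics.stdev(data)

# Standardize each value: z = (x - mean) / s
z = [(x - mean) / s for x in data]

# The standardized values have mean 0 and standard deviation 1
print(round(statistics.mean(z), 6), round(statistics.stdev(z), 6))
```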

Here is a summary table for the notation of population and sample with respect to mean, standard deviation, and variance:

Measure            | Population | Sample
________________________________________
Mean               |     μ      |   x̄
Standard deviation |     σ      |   s
Variance           |     σ²     |   s²

Percentiles divide the data set into 100 equal parts. The pth percentile (corresponding to a value ‘X’) is a value such that at most p% of the observations in the data set are less than X and the remainder are greater. We can compute the percentile for a given value X by using the formula:
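The formula itself did not survive the formatting; a common introductory-text version is percentile of X = (number of values below X + 0.5) / n × 100. A sketch under that assumption (the example scores are made up):

```python
def percentile_of(data, x):
    # Assumed formula (common in introductory texts):
    # percentile = (number of values below x + 0.5) / n * 100
    below = sum(1 for v in data if v < x)
    return (below + 0.5) * 100 / len(data)

# Hypothetical example: 10 scores
scores = [2, 3, 5, 6, 8, 10, 12, 15, 18, 20]
print(percentile_of(scores, 12))   # (6 + 0.5) * 100 / 10 = 65.0
```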

The first percentile you were probably ever exposed to was when you were less than 1 day old! We often talk about newborn babies w.r.t. their length, weight, and head circumference in percentiles. Here is a link to such a chart. From a medical perspective, tracking these percentiles as an individual ages can reveal signs of different pathologies (precocious puberty, bone disease, etc.) if they ‘fall off’ their curve or have a sudden growth spurt.

Quartiles, as the name suggests, divide our data into four equal parts using 3 cut points: Q1, Q2, and Q3.

  • Q1 = 25th percentile
  • Q2 is analogous to the median (50th percentile)
  • Q3 = 75th percentile

The interquartile range (IQR) is the difference between the 3rd quartile (Q3) and the 1st quartile (Q1). Therefore, it can be expressed as IQR = Q3 - Q1.

When thinking about distributions, we can summarize one with a 5-number summary. Three of these numbers are the quartiles (Q1, Q2, Q3), which provide a measure of center and of the variation of the middle two quarters of the data. The last 2 numbers are the minimum and maximum data values, which convey the total data range.

Looking at our distribution below:

  1. Find the median, this is Q2.
  2. Find the median of the first half of the distribution (i.e. from the first data value to the median), this is Q1
  3. Find the median of the second half of the distribution (i.e. from the median to the last data value), this is Q3

We use these measures to determine the skewness in our dataset, which is important for determining what type of inferential statistics to run, i.e. whether to use the mean or median, and any transformations that may be warranted.

Outliers

Outliers are observations that fall well outside the overall pattern of the data. Outliers are often discussed in the data science realm without reference to any quantifying measure, so let’s fix that. We call a data point an outlier if its value lies more than 1.5*IQR above Q3 or below Q1; such points fall outside our inner fences. Extreme outliers fall past the outer fences, i.e. more than 3*IQR above Q3 or below Q1.

lower inner fence: Q1 - (1.5*IQR)
upper inner fence: Q3 + (1.5*IQR)

Outliers (outside the inner fences)
< Q1 - (1.5*IQR)
> Q3 + (1.5*IQR)

Extreme outliers (outside the outer fences)
< Q1 - (3*IQR)
> Q3 + (3*IQR)
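A sketch of the fence calculations. Note that statistics.quantiles with method="inclusive" is just one of several quartile conventions (software differences are discussed further down), and the example data is made up:

```python
import statistics

def fences(data):
    # One of several quartile conventions; statistical packages differ
    q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    inner = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    outer = (q1 - 3.0 * iqr, q3 + 3.0 * iqr)
    return inner, outer

# Made-up data with one clear outlier
data = [1, 3, 5, 7, 9, 11, 13, 50]
inner, outer = fences(data)
outliers = [x for x in data if x < inner[0] or x > inner[1]]
extreme = [x for x in data if x < outer[0] or x > outer[1]]
print(inner, outer, outliers, extreme)
```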

Boxplots

We need 5 elements to construct our boxplot:
1. Q1
2. Q2
3. Q3
4. inner fences
5. outer fences

  • Draw a box with sides at Q1 and Q3 (perpendicular to the value axis)
  • Draw a line inside the box at Q2 (parallel to the sides at Q1 and Q3)
  • Draw a whisker connecting Q1 to the smallest value still within the lower inner fence
  • Draw a whisker connecting Q3 to the largest value still within the upper inner fence

You should arrive at something like the following:

Image by author

Using a boxplot we get an idea of the center of our data, and of its variation and skew, via the length of each section of the box and the length of the whiskers.

Example 1. Compute the quartiles for an even number of observations:
3, 16, 17, 18, 19, 20, 21, 22
Q2 = (18+19)/2 = 18.5
Q1 = (16+17)/2 = 16.5
Q3 = (20+21)/2 = 20.5
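The median-of-halves procedure above fits in a small helper (for an odd n, this version excludes the median from both halves, which matches only one of the conventions discussed next):

```python
import statistics

def quartiles(data):
    """Q1/Q3 as medians of the lower/upper halves (median excluded when n is odd)."""
    data = sorted(data)
    half = len(data) // 2
    lower, upper = data[:half], data[len(data) - half:]
    return statistics.median(lower), statistics.median(data), statistics.median(upper)

print(quartiles([3, 16, 17, 18, 19, 20, 21, 22]))   # (16.5, 18.5, 20.5)
```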

Example 2. Compute the quartiles for an odd number of observations:
53, 62, 78, 94, 96, 99, 103
Q2 = 94
Q1 = ?
Q3 = ?

A word on including the median in the computation of Q1 and Q3 for an odd number of observations: this actually differs depending on which statistical software you are using. Different references use different rules:

  • Excel/SPSS/R — include Q2, the median (94)
    Q1=70
    Q3 = 97.5
  • Stata — does not include Q2, the median (94)
    Q1=62
    Q3 = 99
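Both conventions can be sketched by slicing the sorted data (real packages compute quartiles via interpolation formulas, which happen to agree with these values for this data set):

```python
import statistics

data = [53, 62, 78, 94, 96, 99, 103]
n = len(data)

# Convention A (Excel/SPSS/R-like): halves INCLUDE the median
lower_inc, upper_inc = data[: n // 2 + 1], data[n // 2 :]
q1_inc, q3_inc = statistics.median(lower_inc), statistics.median(upper_inc)

# Convention B (Stata-like): halves EXCLUDE the median
lower_exc, upper_exc = data[: n // 2], data[n // 2 + 1 :]
q1_exc, q3_exc = statistics.median(lower_exc), statistics.median(upper_exc)

print(q1_inc, q3_inc)   # 70.0 97.5
print(q1_exc, q3_exc)   # 62 99
```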

In this bootcamp we have introduced distributions and descriptive statistics. We’ve covered measures of center: mean, median, and mode. You learned measures of variation: standard deviation, coefficient of variation, Chebyshev’s rule, and the empirical rule. Quartiles and outliers quantified measures of position. Lastly, you’ve seen how notation and calculation differ between a population and a sample.

Previous boot camps in the series:

#1 Laying the Foundations


