
A Brief Introduction to Neural Networks: A Regression Problem

By Chayma Zatout



The goal of machine learning is to come up with a model (a method) to predict the value, class, or cluster of an input instance. This model is built by exploiting existing data (a dataset). Building such a model involves a set of steps, namely:

  • Problem understanding.
  • Data preparation and pre-processing.
  • Model conception.
  • Training the model.
  • Model evaluation and validation.

In this tutorial, we will be using:

  • pandas: for data manipulation (importing and splitting data).
  • keras: for model conception and training.
  • matplotlib: for data visualization.
  • scikit-learn (sklearn): for additional metrics computing.

We also add the following lines so that the results are reproducible across executions:
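A minimal sketch of these lines (the seed value 0 is an assumption; any fixed value works):

import random
import numpy as np
import tensorflow as tf

# Fix the seeds of the random generators so that weight
# initialization and data shuffling are the same on every run.
random.seed(0)
np.random.seed(0)
tf.random.set_seed(0)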

Understanding the problem is the first step to consider. It consists of identifying the nature of the problem: is it a regression problem (predicting a real value, such as a user’s engagement rate with different products) or a classification problem (predicting the class of an input instance, like the classification of Iris flowers or sentiment analysis)? Sometimes regression gives bad results; in this case, you should consider turning it into a classification problem. For example, transform the problem of predicting a user’s engagement rate into one of classifying their engagement (no engagement, low engagement, medium engagement, and high engagement).

As an example, I created a toy dataset that you can download from my GitHub repository. The dataset represents a signal and the objective of this example is to learn the signal function so that we can use it for future predictions. In the next section, we will explore it by code.

Using neural networks to solve a given problem means exploiting a dataset, and before that, the dataset needs to be preprocessed. Data preprocessing generally includes cleaning (replacing missing values and removing outliers), discretization (converting continuous data into a finite set of intervals with minimal information loss), and normalization (such as scaling values to lie between 0 and 1). The techniques to use in this step depend on the nature of the dataset and/or the nature of the model to be used subsequently. During data preprocessing, it is important to visualize your data to make sure it is correctly processed. At the end of this phase, the dataset is generally partitioned into two sets:

  • The training set: used to set model parameters such as weights in neural networks and coefficients in linear regression.
  • The validation set: used to evaluate the behavior of the model against new data (which does not belong to the training set) during training.

In this article, our main focus is the neural network side. Therefore, we will be using a dataset that does not require preprocessing.

Let’s start by importing the dataset, printing the first five rows, and plotting a scatter of the data:
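A sketch of this step (the file name ‘train.csv’ is an assumption; the column names match the dataset described below):

import pandas as pd
import matplotlib.pyplot as plt

# Import the dataset and take a first look at it.
dataset = pd.read_csv('train.csv')
print(f'There are {len(dataset)} instances.')
print(dataset.head())

# Scatter plot of the target y against the feature x.
plt.scatter(dataset['x'], dataset['y'], s=4)
plt.xlabel('x')
plt.ylabel('y')
plt.show()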

There are 510 instances.
         x       y
0  10.5392  1.2058
1   5.1571  2.6770
2  12.6563  3.1471
3  11.7546  2.3668
4  10.9499  2.3400

Initial dataset

The dataset includes 510 rows (instances) and two columns: a single feature (x) and the value to learn, also called the target (y). Both columns hold real numbers.

Now, we randomly split the dataset into two sets: the training set and the validation set. 30% of the data will be used for validation and 70% for training:
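A minimal sketch using pandas’ sample() for a reproducible 70/30 split (scikit-learn’s train_test_split would work equally well):

# Randomly sample 70% of the rows for training; the rest is for validation.
train_set = dataset.sample(frac=0.7, random_state=0)
val_set = dataset.drop(train_set.index)

# Keep x 2-D (shape (n, 1)) so it matches the model input defined later.
x_train, y_train = train_set[['x']], train_set['y']
x_val, y_val = val_set[['x']], val_set['y']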

Dividing the data into a training set and validation set

The dataset is now ready to be used in the next step. We will not apply any further data preprocessing. The most important thing is that there are no missing values, since neural networks cannot handle them. We will see in the next tutorials how to handle missing values and how to apply further preprocessing; meanwhile, you can check other Medium articles. Note that I provide another file, ‘test.csv’, as the test set. Here, the test set was selected randomly from the original dataset. It will be used to test the model after training.

Once the problem is understood and the data is prepared, the model conception takes place. Designing a neural network model includes defining the number of layers, the number of neurons in each layer, and the activation functions. In this section, we will explore the different types of layers provided by Keras. Then, the activation functions are presented. Finally, we will create our multilayer perceptron neural network. But first, let’s learn about the single-neuron model (also called a node).

5.1 Artificial neuron model

The following Figure describes an artificial neuron model:

Artificial neuron model.

A single neuron model is represented by its:

  • Weights (W0, .., Wn). A weight defines the impact of its associated input on the neuron’s output. The weights are initialized before training and updated during training.
  • Bias (b). A bias is a constant that is not associated with an input. We can see it as the intercept used to offset the result. Similarly to weights, the bias is initialized before training and updated during training.
  • Activation function (f). The activation function defines the output of the node in terms of the weighted sum. If the activation function is linear then the weighted sum is returned. Keras provides several activation functions which we will see in a moment.

The input of an artificial neuron (x0, .., xn) can be the input dataset (the features) or the outputs of other artificial neurons.
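Putting this together, a neuron computes its output as the activation function applied to the weighted sum of its inputs plus the bias:

output = f(W0·x0 + W1·x1 + … + Wn·xn + b)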

5.2 Activation functions

Keras provides a rich set of activation functions, among which we cite: the linear function, the sigmoid function, the tanh function, the softplus function, the softsign function, the ReLU (Rectified Linear Unit) function, the SELU (Scaled Exponential Linear Unit) function, and the ELU (Exponential Linear Unit) function. These functions are plotted in the following Figure, with x ranging over [-10, +10].

The common activation functions provided by Keras.
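A sketch of how such a figure can be produced with Keras’ built-in activations (the 2×4 grid layout is an assumption):

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

# The eight activations discussed above, looked up by name.
names = ['linear', 'sigmoid', 'tanh', 'softplus',
         'softsign', 'relu', 'selu', 'elu']
x = np.linspace(-10.0, 10.0, 200)

fig, axes = plt.subplots(2, 4, figsize=(12, 5))
for name, ax in zip(names, axes.flat):
    f = keras.activations.get(name)        # resolve the function from its name
    ax.plot(x, f(tf.constant(x)).numpy())  # apply it to the sample points
    ax.set_title(name)
plt.tight_layout()
plt.show()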

It is important to know each function’s codomain so you can choose appropriate functions, especially for the model output. I usually refer to the Figure above when selecting the output activation function. Suppose our model is supposed to learn a linear function (a function from ℝ to ℝ); ReLU cannot be used, since its codomain is the set of non-negative real numbers. We will see later the impact of the activation functions on the model output.

There are other activation functions which are good for classification problems. These will not be discussed in this tutorial but rather in the next tutorial. However, you can find more details in Keras activation functions reference.

5.3 Layers

Now that we have introduced the single-neuron model, what if we have multiple neurons (nodes)? A set of nodes sharing the same input forms a layer (see the following Figure). Nodes in the same layer generally have the same activation function.

A single layer model.

According to the documentation, Keras provides many types of layers, each with its own arguments and functionality. There are convolution and pooling layers, usually used for 2D and 3D data; normalization and regularization layers, usually used to normalize and regularize a layer’s input; and the core layers. In this tutorial, we are only interested in two core layers: Input and Dense.

The Input layer is used to define the number of features (the shape) of the model’s input. In Keras, it is mainly defined as:

keras.Input(
    shape=None,
    **kwargs
)

A dense layer is a set of nodes called units in Keras. It is mainly defined as:

keras.layers.Dense(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer="glorot_uniform",
    bias_initializer="zeros",
    **kwargs
)

Where:

  • units is the number of neurons in this layer.
  • activation is the activation function for the units.
  • use_bias is a boolean to specify whether the bias is used or not. It is set to True by default.
  • kernel_initializer and bias_initializer define how the weights and the bias are initialized, respectively. By default, they are set to glorot_uniform and zeros respectively.

You can find more about layers in Keras layers API.

5.4 Multilayer model

A succession of layers defines a multilayer model. Generally, each node of a layer is connected to all nodes of the previous and next layers. This type of model is called a fully connected network. In a model there are three types of layers:

  • The input layer is the very first layer, whose input is the instance features: the model’s input.
  • The output layer is the last layer, whose output is the whole model’s output. The activation function of this layer has to be selected carefully according to the needed output.
  • The hidden layers are the layers between the input and the output layers.

Now let’s move on to writing some code and create our model! The model includes: an input layer that defines the number of features, three hidden layers with 200 units and the sigmoid activation function each, and finally the output layer with a single unit and the linear activation function:
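A sketch of this architecture using the Sequential API (the variable name is mine; it matches the summary printed below):

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(1,)),                       # a single feature: x
    keras.layers.Dense(200, activation='sigmoid'),
    keras.layers.Dense(200, activation='sigmoid'),
    keras.layers.Dense(200, activation='sigmoid'),
    keras.layers.Dense(1, activation='linear'),    # a single real-valued output
])
model.summary()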

_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense (Dense)               (None, 200)               400

 dense_1 (Dense)             (None, 200)               40200

 dense_2 (Dense)             (None, 200)               40200

 dense_3 (Dense)             (None, 1)                 201
=================================================================

The function model.summary() displays the model architecture. For each layer, its type, its output shape (the number of units), and its number of parameters are displayed. The number of parameters of dense_1 is computed as follows: the number of inputs, which is the number of units of the previous dense layer, is 200, and each unit is connected to all of the previous units, so we have 200 * 200 = 40000 weights. In addition, each unit has its own bias, giving 200 biases in total. Therefore, the total number of parameters is 40000 + 200 = 40200. Can you tell me how the numbers of parameters of dense and dense_3 are computed?

Our model is ready for training!

So what is training? Training is finding the weights that fit the training data as well as possible. In other words, it is the process of updating the weights so that the difference between the predicted value and the ground truth (also called the target) becomes very small. We can also see it as an optimization problem where the objective is to find the weights that minimize the difference between the model output and the target.

Training algorithm for a feedforward network

So how does training work? To answer this question, we describe the different steps in a simple way.

1. Initialize the weights and biases.
2. Forward pass: compute the model output.
3. Measure the error between the target and the output using the loss function.
4. Backward pass: propagate the error and update the weights and biases using an optimizer.
5. Repeat steps 2 to 4 for every batch.
6. Repeat steps 2 to 5 for the specified number of epochs.

I bet for absolute beginners there are a lot of new words. Don’t worry, let’s introduce them one by one:

  • Optimizer. It updates the trainable weights. Keras provides a set of optimizers among which there are SGD (Stochastic Gradient Descent) and Adam.
  • Loss function. It measures how well the model is able to fit the training set; in other words, it measures the difference between the prediction and the ground truth during training. Keras implements different types of loss functions. Among the loss functions for regression there are MSE (Mean Squared Error) and MAE (Mean Absolute Error).
  • Batch. The batch size is the number of samples used for a single weight update. During training, the data is divided into batches and the weights are updated once per batch.
  • Epochs. An epoch is one complete pass over the training data. The number of epochs defines how many times the model sees the whole training set, and it is one of the conditions that stops training.
  • Learning rate. The learning rate controls how strongly the weights are affected by the propagated error; it is generally a value between 0 and 1 and can be updated during training. For example, the default learning rates of SGD and Adam are 0.01 and 0.001 respectively.

In Keras, the first thing to do before starting training is to compile your model, which means configuring it by passing some parameters: the loss function, the optimizer, and the metrics to track during training. Here, we specify the optimizer as a string, which means its default parameters will be used, including the learning rate. To train the model, the function fit() is called. In this example, we pass the training data (x_train and y_train), the number of epochs, and the batch size as parameters.
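A sketch of these two calls. The loss (MSE) and tracked metric (MAE) match the logs below; the optimizer string 'adam' is an assumption, the number of epochs comes from the log, the batch size of 64 is inferred from the 6 steps per epoch over the ~357 training samples, and the val_loss values in the log imply the validation set was passed as well:

model.compile(loss='mse', optimizer='adam', metrics=['mae'])

history = model.fit(x_train, y_train,
                    epochs=1750,
                    batch_size=64,
                    validation_data=(x_val, y_val),
                    verbose=1)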

When verbose=1, the loss (loss, val_loss) and the metric (mae, val_mae) values are printed at the end of each epoch for the training and validation sets respectively:

Epoch 1749/1750
6/6 [==============================] - 0s 9ms/step - loss: 0.0554 - mae: 0.1955 - val_loss: 0.0596 - val_mae: 0.2085
Epoch 1750/1750
6/6 [==============================] - 0s 6ms/step - loss: 0.0539 - mae: 0.1966 - val_loss: 0.0586 - val_mae: 0.2062

The metrics displayed for the last epochs aren’t enough to conclude whether the model has learnt from the data or not. The first thing to observe is the learning curves. Then, we can evaluate our model using other metrics. We can also test it on a test set if one is available.

7.1 Learning curves

Learning curves are the first thing I always observe after training ends. They reveal the model’s performance during training on the seen data (training set) and the unseen data (validation set).
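A sketch of how to plot them from the history object returned by fit():

import matplotlib.pyplot as plt

# Loss on the training and validation sets, epoch by epoch.
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()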

Learning curves

7.2 Evaluation on test set

Usually, the test set comes with the dataset. It is used to evaluate the model after training. Sometimes you need to split the dataset yourself into training, validation, and test sets; however, when the dataset is small, the test set may be omitted. As I mentioned earlier, in this tutorial, the test set is provided along with the dataset.

Let’s evaluate our model on the test set and see how it performs. We first import the test set and then call the method evaluate(), which returns the loss and the metrics used during training:
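A sketch of this step (‘test.csv’ is the file mentioned above):

import pandas as pd

test_set = pd.read_csv('test.csv')
x_test, y_test = test_set[['x']], test_set['y']

# evaluate() returns the loss (MSE) and the tracked metric (MAE).
loss, mae = model.evaluate(x_test, y_test, verbose=0)
print(f'Test set: - loss: {loss} - mae: {mae}')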

Test set: - loss: 0.05698258429765701 - mae: 0.20276354253292084

7.3 Evaluation metrics

During training, two metrics were used:

  • Mean Squared Error (MSE), which was used as the loss function.
  • Mean Absolute Error (MAE), which was an additional metric.

We can evaluate our model using other metrics, and I selected two more that I find interesting (a code sketch for computing them follows the list):

  • Median Absolute Error (MedAE), which is robust to outliers since it takes the median instead of the mean of all absolute differences between the target and the prediction.
  • Mean Absolute Percentage Error (MAPE), which is sensitive to relative errors and is not affected by global scaling of the target variable: it computes the percentage of error.
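A sketch using scikit-learn’s implementations of these two metrics (the print formatting differs slightly from the table below):

from sklearn.metrics import (median_absolute_error,
                             mean_absolute_percentage_error)

print('Displaying other metrics:')
for name, x, y in [('Train', x_train, y_train),
                   ('Val', x_val, y_val),
                   ('Test', x_test, y_test)]:
    y_pred = model.predict(x, verbose=0).ravel()
    print(f'{name}: MedAE = {median_absolute_error(y, y_pred):.3f}, '
          f'MAPE = {mean_absolute_percentage_error(y, y_pred):.3f}')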
Displaying other metrics:
        MedAE   MAPE
Train:  0.173   0.112
Val:    0.170   0.119
Test:   0.198   0.115

7.4 Displaying the learnt function

Even if the computed metrics reveal that the model fits the dataset well, it is important to see visually how well it fits. Here, we draw the learnt function in terms of x:
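A sketch of this plot (the sampling grid of 500 points is an assumption):

import numpy as np
import matplotlib.pyplot as plt

# Evaluate the learnt function on a dense, evenly spaced grid of x values.
xs = np.linspace(dataset['x'].min(), dataset['x'].max(), 500).reshape(-1, 1)
ys = model.predict(xs, verbose=0).ravel()

plt.scatter(dataset['x'], dataset['y'], s=4, color='blue', label='dataset')
plt.plot(xs.ravel(), ys, color='red', label='learnt function')
plt.legend()
plt.show()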

The learnt function. Red: the function in terms of x. Blue: the dataset.

The model has learnt a cosine-like function.

That’s it for this article! We have learned how to create a neural network and how to train and validate it for a regression problem. We will look at more difficult examples in the next tutorials. If you ask me whether this is the best model of all time for this problem, I immediately say: No! You can do better! You can start by playing with the model architecture, changing the number of layers and the number of units. You can also change the training parameters and compare the results. Feel free to share your solution in the comments below!

This is my first article on machine learning and absolutely not the last one! I’ll be writing more tutorials (data preprocessing for machine learning, neural networks for classification, sentiment analysis, etc.), so stay tuned!

Thanks, I hope you enjoyed reading this. You can find the examples here in my GitHub repository. If you have any questions or suggestions feel free to leave me a comment below.

All images unless otherwise noted are by the author.


