
Black-box Hyperparameter Optimization in Python | by Sadrach Pierre, Ph.D. | Aug, 2022



Comparing Brute force and Black-box Optimization Methods in Python

Image by PhotoMIX Company on Pexels

In machine learning, hyperparameters are values used to control the learning process of a model. They are distinct from the internal model parameters that are learned from the data: hyperparameters are set outside of training, and they strongly influence how well the trained model performs. Each unique set of hyperparameters corresponds to a distinct machine learning model, and the set of all possible hyperparameter combinations can become quite large for most state-of-the-art models. Fortunately, most machine learning packages ship with default hyperparameter values that achieve decent baseline performance, so a data scientist or machine learning engineer can use a model out of the box without worrying about hyperparameter selection at the start. These default models often outperform what a data scientist or engineer would be able to test and select manually.

To optimize performance, however, the data scientist or machine learning engineer must test a wide range of hyperparameter values beyond the defaults. Doing this manually quickly becomes cumbersome and inefficient, so many algorithms and libraries have been designed to automate hyperparameter selection. Hyperparameter selection is an exercise in optimization: the objective function measures how poorly the model performs, and the task is to find the set of hyperparameters that minimizes it. The model with the least poor performance is, of course, the model with the best performance.

The optimization literature is rich, spanning brute-force techniques and black-box non-convex optimization. Brute-force optimization exhaustively evaluates every possible hyperparameter combination, and if the full space can be searched it is guaranteed to return the globally best combination. Unfortunately, exhaustively searching the hyperparameter space is usually not feasible in terms of computational resources and time, because the number of combinations grows explosively with the number of hyperparameters. Hyperparameter tuning also falls into the category of non-convex optimization: the objective has many suboptimal 'traps', also called local minima, that make it difficult for cheaper, non-exhaustive algorithms to find the global optimum.
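To see how quickly exhaustive search blows up, consider a purely hypothetical model with six hyperparameters and ten candidate values for each: the grid already contains a million configurations, before any cross-validation is applied:

#hypothetical grid: 6 hyperparameters, 10 candidate values each
n_hyperparameters = 6
n_values_each = 10
print(n_values_each ** n_hyperparameters)  # 1,000,000 candidate models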

Black-box non-convex optimization techniques are the alternative to brute-force search. Black-box algorithms find suboptimal solutions, local minima (or maxima), that are good enough according to some predefined metric, while evaluating the objective function far fewer times.

Python has tools for both brute-force and black-box optimization. GridSearchCV in scikit-learn's model selection module enables brute-force optimization. The RBFopt Python package is a black-box optimization library developed by IBM. It works by using radial basis functions to build and refine a surrogate model of the function being optimized. It is useful because it makes no assumptions about the shape or behavior of that function, and it has been used to optimize complex models such as deep neural networks.
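Before turning to the churn example, here is a minimal sketch of the RBFopt black-box interface on a toy non-convex function (the function and bounds are made up purely for illustration; this assumes rbfopt and NumPy are installed, as covered later in the post):

#toy sketch: minimize a small non-convex function with RBFopt
import numpy as np
import rbfopt

def bumpy(x):
    #a simple function with several local minima
    return float(np.sin(3 * x[0]) + 0.1 * x[0] ** 2)

bb = rbfopt.RbfoptUserBlackBox(dimension=1,
                               var_lower=np.array([-5.0]),
                               var_upper=np.array([5.0]),
                               var_type=['R'],
                               obj_funct=bumpy)
settings = rbfopt.RbfoptSettings(max_evaluations=8)
alg = rbfopt.RbfoptAlgorithm(settings, bb)
fval, sol, iter_count, eval_count, fast_eval_count = alg.optimize()
print(fval, sol)

The optimizer only ever calls the function at candidate points and never looks inside it, which is exactly what makes the same interface applicable to hyperparameter tuning, where each call trains and evaluates a model.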

The task of building, testing and comparing model hyperparameters and machine learning algorithms is often collaborative in nature. With this in mind, I will be working with DeepNote, a collaborative data science notebook that makes it easy for data scientists to work together on machine learning and data analytics tasks. Here we will walk through how to apply each of these optimization tools for tuning the hyperparameters of a classification model. We will consider the supervised machine learning task of predicting whether a customer will stop making repeat purchases, which is called churning. We will work with the fictitious Telco Churn data set, which is publicly available on Kaggle. The data set is free to use, modify and share under the Apache 2.0 License.

Reading in Telco Churn Data

To start, let's import the pandas library, read our data into a pandas data frame, and display the first five rows:

import pandas as pd

df = pd.read_csv("telco_churn.csv")
df.head()
Screenshot taken by Author

We see that the data contains fields such as customer ID, gender, senior citizen status, and more. If we hover our cursor over the cell output to the left we will see the following:

Screenshot taken by Author

We see that we have the field 'Churn', which corresponds to whether or not a customer kept making purchases. A value of 'No' means that the customer continued making purchases, and a value of 'Yes' means that the customer stopped making purchases, i.e., churned.

We will build a simple classification model that takes the fields gender, SeniorCitizen, InternetService, DeviceProtection, MonthlyCharges, and TotalCharges as inputs and predicts whether or not the customer will churn. To do this we need to convert our categorical columns into machine-readable values that can be passed as inputs into our machine learning model. Let's do this for gender, SeniorCitizen, InternetService, and DeviceProtection:

#convert categorical columns
df['gender'] = df['gender'].astype('category')
df['gender_cat'] = df['gender'].cat.codes
df['SeniorCitizen'] = df['SeniorCitizen'].astype('category')
df['SeniorCitizen_cat'] = df['SeniorCitizen'].cat.codes
df['InternetService'] = df['InternetService'].astype('category')
df['InternetService_cat'] = df['InternetService'].cat.codes
df['DeviceProtection'] = df['DeviceProtection'].astype('category')
df['DeviceProtection_cat'] = df['DeviceProtection'].cat.codes

And let’s display the resulting columns:

df[['gender_cat', 'SeniorCitizen_cat', 'InternetService_cat', 'DeviceProtection_cat']].head()
Screenshot taken by Author

We also have to do something similar with the Churn column:

df['Churn'] = df['Churn'].astype('category')
df['Churn_cat'] = df['Churn'].cat.codes

Next, we need to clean up our TotalCharges column by replacing invalid values with NaN and imputing the NaNs with the mean of TotalCharges:

df['TotalCharges'] = pd.to_numeric(df['TotalCharges'], errors='coerce')
df['TotalCharges'].fillna(df['TotalCharges'].mean(), inplace=True)

Now let's prepare our inputs and our output. We will define a variable X, a data frame containing the encoded gender, SeniorCitizen, InternetService, and DeviceProtection columns together with MonthlyCharges and TotalCharges. Our output will be a variable called y containing the Churn values:

#define input and output
X = df[['TotalCharges', 'MonthlyCharges', 'gender_cat', 'SeniorCitizen_cat', 'InternetService_cat', 'DeviceProtection_cat']]
y = df['Churn_cat']

Next, let’s split our data for training and testing. We will use the train_test_split method from the model_selection module in scikit-learn:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

Modeling with Default Parameters

To start, we will build a random forest classification model. The random forest algorithm is a tree-based ensemble method that combines many decision trees to reduce overfitting. Let's import the random forest class from the ensemble module in scikit-learn:

from sklearn.ensemble import RandomForestClassifier

Next let's define our random forest classifier model object and fit it to our training data. By leaving the arguments of RandomForestClassifier empty we define a model with the predefined default parameters:

model = RandomForestClassifier()
model.fit(X_train, y_train)

Let’s print the default parameter values of our model. To do this we simply call the get_params() method on our model object:

model.get_params()
Screenshot taken by Author

We will use precision to evaluate our classification model. Precision is a good choice for imbalanced classification problems such as churn prediction, where the churned class is the minority.
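If you want to confirm the imbalance, a quick look at the label distribution of the y defined above shows the split between the two classes:

#check the class balance of the encoded churn label
y.value_counts(normalize=True)

With that in mind, let's evaluate the precision of the default model on the hold-out test set: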

from sklearn.metrics import precision_score

y_pred_default = model.predict(X_test)
precision = precision_score(y_test, y_pred_default)
precision

Now let’s look at how we apply brute force grid search to find the best random forest classification model.

Brute Force Optimization with GridSearchCV

Brute force searching methods such as GridSearchCV work by exhaustively evaluating every combination of hyperparameters in the search grid. To start, let's import the GridSearchCV class from the model selection module in scikit-learn:

from sklearn.model_selection import GridSearchCV

Let's also define a dictionary which we will use to specify our grid of parameters: the number of estimators (decision trees) takes the values 10 and 100, the max depth of each tree takes the values 5 and 20, max features is set to sqrt, and the criterion is the Gini index (the metric used to split nodes in the decision trees):

params = {'n_estimators': [10, 100],
          'max_features': ['sqrt'],
          'max_depth': [5, 20],
          'criterion': ['gini']
          }
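To get a sense of the cost of the exhaustive search, we can count the model fits it implies: every combination in the grid is trained once per cross-validation fold. With the small grid above and the 20 folds used below, that is already:

#count the cross-validation fits implied by the grid
from itertools import product

n_combinations = len(list(product(*params.values())))  # 2 * 1 * 2 * 1 = 4
n_folds = 20
print(n_combinations * n_folds)  # 80 cross-validation fits for this tiny grid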

Next let’s define our grid search object with our parameter dictionary:

grid_search_rf = GridSearchCV(estimator=model, param_grid=params, cv=20, scoring='precision')

And fit the object to our training data:

grid_search_rf.fit(X_train, y_train)

And from there we can display the best parameters:

gscv_params = grid_search_rf.best_params_
gscv_params

And redefine our random forest model with the optimal parameters:

gscv_params = grid_search_rf.best_params_
model_rf_gscv = RandomForestClassifier(**gscv_params)
model_rf_gscv.fit(X_train, y_train)
Screenshot taken by Author

Let's evaluate the precision on the hold-out test set:

y_pred_gscv = model_rf_gscv.predict(X_test)
precision_gscv = precision_score(y_test, y_pred_gscv)
precision_gscv
Screenshot taken by Author

We see that the precision actually improves on the model with default values. While this is nice, for a large range of parameter values and larger data sets this method can become intractable. Alternative methods such as black-box optimization and Bayesian optimization are better choices for hyperparameter tuning.
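For readers curious about the Bayesian route, one possibility (a sketch only, not used in the rest of this post) is the BayesSearchCV class from the scikit-optimize package, which mirrors the GridSearchCV interface but samples the search space with a surrogate model instead of evaluating every combination. Assuming scikit-optimize is installed and compatible with your scikit-learn version, it might look like this:

#hedged sketch: Bayesian search over the same hyperparameters with scikit-optimize
from skopt import BayesSearchCV
from skopt.space import Categorical, Integer

bayes_search_rf = BayesSearchCV(
    estimator=RandomForestClassifier(),
    search_spaces={'n_estimators': Integer(10, 100),
                   'max_depth': Integer(5, 20),
                   'max_features': Categorical(['sqrt']),
                   'criterion': Categorical(['gini'])},
    n_iter=8,            #number of hyperparameter settings sampled
    cv=5,
    scoring='precision',
    random_state=42)
bayes_search_rf.fit(X_train, y_train)
bayes_search_rf.best_params_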

Black-Box Optimization with RBFopt

Let's now consider black-box hyperparameter optimization with RBFopt. RBFopt works by using radial basis functions to build and refine a surrogate model of the function being optimized. This is typically used for functions with no closed-form expression and many hills and valleys, in contrast to simple, well-known functions with closed-form expressions such as a quadratic or exponential function.
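To make the idea of a radial basis function surrogate concrete, here is a small conceptual sketch (separate from RBFopt's own internals, and assuming SciPy 1.7 or newer is available): we sample an unknown function at a handful of points and fit an RBF interpolant through them, which can then be evaluated cheaply anywhere in the search space:

#conceptual sketch of an RBF surrogate model (not RBFopt's internal code)
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
x_sampled = rng.uniform(-3, 3, size=(8, 1))                    #a few "expensive" evaluations
y_sampled = np.sin(3 * x_sampled[:, 0]) + 0.1 * x_sampled[:, 0] ** 2
surrogate = RBFInterpolator(x_sampled, y_sampled)              #RBF surrogate fitted through the samples
x_new = np.linspace(-3, 3, 200).reshape(-1, 1)
y_surrogate = surrogate(x_new)                                 #cheap to evaluate everywhere

RBFopt refines a surrogate like this iteratively, deciding where to evaluate the expensive function next based on the surrogate's predictions.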

To start let’s install RBFopt:

%pip install -U rbfopt
Screenshot taken by Author

Next we need to define lists of lower and upper bounds for our model parameters. The lower-bound list contains 10 for the number of estimators and 5 for the max depth; the upper-bound list contains 100 for the number of estimators and 20 for the max depth:

lbounds = [10, 5]
ubounds = [100, 20]

Next let's import NumPy, RBFopt, and the cross-validation scoring method:

import numpy as np
import rbfopt
from sklearn.model_selection import cross_val_score

Next we need to define our objective function. It takes n_estimators and max_depth as inputs, builds a random forest with those values, and computes the cross-validated precision. We want the values of n_estimators and max_depth that maximize precision; since RBFopt finds the minimum, we return the negative of the precision:

def precision_objective(X):
    #unpack and cast the hyperparameters passed in by RBFopt
    n_estimators, max_depth = X
    n_estimators = int(n_estimators)
    max_depth = int(max_depth)
    params = {'n_estimators': n_estimators, 'max_depth': max_depth}
    model_rbfopt = RandomForestClassifier(criterion='gini', max_features='sqrt', **params)
    model_rbfopt.fit(X_train, y_train)
    #cross-validated precision; return the negative since RBFopt minimizes
    precision = cross_val_score(model_rbfopt, X_train, y_train, cv=20, scoring='precision')
    return -np.mean(precision)

Next we specify the number of runs, function calls, and dimensions:

num_runs = 1
max_fun_calls = 8
ndim = 2

Here we only run with 8 function calls. If you wish to run with more than 10 function calls you have to install the Bonmin and Ipopt packages; installation instructions can be found on their respective GitHub pages.

Now, let’s specify our objective function and run RBFopt:

obj_fun = precision_objective
bb = rbfopt.RbfoptUserBlackBox(dimension=ndim,
                               var_lower=np.array(lbounds, dtype=float),
                               var_upper=np.array(ubounds, dtype=float),
                               var_type=['R'] * ndim,
                               obj_funct=obj_fun)
settings = rbfopt.RbfoptSettings(max_evaluations=max_fun_calls)
alg = rbfopt.RbfoptAlgorithm(settings, bb)
Screenshot taken by Author

And store the objective value and solutions in their respective variables:

fval, sol, iter_count, eval_count, fast_eval_count = alg.optimize()
obj_vals = fval

We then store the integer-valued solution in a dictionary:

sol_int = [int(x) for x in sol]
params_rbfopt = {'n_estimators': sol_int[0], 'max_depth': sol_int[1]}
params_rbfopt
Screenshot taken by Author

We see that RBFopt finds optimal values of 81 and 5 for n_estimators and max_depth respectively.

We then pass these optimal parameters into a new model and fit it to our training data:

model_rbfopt = RandomForestClassifier(criterion='gini', max_features='sqrt', **params_rbfopt)
model_rbfopt.fit(X_train, y_train)

And evaluate the precision:

y_pred_rbfopt = model_rbfopt.predict(X_test)
precision_rbfopt = precision_score(y_test, y_pred_rbfopt)
precision_rbfopt
Screenshot taken by Author

We see a slight improvement in precision with the much cheaper optimization run. This is especially useful when you have large hyperparameter search spaces.
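To put the three models side by side, we can print the hold-out precision of each, using the variables computed in the sections above:

#compare hold-out precision across the three models
print('default parameters:', precision)
print('grid search:       ', precision_gscv)
print('RBFopt:            ', precision_rbfopt)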

The code used in this post is available on GitHub.

Conclusions

Having a good understanding of the available tools for tuning the hyperparameters of machine learning models is essential for every data scientist. While the default hyperparameters of most machine learning algorithms give good baseline performance, hyperparameter tuning is often necessary to improve on that baseline. Brute-force optimization techniques are useful because they exhaustively search the hyperparameter grid, which guarantees finding the best-performing combination within that grid. Unfortunately, brute-force optimization is resource intensive in terms of time and computation. For these reasons, more efficient black-box optimization methods, like RBFopt, are useful alternatives to brute-force optimization. RBFopt is a very useful black-box technique that should be a part of every data scientist's toolkit for hyperparameter optimization.

