
Clearing the Dust: How CNNs and Transfer Learning Can Detect Dust on Solar Panels

By Suhas Maddali | March 2023



With the aid of convolutional neural networks and transfer learning, it is possible to build a classifier that determines whether solar panels are clean or dusty

Photo by Moritz Kindler on Unsplash

Solar panels have become a popular source of renewable energy in a variety of industries, from agriculture and transportation to construction and hospitality. By harnessing the power of the sun, we can generate electricity without harming the environment. However, there are challenges associated with using solar panels, and one of the biggest is the accumulation of dust on their surfaces. This can significantly reduce their efficiency and limit their usefulness for energy production and other applications.

To address this issue, automation can play a key role in ensuring regular and timely maintenance of solar panels. By automating the cleaning process, we can increase productivity and efficiency, while also reducing the environmental impact of energy generation. Overall, the potential benefits of solar panels are vast and varied, and with the help of automation, we can overcome the challenges associated with their use and continue to drive progress in this exciting and rapidly-evolving field.

With the aid of deep learning and sufficient computing resources, it is possible to alert maintenance teams when dust accumulates on solar panels. Convolutional Neural Networks (CNNs) are known for their image recognition abilities, and transfer learning is an approach that reuses weights pre-trained on large datasets for a new task, in our case solar panel dust detection. These methods can be leveraged to improve the accuracy and F1-score of deep learning models.

In this article, we will implement a project that builds a solar panel dust detection classifier. A large number of neural network configurations are tested to determine the best architecture to deploy in real time for detecting dust on solar panels.

Importing the Libraries

We will be looking at a list of libraries that were used in the process of building a solar panel dust detection classifier.

When it comes to building deep learning applications, there are a wealth of libraries at our disposal, including TensorFlow, NumPy, Pandas, and OS. While it may seem overwhelming at first, understanding how to use these libraries in code can greatly simplify the development process and make our models more effective.

By leveraging these powerful tools, we can streamline data processing, feature engineering, model training, and deployment. With a solid grasp of these libraries and their capabilities, we can build more complex and accurate models with greater ease and efficiency.

In this article, we’ll be using these libraries extensively to build our solar panel dust detection classifier. Through practical examples and step-by-step instructions, you’ll learn how to harness the power of these tools and apply them to real-world problems.
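As a point of reference, the imports below show a minimal set of libraries for a project like this one, assuming a TensorFlow/Keras workflow with scikit-learn for evaluation; the exact list in the author's original notebook may differ.

import os                        # work with file paths to the image folders
import numpy as np               # array manipulation for the image data
import pandas as pd              # tabular summaries of metrics
import matplotlib.pyplot as plt  # accuracy/loss curves and sample image grids
import tensorflow as tf          # deep learning framework used throughout
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split                  # train/test split of the arrays
from sklearn.metrics import classification_report, confusion_matrix   # evaluation metrics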

Reading the Data

To begin building our solar panel dust detection classifier, the first step is to load the images from pre-defined paths on our local computer. However, the exact location of these images may vary depending on the user’s computer configuration.

To perform this loading operation, we define a separate function that extracts the images from the specified paths while discarding any low-resolution images. This ensures that our dataset only contains high-quality images that are suitable for training our deep learning model.
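A minimal sketch of such a loading function is shown below. The folder paths, minimum-resolution threshold, and target size are illustrative assumptions rather than the author's exact values.

import os
import numpy as np
from PIL import Image

def load_images(folder, min_size=100, target_size=(224, 224)):
    """Load images from a folder, skipping files below a minimum resolution."""
    images = []
    for filename in os.listdir(folder):
        path = os.path.join(folder, filename)
        try:
            img = Image.open(path).convert("RGB")
        except OSError:
            continue  # skip unreadable or corrupted files
        if img.width < min_size or img.height < min_size:
            continue  # discard low-resolution images
        img = img.resize(target_size)
        images.append(np.array(img))
    return images

# Hypothetical paths; adjust to wherever the Kaggle dataset is stored locally.
clean_images = load_images("data/solar_panels/clean")
dusty_images = load_images("data/solar_panels/dusty")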

Note: The dataset was taken from Solar Panel dust detection | Kaggle under the Creative Commons CC0 1.0 Universal license.

We store the clean and dusty solar panel images as lists of arrays that are used for computation. Since we are dealing with a small dataset, there are no issues such as out-of-memory errors. When dealing with a large dataset, it is recommended to use ImageDataGenerator, as it loads data from disk in batches.
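For larger datasets, a batched loader along these lines could be used instead; the directory layout, batch size, and validation split here are assumptions for illustration.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stream images from disk in batches instead of holding everything in memory.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/solar_panels",          # hypothetical root folder with one subfolder per class
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
    subset="training",
)
val_gen = datagen.flow_from_directory(
    "data/solar_panels",
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
    subset="validation",
)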

Exploratory Data Analysis (EDA)

This is an important part of the machine learning lifecycle, where the dataset used by ML models is examined for discrepancies and outliers. Feature engineering steps can then be taken to remove such data points and aid in building a strong classifier.

Solar Panel Images (Image by Author)

Above is a sample of the images the classifier will use to determine whether the panels are clean or dusty. Note that a few images contain text, and others have white backgrounds or are not cropped appropriately. During the feature engineering phase, therefore, steps are taken to remove these images, as they can confuse the classifier and hurt prediction accuracy.

Feature Engineering

In order to ensure that only high-quality images are used for training, we take steps to discard images with a white background in the dataset. This is achieved by implementing the following code, which identifies and removes any images with a predominantly white background. By doing so, we can improve the overall accuracy and reliability of the model’s training process.
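One way to implement this filter, sketched below, is to treat an image as having a white background when a large share of its pixels are close to pure white. The brightness threshold and pixel fraction are assumptions and would need tuning on the actual data.

import numpy as np

def has_white_background(image, brightness_threshold=240, white_fraction=0.3):
    """Flag an image if a large share of its pixels are near-white."""
    arr = np.asarray(image)
    near_white = np.all(arr >= brightness_threshold, axis=-1)  # pixels near-white in all channels
    return near_white.mean() > white_fraction

# Keep only the images that do not look like product photos on a white background.
clean_images = [img for img in clean_images if not has_white_background(img)]
dusty_images = [img for img in dusty_images if not has_white_background(img)]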

Solar Panels with White Background (Image by Author)

Based on the output, the white-background images are identified accurately, although there are a few false positives. We can still go ahead and use this method to keep only the images without a white background.

Model Training

Let us look at a list of candidate models for training the solar panel dust classifier. The initial configuration is a convolutional neural network of moderate depth, with convolution, max-pooling, and flatten layers. Below is the code implementation.

Configuration 1
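A minimal Keras sketch of this kind of configuration is shown below; the exact number of filters, units, and layers in the author's first configuration may differ, and X_train/y_train are assumed to be the image arrays and 0/1 labels prepared earlier.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # clean (0) vs. dusty (1)
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=32)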

Model Performance, Model Architecture and Classification Report (Images by Author)

A helper function is defined to plot all of these metrics and give us a good understanding of model performance. It produces classification reports, confusion matrices, and other plots that help guide us in determining the best model to use in production.
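A plotting helper along these lines, assuming the Keras History object from training and the held-out test arrays X_test/y_test, might look like the following sketch.

import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix

def report_performance(history, model, X_test, y_test):
    """Plot accuracy/loss curves and print a classification report and confusion matrix."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history.history["accuracy"], label="train")
    ax1.plot(history.history["val_accuracy"], label="validation")
    ax1.set_title("Accuracy")
    ax1.legend()
    ax2.plot(history.history["loss"], label="train")
    ax2.plot(history.history["val_loss"], label="validation")
    ax2.set_title("Loss")
    ax2.legend()
    plt.show()

    y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
    print(classification_report(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))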

As the number of epochs increases, accuracy improves and the error decreases. It is also good to note that the cross-validation error keeps falling with additional training, which implies there is still room for further training. Care must be taken that the model does not overfit the training data. We can look at other configurations as well to determine the best model to deploy in real time.

Configuration 2

Model Performance, Model Architecture and Classification Report (Images by Author)

A new configuration is defined, and the same metrics are tracked for its performance. This configuration tends to overfit the data: training accuracy improves steadily while cross-validation accuracy stays flat or decreases. The loss curves reflect the same behavior; as the number of epochs increases, the training loss falls while the cross-validation loss rises. Hence this model overfits the training data without much improvement on the test set.

Configuration 3

Model Performance, Model Architecture and Classification Report (Images by Author)

This configuration behaves similarly to the previous one in that overfitting is still an issue, although the curves show it overfits the training data less severely. The accuracy of the model on the test data is about 68 percent, and the precision on the positive class is quite low. As a result, additional configurations and transfer learning methodologies can be used to improve the performance of the model considerably.

Configuration 4

Model Performance, Model Architecture and Classification Report (Images by Author)

The performance of this model is quite similar to that of the previous two configurations: it overfits the training data, as illustrated in the accuracy and loss curves. Defining custom CNN configurations has not worked as intended, producing low accuracy and a low F1-score for the positive class. Models with increased complexity can be used instead, as they should be better able to find the underlying patterns in the data and make good predictions.

Transfer Learning Models

We can now go through a list of transfer learning models and measure their performance on the test set. These models are pre-trained on ImageNet, which contains a large number of samples. We reuse the pre-trained weights for our task of solar panel dust detection and retrain only the last few layers to save computation.
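The general recipe, sketched below with VGG16 as the example, is to load the ImageNet weights, freeze the convolutional base, and train only a small classification head on top; the size of the head here is an assumption.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # keep the pre-trained ImageNet weights fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # clean vs. dusty
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])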

VGG 16

Model Performance, Model Architecture and Classification Report (Images by Author)

The architecture performs reasonably well, with an accuracy of about 70 percent on the test data. However, there is some overfitting: cross-validation accuracy decreases as the number of epochs (iterations over the entire dataset) increases. Let us also consider other architectures to determine the best model to deploy.

VGG 19

Model Performance, Model Architecture and Classification Report (Images by Author)

The VGG 19 network performs less accurately than VGG 16. This is because VGG 19 is the more complex of the two, which increases the chance of overfitting. The VGG 16 network was already prone to overfitting, and increasing the complexity of the network makes it even more likely that the model will overfit the training data.

InceptionNet

Model Performance, Model Architecture and Classification Report (Images by Author)

The InceptionNet architecture is quite complex, as shown above, with a large depth and many hidden units. Since the network was already trained on ImageNet, we can reuse its weights and train only the last few layers to speed up the process. Overall, InceptionNet performs best so far, with an accuracy of about 77 percent on the test data.

MobileNet

Model Performance, Model Architecture and Classification Report (Images by Author)

MobileNet performs exceptionally well, with an overall accuracy of about 79 percent on the test data. The accuracy and loss curves also show that the model is training well, with performance improving not just on the training set but also on the cross-validation data. In addition, the model can be trained further and combined with hyperparameter tuning to improve its performance and generalization on unseen data. Note also how little computation it needs to make an inference: it generalizes well with a smaller configuration while still delivering good performance.
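Since MobileNet ends up as the model of choice, here is a comparable sketch for it. The classification head is again an assumption; MobileNet's depthwise-separable convolutions are what keep its inference cost low.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # reuse the ImageNet features as-is

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # keeps the head small and fast
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=32)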

Xception Network

Model Performance, Model Architecture and Classification Report (Images by Author)

The Xception architecture is also complex, as shown above. The last few layers are replaced with trainable layers for the task of solar panel dust detection. It shows good performance on the test data, with an accuracy of about 71 percent. However, it tends to overfit: as the number of epochs increases, the gap between training accuracy and cross-validation accuracy widens. MobileNet is still performing the best of all the models, but let us explore a few more candidates before deciding which one to use for predictions.

MobileNetV2

Model Performance, Model Architecture and Classification Report (Images by Author)

The figures above show the architecture and overall performance of MobileNetV2 on the task of predicting whether solar panels are clean or dusty. This architecture is fairly complex; on the final few layers, additional layers and units are added to adapt the weights to our task. The overall performance of this model was not as good as that of the original MobileNet referenced earlier. Furthermore, this architecture is more complex and requires more computing power to keep inference latency low. Therefore, MobileNet remains one of the best candidates for real-time deployment.

ResNet 50

Model Performance, Model Architecture and Classification Report (Images by Author)

The ResNet 50 accuracy and loss curves show a lot of noise. Overall, the cross-validation accuracy trends upward, but the model fails to capture the distinctions needed to make good predictions on the test data and delivers sub-par performance on the test set. Further training could improve this. Considering the computational complexity as well, we can go ahead with the MobileNet architecture for deployment after performing hyperparameter optimization. ResNet can be a good choice for other image-related tasks, but for this one, MobileNet performs best.

Hyperparameter Tuning

This is an important step in computer vision: the best model is taken and its hyperparameters are varied to see how performance changes. Well-chosen hyperparameters can improve model performance considerably. Let us now focus on tuning a few hyperparameters of the best model; learning rate and batch size are two that often make a difference. Since MobileNet performed best on the test data, we take this model and perform hyperparameter tuning to get the best achievable results.
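A simple grid search over these two hyperparameters could be sketched as follows. The build_mobilenet() helper is hypothetical (it would rebuild the frozen-base MobileNet shown earlier), and more systematic tools such as KerasTuner could be used instead.

from tensorflow.keras.optimizers import Adam

best_val_acc, best_config = 0.0, None
for lr in [1e-2, 1e-3, 1e-4]:
    for batch_size in [32, 64, 128]:
        model = build_mobilenet()   # hypothetical helper that rebuilds the frozen-base MobileNet
        model.compile(optimizer=Adam(learning_rate=lr),
                      loss="binary_crossentropy", metrics=["accuracy"])
        history = model.fit(X_train, y_train, validation_split=0.2,
                            epochs=10, batch_size=batch_size, verbose=0)
        val_acc = max(history.history["val_accuracy"])
        if val_acc > best_val_acc:
            best_val_acc, best_config = val_acc, (lr, batch_size)

print("Best (learning rate, batch size):", best_config)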

Learning Rate

Model Performance, Model Architecture and Classification Report (Images by Author)

After conducting hyperparameter tuning and determining the optimal learning rate, our chosen model (MobileNet) demonstrated a notable 1% improvement in performance on the test dataset. We retained the same architecture as before, focusing only on identifying the learning rate that achieves the best results.

While we won’t delve into the specifics of how we performed hyperparameter tuning, it’s worth noting that there’s another key hyperparameter we can explore in order to maximize performance on unseen data points. By taking into account this additional hyperparameter, we can ensure that our model is even more effective at accurately predicting outcomes beyond the training dataset.

Batch Size

Model Performance, Model Architecture and Classification Report (Images by Author)

Following our successful hyperparameter tuning process, we utilized the optimal learning rate to determine the best batch size for our deep learning model. In this case, a batch size of 128 yielded the greatest performance gains, resulting in a notable 2% improvement on the test dataset. This reinforces the importance of hyperparameter tuning, which can be a powerful tool in enhancing the accuracy and reliability of deep learning models.

Looking ahead, our next step is to save the final, hyperparameter-tuned model and deploy it in real-time, using it in a camera module or web interface where users can upload images of solar panels. By leveraging the power of deep learning, our model can accurately identify whether panels are clean or dusty, providing valuable insights to users. This project underscores the potential of hyperparameter tuning to boost performance across a wide range of problems and applications.

Saving the Best Model

Now that we’ve put in the effort to develop, train, and test a range of sophisticated deep learning models, it’s time to save the best-performing model for future use. We do this by storing the model in a way that enables us to easily retrieve it later, allowing for real-time or batch inferences based on the specific needs of developers.
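Saving and reloading the tuned model in Keras can be as simple as the following sketch; the file name is arbitrary, and model is assumed to hold the tuned MobileNet.

import tensorflow as tf

# Persist the tuned MobileNet so it can be reused without retraining.
model.save("solar_panel_dust_classifier.h5")

# Later, in an inference service or notebook:
loaded_model = tf.keras.models.load_model("solar_panel_dust_classifier.h5")
prediction = loaded_model.predict(X_test[:1])   # probability that the panel is dusty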

By saving the best model, we can ensure that our efforts to optimize model performance don’t go to waste, and that our hard work pays off in the form of accurate, reliable results. This represents an important step in the deep learning process and underscores the power of these techniques to drive improvements across a wide range of applications and domains.

Conclusion

By reading this article, you should now have a comprehensive understanding of the various stages involved in a machine learning project, including data collection, feature engineering, model training, model selection, hyperparameter tuning, and model deployment. Each of these steps is critical to the success of the project, and requires careful attention and consideration to achieve optimal results.

However, the work doesn’t stop once the model is deployed. It’s important to monitor its performance on an ongoing basis, particularly when dealing with real-time data. This allows you to identify potential issues such as model drift, data drift, or security concerns, and take steps to address them in a timely manner.

Overall, this article provides a valuable overview of the deep learning process, highlighting the many challenges and opportunities involved in building accurate and reliable models for a wide range of applications. I hope you’ve found it informative and helpful, and I look forward to continuing to explore this exciting and rapidly evolving field in the future. Thank you for taking the time to read this article.


