
Classification using PyTorch linear function

In machine learning, prediction is the process of using a trained model to generate outputs for new data, and it is a critical part of any workflow. PyTorch is an open-source machine learning library that allows developers to build and train neural networks. One common use case in PyTorch is using a linear classifier for prediction tasks. In this article, we will go through the steps to build a linear classifier in PyTorch and use it to make predictions on new data.

Linear Classifier:

A linear classifier is a type of machine learning model that uses a linear function to classify data into two or more classes. It works by computing a weighted sum of the input features and adding a bias term. The result is then passed through an activation function, which maps the output to a probability distribution over the classes.

In PyTorch, we can define a linear classifier using the nn.Linear module. This module takes two arguments: the number of input features and the number of output classes. It automatically initializes the weight and bias parameters with random values.
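As a quick illustration of how nn.Linear works, here is a minimal sketch (the tensor shapes and values are arbitrary) that maps 4 input features to 3 output scores for a small batch:

Python3

import torch

# a linear layer mapping 4 input features to 3 output scores
layer = torch.nn.Linear(in_features=4, out_features=3)

x = torch.randn(2, 4)                          # a batch of 2 samples with 4 features each
scores = layer(x)                              # computes x @ W.T + b
print(scores.shape)                            # torch.Size([2, 3])
print(layer.weight.shape, layer.bias.shape)    # torch.Size([3, 4]) torch.Size([3])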

Let’s go through an example of building a linear classifier in PyTorch.

Example:

We will use the famous Iris dataset for our example. The Iris dataset contains measurements of the sepal length, sepal width, petal length, and petal width for three species of iris flowers. Our goal is to build a linear classifier that can predict the species of an iris flower based on its measurements.

Step 1: Import the Required Libraries

We will start by importing the necessary libraries. We need torch for building the linear classifier and scikit-learn for loading and splitting the Iris dataset.

Python3

import torch
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

Step 2: Load the Data

Next, we will load the Iris dataset and split it into training and testing sets.

Python3

iris = load_iris()

X_train, X_test, y_train, y_test = train_test_split(iris.data,
                                                     iris.target,
                                                     test_size=0.2,
                                                     random_state=42)

Step 3: Prepare the Data

We need to convert the data into PyTorch tensors and normalize the features so that each has a mean of zero and a standard deviation of one. Note that both the training and test sets are normalized with statistics computed on the training set.

Python3

X_train = torch.tensor(X_train).float()
X_test = torch.tensor(X_test).float()
y_train = torch.tensor(y_train)
y_test = torch.tensor(y_test)

# normalize both sets with the training set's statistics
mean = X_train.mean(dim=0)
std = X_train.std(dim=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

Step 4: Define the Model

We can define our linear classifier using the nn.Linear module. We will set the number of input features to 4 (since we have four measurements) and the number of output classes to 3 (since we have three species of iris flowers).

Python3

model = torch.nn.Sequential(
    torch.nn.Linear(in_features=4, out_features=3),
    torch.nn.Softmax(dim=1)
)

We also add a Softmax activation function to the end of the model. This will map the output to a probability distribution over the classes.
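As an aside, the CrossEntropyLoss function used in the next step already applies a log-softmax internally and expects raw scores (logits), so in practice the explicit Softmax layer is often omitted. A minimal sketch of that alternative (not the model used in the rest of this article) looks like this:

Python3

# alternative: output raw logits and let CrossEntropyLoss handle the softmax internally
logit_model = torch.nn.Sequential(
    torch.nn.Linear(4, 3)
)

# probabilities can still be recovered at inference time when needed:
# probs = torch.softmax(logit_model(X_test), dim=1)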

Step 5: Train the Model

We can train the model using the CrossEntropyLoss loss function and the SGD optimizer.

Python3

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

num_epochs = 1000
for epoch in range(num_epochs):
    # forward pass
    y_pred = model(X_train)
    loss = criterion(y_pred, y_train)

    # backward pass and parameter update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # print progress every 100 epochs
    if (epoch+1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

Output:

Epoch [100/1000], Loss: 0.8240
Epoch [200/1000], Loss: 0.7616
Epoch [300/1000], Loss: 0.7324
Epoch [400/1000], Loss: 0.7152
Epoch [500/1000], Loss: 0.7021
Epoch [600/1000], Loss: 0.6913
Epoch [700/1000], Loss: 0.6819
Epoch [800/1000], Loss: 0.6737
Epoch [900/1000], Loss: 0.6665
Epoch [1000/1000], Loss: 0.6602

Step 6: Evaluate the Model

The final step is to evaluate the performance of the linear classifier by computing its accuracy on the test set. Here's how we can do that:

Python3

with torch.no_grad():
    y_pred = model(X_test)
    _, predicted = torch.max(y_pred, dim=1)
    accuracy = (predicted == y_test).float().mean()
    print(f'Test Accuracy: {accuracy.item():.4f}')

Output:

Test Accuracy: 0.9667
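With the trained model, we can also make a prediction for a single new flower. The measurements below are hypothetical; the important detail is that the new sample must be normalized with the same mean and standard deviation computed on the training set:

Python3

# hypothetical measurements: sepal length, sepal width, petal length, petal width
new_flower = torch.tensor([[5.1, 3.5, 1.4, 0.2]]).float()
new_flower = (new_flower - mean) / std         # normalize with the training statistics

with torch.no_grad():
    probs = model(new_flower)
    species = torch.argmax(probs, dim=1)
    print(iris.target_names[species.item()])   # e.g. 'setosa'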

Here’s the complete example:

Python3

import torch
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# load the data and split it into training and testing sets
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data,
                                                     iris.target,
                                                     test_size=0.2,
                                                     random_state=42)

# convert to tensors and normalize with the training set's statistics
X_train = torch.tensor(X_train).float()
X_test = torch.tensor(X_test).float()
y_train = torch.tensor(y_train)
y_test = torch.tensor(y_test)

mean = X_train.mean(dim=0)
std = X_train.std(dim=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# define the linear classifier
model = torch.nn.Sequential(
    torch.nn.Linear(4, 3),
    torch.nn.Softmax(dim=1)
)

# loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# training loop
num_epochs = 1000
for epoch in range(num_epochs):
    y_pred = model(X_train)
    loss = criterion(y_pred, y_train)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if (epoch+1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

# evaluate on the test set
with torch.no_grad():
    y_pred = model(X_test)
    _, predicted = torch.max(y_pred, dim=1)
    accuracy = (predicted == y_test).float().mean()
    print(f'Test Accuracy: {accuracy.item():.4f}')

Output:

Epoch [100/1000], Loss: 0.7564
Epoch [200/1000], Loss: 0.6042
Epoch [300/1000], Loss: 0.5304
Epoch [400/1000], Loss: 0.4833
Epoch [500/1000], Loss: 0.4513
Epoch [600/1000], Loss: 0.4286
Epoch [700/1000], Loss: 0.4121
Epoch [800/1000], Loss: 0.3995
Epoch [900/1000], Loss: 0.3897
Epoch [1000/1000], Loss: 0.3817
Test Accuracy: 0.9667

We start by importing the necessary libraries, PyTorch and scikit-learn. We then load the Iris dataset and split it into training and testing sets using the train_test_split function.

Next, we convert the data to PyTorch tensors and normalize the features to have a mean of zero and a standard deviation of one. We define our linear classifier using the nn.Linear module and add a Softmax activation function to the end of the model.

We train the model using the CrossEntropyLoss loss function and the SGD optimizer. We loop over the dataset for a specified number of epochs and perform a forward pass, backward pass, and optimization at each iteration. We print the loss every 100 epochs.

Finally, we evaluate the model on the testing set by computing the accuracy.

Other subtopics related to this concept include:

1. Custom datasets: In the example above, we used scikit-learn to load and split the Iris dataset. However, PyTorch provides a Dataset class that can be used to create custom datasets. This is particularly useful when working with large datasets that cannot fit in memory. You can define a custom Dataset class that loads and preprocesses the data on the fly, making it easier to work with (a minimal sketch follows this list).

2. Transfer learning: Transfer learning is a technique that involves using a pre-trained model and fine-tuning it on a new task. PyTorch provides several pre-trained models through torchvision that can be used for transfer learning. By freezing some of the layers of the pre-trained model and training only the last few layers on the new task, we can achieve good results with fewer training examples (see the second sketch after this list).

3. Hyperparameter tuning: The performance of a machine learning model depends on several hyperparameters, such as the learning rate, the number of hidden layers, and the number of epochs. Several libraries that integrate with PyTorch, such as Optuna and Ray Tune, can be used for hyperparameter tuning. These libraries automate the process of trying out different hyperparameter configurations and selecting the best one based on a specified metric.
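As a minimal sketch of the first point, a custom Dataset wrapping the Iris tensors from the example above could look like the following (class and variable names are illustrative), with a DataLoader providing mini-batch iteration:

Python3

from torch.utils.data import Dataset, DataLoader

class IrisDataset(Dataset):
    """A minimal custom Dataset wrapping the tensors prepared earlier."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# wrap the training tensors and iterate over them in mini-batches
train_dataset = IrisDataset(X_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

for batch_features, batch_labels in train_loader:
    pass  # each mini-batch could be fed to the model as in the training loop above

And as a rough sketch of the second point, a typical transfer-learning setup with a torchvision model freezes the pre-trained backbone and replaces only the final layer (the model choice and the 3-class task below are assumptions for illustration):

Python3

import torch
import torchvision

# load a pre-trained model (newer torchvision versions use the `weights` argument instead of `pretrained`)
backbone = torchvision.models.resnet18(pretrained=True)

# freeze all pre-trained parameters
for param in backbone.parameters():
    param.requires_grad = False

# replace the final fully connected layer for a hypothetical 3-class task
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 3)

# only the parameters of the new layer are updated during training
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01)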

Conclusion

In conclusion, PyTorch provides a powerful platform for building and training machine learning models. The nn module provides a flexible and easy-to-use interface for defining neural networks, and the optim module provides a range of optimization algorithms. By using PyTorch, you can focus on designing and testing new machine learning models, rather than spending time on low-level implementation details.
