Train YOLOv8 Instance Segmentation on Your Data | by Alon Lekhtman | Feb, 2023



YOLOv8 was launched on January 10th, 2023. And as of this moment, this is the state-of-the-art model for classification, detection, and segmentation tasks in the computer vision world. The model outperforms all known models both in terms of accuracy and execution time.

A comparison between YOLOv8 and other YOLO models (from ultralytics)

The ultralytics team did a really good job in making this model easier to use compared to all the previous YOLO models: you don't even have to clone the git repository anymore!

In this post, I created a very simple example of all you need to do to train YOLOv8 on your data, specifically for a segmentation task. The dataset is small and “easy to learn” for the model, on purpose, so that we would be able to get satisfying results after training for only a few seconds on a simple CPU.

We will create a dataset of white circles with a black background. The circles will be of varying sizes. We will train a model that segments the circles inside the image.

This is what the dataset looks like:

The dataset was generated using the following code:

import numpy as np
from PIL import Image
from skimage import draw
import random
from pathlib import Path

def create_image(path, img_size, min_radius):
    path.parent.mkdir(parents=True, exist_ok=True)

    arr = np.zeros((img_size, img_size)).astype(np.uint8)
    center_x = random.randint(min_radius, (img_size - min_radius))
    center_y = random.randint(min_radius, (img_size - min_radius))
    max_radius = min(center_x, center_y, img_size - center_x, img_size - center_y)
    radius = random.randint(min_radius, max_radius)

    row_indxs, column_idxs = draw.ellipse(center_x, center_y, radius, radius, shape=arr.shape)

    arr[row_indxs, column_idxs] = 255

    im = Image.fromarray(arr)
    im.save(path)

def create_images(data_root_path, train_num, val_num, test_num, img_size=640, min_radius=10):
    data_root_path = Path(data_root_path)

    for i in range(train_num):
        create_image(data_root_path / 'train' / 'images' / f'img_{i}.png', img_size, min_radius)

    for i in range(val_num):
        create_image(data_root_path / 'val' / 'images' / f'img_{i}.png', img_size, min_radius)

    for i in range(test_num):
        create_image(data_root_path / 'test' / 'images' / f'img_{i}.png', img_size, min_radius)

create_images('datasets', train_num=120, val_num=40, test_num=40, img_size=120, min_radius=10)

Now that we have an image dataset, we need to create labels for the images. Usually, we would need to do some manual work for this, but because the dataset we created is very simple, it is pretty easy to create code that generates labels for us:

from rasterio import features

def create_label(image_path, label_path):
    arr = np.asarray(Image.open(image_path))

    # There may be a better way to do it, but this is what I have found so far
    cords = list(features.shapes(arr, mask=(arr > 0)))[0][0]['coordinates'][0]
    label_line = '0 ' + ' '.join([f'{int(cord[0])/arr.shape[0]} {int(cord[1])/arr.shape[1]}' for cord in cords])

    label_path.parent.mkdir(parents=True, exist_ok=True)
    with label_path.open('w') as f:
        f.write(label_line)

for images_dir_path in [Path(f'datasets/{x}/images') for x in ['train', 'val', 'test']]:
    for img_path in images_dir_path.iterdir():
        label_path = img_path.parent.parent / 'labels' / f'{img_path.stem}.txt'
        create_label(img_path, label_path)

Here is an example of a label file content:

0 0.0767 0.08433 0.1417 0.08433 0.1417 0.0917 0.15843 0.0917 0.15843 0.1 0.1766 0.1 0.1766 0.10844 0.175 0.10844 0.175 0.1177 0.18432 0.1177 0.18432 0.14333 0.1918 0.14333 0.1918 0.20844 0.18432 0.20844 0.18432 0.225 0.175 0.225 0.175 0.24334 0.1766 0.24334 0.1766 0.2417 0.15843 0.2417 0.15843 0.25 0.1417 0.25 0.1417 0.25846 0.0767 0.25846 0.0767 0.25 0.05 0.25 0.05 0.2417 0.04174 0.2417 0.04174 0.24334 0.04333 0.24334 0.04333 0.225 0.025 0.225 0.025 0.20844 0.01766 0.20844 0.01766 0.14333 0.025 0.14333 0.025 0.1177 0.04333 0.1177 0.04333 0.10844 0.04174 0.10844 0.04174 0.1 0.05 0.1 0.05 0.0917 0.0767 0.0917 0.0767 0.08433

The label corresponds to this image:

an image that corresponds to the label example

The label content is only a single text line. We have only one object (circle) in each image, and each object is represented by a line in the file. If you have more than one object in each image, you should create a line for each labeled object.

The first 0 represents the class type of the label. Because we have only one class type (a circle) we always have 0. If you have more than one class in your data, you should map each class to a number (0, 1, 2…) and use this number in the label file.

All the other numbers represent the coordinates of the bounding polygon of the labeled object. The format is <x1 y1 x2 y2 x3 y3…> and the coordinates are relative to the size of the image: you should normalize the coordinates to a 1×1 image size. For example, if there is a point (15, 75) and the image size is 120×120 the normalized point is (15/120, 75/120) = (0.125, 0.625).

It is always confusing when dealing with image libraries to get the correct directionality of the coordinates. So to make this clear, for YOLO, the X coordinate goes from left to right, and the Y coordinate goes from top to bottom.
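To make the label format concrete, here is a minimal sketch that converts a polygon given in pixel coordinates into a YOLO segmentation label line. The polygon, image size, and helper name are made up for illustration; the normalization mirrors what create_label does above.

def polygon_to_yolo_line(polygon, img_width, img_height, class_id=0):
    # x is normalized by the image width (left to right),
    # y by the image height (top to bottom)
    coords = [f'{x / img_width} {y / img_height}' for x, y in polygon]
    return f'{class_id} ' + ' '.join(coords)

# A triangle inside a 120x120 image
print(polygon_to_yolo_line([(15, 75), (60, 10), (100, 75)], 120, 120))
# -> '0 0.125 0.625 0.5 0.0833... 0.8333... 0.625'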

We have the images and the labels. Now we need to create a YAML file with the dataset configuration:

yaml_content = '''
train: train/images
val: val/images
test: test/images

names: ['circle']
'''

with Path('data.yaml').open('w') as f:
    f.write(yaml_content)

Note that if you have more object class types, you need to add them to the names array here, in the same order as the class numbers you used in the label files: the first is 0, the second is 1, and so on.
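For example, if the dataset also contained squares labeled with class 1 (a hypothetical second class, just for illustration), the YAML content would look like this:

train: train/images
val: val/images
test: test/images

# class 0 -> 'circle', class 1 -> 'square'
names: ['circle', 'square']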

Let’s see the file structure we created, using the Linux tree command:

tree .
data.yaml
datasets/
├── test
│   ├── images
│   │   ├── img_0.png
│   │   ├── img_1.png
│   │   ├── img_2.png
│   │   └── ...
│   └── labels
│       ├── img_0.txt
│       ├── img_1.txt
│       ├── img_2.txt
│       └── ...
├── train
│   ├── images
│   │   ├── img_0.png
│   │   ├── img_1.png
│   │   ├── img_2.png
│   │   └── ...
│   └── labels
│       ├── img_0.txt
│       ├── img_1.txt
│       ├── img_2.txt
│       └── ...
└── val
    ├── images
    │   ├── img_0.png
    │   ├── img_1.png
    │   ├── img_2.png
    │   └── ...
    └── labels
        ├── img_0.txt
        ├── img_1.txt
        ├── img_2.txt
        └── ...

Now that we have the images and the labels, we can start training the model. So first of all let’s install the package:

pip install ultralytics==8.0.38

The ultralytics library changes pretty fast and sometimes breaks the API, so I prefer to stick with one version. The code below depends on version 8.0.38 (the newest version at the time of writing). If you upgrade to a newer version, you may need to adapt the code to make it work.

And start the training:

from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")

results = model.train(
    batch=8,
    device="cpu",
    data="data.yaml",
    epochs=7,
    imgsz=120,
)

To keep this post simple, I use the nano model (yolov8n-seg), train it on the CPU only, and run just 7 epochs. The training took just a few seconds on my laptop.

For more information about the parameters that can be used to train the model, you can check the ultralytics documentation.
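As a sketch only, here are a few additional arguments I find useful. Argument names and defaults can change between ultralytics versions, so verify them against the documentation of the version you installed:

results = model.train(
    data="data.yaml",
    device="cpu",
    batch=8,
    epochs=7,
    imgsz=120,
    seed=0,          # fix the random seed for reproducible runs
    patience=50,     # stop early if validation metrics stop improving
    lr0=0.01,        # initial learning rate
    name="circles",  # run directory name under runs/segment/
)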

Understanding the Results

After the training is done you will see a line, similar to this, at the end of the output:

Results saved to runs/segment/train60

Let’s take a look at some of the results found here:

Validation labels

from IPython.display import Image as show_image
show_image(filename="runs/segment/train60/val_batch0_labels.jpg")
part of the validation set labels

Here we can see the ground truth labels drawn on part of the validation set. These should be almost perfectly aligned with the objects. If the labels do not cover the objects well, it is highly likely that your labeling is incorrect.

Predicted validation labels

show_image(filename="runs/segment/train60/val_batch0_pred.jpg")
validation set predictions

Here we can see the predictions the trained model made on part of the validation set (the same part we saw above). This can give you a feeling of how well the model performs. Note that in order to create this image a confidence threshold has to be chosen; the threshold used here is 0.5, which is not always the optimal one (we will discuss this later).

Precision curve

To understand this chart and the next ones, you need to be familiar with the concepts of precision and recall. Here is a good explanation of how they work.

show_image(filename="runs/segment/train60/MaskP_curve.png")
precision/confidence threshold curve

Every object detected by the model comes with a confidence score. If it is important to you to be as sure as possible when declaring “this is a circle”, you will keep only detections with high confidence (a high confidence threshold). Of course, this comes with a trade-off: you may miss some “circles”. On the other hand, if you want to “catch” as many “circles” as possible, accepting that some of them are not really “circles”, you will keep detections with both low and high confidence (a low confidence threshold).

The above chart (and the chart below) helps you decide which confidence threshold to use. In our case, we can see that for any threshold higher than 0.128 we get 100% precision, which means every predicted object really is a circle (no false positives).

Note that because we are actually doing a segmentation task, there is another important threshold to worry about: IoU (intersection over union). If you are not familiar with it, you can read about it here. For this chart, an IoU threshold of 0.5 is used.

Recall curve

show_image(filename="runs/segment/train60/MaskR_curve.png")
recall/confidence threshold curve

Here you can see the recall chart: as the confidence threshold goes up, the recall goes down, which means you “catch” fewer “circles”.

Here you can also see why using the 0.5 confidence threshold is a bad idea in this case. For a 0.5 threshold you get only about 90% recall. However, the precision curve showed that for any threshold above 0.128 we get 100% precision, so there is no need to go up to 0.5; we can safely use a 0.128 threshold and get both 100% precision and almost 100% recall 🙂

Precision-Recall curve

Here is a good explanation of the precision-recall curve.

show_image(filename="runs/segment/train60/MaskPR_curve.png")
precision-recall curve

Here we can clearly see the conclusion we reached before: for this model, we can get to almost 100% precision and 100% recall.

The disadvantage of this chart is that it does not show which threshold we should use, which is why we still need the charts above.

Loss over time

show_image(filename="runs/segment/train60/results.png")
loss over time

Here you can see how the different losses change during the training, and how they behave on the validation set after each epoch.

There is a lot to say about the losses and the conclusions you can draw from these charts; however, that is out of the scope of this post. I just wanted to point out that you can find them here 🙂

Using the trained model

Another thing that can be found in the result directory is the model itself. Here’s how to use the model on new images:

my_model = YOLO('runs/segment/train60/weights/best.pt')
results = list(my_model('datasets/test/images/img_5.png', conf=0.128))
result = results[0]

The results list contains one entry per input image. Because we passed a single image here, we take the first (and only) list item.

You can see that I passed the best confidence threshold value we found earlier (0.128).
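If you pass a directory (or a list of images) instead of a single file, you get one result per image. A short sketch, assuming the test images created earlier are still under datasets/test/images:

# One Results object per image in the directory
all_results = list(my_model('datasets/test/images', conf=0.128))
for r in all_results:
    # masks is None for images where nothing was detected
    if r.masks is not None:
        print(len(r.masks.segments), 'object(s) detected')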

There are two ways to get the actual placement of the detected object in the image. Choosing the right method depends on what you intend to do with the results. I will show both ways.

result.masks.segments
[array([[    0.10156,     0.34375],
[ 0.09375, 0.35156],
[ 0.09375, 0.35937],
[ 0.078125, 0.375],
[ 0.070312, 0.375],
[ 0.0625, 0.38281],
[ 0.38281, 0.71094],
[ 0.39062, 0.71094],
[ 0.39844, 0.70312],
[ 0.39844, 0.69531],
[ 0.41406, 0.67969],
[ 0.42187, 0.67969],
[ 0.44531, 0.46875],
[ 0.42969, 0.45312],
[ 0.42969, 0.41406],
[ 0.42187, 0.40625],
[ 0.41406, 0.40625],
[ 0.39844, 0.39062],
[ 0.39844, 0.38281],
[ 0.39062, 0.375],
[ 0.38281, 0.375],
[ 0.35156, 0.34375]], dtype=float32)]

This returns the bounding polygon of the object, in a format similar to the one we used when we passed in the labeled data.
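Because these coordinates are normalized exactly like the labels, converting them back to pixels is just a matter of multiplying by the image size. A minimal sketch that draws the predicted polygon on the original image (the image path is the one we predicted on above):

from PIL import Image, ImageDraw

img = Image.open('datasets/test/images/img_5.png').convert('RGB')
w, h = img.size

# The normalized (x, y) polygon of the first (and only) detected object
polygon = [(x * w, y * h) for x, y in result.masks.segments[0]]

draw = ImageDraw.Draw(img)
draw.polygon(polygon, outline=(255, 0, 0))
img.show()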

And the second way:

result.masks.masks
tensor([[[0., 0., 0.,  ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]])

This returns a tensor of shape (1, 128, 128) that represents a mask over all the pixels in the image. Pixels that are part of the object receive 1 and background pixels receive 0. Note that the mask is produced at the model's processing resolution, which is why it is 128×128 rather than 120×120; you may need to resize it back to the original image size.

Let’s see what the mask looks like:

import torchvision.transforms as T
T.ToPILImage()(result.masks.masks).show()
predicted segmentation for image

This was the original image:

original image

Not perfect, but good enough for many applications, and the IoU is definitely higher than 0.5.
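If you want to verify that claim, you can compute the IoU yourself by comparing the predicted mask with the ground truth (which, for this toy dataset, is simply the set of white pixels in the original image). A rough sketch, assuming the predicted mask only needs to be resized from the model's resolution back to the 120×120 image and binarized; exact tensor shapes may differ between ultralytics versions:

import numpy as np
import torchvision.transforms as T
from PIL import Image

# Ground truth mask: the white pixels of the original image
gt = np.asarray(Image.open('datasets/test/images/img_5.png')) > 0

# Predicted mask: convert the (1, H, W) tensor to an image, resize, binarize
pred_img = T.ToPILImage()(result.masks.masks).resize((gt.shape[1], gt.shape[0]))
pred = np.asarray(pred_img) > 127

iou = np.logical_and(gt, pred).sum() / np.logical_or(gt, pred).sum()
print('IoU:', iou)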

