
Color Segmentation with K-means Clustering | by Lihi Gur Arie, PhD | Dec, 2022



A detailed guide to identifying and quantifying objects in an image based on their color, using contours and K-means clustering.

Introduction

Color segmentation is a technique used in computer vision to identify and distinguish different objects or regions in an image based on their colors. Clustering algorithms can automatically group similar colors together, without the need to specify threshold values for each color. This can be useful when working with images that have a large range of colors, or when the exact threshold values are not known in advance.

In this tutorial, we will explore how to use the K-means clustering algorithm to perform color segmentation and count the number of objects of each color. We will use an image from the “bubble shooter” game as an example, find and filter the bubble objects by their contours, and apply the K-means algorithm to group together bubbles with similar colors. This will allow us to count and extract masks of bubbles with similar colors for further downstream applications. We will use the OpenCV and scikit-learn libraries for image segmentation and color clustering.

Extracting binary mask with thresholding

The first step is to extract all bubbles from the background. For that, we will first convert the image to grayscale with the cv2.cvtColor() function, and then use cv2.threshold() to convert it to a binary image, where the pixels are either 0 or 255. The threshold is set to 60, so all pixels below 60 are set to 0 and the others are set to 255. Since some of the bubbles slightly overlap in the binary image, we use the cv2.erode() function to separate them. Erosion is a morphological operation that reduces the size of objects in an image. It can be used to remove small white noise, as well as to separate connected objects.

import cv2
import numpy as np

image = cv2.imread(r'bubbles.jpeg', cv2.IMREAD_UNCHANGED)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
mask = cv2.erode(mask, np.ones((7, 7), np.uint8))
Left: input image. Right: binary image | Image by author

Extracting objects borders using Contours

The next step is to find objects in the binary image. We use the cv2.findContours() function on the binary image to detect the objects’ borders. A contour is defined as a continuous curve that forms the boundary of an object in an image. When the cv2.RETR_EXTERNAL flag is used, only the outermost contours are returned. The algorithm outputs a list of contours, each of which represents the boundary of a single object in the image.

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
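
Figures like the ones shown further below, with contours drawn in green, are useful for sanity-checking this step. The visualization code is not part of the snippets above; a minimal sketch, assuming the mask and contours variables defined earlier, could look like this:

# Sketch: draw all detected contours in green on a 3-channel copy of the binary mask
mask_bgr = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)    # convert to BGR so the green color is visible
cv2.drawContours(mask_bgr, contours, -1, (0, 255, 0), thickness=3)
cv2.imwrite('contours_check.png', mask_bgr)          # hypothetical output file name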

Filtering Contours and Extracting Mean Colors

To remove contours that do not represent bubbles, we will iterate over the resulting contours and select only those with a large area (greater than 3000 pixels). This will allow us to isolate the contours of the bubbles and discard any smaller objects, such as letters or parts of the background.

import pandas as pd

filtered_contours = []
df_mean_color = pd.DataFrame()
for idx, contour in enumerate(contours):
    area = int(cv2.contourArea(contour))

    # keep only large contours (area greater than 3000 pixels) - these are the bubbles
    if area > 3000:
        filtered_contours.append(contour)
        # mask of the current bubble (contour), used to compute its mean color for K-means
        masked = np.zeros_like(image[:, :, 0])
        cv2.drawContours(masked, [contour], 0, 255, -1)

        B_mean, G_mean, R_mean, _ = cv2.mean(image, mask=masked)
        df = pd.DataFrame({'B_mean': B_mean, 'G_mean': G_mean, 'R_mean': R_mean}, index=[idx])
        df_mean_color = pd.concat([df_mean_color, df])

Contours in green on a binary image, before (left) and after (right) filtering | Image by author

To find the mean color of each bubble, we will first create a mask for each bubble by drawing its contour in white on a black image. Then, we will use the cv2.mean() function to calculate the bubble’s mean Blue, Green, and Red (BGR) channel values using the original image and the bubble’s mask. The mean BGR values of each bubble are stored in a pandas DataFrame.

Clustering similar colors with K-means algorithm

Finally, we will apply the K-means clustering algorithm to group together bubbles with similar colors. We will use the mean color values of the contours as the input data for the KMeans algorithm from the sklearn library. The n_clusters hyperparameter specifies the number of clusters to be created by the algorithm. In this case, since there are 6 bubble colors, we’ll set the value to 6.

The K-means algorithm is a popular clustering method that can be used to group similar data points together. The algorithm works by taking a set of data points as input and dividing them into a specified number of clusters, with each cluster being represented by a centroid. The centroids are initialized to random positions within the data space, and the algorithm iteratively assigns each data point to the cluster represented by the closest centroid. Once all data points have been assigned to a cluster, the centroids are updated to the mean position of the data points in their cluster. This process is repeated until the centroids converge to stable positions and the data points are no longer reassigned to different clusters. By using the K-means algorithm with the mean BGR values of each bubble as input, we can group together bubbles that have similar colors.
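
scikit-learn handles all of these steps internally, but the loop described above is short enough to sketch directly. The following is a simplified illustration only (not the article’s code), assuming numpy is imported as np, X is the (N, 3) array of mean BGR values, and k is the number of clusters:

# Simplified sketch of the K-means loop: assign points to the nearest centroid, then update the centroids
rng = np.random.default_rng(0)
centroids = X[rng.choice(len(X), size=k, replace=False)]    # random initialization from the data
for _ in range(100):
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # (N, k) distances
    labels = distances.argmin(axis=1)                        # assignment step
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])   # update step (assumes no empty cluster)
    if np.allclose(new_centroids, centroids):                # stop once the centroids are stable
        break
    centroids = new_centroids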

Once the KMeans class is initialized, the fit_predict method is called to perform the clustering. The fit_predict method returns the cluster labels for each object, which are then assigned to a new ‘label’ column in the dataset. This allows us to identify which data points belong to which cluster.

from sklearn.cluster import KMeans

km = KMeans(n_clusters=6)
df_mean_color['label'] = km.fit_predict(df_mean_color)
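
As a quick check (not part of the original code), the number of bubbles assigned to each cluster can be read directly from the new label column:

# Count how many bubbles fell into each color cluster
print(df_mean_color['label'].value_counts())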

The draw_segmented_objects function is then defined to create a new masked image containing only the bubbles of a given color. This is achieved by first creating a binary mask: the contours of all bubbles with the same label are drawn in white on a black image. Then, the original image is combined with the mask using the cv2.bitwise_and() function, resulting in an image where only the bubbles with that label are visible. For convenience, the number of bubbles of each color is drawn on the image using the cv2.putText() function.

def draw_segmented_objects(image, contours, label_cnt_idx, bubbles_count):
    # binary mask containing the contours of all bubbles that share the same label
    mask = np.zeros_like(image[:, :, 0])
    cv2.drawContours(mask, [contours[i] for i in label_cnt_idx], -1, 255, -1)
    # keep only the pixels inside those contours and annotate the bubble count
    masked_image = cv2.bitwise_and(image, image, mask=mask)
    masked_image = cv2.putText(masked_image, f'{bubbles_count} bubbles', (200, 1200), cv2.FONT_HERSHEY_SIMPLEX,
                               fontScale=3, color=(255, 255, 255), thickness=10, lineType=cv2.LINE_AA)
    return masked_image

The draw_segmented_objects function is called for each group of bubbles with the same label, to generate a masked image for each color. The number of bubbles of each color can be determined by counting the number of rows in the DataFrame after it has been grouped by label.

import matplotlib.pyplot as plt

img = image.copy()
for label, df_grouped in df_mean_color.groupby('label'):
    bubbles_amount = len(df_grouped)
    masked_image = draw_segmented_objects(image, contours, df_grouped.index, bubbles_amount)
    img = cv2.hconcat([img, masked_image])

plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

The original image (left) alongside the segmented images of each color | Image by author

Concluding remarks

The use of K-means clustering for color segmentation can be a powerful tool for identifying and quantifying objects in an image based on their colors. In this tutorial, we demonstrated how to use the K-means algorithm, along with OpenCV and scikit-learn, to perform color segmentation and count the number of objects of each color in an image. This technique can be applied to a variety of scenarios where it is necessary to analyze and classify objects in an image based on their colors.

A user-friendly Jupyter notebook containing the complete code is included for your convenience.

