Tempeh Tech

RGB Channels


By Noël Jung


Contents

  1. Intro
  2. Imports
  3. Code
  4. Conclusion
In [1]:
from functools import partial
from ipywidgets import interact, IntSlider
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import graph_style # Contains styling parameters for the plots.
from project_utils import (
    standard_load_transform,
    load_image_from_path,
    zoom_in,
    scale_to_percent,
)

Intro

We have already explored how the average gray value of the images changes over fermentation time: a higher average gray value corresponds to a larger area of the image covered with mycelium. But when we convert the images to grayscale, we lose a lot of information. Would it not be better to analyze the colors directly? Here, I will discuss a simple approach. Unlike the fairly well-established gray-value analysis, this is a technique I came up with myself.
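To see what gets lost, recall that a standard grayscale conversion collapses the three color values of each pixel into a single weighted sum. The snippet below is only an illustration, not part of the analysis: the pixel color is made up, and the weights are the luma weights used by skimage.color.rgb2gray.

# Illustrative only: collapsing one (R, G, B) triple into a single gray value.
rgb_pixel = np.array([200, 180, 120])                        # made-up, chickpea-like color
gray_value = rgb_pixel @ np.array([0.2125, 0.7154, 0.0721])  # luma weights (rgb2gray)
print(gray_value)  # ~180; which channel contributed how much is no longer recoverable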

Imports

We load the data and add a few new columns.

In [2]:
IMAGE_DIR = r"../runV7"
CSV_FILE = r"../CSV/runV7.csv"
EX_ID = "V7"
display_only = True

ferm_data = standard_load_transform(CSV_FILE, new_columns=["h_passed", "min_passed"])
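standard_load_transform and the new columns come from project_utils, so their implementation is not shown here. Judging from how h_passed and min_passed are used later (hours and minutes since the start of the run), the helper presumably does something along the lines of the sketch below; the "timestamp" column name is an assumption, not the project's actual schema.

# Hypothetical sketch of what standard_load_transform might do (assumed CSV
# schema with a "timestamp" column; not the project's actual implementation).
import pandas as pd

def sketch_load_transform(csv_file):
    df = pd.read_csv(csv_file, parse_dates=["timestamp"])  # assumed column name
    elapsed = df["timestamp"] - df["timestamp"].iloc[0]    # time since the first image
    df["min_passed"] = elapsed.dt.total_seconds() / 60     # minutes since start
    df["h_passed"] = df["min_passed"] / 60                 # hours since start
    return df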
In [3]:
# Define a default image loader that applies a zoom-in transformation.
default_im_loader = partial(load_image_from_path,
    dir_path=IMAGE_DIR, as_gray=False,
    img_transformers=[(zoom_in, (1050, 3300, 1230, 2000))])


# Overwrite matplotlib default rcParams with custom styles.
for param, value in mpl.rcParamsDefault.items():
    mpl.rcParams[param] = graph_style.style.get(param, value)

Code

My idea for calculating a progress curve from color images is to track how the area occupied by chickpeas disappears from the images over time. For this, we need to identify the color of the peas. Let's look at an early image with many visible chickpeas. Color images typically consist of three channels: red, green, and blue. In other words, the color of each pixel is defined by three numeric values, which indicate its degree of "redness", "greenness", and "blueness", respectively. We can look at each of these color channels individually.
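As a minimal sketch of what that looks like in code: an image loaded as a NumPy array has shape (height, width, 3), and the last axis holds the red, green, and blue values of each pixel. The numbers below are made up.

# Minimal sketch: a 2 x 2 "image" with one (R, G, B) triple per pixel.
tiny_image = np.array([
    [[200, 180, 120], [195, 175, 115]],   # two made-up, chickpea-like pixels
    [[230, 230, 225], [ 20,  20,  25]],   # a whitish and a dark pixel
], dtype=np.uint8)

print(tiny_image.shape)              # (2, 2, 3): height, width, channels
print(tiny_image[0, 0])              # the (R, G, B) values of the top-left pixel
red_channel = tiny_image[:, :, 0]    # 2 x 2 array containing only the red values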
In [4]:
# Load an early datapoint.
early_datapoint = ferm_data.iloc[50]
early_image = default_im_loader(early_datapoint["im"])

# Create a figure showing the activation of each color channel.
def plot_image_channels(image, datapoint):
    # Create a figure with subplots.
    fig, axs = plt.subplots(3, 1, figsize=(20, 10))
    fig.subplots_adjust(left=0, right=1, wspace=-0.60, hspace=0.6)
    fig.suptitle(f"Image taken after {datapoint['h_passed']} hours",
        **graph_style.super_title_style, y=1)

    # Plot each color channel of the image in a separate subplot.
    for ax, cmap, (channel_index, channel) in zip(axs.flatten(),
        ["Reds", "Greens", "Blues"], enumerate(["Red", "Green", "Blue"])):

        ax.set_title(label=f"{channel} channel", **graph_style.title_style)
        ax.set(xticks=[], yticks=[])
        ax.imshow(image[:, :, channel_index], cmap=cmap)

plot_image_channels(early_image, early_datapoint)

Note that the colors in the images above are somewhat artificial. One could just as well represent the activation of each channel on a scale from black (low values) to white (high values). For clarity, we instead use scales from white (low values) to dark red, green, or blue (high values). Areas that are light in all three channels here, like the background behind the tempeh bag, therefore have low values in every channel and appear black in the composite RGB image. The individual chickpeas are clearly visible in the red and green channels; the blue channel is activated to a lesser degree.
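If you want to verify that the "Reds", "Greens", and "Blues" colormaps really run from near white at the low end to a dark, saturated color at the high end, you can query the colormap objects directly. This is just a sanity check, not part of the analysis (mpl.colormaps needs a reasonably recent Matplotlib).

# Sanity check of the colormap ends: 0.0 maps to a near-white RGBA tuple,
# 1.0 maps to a dark, saturated color.
for name in ["Reds", "Greens", "Blues"]:
    cmap = mpl.colormaps[name]
    print(name, "low:", cmap(0.0), "high:", cmap(1.0))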

In [5]:
# Repeat for later datapoint.
late_datapoint = ferm_data.iloc[200]
late_image = default_im_loader(late_datapoint["im"])
plot_image_channels(late_image, late_datapoint)

In the late image, all three color channels have high values, which translates into white pixels in the composite image. As we know, the mycelium is indeed white. Compared with the early image, the blue channel has increased the most.
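One way to back this observation up with numbers is to compare the per-channel means of the early and the late image. The exact values depend on the crop and lighting, so treat this as a quick check rather than part of the analysis.

# Quick check: mean value of each color channel in the early and the late image.
for label, im in [("early", early_image), ("late", late_image)]:
    means = im[:, :, :3].mean(axis=(0, 1))   # mean R, G, B over all pixels
    print(f"{label}: R={means[0]:.0f}, G={means[1]:.0f}, B={means[2]:.0f}")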

Let's take a look at the (combined) RGB images.

In [6]:
# Plot the RGB image of the early and late datapoint.
fig, axs = plt.subplots(2, 1, figsize=(6, 8))
fig.subplots_adjust(left=0, right=1, wspace=1, hspace=0.05)

axs[0].imshow(early_image)
axs[0].set_title("8 h", **graph_style.title_style)
axs[1].imshow(late_image)
axs[1].set_title("34 h", **graph_style.title_style)

for ax in axs:
    ax.axis('off')

The early image shows only chickpeas, while the late one is mostly mycelium.

My idea now is to define color filters manually: for each channel, we keep only the pixels whose value lies within a chosen range. In the interactive view below, yellow marks the pixels currently retained by the filter (the sliders only work when the notebook is run interactively, not in the exported HTML). We want the filter to include all pixels that show chickpeas.
In [7]:
# Function to apply filters interactively.
# Only works in interactive mode, not in exported HTML.
def apply_channel_filters(im1, dp1, im2, dp2):
    """
    Interactively apply filters to each channel of two images and display as subplots.
    """
    def update_filter(red_min=0, red_max=255, green_min=0, green_max=255, blue_min=0, blue_max=255):
        # Filter the images based on the selected channel values.
        def filter_image(image):
            red = (image[:, :, 0] >= red_min) & (image[:, :, 0] <= red_max)
            green = (image[:, :, 1] >= green_min) & (image[:, :, 1] <= green_max)
            blue = (image[:, :, 2] >= blue_min) & (image[:, :, 2] <= blue_max)
            return red & green & blue

        # Apply the filter to both images.
        mask1 = filter_image(im1)
        mask2 = filter_image(im2)

        # Create a new figure with subplots to display the filtered images.
        fig, axs = plt.subplots(2, 1, figsize=(10, 5))
        images = [mask1, mask2]
        titles = [f"{dp1['h_passed']} hours", f"{dp2['h_passed']} hours"]
        for ax, im, title in zip(axs, images, titles):
            ax.imshow(im, cmap="inferno")
            ax.set_title(title, **graph_style.title_style)
            ax.axis('off')
        fig.tight_layout()

    # Create interactive sliders for each color channel.
    interact(update_filter,
        red_min = IntSlider(min=0, max=255, step=1, value=100, description="Red Min"),
        red_max = IntSlider(min=0, max=255, step=1, value=255, description="Red Max"),
        green_min = IntSlider(min=0, max=255, step=1, value=100, description="Green Min"),
        green_max = IntSlider(min=0, max=255, step=1, value=255, description="Green Max"),
        blue_min = IntSlider(min=0, max=255, step=1, value=75, description="Blue Min"),
        blue_max = IntSlider(min=0, max=255, step=1, value=180, description="Blue Max"))

apply_channel_filters(early_image, early_datapoint, late_image, late_datapoint)
The chosen filter ranges are:

  • Red: 100 ≤ x ≤ 255
  • Green: 100 ≤ y ≤ 255
  • Blue: 75 ≤ z ≤ 180

Here x, y, and z stand for the red, green, and blue values of a pixel; the ranges bracket the approximate channel values of chickpea-colored pixels in the images. Certainly, you never get a perfectly clean separation, but I think these thresholds work quite well. A static version of the filter is shown below; afterwards, we count the pixels of that color at each time point.
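Because the sliders above only work when the notebook is run interactively, here is a small static sketch that applies the same thresholds and displays the resulting masks, mirroring the widget at its chosen slider positions.

# Static version of the interactive filter: build the boolean mask with the
# chosen thresholds and show it for the early and the late image.
def chickpea_mask(image, r=(100, 255), g=(100, 255), b=(75, 180)):
    red = (image[:, :, 0] >= r[0]) & (image[:, :, 0] <= r[1])
    green = (image[:, :, 1] >= g[0]) & (image[:, :, 1] <= g[1])
    blue = (image[:, :, 2] >= b[0]) & (image[:, :, 2] <= b[1])
    return red & green & blue

fig, axs = plt.subplots(2, 1, figsize=(10, 5))
for ax, im, dp in zip(axs, [early_image, late_image], [early_datapoint, late_datapoint]):
    ax.imshow(chickpea_mask(im), cmap="inferno")
    ax.set_title(f"{dp['h_passed']} hours", **graph_style.title_style)
    ax.axis('off')
fig.tight_layout()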
In [8]:
# List selected filters
red_min, red_max = 100, 255
green_min, green_max = 100, 255
blue_min, blue_max = 75, 180

# Apply the selected filters to the early and late images.
def count_filtered_pixels(row):
    """
    Count the number of pixels in the image that fall within the specified RGB filter range.
    """
    im = default_im_loader(row["im"])

    red_mask = (im[:, :, 0] >= red_min) & (im[:, :, 0] <= red_max)
    green_mask = (im[:, :, 1] >= green_min) & (im[:, :, 1] <= green_max)
    blue_mask = (im[:, :, 2] >= blue_min) & (im[:, :, 2] <= blue_max)

    filtered_pixels = np.sum(red_mask & green_mask & blue_mask)
    return filtered_pixels

ferm_data["filtered_pixels"] = ferm_data.apply(count_filtered_pixels, axis=1)
ferm_data.to_csv(f"../OutputCSVs/{EX_ID}-RGB-filter.csv")

We could plot the raw pixel count to get the growth curve, but the absolute number of pixels depends on factors such as the camera resolution and the chosen crop. Let's use a percentage scale instead.
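scale_to_percent comes from project_utils, so its implementation is not shown here. Judging from the y-axis label of the plot ("% of max. pixel count"), it presumably rescales the series relative to its maximum, roughly as in this sketch (an assumption, not the actual code).

# Hypothetical sketch of a percentage scaling (assumed behavior of scale_to_percent):
# divide by the maximum so that the largest value becomes 100 %.
def sketch_scale_to_percent(values):
    values = np.asarray(values, dtype=float)
    return values / values.max() * 100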

In [9]:
# Plot the growth curve.
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(ferm_data["min_passed"]/60,
    scale_to_percent(ferm_data["filtered_pixels"]),
    **graph_style.lineplot_kwargs,
    color=graph_style.colors_pomegranate_var[0])

# Apply graph styles.
ax.set_title("Scaled development of chickpea color", **graph_style.title_style)
ax.set_xlabel(xlabel="Time in hours", **graph_style.axes_style)
ax.set_ylabel(ylabel=r"% of max. pixel count", **graph_style.axes_style)
ax.set(xlim=(0, ferm_data["min_passed"].iloc[-1]/60),
    ylim=(0, 101))
ax.tick_params(**graph_style.tick_style)

Conclusion

In this notebook, we used RGB channel filtering to track how chickpea-colored pixels disappear from the images over the fermentation time, yielding a simple progress curve. As mentioned in the intro, this method is my own rather than an established technique. In the next notebook of this series, we will explore scikit-image's segmentation API in more detail.