Multispectral auto-encoder

I occasionally run into articles about multispectral and hyperspectral imaging. They always look interesting to me, but the topic stays abstract since I don't have a multispectral camera to play around with.

There are some datasets online, especially of satellite data. But the resolution of those isn't that fun to play with; multispectral images are at their most interesting when you can see lots of detail.

I found this video on the YouTube channel of “Spectral Devices Inc.”, a company that sells multispectral cameras. It is an 8-channel video of a human eye.

I read the frames of the video into Python using the subprocess module and ffmpeg. The command I used is the following:

ffmpeg -i video.mp4 -f image2pipe -pix_fmt rgb24 -vcodec rawvideo -
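Launched from Python with subprocess.Popen, that looks roughly like this (a sketch; video.mp4 stands in for whatever the downloaded file is called):

import subprocess

# Start ffmpeg and pipe the decoded raw RGB frames to stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "video.mp4",
     "-f", "image2pipe", "-pix_fmt", "rgb24",
     "-vcodec", "rawvideo", "-"],
    stdout=subprocess.PIPE,
)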

Reading PIL images from this pipe can be done as follows:

from PIL import Image

raw = proc.stdout.read(1920 * 1080 * 3)  # one raw 1920x1080 RGB frame
img = Image.frombytes("RGB", (1920, 1080), raw)
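To walk through the whole video, that read can be wrapped in a small loop that stops once the pipe runs dry. A minimal sketch, assuming the same 1920x1080 frame size:

FRAME_BYTES = 1920 * 1080 * 3  # bytes per raw RGB frame

def iter_frames(proc):
    """Yield one PIL image per frame until ffmpeg's output ends."""
    while True:
        raw = proc.stdout.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            break  # end of stream (or a truncated last frame)
        yield Image.frombytes("RGB", (1920, 1080), raw)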

To view the 8-channel video on my RGB screen, I need to convert it to three channels. But to take advantage of the extra data, I want those three channels to carry as much information from the full set of eight channels as possible.

There are many ways to do this, but I wanted to use an auto-encoder neural network. Below is the architecture I used.

import keras

# Encoder: squeeze the 8 spectral channels down to 3 values per pixel.
# The sigmoid keeps the bottleneck in [0, 1] so it can be shown as RGB.
encoder = keras.Sequential([
    keras.layers.Dense(8, activation="swish"),
    keras.layers.Dense(32, activation="swish"),
    keras.layers.Dense(3, activation="sigmoid"),
])

# Decoder: reconstruct the original 8 channels from the 3-value code.
decoder = keras.Sequential([
    keras.layers.Dense(3, activation="swish"),
    keras.layers.Dense(32, activation="swish"),
    keras.layers.Dense(8, activation="swish"),
])

# Chain the two and train end-to-end to reproduce the input spectra.
autoencoder = keras.Sequential([encoder, decoder])

autoencoder.compile(optimizer="adam", loss="mse")
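Training happens per pixel: every pixel is an 8-dimensional sample, and the network learns to reproduce it through the 3-value bottleneck. Below is a rough sketch of training and then converting one frame to RGB; `spectra`, `frame_spectra`, `height` and `width` are placeholders for however the 8 channels get extracted from the frames, with values scaled to [0, 1].

import numpy as np

# `spectra`: float array of shape (num_pixels, 8), values in [0, 1].
# `frame_spectra`: the pixels of a single frame in the same layout.
autoencoder.fit(spectra, spectra, epochs=10, batch_size=4096)

# The trained encoder maps each 8-channel pixel to 3 values in [0, 1],
# which can be shown directly as an RGB image.
latent = encoder.predict(frame_spectra)
rgb = (latent.reshape(height, width, 3) * 255).astype(np.uint8)
preview = Image.fromarray(rgb)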

The results were quite good: a ton of detail was visible in the resulting images that simply would not be there in regular pictures. I would love to have one of those cameras for myself to play with.

Perhaps a future project to build one.