Software Defined Radio software usually comes with a waterfall view (spectrogram) that lets the user quickly inspect the spectrum. The spectrogram plots the amplitude of frequencies over time. This means, by carefully outputting a signal consisting of multiple frequencies, we can draw shapes and pictures on the spectrogram.
A common NFM walkie-talkie is too limited for this, but a Software Defined Radio that can transmit arbitrary I/Q samples will do the job perfectly. Fortunately, I have a HackRF One at hand, so I gave this a try.
In order to transmit from the HackRF, I will be using the hackrf_transfer command. This means all I'll need to do in my modulator is output I/Q samples to stdout. Let's make a quick helper method to do this.
Writing samples
Traditionally, DSP samples are kept between -1 and 1, so we will use that range internally. To hand them to hackrf_transfer, we need to encode them as signed 8-bit integers: the format accepted by the program is alternating 8-bit signed I and Q samples.
```python
import struct
import os

# Write raw bytes to stdout; hackrf_transfer will read them from a file later.
dsp = os.fdopen(1, 'wb')

def write(i, q):
    # Scale [-1, 1] floats to signed 8-bit integers.
    i = int(i * 127)
    q = int(q * 127)
    data = struct.pack('bb', i, q)
    dsp.write(data)
```
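As a standalone example of what this packing produces, a half-scale I sample and a full-scale negative Q sample come out as two signed bytes:

```python
import struct

i, q = int(0.5 * 127), int(-1.0 * 127)  # 63 and -127
data = struct.pack('bb', i, q)
print(data)  # b'?\x81'
```

Note that struct.pack('bb', ...) will raise an error if a value falls outside the signed byte range of -128 to 127, which is why the samples must stay within [-1, 1].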
Configuration
Let's also define some constants: the output sample rate, the maximum frequency deviation, and how long it should take to transmit the image. The frequency deviation determines how wide our signal will be on the spectrum, and the transmission time determines the height. You should play around with these values until you get a clear image.
```python
RATE = 4_000_000       # 4 MS/s sample rate
TRANSMIT_TIME = 2      # 2 seconds
FREQ_DEV = 15_000      # 15 kHz
```
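As a quick sanity check, we can estimate how big the output file will be: at the 4 MS/s rate we pass to hackrf_transfer later, with two signed bytes per sample (one each for I and Q), two seconds of transmission comes out to about 16 MB. A standalone sketch:

```python
RATE = 4_000_000        # samples per second
TRANSMIT_TIME = 2       # seconds
BYTES_PER_SAMPLE = 2    # one signed byte each for I and Q

total_samples = RATE * TRANSMIT_TIME
total_bytes = total_samples * BYTES_PER_SAMPLE
print(total_samples)  # 8000000
print(total_bytes)    # 16000000, about 16 MB
```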
Loading the image
With the configuration out of the way, we are now ready to produce the samples. The first thing we need to do is read an image file. To do this, I will be using the Pillow library. Let's get the image file path from the command line arguments, load the image and convert it to a black and white bitmap.
```python
from PIL import Image
import sys

im = Image.open(sys.argv[1])
im = im.convert('1')  # '1' means a 1-bit image; convert returns a new image
```
Outputting the image
We need to output the image bottom-to-top because the spectrogram will put the signals received earlier at the bottom, as it scrolls like a waterfall.
```python
t = 0
for y in reversed(range(im.height)):
    target = t + TRANSMIT_TIME / im.height
    while t < target:
        # Output one row's worth of samples...
        pass
```
For every line, we pick a target time, and we keep outputting samples for the current line until t reaches target. Each line gets TRANSMIT_TIME / im.height seconds.
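In sample terms, each row lasts RATE * TRANSMIT_TIME / im.height samples. A standalone sketch, with a hypothetical 100-pixel-tall image:

```python
RATE = 4_000_000
TRANSMIT_TIME = 2
HEIGHT = 100  # hypothetical image height

samples_per_line = RATE * TRANSMIT_TIME // HEIGHT
print(samples_per_line)  # 80000 samples per image row
```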
First of all, let's cache the pixels of the current line, since Python is not very fast.
```python
line = [im.getpixel((x, y)) for x in range(im.width)]
```
When outputting the line, we'll treat each pixel of the image as a frequency offset in our output. So for an image with a width of 300 and a frequency deviation of 5000 Hz, x = 0 is offset by 0 Hz, x = 150 is offset by 2500 Hz, and x = 299 is offset by just under 5000 Hz.
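To make the mapping concrete, here's a standalone check of the x / width * FREQ_DEV formula with the example numbers above:

```python
WIDTH = 300      # example image width
FREQ_DEV = 5000  # example deviation in Hz

def offset(x):
    # Map a pixel column to a frequency offset in Hz.
    return x / WIDTH * FREQ_DEV

print(offset(0))    # 0.0
print(offset(150))  # 2500.0
print(offset(299))  # ~4983.3, just under the full deviation
```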
Using the mapping described above, let's accumulate I and Q values for all the pixels.
```python
import math

i = 0
q = 0
for x, pix in enumerate(line):
    if not pix:
        continue  # black pixels contribute nothing
    offs = x / im.width
    offs *= FREQ_DEV
    # Add a tone at this pixel's frequency offset, dampened to avoid clipping.
    i += math.cos(2 * math.pi * offs * t) * 0.01
    q += math.sin(2 * math.pi * offs * t) * 0.01
write(i, q)
t += 1.0 / RATE
```
We can represent a wave of a particular frequency over time using the well-known phase formula 2 * pi * freq * time. Since I is the cosine of that phase and Q is the sine, our final values become cos(2 * pi * f * t) and sin(2 * pi * f * t).
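Equivalently, an I/Q pair at frequency f is one sample of the complex exponential e^(j·2π·f·t), with I as the real part and Q as the imaginary part. A quick standalone check with Python's cmath (the values of f and t here are arbitrary):

```python
import cmath
import math

f = 1000.0    # arbitrary tone frequency in Hz
t = 0.000123  # arbitrary time instant in seconds

z = cmath.exp(2j * math.pi * f * t)
i = math.cos(2 * math.pi * f * t)
q = math.sin(2 * math.pi * f * t)

print(abs(z.real - i) < 1e-12)  # True
print(abs(z.imag - q) < 1e-12)  # True
```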
We don't output anything for pixels whose value is 0. We dampen the signals we add to I and Q by multiplying them by 0.01 in order to prevent excessive clipping. This approach has some downsides, since the signal might still clip for certain images, but for a short demo where we can pick the images and tune the dampening factor it won't be a problem.
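One simple safeguard (a sketch, not part of the original code) is to clamp the accumulated values to [-1, 1] before calling write, since struct.pack('bb', ...) raises an error for values outside a signed byte's range:

```python
def clamp(v, lo=-1.0, hi=1.0):
    # Hard-limit a sample so int(v * 127) always fits in a signed byte.
    return max(lo, min(hi, v))

# Before writing: write(clamp(i), clamp(q))
print(clamp(1.7))   # 1.0
print(clamp(-2.3))  # -1.0
print(clamp(0.5))   # 0.5
```

Hard clamping distorts the signal, but it is better than crashing mid-transmission.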
Now let's combine the code snippets so far and try to render a signal. I recommend rendering to a file rather than transmitting in real time, and using PyPy rather than CPython, because Python is slow.
```
$ pypy3 ./pic2spec.py btc.png > btc.raw
... wait a while ...
$ hackrf_transfer -f 433000000 -t btc.raw -s 4000000 -a 1
```
Results
Here's a video of what our signal looks like on gqrx.
Code
Here's the full code, if you want to try this on your own.
```python
#!/usr/bin/env python3
import struct
import os
import sys
import math

from PIL import Image

dsp = os.fdopen(1, "wb")

def write(i, q):
    # Scale [-1, 1] floats to signed 8-bit integers and write them to stdout.
    i = int(i * 127)
    q = int(q * 127)
    data = struct.pack("bb", i, q)
    dsp.write(data)

RATE = 4_000_000       # 4 MS/s sample rate
TRANSMIT_TIME = 2      # 2 seconds
FREQ_DEV = 15_000      # 15 kHz

im = Image.open(sys.argv[1])
im = im.convert("1")   # "1" means a 1-bit image

t = 0
for y in reversed(range(im.height)):
    target = t + TRANSMIT_TIME / im.height
    line = [im.getpixel((x, y)) for x in range(im.width)]
    while t < target:
        i = 0
        q = 0
        for x, pix in enumerate(line):
            if not pix:
                continue
            offs = x / im.width
            offs *= FREQ_DEV
            i += math.cos(2 * math.pi * offs * t) * 0.01
            q += math.sin(2 * math.pi * offs * t) * 0.01
        write(i, q)
        t += 1.0 / RATE
```