Automate resolution measurement

TL;DR: Disable legacy camera support with raspi-config and run sudo apt install python3-picamera2. Then try again (with Thonny or any other IDE).

@DougKoebler I had suggested using picamera2 without thinking of the number of steps you’d have to take to use it. You’ll have to disable legacy camera support using raspi-config, and while it’s disabled you will not be able to use the OpenFlexure software (at least I don’t think so; I’ve never tried it). Then you’ll need to install picamera2: try sudo apt install python3-picamera2. I don’t know much about the OpenFlexure Debian distribution, so you may run into snags here that I’m not aware of.

Another option would be to use picamera. I just gave this a shot (pun intended?) and found that getting the bayer data is a little difficult. Happy to get some insight from the OpenFlexure folks here if there is some. What I found is that the bayer data is not actually included in the EXIF data when all you’re doing is setting bayer=True in the picamera interface. Perhaps the OpenFlexure software places it in the EXIF section; I haven’t checked. When using just the picamera interface, the bayer data is (or at least appears to be) appended to the end of the JPEG. I wrote a script to search for the EOI (End of Image) marker and found that immediately past it is a “BRCM” magic number. This indicates to me that the bayer data is stored in some kind of proprietary format. Normal raw bayer data (like what you get from picamera2) does not usually contain anything other than the raw color data. So I’m not actually sure how to proceed with extracting bayer data from picamera. Maybe someone has reverse engineered the format.
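In case it’s useful, here is roughly the kind of scan I mean (find_brcm_offset is an illustrative name, not a library function; it looks for the JPEG EOI marker and then for the BRCM magic after it):

```python
def find_brcm_offset(jpeg_bytes: bytes) -> int:
    """Return the offset of the 'BRCM' magic that follows the JPEG
    EOI marker (FF D9), or -1 if either marker is not present."""
    eoi = jpeg_bytes.find(b"\xff\xd9")
    if eoi == -1:
        return -1
    return jpeg_bytes.find(b"BRCM", eoi + 2)

# Tiny synthetic example: JPEG-ish bytes followed by a BRCM blob.
sample = b"\xff\xd8<jpeg data>\xff\xd9" + b"BRCM" + b"\x00" * 16
offset = find_brcm_offset(sample)
```

On a real capture you would read the whole JPEG file into bytes first; everything from the returned offset onward is the (undocumented) Broadcom raw section.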

What I did notice was that the JPEG file I got was 13M (13 * 1024 * 1024 bytes), which includes both the bayer data and the JPEG data. A 10-bit raw bayer array with 6 bits of padding per pixel, like what picamera2 produces, takes up about 15M. My guess is that in the v1 version of picamera the data is packed into 10 bits with no padding, which would make it about 9.6M (3280 * 2464 * 10 / 8 bytes).
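A quick back-of-the-envelope check of those sizes:

```python
W, H = 3280, 2464  # IMX219 full resolution

# picamera2-style storage: each 10-bit sample padded out to 2 bytes
unpacked_bytes = W * H * 2
# tightly packed 10-bit samples, no padding
packed_bytes = W * H * 10 // 8

MiB = 1024 * 1024
unpacked_mib = round(unpacked_bytes / MiB, 1)  # about 15.4 MiB
packed_mib = round(packed_bytes / MiB, 1)      # about 9.6 MiB
```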

So you might need to try disabling legacy camera support with raspi-config for a bit. You can enable it again when you want to run the OpenFlexure software again. Unless someone here knows how to actually decode the proprietary bayer data with the picamera v1. Sorry for the long post. Initially I was going to send you a script to capture data with picamera v1, but could never get it to work due to not knowing the data format.

If there isn’t an easy way to switch to picamera2 and then back to picamera, I could just build a separate system. I am currently building the flat-top version, so I could do the tests on that system using the same lighting system.
Thanks again for your help.

Switching from picamera to picamera2 will, I think, require more than installing it. The OpenFlexure server v2 is tied to an old version of the Raspbian operating system because it is tied to picamera (picamerax).
The changes in camera handling in the newer Pi OS are quite deep, and they lead on to picamera2 as the Python interface. This is all being worked through in the complete rewrite of the OpenFlexure server for v3.

Thanks for the info! I’m currently using Debian Bookworm and switching between picamera and picamera2 is pretty seamless. I think OFM uses Debian Buster? @DougKoebler another thing you could do is get another micro SD card to store a newer version of the OS; that way you can use picamera2 and keep a working copy of the current OpenFlexure software.

I ordered another SD card, should get it on Friday.

I used a second SD card and, Taylor, I used your program above to collect the PNG image. I must be doing something wrong. The image out of OpenFlexure is 12,422KB and the collected .png is 10,547KB. The PNG image looks like grayscale. Here is the program I used:

from picamera2 import Picamera2
import numpy as np
from PIL import Image


def unpack(raw_bytes: np.ndarray):
    """
    Convert a 2D uint8 image with shape (H, W*2) to (H, W) 10-bit uint16 image.
    Each 2-byte pair encodes a 10-bit pixel.
    """
    h, w_bytes = raw_bytes.shape
    if w_bytes % 2 != 0:
        raise ValueError("Expected even number of bytes per row")

    w_pixels = w_bytes // 2
    # View as 16-bit little-endian
    raw_words = raw_bytes.reshape(h, w_pixels, 2)
    raw_10bit = (raw_words[:, :, 1].astype(np.uint16) <<
                 8) | raw_words[:, :, 0].astype(np.uint16)

    # If only 10 bits are valid, mask out the rest
    return raw_10bit & 0x03FF  # 10-bit mask


def main():
    picam2 = Picamera2()
    capture_config = picam2.create_still_configuration(
        raw={"format": 'SBGGR10'})
    picam2.configure(capture_config)
    picam2.start()
    print(picam2.stream_configuration("raw"))
    raw_data = picam2.capture_array("raw")
    data = unpack(raw_data)

    # This next line isn't really necessary, but helps when viewing the image
    # from a file explorer. When writing 16-bit PNGs, full brightness is expected
    # to be at 65535. With 10-bit image data, the max is 1023. To avoid the images
    # appearing extremely dark, this line scales the pixel intensity to 65535.
    # However, the conversion will cause rounding errors in the least significant
    # bit. To be considered actual "raw" data, this line should be removed. If the
    # rounding errors in the LSB are acceptable, then this can be a practical choice.
    data = (data.astype(np.uint32) * 65535 // 1023).astype(np.uint16)

    img = Image.fromarray(data, mode='I;16')
    img.save('bayer_preserved.png', format='PNG')


if __name__ == '__main__':
    main()


Here is the jpeg image

I think the answer is in the name bayer_preserved. In the code the raw image is captured, which has pixels in a Blue, Green, Green, Red pattern. The 8MP camera has not got all four colours at each position; there are only 8 million sensors: 2 million blue, 2 million red and 4 million green. Despite working with colour cameras for years, I only realised this last week: all colour cameras actually have only 1/4 of the pixels they claim with full colour information. The rest is interpolated.

The code seems to take the raw data from the camera and process it as a 2D array of the raw pixels. This 2D array is interpreted as a greyscale image in the PNG. A colour image would have a 3rd dimension for the three colours, which you get by demosaicing the Bayer pattern. Even for a true greyscale image you need to average the BGGR blocks in your array.
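As a sketch of that last point (averaging each BGGR block to get a true greyscale value), assuming the raw mosaic is a 2D NumPy array; bayer_to_grey is just an illustrative name:

```python
import numpy as np

def bayer_to_grey(bayer: np.ndarray) -> np.ndarray:
    """Average each 2x2 BGGR block into one greyscale pixel.
    The output is half the width and half the height of the input."""
    h, w = bayer.shape
    blocks = bayer[:h - h % 2, :w - w % 2].astype(np.float64)
    return (blocks[0::2, 0::2] + blocks[0::2, 1::2] +
            blocks[1::2, 0::2] + blocks[1::2, 1::2]) / 4.0

# Toy 4x4 array: every 2x2 block averages to 2.5
toy = np.array([[1, 2, 1, 2],
                [3, 4, 3, 4],
                [1, 2, 1, 2],
                [3, 4, 3, 4]], dtype=np.uint16)
grey = bayer_to_grey(toy)
```

Note this weights the two green samples twice as heavily as red or blue, which is a choice, not the only possible one.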


Hey Doug,

This is expected; you’re not doing anything wrong. The reason your PNG is smaller is probably that there isn’t a lot of variation in your image. PNG does compress, but unlike JPEG it is lossless, so the size of a PNG depends on the image content, not just its dimensions. As long as it’s a PNG, you can be sure there aren’t going to be compression artifacts. PNG’s lossless DEFLATE compression handles long runs of repeating pixels (black or white) very well, which makes it easy to shrink the file size.

The reason your image is in black and white is something I explained earlier, but perhaps wasn’t clear about. @WilliamW explained it perfectly. I’ll elaborate slightly - most imaging sensors (at least the ones that I have worked with) are actually monochrome. They produce a signal every time a photon hits the pixel surface area, largely regardless of the wavelength. To get RGB information, a color filter array is installed on top of the monochrome pixels. This allows each pixel to receive only certain wavelengths of light. The most common arrangement of filters is the bayer pattern, which consists of a red pixel, a blue pixel, and two green pixels. There are other patterns, but this one is the most common and is what is used in the OpenFlexure camera. In order to go from this arrangement of colors to RGB on every pixel, you have to do what is called “demosaicing”. OpenCV has a function to do this:

import cv2
raw = cv2.imread('raw_bayer.png', cv2.IMREAD_UNCHANGED)
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
cv2.imwrite('color.png', bgr)

The reason I suggested using bayer data is that demosaicing adds information to your image that isn’t truly there - which interferes with measurements such as the ones you’d be making for resolution. It’s still nice to see the full color image, so demosaicing is useful for visual inspection, just not for precision measurements.

When it comes to using this data for resolution measurement, you’ve got a couple of options. You could use all pixels as is, but obviously the red and blue pixels are going to have different responses than the green pixels. Or you could choose one of the two arrangements of green pixels (see the color filter array pattern to see what I mean), but you’ll only get half the spatial resolution. I’d suggest the latter option. It’s debatable whether the camera is actually 3280x2464 if it’s got a color filter array anyway.

Here’s a function to export one of the four channels:

import numpy as np

def extract_bayer_channel(image: np.ndarray, channel: str) -> np.ndarray:
    if channel == 'blue':
        return image[0::2, 0::2]
    elif channel == 'green1':  # green at blue row
        return image[0::2, 1::2]
    elif channel == 'green2':  # green at red row
        return image[1::2, 0::2]
    elif channel == 'red':
        return image[1::2, 1::2]
    else:
        raise ValueError("Channel must be one of: 'red', 'green1', 'green2', 'blue'.")
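A quick sanity check of that slicing on a toy BGGR array (toy values only; on the real data the input would be the unpacked raw array):

```python
import numpy as np

# Toy 4x4 BGGR mosaic: blue pixels = 10, green pixels = 20, red pixels = 30
mosaic = np.array([[10, 20, 10, 20],
                   [20, 30, 20, 30],
                   [10, 20, 10, 20],
                   [20, 30, 20, 30]], dtype=np.uint16)

blue   = mosaic[0::2, 0::2]   # even rows, even columns
green1 = mosaic[0::2, 1::2]   # greens on the blue rows
green2 = mosaic[1::2, 0::2]   # greens on the red rows
red    = mosaic[1::2, 1::2]   # odd rows, odd columns
```

Each extracted channel is a 2x2 array here, i.e. half the width and half the height of the mosaic.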

Also, I had suggested a minor tweak to the code I posted originally. Here is the code with the changes I suggested. The result won’t be that different from what you have with the code you posted, but this version is technically more correct. The difference is that the 6-bit shift (which can be thought of as a multiplication by 64) is lossless, unlike scaling by 65535/1023 (about 64.06).

from picamera2 import Picamera2
import numpy as np
from PIL import Image


def unpack(raw_bytes: np.ndarray):
    """
    Convert a 2D uint8 image with shape (H, W*2) to (H, W) 10-bit uint16 image.
    Each 2-byte pair encodes a 10-bit pixel.
    """
    h, w_bytes = raw_bytes.shape
    if w_bytes % 2 != 0:
        raise ValueError("Expected even number of bytes per row")

    w_pixels = w_bytes // 2
    # View as 16-bit little-endian
    raw_words = raw_bytes.reshape(h, w_pixels, 2)
    raw_10bit = (raw_words[:, :, 1].astype(np.uint16) <<
                 8) | raw_words[:, :, 0].astype(np.uint16)

    # If only 10 bits are valid, mask out the rest
    return (raw_10bit & 0x03FF) << 6


def main():
    picam2 = Picamera2()
    capture_config = picam2.create_still_configuration(
        raw={"format": 'SBGGR10'})
    picam2.configure(capture_config)
    picam2.start()
    print(picam2.stream_configuration("raw"))
    raw_data = picam2.capture_array("raw")
    data = unpack(raw_data)

    img = Image.fromarray(data, mode='I;16')
    img.save('bayer_preserved.png', format='PNG')


if __name__ == '__main__':
    main()

Some further thoughts. To take these images I first used a white LED and did a Full Auto-Calibrate. The result was R = 1.6796875, B = 2.1171875. Next I replaced the white LED with a green LED without changing the calibration. My thought was that I would then get more signal into the 2 green pixels, to get more resolution. The other thought I had was to use a blue LED and do the same, but in that case I would have only the blue pixel. If the final grayscale image could select only the 2 green pixels, or only the blue pixel, would that improve the resolution?
The downside of this is that the image isn’t flat; the center region is brighter.
My other interest is to extract each individual pixel type (blue, green, green, red) and test the resolution for each.
Does any of this make sense, or should I just do a Full Auto-Calibrate with whatever LED I am testing?

@DougKoebler The white balance does not change the resolution in any way. Neither spatial nor pixel resolution changes due to white balance. The most you can really do to change the physical measurements taken by the sensor is to change the analog gain (applied before the signal is digitized by the ADC) or change the exposure time. Ideally the exposure time would change first, and you’d have your brightest pixels close to (but not at or exceeding) 1023 (or 255, if you’re using 8-bit data). If the pixels aren’t bright enough at the maximum exposure time (which is probably not the case for you), then you’d increase the gain (which also increases noise). Everything other than analog gain and exposure is a purely digital effect happening on the GPU of the Raspberry Pi and has no effect on resolution - other than possibly causing you to lose information.

Changing LEDs obviously does affect your signal. You can have a look at the datasheet for the IMX219 (figure 45) to get an idea of the spectral sensitivity.

Regarding the lack of flat field correction, that’s not really a downside in my opinion. Flat field correction is something that’s done for visualization, much like demosaicing. It certainly is desirable for that. For measurement, it is a good indicator of how far from the center of the image your lighting is uniform. Wherever it is uniform is where you’d want the edge measurement to happen.


So if I use your latest code and collect a bayer PNG, I should then be able to extract each of the 4 channels separately: blue, green1, green2 and red. Since these are direct from the camera, the visual color image is really only for show. I should see a difference in resolution, best if I light with a blue LED and use the blue channel. I will obviously test all the channels. If I combine the 2 green channels, say in ImageJ, and use a green LED, will I see an improvement in the resolution? It is 2x the number of pixels.
In the Full Auto-Calibrate, I get R and B values, and I think they change if I use different LEDs: white, blue, green or red. How do these numbers affect the visible image I see? I think it sets the ratio of the 3 RGB colors, but there is no adjustment for the green?
Finally, does the height of the lighting above the sample affect the resolution? I realize the software corrects for uneven lighting, but shouldn’t the height affect resolution?

@DougKoebler you’re asking if you’d see an improvement in the resolution using the green channels vs the blue channel? Perhaps, if the edge falls across the diagonal pair of green pixels.

Normally in white balance algorithms, which I believe are part of the auto-calibration for OpenFlexure, the color channels are normalized to the green channel (so the digital gain for green is always 1, and only the other two channels change). The red and blue coefficients affect your image as per-channel factors in a Hadamard (element-wise) multiplication:

red_out = red_in * white_balance_r
green_out = green_in * 1
blue_out = blue_in * white_balance_b
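For what it’s worth, here is a minimal NumPy sketch of that per-channel multiply; the gain values are just the R/B numbers from the calibration run mentioned earlier in the thread, used as examples:

```python
import numpy as np

# Example white-balance gains (green is the reference, so its gain is 1)
wb_r, wb_g, wb_b = 1.6796875, 1.0, 2.1171875

# Toy 2x2 demosaiced RGB image where every channel reads 100
rgb = np.full((2, 2, 3), 100, dtype=np.float64)

# Element-wise (Hadamard) multiply: each channel scaled by its gain
balanced = rgb * np.array([wb_r, wb_g, wb_b])
```

The green values pass through unchanged while red and blue are scaled up, which is exactly why white-balancing a nearly-monochromatic image mostly amplifies noise in the empty channels.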

The height of the lighting above the sample affects how strong your signal is. Ideally you’d want it 99% saturated - meaning that the brightest pixel is going to be 1022 or 254, depending on whether you’re dealing with 10-bit or 8-bit data. Anything higher than that and you can’t be sure whether or not clipping is taking place. If you have a condenser lens, there is an ideal height between the sample and the condenser lens at which you get maximum brightness. I don’t think it’s really going to affect resolution, but it will affect your signal strength.
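As a rough illustration of that check on a raw frame (the function name and the 90% threshold are just a heuristic sketch, not part of any library):

```python
import numpy as np

def exposure_status(raw: np.ndarray, max_value: int = 1023) -> str:
    """Classify a raw frame: 'clipped' if any pixel hits the maximum,
    'good' if the brightest pixel is close to (but below) it,
    'underexposed' otherwise."""
    peak = int(raw.max())
    if peak >= max_value:
        return "clipped"
    if peak >= int(0.9 * max_value):
        return "good"
    return "underexposed"

# Toy frames for the three cases
frame = np.array([[100, 1022], [500, 900]], dtype=np.uint16)
clipped = np.array([[1023, 0], [0, 0]], dtype=np.uint16)
dim = np.array([[100, 200]], dtype=np.uint16)
```

For 8-bit data you would call it with max_value=255 instead.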

@WilliamW I wonder if a modification of this microscope would be useful where the sensor is monochrome with no CFA and an RGB LED gets you the different wavelength intensities all on the same pixel surface. You’d probably have to put it in a container where you could block off ambient light though.

@DougKoebler, @tay10r,
Illumination:
The illumination numerical aperture NA affects the resolution. This means that the illumination height is important: when the illumination is focussed on the sample the light is at the maximum angle (max NA) across all of the illuminated area.

Colour:
There are some subtleties in the colour aspect.
In theory the minimum resolvable feature size is proportional to the wavelength, so blue should be best, green middling and red worst. Designing a lens to be ‘perfect’ at all wavelengths is difficult, so a real lens could be worse in blue if the design was primarily for performance in red light.
In theory also it should not make any difference if you assess ‘blue’ performance by illuminating with blue, or taking the blue channel of the image. In a microscope based on a miniature consumer camera there is mixing of colours into the wrong colour channels as you move away from the centre of the sensor. This is because the sensor is designed specifically to operate as a camera with a particular wide angle lens. More about this in Flat-Field and Colour Correction for the Raspberry Pi Camera Module | Journal of Open Hardware . We see the effect of this in the microscope images where the colours are less vibrant at the edge of the image. Because of this, if you want to look at resolution for different colours you either need to use coloured illumination, or do the colour unmixing from the paper, or only look at the colour dependence of the resolution close to the middle of the sensor.

I would do the Bayer demosaicing into an 8MP colour PNG, then possibly downsample to 2MP as there is only that level of detail in reality. I would start looking at resolution on a greyscale version of that. Then look to see any differences that there might be in the resolution for the colour channels in the central third of the sensor.
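For the greyscale version of the demosaiced image, one common approach is a weighted luma conversion; here is a minimal sketch using the Rec. 601 weights (the exact weights are a convention, and other weightings are equally valid):

```python
import numpy as np

def to_greyscale(rgb: np.ndarray) -> np.ndarray:
    """Rec. 601 luma: grey = 0.299 R + 0.587 G + 0.114 B,
    applied to an (H, W, 3) array."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# Toy 2x2 image that is pure green
rgb = np.zeros((2, 2, 3))
rgb[..., 1] = 1.0
grey = to_greyscale(rgb)
```

Note that a pure green image comes out at 0.587 of full scale, so the choice of weights does interact with which LED you use.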

Thank you for the article reference, very interesting. It looks like there might be a difference in resolution from the middle to the edges. Something to test by moving the razor over near the edge.
I did see the overlap in sensitivity between the 3 filters (blue, green and red), again causing possible resolution issues. I will test the PNG for each channel; I expect there will be a difference, and blue should be better. I assume using a blue, green or red LED will show a bigger difference? Maybe the blue channel with the blue LED.
Is it possible to turn off the red and blue pixels by setting R and B to 0 and then just use the green pixels with a green LED on the JPEG images through OpenFlexure, or will that cause a problem? The other possibility is to Full Auto-Calibrate with the green LED.
When I use the white LED and Auto-Calibrate, then use the green LED, I don’t get a flattened image, same as the article.
Finally, I will move the light up and down. I will do a Full Auto-Calibrate at each position with the white LED first.
Just looking for ideas to find the best resolution!

I used the green LED on the OpenFlexure scope, I did a Full Auto-Calibrate, then I set R = 0 and B = 0. (Don’t know if that did anything.)
I took an image of the razor blade, centered.


Next I used Taylor’s python Automated Resolution code:

The results look good: that’s 3 pix × 0.430 um/pix = 1.29 um.
I will repeat this test doing it both ways soon.


Using the white LED and then measuring the resolution


Using the Green LED Full Auto-Calibrate but not making R=0 and B=0

It does look like using the green LED and doing a Full Auto-Calibrate gives you a white image; then setting R = 0 and B = 0 gives a green image with better resolution.

Doing a Full Auto-Calibrate is only a good idea when the illumination is reasonably close to uniform in colour and brightness. If you have only green from the LED, then there should be almost nothing in the red and blue channels of the image. Trying to make them as bright as the green channel means multiplying the tiny amount of signal (which will mostly be noise) by a big correction factor. This is not helpful.
The same applies if you have a white LED but the alignment is not good and parts of the image are very dark.

For this study I would not use the auto-calibrate. You could record images without the knife edge if you want to correct for the illumination fading towards the edge. It is a small change compared with the sharp edge, so should not really make any difference.
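If you do want to apply that correction, a minimal sketch of flat-field division using a blank (no knife edge) image might look like this (flat_field_correct is a hypothetical helper, not part of the OpenFlexure software):

```python
import numpy as np

def flat_field_correct(img: np.ndarray, blank: np.ndarray) -> np.ndarray:
    """Divide an image by a blank image taken under the same illumination,
    renormalised so the overall brightness level is preserved."""
    blank = blank.astype(np.float64)
    # Guard against division by very dark blank pixels
    return img.astype(np.float64) * (blank.mean() / np.maximum(blank, 1))

# Toy example: illumination twice as bright in the left column.
# A featureless sample sees the same falloff, so correction flattens it.
blank = np.array([[2.0, 1.0], [2.0, 1.0]])
img = np.array([[2.0, 1.0], [2.0, 1.0]])
flat = flat_field_correct(img, blank)
```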

Is there a description anywhere of how the auto-calibrate works?
There are obviously some changes in intensity across the image. My interest is finding the best resolution. For now I am assuming the center of the image is best, with a single wavelength, probably blue or green. If I auto-calibrate with a white LED and then set R = 0 and B = 0, should I get the best resolution in the middle of the image with a green LED? I can then move the knife edge to the edge of the image to see the change. With the blue LED, all I can do is turn off the red, R = 0.
Any idea why it isn’t a good idea to auto-calibrate with a blue, green or red LED?

Taylor, a question. When I use your code to take a png file, I am getting what seems to be a grayscale image. I used this code

from picamera2 import Picamera2
import numpy as np
from PIL import Image


def unpack(raw_bytes: np.ndarray):
    """
    Convert a 2D uint8 image with shape (H, W*2) to (H, W) 10-bit uint16 image.
    Each 2-byte pair encodes a 10-bit pixel.
    """
    h, w_bytes = raw_bytes.shape
    if w_bytes % 2 != 0:
        raise ValueError("Expected even number of bytes per row")

    w_pixels = w_bytes // 2
    # View as 16-bit little-endian
    raw_words = raw_bytes.reshape(h, w_pixels, 2)
    raw_10bit = (raw_words[:, :, 1].astype(np.uint16) <<
                 8) | raw_words[:, :, 0].astype(np.uint16)

    # If only 10 bits are valid, mask out the rest
    return (raw_10bit & 0x03FF) << 6


def main():
    picam2 = Picamera2()
    capture_config = picam2.create_still_configuration(
        raw={"format": 'SBGGR10'})
    picam2.configure(capture_config)
    picam2.start()
    print(picam2.stream_configuration("raw"))
    raw_data = picam2.capture_array("raw")
    data = unpack(raw_data)

    img = Image.fromarray(data, mode='I;16')
    img.save('bayer_preserved.png', format='PNG')


if __name__ == '__main__':
    main()

Here is the image


My thought is to take a white image, then use ImageJ to extract the B, G, and R channels separately, then do a resolution measurement.
I tried this with an OpenFlexure image, and it seems to work. In this case I did an auto-calibrate with white light, then switched out the white light for a green LED. When I separated out the red and blue, I got the green image below, but again in grayscale.



When I do this with the png image, ImageJ indicates it’s only one channel so I am assuming there is something wrong with the png image?

Hey @DougKoebler,

Yeah black and white is expected, I mentioned this in an earlier post.
Here’s some code you can use to convert it to RGB if you’d like:

import cv2
raw = cv2.imread('raw_bayer.png', cv2.IMREAD_UNCHANGED)
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
cv2.imwrite('color.png', bgr)

The whole reason I brought up using bayer data is that anything else is going to mess up your resolution measurement (including the JPEG images you’re using from the OFM software). It does introduce some obstacles to super-resolution, but the only other option is finding a non-bayer camera. In the article on Fourier ptychography with the Raspberry Pi, they seemed to be able to use raw bayer data to achieve super-resolution (25MP from 2MP images), so it’s worth reading about.