Image resolution and colour correction

There are some really nice discussions happening in the Automate resolution measurement thread about the underlying raw data from the camera.

In it @WilliamW mentioned:

I thought it worth updating the community a little more on the progress we are making in v3 around image collection during scanning. In general, for scanning in the v3 pre-release we have been using 0.5MP images for reliability and speed in image collection and stitching. However, as the optical resolution is equivalent to about a 1-2MP image, this felt like we were leaving performance on the table, as each colour channel is then captured with only around 0.125MP (0.25MP for green).
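Roughly speaking, on a Bayer-pattern sensor (as used in the Pi camera modules) half the photosites are green and a quarter each are red and blue, so the per-channel pixel budget at a given capture resolution works out as something like:

```python
# Rough per-channel pixel budget for a Bayer-pattern sensor:
# half the photosites are green, a quarter each are red and blue.
def bayer_channel_megapixels(total_mp):
    return {"red": total_mp / 4, "green": total_mp / 2, "blue": total_mp / 4}

print(bayer_channel_megapixels(0.5))  # red/blue only ~0.125MP each
print(bayer_channel_megapixels(8.0))  # every channel has at least 2MP
```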

Resolution

As web programs tend to downsample images for speed, we will zoom right in on some comparison images.

So take this full capture from the microscope:

If we capture the same region at 0.5MP, 2MP and 8MP, we get the following:

0.5MP:

2MP:

8MP:

Now, it is important to distinguish between images looking artificially smoother because they have more pixels, and there being more actual data. But I think it is clear that more detail is resolved in each image as we increase the capture resolution, especially in the colour contrast.

Coming back to the point above, the optical limit should be about 1-2MP, so our thought is that we probably only need a 2MP final image. But we want each colour channel collected at 2MP (i.e. an 8MP capture). So if we now take the 8MP capture and downsample it to 2MP with a box algorithm (no interpolation), we get:

This is very similar in quality in terms of the features we can resolve, but a factor of 4 smaller in disk space (and in RAM when stitching).
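For anyone curious, the box downsample itself is very simple. Here is a rough sketch (not the actual v3 code) that averages non-overlapping 2x2 pixel blocks with numpy, which is exactly a box filter with no interpolation:

```python
import numpy as np

def box_downsample_2x(image):
    """Halve an (H, W, 3) image in each axis by averaging each
    non-overlapping 2x2 block (box filter, no interpolation)."""
    h, w, c = image.shape
    h2, w2 = h - h % 2, w - w % 2                      # crop to even size
    blocks = image[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2, c)
    out = blocks.mean(axis=(1, 3))
    return np.round(out).astype(image.dtype)

# e.g. an 8MP capture (3280 x 2464) becomes a ~2MP image (1640 x 1232),
# a factor of 4 fewer pixels to store and stitch.
```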

As such we are planning to make 8MP capture downsampled to 2MP the standard capture mode in v3.

But what about the colour?

Another issue that we have is the colour saturation dropping at the edge of the image. This is especially visible in red regions. Look at the first image above: the colour is clearly more vibrant in the centre than at the edge. If we stitch the images together we then get this sort of grid pattern:

We wrote a paper 5 years ago about the reason for this (the lenslet array) and how we do the calibration to get an even white field of view.

In that paper we also had a method to further correct the desaturation, but we can't do it directly on the Pi GPU as it is a more complex calculation. The data you need to collect also includes a number of images with red, green and blue data (requiring calibration jigs or coloured samples).

However, we have been experimenting with extracting the data from that 2020 paper and calculating a static correction that can be applied to the images afterwards. If we do that to the first image, the corrected version looks like:

Now this isn't perfect, but it is considerably less desaturated. If we do that for every image in the scan before stitching, the same scan becomes:

Again, not perfect, but a large improvement. The next stage of this is that I need to get some RGB filters (probably lighting gels) and see if we can improve the data used for the correction. The other thing we may want to do at some point is get a transmission colour calibration slide. But these are not something we can require everyone to have in order to calibrate a microscope, because they cost far more than the microscope itself!
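To give a feel for what a static post-capture correction looks like in practice, here is a minimal sketch that multiplies every frame by a precomputed per-pixel, per-channel gain map before stitching. The file name and the simple gain-map format are assumptions, and the correction derived from the paper is more involved (closer to per-pixel colour unmixing), so treat this as illustrative only:

```python
import numpy as np

# Assumed: a gain map precomputed from the calibration data, saved as an
# (H, W, 3) array matching the capture size. The file name is hypothetical.
gain_map = np.load("static_colour_correction.npy")

def correct_frame(image):
    """Boost the desaturated edges by applying a static per-pixel,
    per-channel gain, then clip back to the 8-bit range."""
    corrected = image.astype(np.float32) * gain_map
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Applying something like correct_frame to each tile before handing it to the stitcher is what produces the kind of result shown above.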


It’s great to see that the new version of the OpenFlexure software is enhancing image clarity using a box filter. However, I’d personally prefer access to the raw data, with any visual enhancements—like filtering—applied as optional post-processing, rather than being hardcoded into the imaging pipeline. I’m not sure if that’s the case in the current v3 branch, but I thought it was worth bringing up. Either way, this is an exciting update. I can’t wait to use the v3 release!

I agree @tay10r. It very much depends on the use case. We are currently focussing on viewable images for histopathology, so taking high-quality JPEG images and stitching them into a deep zoom image.

In the current version we do not have RAW as an exposed option, but we certainly will. We won't have RAW saving by default, because the overhead of writing RAW to disk is large and the feedback from many users is that they are interested in speed.


@j.stirling totally fair. High quality JPEG probably fits 99% of the use cases then. Either way, these are exciting updates for the V3 branch! Nice work!
