There are some really nice discussions happening in the Automate resolution measurement thread about the underlying raw data from the camera.
In it @WilliamW mentioned:
I thought it worth updating the community a little more on the progress we are making in v3 around the images collected during scanning. In general, for scanning in the v3 pre-release we have been using 0.5MP images for reliability and speed in image collection and stitching. However, as the optical resolution is equivalent to about a 1-2MP image, this felt like we were leaving performance on the table: with a Bayer sensor, each colour channel then only has about 0.125MP during capture.
Resolution
As web programs tend to downsample images for speed, we will zoom right in to some comparison images.
So take this full capture from the microscope:
If we capture the same region at 0.5MP, 2MP and 8MP, and then zoom in on the same area:
0.5MP:
2MP:
8MP:
Now it is important to distinguish between images looking artificially smoother simply by having more pixels, and there being more actual data. But I think it is clear that there is more detail in each image as we increase the capture resolution, especially in the colour contrast.
Coming back to the point above, the optical limit should be about 1-2MP. So our thought is that we probably only need a 2MP final image. But we want each colour channel collected at 2MP (i.e. an 8MP capture). So if we now take the 8MP capture and downsample it to 2MP with a box algorithm (no interpolation), we get:
This is very similar quality in terms of the features we can resolve, but a factor of 4 smaller in both disk space and RAM when stitching.
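As an aside on what that downsampling involves: a box downsample from 8MP to 2MP is just an average over non-overlapping 2x2 pixel blocks, with no interpolation (Pillow's BOX resampling filter does the same thing). Here is a minimal NumPy sketch of the idea; the function name and shapes are illustrative, not the actual pipeline code.

```python
import numpy as np

def box_downsample_2x(image: np.ndarray) -> np.ndarray:
    """Box-downsample an (H, W, 3) image by averaging non-overlapping 2x2 blocks."""
    h, w, c = image.shape
    # Crop to even dimensions, then group pixels into 2x2 blocks.
    blocks = image[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2, c)
    # Average each block; no interpolation between neighbouring blocks.
    return blocks.astype(np.float32).mean(axis=(1, 3)).round().astype(np.uint8)
```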
As such, we are planning to make 8MP capture downsampled to 2MP the standard capture mode in v3.
But what about the colour?
Another issue that we have is the colour saturation dropping at the edge of the image. This is especially visible in red images. For example, look at the first image above: the colour is clearly more vibrant in the centre than at the edge. If we stitch the images together we then get this sort of grid pattern:
We wrote a paper 5 years ago about the reasons for this (the lenslet array) and how we do the calibration to get an even white field of view.
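For readers unfamiliar with that calibration, it is conceptually a flat-field gain correction: scale each pixel by how much a blank (white) field of view falls short of its average brightness. The real correction is applied on the Pi GPU, so the sketch below is only an illustration of the concept, with assumed array shapes.

```python
import numpy as np

def flat_field_correct(image: np.ndarray, white: np.ndarray) -> np.ndarray:
    """Normalise an (H, W, 3) image using a white-field reference of the same shape."""
    white = white.astype(np.float32)
    # Per-pixel, per-channel gain that flattens the white reference.
    gain = white.mean(axis=(0, 1), keepdims=True) / np.maximum(white, 1.0)
    corrected = image.astype(np.float32) * gain
    return np.clip(corrected, 0, 255).astype(np.uint8)
```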
In that paper we also had a method to further correct the desaturation, but we can't do it directly on the Pi GPU as it is a more complex calculation. The data you need to collect also includes a number of images with red, green and blue data (requiring calibration jigs or coloured samples).
However, we have been experimenting with extracting the data from that 2020 paper and calculating a static correction that can be applied to the images afterwards. If we do that to the first image, the corrected version looks like:
Now this isn't perfect, but it is considerably less desaturated. If we do that for every image in the scan before stitching, the same scan becomes:
Again not perfect, but a large improvement. The next stage is that I need to get some RGB filters (probably lighting gels) and see if we can improve the data used for the correction. The other thing we may want to do at some point is get a transmission colour calibration slide. But that isn't something we can require everyone to have in order to calibrate a microscope, because these slides cost far more than the microscope itself!
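For anyone curious how a static, post-capture correction like the one above can be applied in software, here is a hedged sketch. It assumes the correction has been reduced to a per-pixel 3x3 matrix estimated from red, green and blue calibration data; that form is an assumption for illustration, not necessarily the exact method from the paper.

```python
import numpy as np

def apply_static_correction(image: np.ndarray, unmix: np.ndarray) -> np.ndarray:
    """Apply a per-pixel 3x3 colour correction (unmix has shape (H, W, 3, 3))
    to an (H, W, 3) image."""
    rgb = image.astype(np.float32)
    # At each pixel: corrected = unmix @ rgb, vectorised over the whole image.
    corrected = np.einsum('hwij,hwj->hwi', unmix, rgb)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical usage: correct every tile in a scan before stitching.
# corrected_tiles = [apply_static_correction(tile, unmix) for tile in tiles]
```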