Hi everyone,
I would like to build a microscope by attaching a Raspberry Pi camera to an objective.
I have two questions regarding the Raspberry Pi HQ camera. I found that a resolution of 0.21 µm/px can be achieved with the HQ camera, which seemed to be at the diffraction limit.
My questions are:
- Does this µm/px figure really translate to better resolution in reality?
- Would there be any difference in resolution between frames captured during a video recording and still images?
The micrometers-per-pixel figure does not change the actual image resolution unless the resolution was previously limited by the camera rather than by the optical system.
The OpenFlexure high resolution optics modules, using microscope objectives and a Pi Camera 2, are limited by the optical resolution of the objective lenses. We have not tried the Pi HQ camera, although a couple of people on this forum have. It is a little tricky to design a high resolution system with the HQ camera that is physically short enough to fit in the microscope sensibly. With an appropriate optical system the HQ camera will also give images limited by the optical resolution of the objective lenses. So that will be the same as for the Pi Camera 2.
Where the HQ camera could be a benefit in microscopy is in sensitivity and noise, which might be better with the larger sensor. In normal photography the noise reduces as the sensor gets larger, because to get the same field of view you need a longer focal length lens, and at the same f# (or NA in microscopy) the lens then has a larger physical diameter and collects more light. I am not sure that this argument translates to microscopy, where we are using the same objective lens, so collecting the same amount of light but then imaging it onto a bigger sensor.
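To put rough numbers on the pixel-size question, here is a back-of-envelope sketch. The sensor pixel pitches are the published values, but the wavelength, the 0.65 NA objective and the overall magnification are just assumptions for illustration:

```python
# Back-of-envelope comparison of pixel-limited vs diffraction-limited
# resolution. Wavelength, NA and magnification are assumed example values.

wavelength_um = 0.55      # green light
na = 0.65                 # e.g. a typical 40x objective (assumed)
magnification = 7.4       # objective + tube/relay optics (assumed)

sensor_pixel_um = {
    "Pi Camera 2 (IMX219)": 1.12,
    "Pi HQ Camera (IMX477)": 1.55,
}

# Abbe limit: smallest resolvable feature at the sample plane
abbe_um = wavelength_um / (2 * na)
print(f"Diffraction (Abbe) limit: {abbe_um:.2f} um")

for name, pitch in sensor_pixel_um.items():
    sample_px = pitch / magnification      # pixel size projected to the sample
    # Nyquist: need at least ~2 pixels per resolvable feature
    limited_by = "optics" if sample_px <= abbe_um / 2 else "pixels"
    print(f"{name}: {sample_px:.2f} um/px -> resolution limited by {limited_by}")
```

With numbers like these, the projected pixel size for both cameras is at or below the Nyquist spacing for the objective, so extra pixels do not buy extra resolution.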
Hi William,
Thank you so much for the detailed reply. If I understand correctly, once the image resolution is fixed by the optics, the sensor can only make it worse, not better.
Regarding video vs still images: in a well-lit situation, do you think video frames will work the same as still images? I hope to increase the scanning speed by taking video rather than still images. What do you suggest?
On the first point, yes that is correct.
For video vs still, the video stream is quite a lot lower resolution than the still images. However, there is some room between the pixel resolution of the Pi Camera 2 and the optical resolution of the system. I think you would need to test in your actual situation whether the difference is acceptable for the gain in speed.
Thanks, William.
I will investigate that with my set-up and update here.
@ChinnaDev video vs still isn’t just a question of the number of pixels in the image (often termed “resolution”, but of course that word has multiple meanings here). That is the most noticeable issue: I can’t remember the maximum video capture resolution, but I’m pretty sure it’s less than the native resolution of the sensor. There are also other things that differ:
- Compression: video frames will pretty much always be compressed, and may not be independent, e.g. if bit rate control is active, or if you are using H.264 (because it takes advantage of similarity between adjacent frames to compress more efficiently).
- Image processing pipeline: video frames are processed using a quick-and-dirty method rather than the slower, more careful denoising/debayering done for stills. This is the same difference you should see when taking still images using the video port vs the still port.
- Resampling: the pixels in the video probably don’t correspond one-to-one with pixels on the camera sensor. That means there will be some resampling going on, which can affect image quality.
I don’t know if the sensor has different readout modes with differing noise characteristics - I suspect not. But the differences in software pipeline and compression could definitely turn out to be significant.
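If you want to see the pipeline difference for yourself, a quick way is to capture the same scene through both ports. This is just an illustrative sketch using the legacy picamera module; the file names are arbitrary:

```python
# Capture the same scene via the still port and the video port, to compare
# the two processing pipelines.
import time
from picamera import PiCamera

with PiCamera(resolution=(3280, 2464), framerate=15) as camera:  # Pi Camera 2 full frame
    time.sleep(2)                                    # let exposure/gains settle
    camera.capture("still_port.jpg")                 # slow path, full denoise/debayer
    camera.capture("video_port.jpg", use_video_port=True)  # fast video pipeline
```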
It is worth mentioning that there are much faster ways to acquire lots of still images than with the microscope’s capture method, if you use the picamera module directly (e.g. through a plugin). Depending on what you need to do, this might be a good option.
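As an illustration of the sort of thing that is possible with picamera directly (a sketch, not what the microscope software actually does), a burst capture through the video port is considerably faster than repeated still-port captures:

```python
# Burst of frames via the video port, at the cost of the video pipeline.
import time
from picamera import PiCamera

with PiCamera(resolution=(1640, 1232), framerate=15) as camera:
    time.sleep(2)                                   # let auto-exposure settle
    filenames = [f"frame_{i:03d}.jpg" for i in range(20)]
    start = time.time()
    camera.capture_sequence(filenames, use_video_port=True)
    print(f"Captured 20 frames in {time.time() - start:.1f} s")
```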
Hi Richard,
Thanks a lot for sharing your knowledge on this topic. Could you please provide any further information on the plugin you mentioned? I would like to try it.
Hi @ChinnaDev there is a video plugin (OpenFlexure / microscope-extensions / video-extension · GitLab). There’s not currently a plugin that does fast acquisition other than that; what is it you’re trying to do?
Hi Richard, thanks for the plugin. I am trying to digitally scan histopathology slides and apply AI models for diagnostic applications, e.g. malaria.
I see - that’s something we are also actively working on - but for us, image acquisition time isn’t really the bottleneck. We acquire a Z stack at each XY location, so most of the time is spent autofocusing and moving the stage. @JohemianKnapsody has done some nice work optimising the Z stacking and autofocusing procedure, and we’re in the process of putting this together as a plugin and manuscript. It should work in the latest version of the server, but it hasn’t been tested much yet.
I guess you would want to integrate the acquisition with moving the microscope slide? If so, it would probably be possible to do this by creating a custom version of the scan plugin - you could override the code that does the image capturing with a faster method (e.g. using a “generator”, as sketched below). However, as most of the time is spent moving the stage, I suspect you are unlikely to go much faster than using the existing plugin but requesting it to use the “video port” on the camera, which saves a fraction of a second per image.
I’ve just checked, and there’s no tick-box for that in the software, though it is possible to access it through the “swagger” client. I’ve recently started a push to document that interface a bit better; you can see the current attempt, though I’ve not yet got as far as improving the documentation for the scan plugin.
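For completeness, here is a rough sketch of the “generator” idea using picamera directly. This is not the scan plugin’s actual code, and move_to_next_position() is a hypothetical placeholder for whatever stage-control call your setup provides:

```python
# capture_sequence() accepts a generator of outputs, so other work (such as
# moving the stage) can be interleaved between captures.
from picamera import PiCamera

def move_to_next_position(i):
    # Hypothetical placeholder: replace with your own stage-control call.
    pass

def scan_outputs(n_positions):
    for i in range(n_positions):
        yield f"scan_{i:04d}.jpg"       # camera captures to this file next
        move_to_next_position(i)        # runs after frame i has been captured

with PiCamera(resolution=(1640, 1232), framerate=15) as camera:
    camera.capture_sequence(scan_outputs(50), use_video_port=True)
```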