First build, High-Res v7 (#969fc80) with v3-alpha-server

I am not a scientist or anything, just an average Linux guy from Germany with a long-lasting love for 3D printing.
However, my wife is a scientist who has done a lot of pathology microscopy in her career and still enjoys nice slides of any kind. So I got her some hardware for her birthday, and we just finished building our first OFM. Well, technically it might count as the second one, because my wife already used the OpenFlexure server and a custom adapter she designed for a Raspberry Pi case to digitize one of her manual microscopes with the stable server release, but without a motorized stage (see: Customizable microscope camera case for Raspberry pi HQ cam by pixel | Download free STL model | Printables.com). I guess that already counts as half an OFM :slight_smile:

However, now we have finished our first proper OFM, and I like it so much that I’m already thinking about building a second one with a delta stage.
Some specs about the scope:

  • v7 hardware (IIRC 7.0.0-beta4, hash #969fc80)
    • unmodified v7 high-res version
  • v3.0.0-alpha1 server
  • Sangaboard v5 (obtained with the rest of the basic hardware from Labcrafter)
  • Pi 4B, 4 GB
  • Pi Camera v2
  • a cheap regular 40x objective and correction lens from AliExpress (which looks fantastic anyway)
  • printed in PLA
    • PLA “Galaxy Purple” (Prusament PLA)
    • PLA “Glitter Opal Green” (Redline Filament)
    • PLA “Matte Graphite Black” (Redline Filament) - for the optical train and the various little printed tools

So here are some pictures.
I think it came out gorgeous, and I really love the color scheme:





Besides some test scans, and after getting somewhat familiar with the options and controls, I have already produced a boring but high-quality scan of a slide showing some red blood cells from frog blood (a smear slide). Some more interesting scans are in progress…
Full image (scaled down, cropped, white balance adjusted):

Full-resolution crop of the central area of the above scan:

So, all that’s left to say is thank you very, very much for this awesome project. I have designed dozens of things in OpenSCAD myself, but man, this microscope and its various options are so absurdly over-engineered, I absolutely love it. To everyone who is involved: please keep up the great work!

-zeus


Meanwhile, a reasonably good scan with a 40x lens has finished. The original image is ~11k x 10k pixels.

Slide: appendix vermiformis (human);
unfortunately not dated, but I think it is from around the 1970s or 80s.

The image at 10% scale:

Slightly uneven lighting results in a more saturated central area in each tile, but I assume this is to be expected from a single LED…

Two full-resolution crops from the central area:


Needless to say, I am very happy with the results. The new auto-stitching feature is simply amazing, even though it sometimes needs quite a lot of overlap to produce good results…


This is actually a loss in colour saturation at the edge of the sensor. It is caused by a lenslet array that is slightly offset from the pixels at the edge because it is expecting light to be coming at a large angle from the short focal length lens.

This is explained in Flat-Field and Colour Correction for the Raspberry Pi Camera Module | Journal of Open Hardware. The colour mixing can be measured, and then applying the reverse transformation to the images will correct most of the problem. It is particularly noticeable for reddish samples, which unfortunately includes H&E stained histology slides.
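Roughly speaking, the unmixing step amounts to inverting a measured per-pixel crosstalk matrix. A minimal numpy sketch, assuming a (H, W, 3, 3) mixing matrix has already been measured from calibration images (the array names and shapes here are illustrative, not the actual OpenFlexure tooling):

```python
# Minimal sketch of per-pixel colour unmixing.
# `mixing` is assumed to be a (H, W, 3, 3) crosstalk matrix measured from
# flat-field calibration images; names and shapes are illustrative only.
import numpy as np

def unmix_colours(image: np.ndarray, mixing: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) float array in [0, 1]; mixing: (H, W, 3, 3) crosstalk."""
    inverse = np.linalg.inv(mixing)                       # invert each pixel's 3x3 matrix
    corrected = np.einsum("hwij,hwj->hwi", inverse, image)  # apply the reverse transform
    return np.clip(corrected, 0.0, 1.0)
```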

Note this is not the case for the HQ camera, which is not optimised for a particular standard lens. I don’t know whether it has lenslets at all. I noticed your Printable. How did you use the OFM software with the HQ camera? The standard v2 and v3 server software are both tied to the Pi Camera v2.


This actually makes a lot of sense and is the missing puzzle piece in understanding where this comes from, thank you very much. I had already thought of the effect as looking like a vignette that is not even, but differs depending on the color channel, and had already thought “that would have to be corrected before stitching to be beneficial”, but I did not realise that the auto-calibration already takes care of it. I did not bother to color-correct for this effect in particular, and my last color calibration was on a non-H&E sample, which might explain a lot.

This also explains a lot xD.
The linked Printables thing from my wife does indeed use another camera module, namely the HQ cam, just with a C/CS mount. It is this one and uses a Sony IMX477R sensor (12 MP). In this configuration it has no lens at all, just the bare sensor; hard to say whether it uses lenslets, but I did not notice any vignetting at all, and this version is intended to be used with external objectives anyway (hence the C/CS mount). There we used a standard eyepiece adapter to fit a regular Olympus CH-A microscope (and, not shown in that Printable, with a different-diameter eyepiece adapter on an Olympus VM-T stereo microscope as well, which also works fine, even though for 3D objects some haloing is to be expected…). This actually worked out of the box. I noticed a somewhat smaller FoV (it looks to me like a longer (narrower) focal length; maybe the eyepiece adapter is correcting for a different (shorter) optical tube length), resulting in a smaller image circle (at least it seems to me that the image circle is significantly smaller than with just the eyepiece).

The second noteworthy difference was that OFM’s color correction is way off from what the picture should actually look like. The flat field is okay and even; just the white balance (and, to some degree, the gain) is completely off, giving a very teal image most of the time. For now I manually color-calibrated the camera on grey objects and noted down the values so I am at least always roughly in the right ballpark (unfortunately I don’t have a translucent grey card… does something like that even exist? Anyway, I digress…). Correcting for WB/color is not a big deal anyway, even afterwards in post-processing. As you can see in the images in that Printables post, the images produced by the cam are at least “okay” out of the box for different objectives. Those are not post-processed or color-corrected afterwards; only the radiolaria image was raw-processed (converted to B/W, some basic RAW processing like sharpening, curves and contrast, as well as some manual “de-dusting” in GIMP for aesthetic reasons).
So apart from automatic color, gain and WB correction, the HQ cam actually works fine with all tested RMS objectives, some of them high-quality Olympus-branded ones (4x, 10x, 20x, 40x), which all produced very usable results. We used the current stable (v2) build on the manual scope. Maybe a guideline on how to correct for different camera models would be helpful for “modding purposes”, because other than calibration, the HQ cam works perfectly fine.

Actually, I am unable to get rid of the flat-field errors, no matter how I try to calibrate the camera. Full auto-calibration gives an even image without a slide (or on an unobstructed part of the slide), but the saturation difference is pretty much exactly the same as before:

A single full-resolution image from the scope:

The distance of the LED seems to have absolutely no effect on this, so I don’t know how to properly correct for it from here. Any ideas @WilliamW?

This is the effect described in the paper. It is not something that can be corrected with the lens shading correction; it has to be done by post-processing each image.

Okay, that’s good to know; at least there’s nothing wrong with my calibration then. I guess the per-image color correction will be added to the automatic stitching workflow of the v3 server in the long run? Because automatic stitching would be pointless if images cannot be automatically color-corrected first. I guess this will be the right workflow to correct the images manually?


I did not realise that @j.stirling had made the repository. That would be the right place to look.

Thanks @zeus, yeah, the plan is certainly to make this happen by default, but we are at an early dev stage for the colour correction, so it lives in a separate repo so that it doesn’t just idle on my machine.

It is computationally heavy enough that we need to consider the implications of two discrete options:

  1. Do colour correction in the stitching thread with each image – this will cause the live stitching to lag further behind the live scan.
  2. Do colour correction only for the final stitch – this will significantly increase the final stitch time.

My gut feeling is that option 1 is the best plan. There is also then a question of whether we overwrite the “original” images or save a second copy.

Again my gut says overwrite, as the “original” is not truly original: there are umpteen other corrections applied at the GPU level. I think an option to save the raw Bayer data is the way to ensure that the truly original data is kept. This cannot be a default option due to the overhead of saving that data.
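To make option 1 concrete, a minimal sketch of a stitching worker that colour-corrects each tile as it is handed over could look like this; the helper functions are hypothetical placeholders, not the real server code:

```python
# Sketch of option 1: colour-correct each tile inside the stitching worker
# before it is added to the (live) stitch.  Helpers are placeholders only.
import queue
import threading
import numpy as np

tile_queue: queue.Queue = queue.Queue()

def correct_colours(tile: np.ndarray) -> np.ndarray:
    # placeholder for the per-tile unmixing step (the expensive part)
    return tile

def add_tile_to_stitch(tile: np.ndarray) -> None:
    # placeholder for handing the corrected tile to the stitcher/preview
    pass

def stitching_worker() -> None:
    while True:
        tile = tile_queue.get()
        if tile is None:              # sentinel: scan finished
            break
        add_tile_to_stitch(correct_colours(tile))
        tile_queue.task_done()

threading.Thread(target=stitching_worker, daemon=True).start()
```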


I’d argue for a third option.
The way it currently (7.0.0-alpha1) works is that stitching begins as soon as there are about 4 tiles. However, the longer the scan runs, the more the stitch lags behind the actual scanning progress; I assume stitching is about 1-5% slower than the scanning. I don’t know whether the reason is the image getting bigger, or whether stitching is just ever so slightly slower than scanning a single tile; I assume the latter.

With this in mind, you have to accept that for reasonably large scans, the process is not over anyway when the scan itself finishes. Accepting this, I think a much more reasonable strategy would be to create additional thumbnails of each frame live, scaled down to, let’s say, 0.5-1 megapixels. Scaling images down is computationally inexpensive anyway, and the preview stitch can be done on the thumbnails instead, to indicate the live progress of the scan. This way the preview stitch is much faster (and therefore “live”), but you still get all the info you want from the preview (is the overlap high enough, is the positioning and brightness okay, and so on); you can identify basically every reason why you might want to cancel a scan at this early stage. You also do not need to color-correct at this point, because this isn’t the final stitch anyway.
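As a rough illustration of the thumbnail idea (using pyvips here, but any downscaling library would do; the file paths are just examples, not part of the server):

```python
# Scale each captured frame down to roughly 1 megapixel for a fast
# live preview stitch; the full-res frames stay untouched on disk.
import pyvips

TARGET_PIXELS = 1_000_000  # ~0.5-1 MP is plenty for a preview

def make_preview_tile(path: str) -> pyvips.Image:
    image = pyvips.Image.new_from_file(path, access="sequential")
    scale = min(1.0, (TARGET_PIXELS / (image.width * image.height)) ** 0.5)
    return image.resize(scale)   # cheap compared to full-res stitching

# e.g. make_preview_tile("tile_0001.jpg").write_to_file("preview_0001.jpg")
```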

I’d then start color-correcting independently in the background as soon as there are enough usable tiles (let’s say a hundred tiles with <50% background, or ~250 tiles if that background condition is not met, OR if tile-skipping based on background is turned off; if there are fewer tiles overall, start when scanning is completed). This way you can average your color correction over a number of tiles that I’d treat as “representative enough for the entire current slide”, and most likely it will therefore fit the rest of the scan well enough. When the scan finishes, treat it as finished, and run the real stitching afterwards as an independent background task. Because there is no longer such heavy lifting to do during the scan itself, one might even start another scan while the previous scan is being stitched in the background, observable via a progress bar in the gallery or other obvious places.

To better deal with unintended user actions, you might want to indicate somewhere (in an always-visible spot) that background tasks are running, AND check for running tasks when the user tries to shut down or reboot the system. (I am pretty sure this can easily be done with a pluggable script for “molly-guard”, a CLI tool for exactly this purpose: checking whether conditions are met under which we don’t want the user to shut down the system. On Ubuntu or Debian you can install it from the base repos; in 99% of cases molly-guard is used to ask for additional confirmation (by typing the system’s hostname) when a user tries to shut down or reboot a system from an SSH session, but it can do much, much more.) Maybe there could be an option for “shut down when all tasks are completed”.
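For illustration, such a check could look roughly like this, assuming molly-guard’s usual convention that an executable check in /etc/molly-guard/run.d/ aborts the shutdown when it exits non-zero; the status endpoint below is made up for the sake of the example:

```python
#!/usr/bin/env python3
# Hypothetical molly-guard check: exit non-zero while background tasks
# are running so the shutdown/reboot is aborted.
import json
import sys
import urllib.request

STATUS_URL = "http://localhost:5000/api/v2/tasks/"   # hypothetical endpoint

try:
    with urllib.request.urlopen(STATUS_URL, timeout=2) as response:
        tasks = json.load(response)
except OSError:
    sys.exit(0)  # server not reachable: don't block the shutdown

running = [t for t in tasks if t.get("status") == "running"]
if running:
    print(f"{len(running)} background task(s) still running - aborting shutdown.")
    sys.exit(1)
sys.exit(0)
```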

For a single scan this method will most likely take up to 50% more time, BUT the difference becomes negligible for more than 2-3 scans in a row, AND you will already have all the positional data for stitching from the thumbnail stitch. However, how well this translates to the final stitch would have to be tested, I guess…

Edit: another advantage I just experienced: if the stitching process crashes, it would not cause the scanning process to stop (hopefully)…

To be fair, I think having scanning, stitching and preview all running at the same time without major hiccups is a pretty incredible achievement on its own, but I don’t necessarily think it’s always the best option, because you are limiting your results due to that degree of parallelism.

That’s my take on this; what do you think @j.stirling?

It is a strategy worth considering. Before we pick a final strategy we would need to re-analyse the time taken in each task as the scan sizes grow.

The process is currently optimised on the assumption that the most computationally intensive task is the cross-correlation between images. This is done live and is cached, as it will never change. The stitching is computationally inexpensive for a low-res preview, but gets more expensive at full resolution. It is now dominating for the final scan, but this is partially due to PyVIPS.

At full resolution, some stitches start to use more than the available RAM. For this reason, for the final stitch we first stitch to tiles, then use PyVIPS to convert the tiles into the final JPEG and DZI. PyVIPS is very memory-efficient for huge scans, but painfully slow. There is quite a lot of work needed to optimise timing, both in adjusting the tile sizes and in not using PyVIPS unless it is needed.
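For anyone curious, the tiles-to-output step has roughly this shape in pyvips (a simplified sketch; the file names and grid layout are illustrative, not the actual server code):

```python
# Join already-stitched tiles into one huge image, then write a JPEG
# and a DeepZoom (DZI) pyramid.  Names and grid size are illustrative.
import pyvips

tile_files = [f"stitch_tile_{i:03d}.jpg" for i in range(12)]   # hypothetical tiles
tiles = [pyvips.Image.new_from_file(f, access="sequential") for f in tile_files]

# arrange the tiles in a grid, 4 per row
mosaic = pyvips.Image.arrayjoin(tiles, across=4)

mosaic.write_to_file("scan_full.jpg")      # final JPEG
mosaic.dzsave("scan_full", suffix=".jpg")  # DeepZoom (DZI) pyramid
```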


I know this is quite involved, but what I have thought about a little over the last few days is that you might additionally want to offer an OpenFlexure server version as a Docker container for x86_64 and ARM, for a node without any imaging hardware, be it a Pi, a workstation, a server or a VM. It would also address any issues with stitching and color correction, but it is a very different way to approach that goal. And unfortunately it most likely won’t help anyone with “just one microscope”. However, as a long-term idea for anyone who has a fleet of scopes, I think the following might be very interesting:

You could do the following things with it, provided (full-res) stitches on the Pi are made optional:

  • use an OpenFlexure server Docker container without physical imaging hardware to collect scans (e.g. from Samba/NFS shares on the Pis; scp would be rather slow for transferring huge amounts of data due to the encryption layer of SSH) and maintain scans more easily in a single place. Adding the hostname of the source microscope to the scan’s metadata would likely be necessary though…
  • do the stitches on beefier hardware, on a mini PC or a VM running the Docker container, maybe even utilising accelerators like a Google Coral or GPUs (with CUDA or ROCm)
  • use that node to maintain your fleet of microscopes via a GUI (updating the OS, OFM and the firmware of e.g. Sangaboards)

Even a Docker container just for collecting scans, color correction and stitching would be awesome. That way I could do the heavy lifting on my workstation or server, and just grab the images from the microscope. However, I admit that the preview of the stitch is very sexy and I don’t want to miss that…

I think there are certain optimisations for fleets which would be cool. However, the key goal of the project is making laboratory-grade microscopy accessible anywhere in the world. As such, being able to run a single microscope without an external machine (except maybe a phone or tablet for control) must be our foundation, and that must work.

If we did want an external stitching service/server: as the colour correction and stitching sit in Python modules, it wouldn’t be hard to load them into a Docker container. They are not optimised for parallelism, but I am sure they could be.


Yeah, absolutely agree, the microscope has to be able to work standalone. However, having the option to outsource the heavy lifting would also be nice. I guess, like always, there is no “one size fits all”…

Very good job! Just wanted to double-check: is the v3 server software compatible with the HQ camera?

V3 software is still locked to the Pi camera v2.

Supporting the HQ camera is a medium-term goal.

I really really hope we will get the HQ camera working soonish. We have a test piece for the hardware.

There are a number of software changes needed to make it work; we are laying the groundwork for this right now. I would really hope that by October/November we have a test version that supports the HQ camera.

If we don’t, please do push us.


We now have a very early development branch supporting the HQ camera.


We have merged the code that allows the HQ camera to work into V3, so it will be available with the next alpha release. There is still a lot of work to do to optimise the HQ camera, but this is a start.
