348 images automatically captured, downloaded and tiled!
Cool pictures out of the way, here's the plan for anyone interested.
The aim is to let a user put a sample under the microscope, set a background image, then leave the microscope running, finding its own path, until all of the interesting sample is scanned. Hopefully this will speed up a lot of medical use cases involving irregularly shaped samples.
To make this more useful (and to avoid issues with integrating Fiji or other tiling programs), our tiling code can copy all the images to the user's computer in real time and tile them based on the OpenFlexure metadata rather than filenames. It estimates positions from the stage locations and the camera-stage mapping, finds better overlaps between each pair of images, then "shuffles" the images around to find the optimal positions. Unlike in Fiji, you set your thresholds once all pairs have been processed, rather than at the start. This means the user can see the outputs, set thresholds accordingly, and quickly test multiple thresholds without rerunning the entire procedure.
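For anyone wanting a feel for the position-estimation step, here is a minimal sketch. The names and matrix values are my own illustration, not the actual openflexure-stitching internals:

```python
# Illustrative sketch only: function names and matrix values are made up,
# not the real openflexure-stitching API.
import numpy as np

# Camera-stage mapping: a 2x2 matrix (from the microscope's calibration)
# converting stage motor steps into image pixels.
STAGE_TO_PIXELS = np.array([[0.05, 0.00],
                            [0.00, 0.05]])

def estimate_tile_position(stage_xy):
    """Initial guess of a tile's mosaic position in pixels, from the
    stage coordinates recorded in the image metadata."""
    return STAGE_TO_PIXELS @ np.asarray(stage_xy, dtype=float)

# Each pairwise overlap is then refined by comparing the actual images, and
# the tiles are "shuffled" to minimise disagreement with those measurements.
print(estimate_tile_position([20000, -3500]))
```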
This is still in early development, but if anyone has any suggestions, feedback or wants to test it out, please let me know.
That's super interesting! I just got my microscope set up (although my lighting setup is still kind of janky) and was messing around with using OpenCV to stitch together images. It worked well, but I'm sure the performance penalty is large for not utilizing the image metadata.
What magnification objective is on the scope you used to make the image?
OpenCV was my starting point too, but it really limits how much you can use the metadata to speed up some of the time-consuming steps. The images in the video and the post are both 60x, and one of the Gigapan images is 40x. It works with 100x as well; as long as the camera-stage mapping data is a good estimate, it should work with any OFM scan with a decent overlap between images.
As @dgrosen has mentioned before, Fiji requires rectangular scans with images captured or named in a snake or raster order. This approach is fine with any path or filename.
Hi Garvolian,
Just tidying up the code a bit (a lot) after a massive rewrite. I'll send you a link and instructions by Thursday, unless it's urgent and you want to test out the current version?
Sorry to keep you waiting
Joe
That's really helpful, cheers. Happy with all the issues you've raised and I'll look into them. The next version of our OS should come with OpenFlexure Stitching bundled, running automatically as scans run, which should take away a lot of the headache.
That's a really useful write-up you've shared, I'll turn the feedback into some issues as well.
The image from March 2023 is such an inspiration! I will definitely try the code (thank you for writing it in my favourite programming language). What I am still struggling with are two things in stack and scan:
How do I figure out the right scanning parameters? I understand that it depends on the scanning area (let's say the standard coverslip, ~400 mm^2) and on the magnification (right now I am using x10 because I have trouble focusing the x40 from below, and flipping my sample is not an option because it is not fixed). What are the best step sizes and numbers of steps to get enough overlap for stitching while optimising scanning time? What are the units we are using, even? Where can I read up on this topic?
Autofocus works well for me when the image is already pretty well focused. How do I ensure that for every tile the microscope goes through an iterative procedure, narrowing in on the perfect focus even if the image is initially very blurry? This may happen if the image was focused in the centre of the slide but becomes blurry towards one end because the platform is not completely even (happens with my "normal/boring" microscope too).
@evolk, I do not have detailed answers on @JohemianKnapsody's code, but there are some other questions in there that I can answer.
The units are steps of the small stepping motors (about 4000 steps per turn of the small gear). The easiest way to tell how many steps you need to move in a scan is to use the navigate tab to move the sample manually and see how many steps it is to go across the whole image in x and y. For good stitching I think you need to aim for at least 1/3 image overlap.
The camera-stage mapping also saves a calibration file that tells you how many steps correspond to one pixel of the image.
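To make that concrete, here is a rough back-of-the-envelope calculation. The numbers are assumptions for illustration, not OFM code:

```python
# Rough step-size calculation from the calibration's steps-per-pixel figure.
steps_per_pixel = 1.8   # assumed value from the camera-stage calibration file
image_width_px = 3280   # full-resolution image width
overlap = 1 / 3         # aim for at least 1/3 overlap between neighbours

field_of_view_steps = steps_per_pixel * image_width_px  # steps to cross one image
step_size = int(field_of_view_steps * (1 - overlap))    # steps between tile centres
print(step_size)  # -> 3936 steps in x for these example numbers
```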
If you need to focus through the thickness of a microscope slide (about 1 mm) then it is usually possible to manage with most x20 objectives, although they are not designed to give the best images through thick glass. There are x20 and x40 objectives that are specifically designed for this purpose, labelled 160/1.1 or (infinity)/1.1. For example, the CCIS® Plan achromatic objective LWD PL 40X/0.5 (WD=3.0mm) (there is also a 20x). I am sure someone posted a similar one from AmScope, but I cannot find it just now (edit: not AmScope; see Is this the correct specs for a LM objective? - #5 by heehaw). You will need the correct optics module for your microscope to match the standard (160/) or infinity-corrected objective.
In a scan the consecutive images are close together, so it should be able to find the focus when it moves to the next tile. If there is a large move between images - as in a raster scan going to the next line - then there can be issues; a snake or spiral scan never makes long moves. @JohemianKnapsody is making progress on predictive focus, based on the images in the current scan and/or the known stage geometry. I don't know how much of that is in the code linked here. We are at a change-over in the software from v2 to v3; some new features are not compatible with v2, but v3 is not ready for an alpha release.
Note that the scan size in xy is not obvious for a spiral scan. If you select spiral then the number of y-steps is ignored, and the number of x-steps is used as the number of spiral rings, I think. There is a thread on the forum about the pattern for spiral scans.
I used Hugin to put my first large scan together (around 70 images), but I really want to get this code running. My pip on Python 3.11 is unhappy for some reason, but I will get there.
I built an OFM to do soil microscopy, and slide scanning is an integral part of my project. Most stitching software relies on the images alone; it is like putting a puzzle together. Soil particles suspended in water with the occasional nematode or fungal hypha are not a simple puzzle to put together! This is why I like that your script uses the metadata!
I have a question about the z axis. Drops of water under the coverslip have depth (about 150 microns, if I remember correctly). If I were to set e.g. 5 steps of size 300, I would end up with 5 images of the same spot focused at different depths. How will openflexure-stitching deal with that scenario? In the example dataset on GitLab there is only one image per x-y combination…
@JohemianKnapsody this is really cool. How does this determine which parts are “interesting” or not?
I have a use case for this in my pet project for automated WBC differentials. I was wondering if this could be made interoperable with other software. I have a program right now that can start a report and, image by image, count and classify WBCs in each frame (at full resolution). It could work nicely with software like this if they can somehow sync up.
Thanks @tay10r, currently "interesting" is just based on how much it looks like the background. So before the scan, we take an image of the background, extract some parameters, and compare every future image against them to make sure it's not background. I have some other ideas on similar things which might also work for your use case.
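The gist is something like this minimal sketch, assuming a simple per-channel statistic; the actual parameters and filenames in the scan code may well differ:

```python
# Sketch of a "does this tile look like background?" test. The statistic and
# threshold are illustrative assumptions, not the exact scan-code logic.
import numpy as np
from PIL import Image

def channel_stats(path):
    img = np.asarray(Image.open(path), dtype=float)
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1))

bg_mean, bg_std = channel_stats("background.jpg")  # captured before the scan

def looks_like_background(path, tolerance=3.0):
    mean, _ = channel_stats(path)
    # Treat the tile as background if its channel means sit within a few
    # background standard deviations of the background's channel means.
    return bool(np.all(np.abs(mean - bg_mean) < tolerance * bg_std))
```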
Have you seen the work shared here by @Nico on WBC differentials?
@JohemianKnapsody I think so! Last I saw there was a YOLO model finding WBCs via bbox regression. My approach is slightly different, and aimed more towards an automated setting rather than an interactive one. I'm also including a semantic heuristic for estimating the foreground/background ratio - this helps determine where the edges of the smear are. I'm also working on some newer firmware for the Sangaboard to allow async control of each of the stages, so that you can do things like adjust the Z stage while the X-Y stages are moving.
@JohemianKnapsody Your code works like clockwork on images obtained directly from the OFM. But I have two small questions. Is there a limit on image size? E.g. if I wanted to stitch a hundred images at the original resolution (3280 × 2464 pixels), would your code be happy about it? I'm asking because I got this error once: ValueError: operands could not be broadcast together with shapes (4836,3243) (4858,3244)
However, this brings me to my second question - the images I was trying to stitch were not from the OFM but z-stacks from the focus-stack tool. After I resized the images to a third of their original size, I tried to run openflexure-stitch again and got a different error, although these three files were still created: OFMTileConfiguration.txt, openflexure_stitching_cache.json and stitching_inputs.png
Here is the error:
Starting with 0 possible peak quality thresholds and 0 possible stage discrepancy thresholds
After filtering and resampling 0 peak quality thresholds and 0 stage discrepancy thresholds will be trialed for fit quality.
Traceback (most recent call last):
File "/opt/anaconda3/envs/ofm/bin/openflexure-stitch", line 8, in <module>
sys.exit(load_tile_and_stitch_cli())
File "/opt/anaconda3/envs/ofm/lib/python3.9/site-packages/openflexure_stitching/__main__.py", line 62, in load_tile_and_stitch_cli
ofs.load_tile_and_stitch(
File "/opt/anaconda3/envs/ofm/lib/python3.9/site-packages/openflexure_stitching/pipeline.py", line 125, in load_tile_and_stitch
peak_qual_thresh, stage_discrep_thresh = determine_thresholds(
File "/opt/anaconda3/envs/ofm/lib/python3.9/site-packages/openflexure_stitching/pipeline.py", line 267, in determine_thresholds
peak_qual_thresh, stage_discrep_thresh = optimise_peak_and_discrepancy_thresholds(
File "/opt/anaconda3/envs/ofm/lib/python3.9/site-packages/openflexure_stitching/optimisation.py", line 323, in optimise_peak_and_discrepancy_thresholds
peak_qual_thresholds, stage_discrep_thresholds, rms_errors, n_pairs = test_values
ValueError: not enough values to unpack (expected 4, got 0)
Is it because focus-stack changed the images too much and there is now not enough overlap between them? Or did it remove some important metadata?
For the second question, I believe there is XY position data embedded in the EXIF of the images the OFM produces directly. If you process those images into new ones, that data will probably be missing. Without it, the stitching script won't have its starting-point image positions to refine.
For a quick fix, maybe you could copy the EXIF data from one original unstacked image per XY position to each stacked image.
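Something like this with piexif might do it (the filenames are hypothetical and I haven't tested this against focus-stack output):

```python
# Copy the full EXIF block (including the OFM stage-position metadata) from
# one original image per XY position into the corresponding stacked image.
# Filenames are hypothetical; pair them however your scan is organised.
import piexif

pairs = [
    ("scan/img_x0_y0_z2.jpg", "stacked/stack_x0_y0.jpg"),
    # ...one (original, stacked) pair per XY position
]

for original, stacked in pairs:
    piexif.transplant(original, stacked)  # rewrites `stacked` in place
```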