Automated Slide Scanning and Tiling

Hi @ffesti. It was just over 400 pictures. I have timed the cycle, and the movement to a new position plus the fast autofocus take about 15-16 seconds. But if I can speed this up I would be delighted! My wifi at home is awful; it is only a 5 GHz network and many devices are connected. Do you think this might impact the speed?

Ideally I would like to be completely independent of wifi. I even have a separate touch display with mini-HDMI and two USB ports for power, but it keeps switching off after a few minutes and/or becomes unresponsive. That is probably a topic for a different thread altogether.

A lot of autofocus algorithms work by finding a local optimum of a function that scores how in-focus the image is (often based on the amount of sharp edge content in the image). This requires evaluating the focus score at several positions along the Z axis, which takes time. It has to happen at every tile, because shifting in X or Y can slightly change the distance between the objective lens and the slide.
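The sweep-and-score idea described above can be sketched in a few lines. The Laplacian-variance sharpness metric and the `fake_capture` stage below are illustrative stand-ins, not the actual OpenFlexure implementation:

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Score focus as the variance of a Laplacian edge filter.
    Sharper images have stronger edges, so the score peaks at focus."""
    lap = (-4 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    return float(lap.var())

def coarse_autofocus(capture, z_positions):
    """Move through candidate Z positions, score each captured image,
    and return the best-scoring Z (hill-climbing variants stop early)."""
    scores = [sharpness(capture(z)) for z in z_positions]
    return z_positions[int(np.argmax(scores))]

# Demo with a synthetic "microscope": blur grows with distance from z=120.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def fake_capture(z, focus_z=120):
    blur = abs(z - focus_z) // 40 + 1   # heavier smoothing further from focus
    k = np.ones(blur) / blur
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

best_z = coarse_autofocus(fake_capture, z_positions=range(0, 260, 40))
print(best_z)  # 120
```

Evaluating the score at every candidate Z is exactly why this is slow on real hardware: each evaluation costs a physical move plus a capture.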

I feel like it’s worth mentioning that I’m working on an ML model to predict the exact (signed) number of steps needed to bring the image into focus, from the current image. This would reduce the time it takes to focus to around 1-2 seconds, since you would no longer have to search for a local minimum. I have a post about it, that I update every now and then, on the ML section of this website if anyone is interested.


The OpenFlexure autofocus works using the JPEG compression that is already there in the live preview stream. Scan Z over some range at full speed and look for the biggest JPEG frame, then work out where that was using the frame rate and step speed. The standard parameters mean moving a lot of steps, to cover a good Z range and avoid backlash, and at only 1000 steps per second that takes a few seconds. The processing part is basically ‘free’. See Fast, high-precision autofocus on a motorised microscope: Automating blood sample imaging on the OpenFlexure Microscope (the smart-stack in that paper is not implemented in the microscope server v2).
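A minimal sketch of that position reconstruction, assuming a hypothetical log of frame sizes from a constant-speed sweep (the function name and parameters below are mine, not the server's API):

```python
def jpeg_size_autofocus(frame_sizes, z_start, steps_per_second, fps):
    """Estimate the in-focus Z from JPEG frame sizes recorded during a
    constant-speed sweep. Sharper frames compress worse, so the largest
    frame marks the focal plane; its index converts to a Z position via
    the known step rate and camera frame rate."""
    best_index = max(range(len(frame_sizes)), key=lambda i: frame_sizes[i])
    steps_per_frame = steps_per_second / fps
    return z_start + round(best_index * steps_per_frame)

# Hypothetical sweep: 1000 steps/s, 30 fps preview, frame sizes in bytes.
sizes = [41000, 43500, 52000, 61000, 57000, 44000, 42000]
z = jpeg_size_autofocus(sizes, z_start=-1500, steps_per_second=1000, fps=30)
print(z)  # -1400
```

The appeal is that the frame sizes are already computed by the camera's JPEG encoder, so the only real cost is the mechanical sweep itself.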

In the v3 software development there are a lot of improvements in scanning speed being tested: minimising focusing time, minimising the number of waits to settle in position, minimising capture time etc. This is not yet even at pre-alpha release stage, but it can be a lot quicker. It still needs to be tested for reliability on different types of sample.


OK, with around 15 seconds per picture this is more believable. And I can imagine there is quite a bit that can be shaved off.

I wonder if the XY axes are geared down too much. 8000 steps is a lot for the field of view, even if we assume an objective with 10 times the magnification. 80 steps for a 100x objective would still be enough. Of course the delta can’t afford such luxuries as having XY at a different gear reduction.

Edit: Right now there is a 2:1 reduction on the 3D printed gears. Maybe we should swap the gears around for the X and Y axes, reducing the overall gearing by a factor of 4 (2:1 becomes 1:2). This should still give about 200 steps for the smallest field of view (100x objective).
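The arithmetic works out as follows; note the 10x baseline magnification for the 8000-step figure is my assumption, used only for illustration:

```python
def steps_per_fov(base_steps, base_magnification, magnification,
                  gear_ratio, base_ratio=2.0):
    """Steps of XY travel per camera field of view, scaled from a
    measured baseline. Higher magnification shrinks the field of view;
    a smaller gear reduction shrinks the step count proportionally."""
    fov_scale = base_magnification / magnification
    gearing_scale = gear_ratio / base_ratio
    return base_steps * fov_scale * gearing_scale

# Baseline from the thread: ~8000 steps per FOV at the current 2:1
# reduction (assumed to be measured with a 10x objective).
print(steps_per_fov(8000, 10, 100, gear_ratio=2.0))  # 800.0 at current gearing
print(steps_per_fov(8000, 10, 100, gear_ratio=0.5))  # 200.0 with gears swapped
```

So swapping the 2:1 reduction to 1:2 divides the step count by four, which is where the "about 200 steps" figure comes from.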

We have been testing this on and off for a while. Faster motors with less gearing? - #5 has some thoughts, and there is a (somewhat old) Merge Request with the physical gears. The gears will not physically fit between the motor lugs for the reversed 1:2 ratio, but you can get from 2:1 to 1:1.25.

Faster gearing on the z-axis is a problem for autofocus, but @JohemianKnapsody has been using faster xy successfully. I don’t actually know which ratio.

I’ve been using 1:1.25 with a lot of success. I still wouldn’t recommend them for 100x, where the backlash can become a significant fraction of the field of view, and for relatively little speed improvement.

If anything, in Z I’d say we could afford to move the other way: overshooting during autofocus by any amount can really affect higher-power imaging.


Sigh. So many ideas, so little time…

Really need to do a prototype for my Backlash free drive train idea. One could probably make a version that can be bolted onto the motor mount to be used with a standard OFM.


I’m considering building a belt-driven miniature corexy stage. I have no idea if it’ll have acceptable precision, but the idea of avoiding all gearing seems attractive. I might make one out of Lego as a mock-up.

The Enderscope uses a full-size 3D printer with a 4x objective in the optics module, so a mini corexy should work for XY positioning.
I would probably keep the flexure system for Z focusing. Focus needs precision, and in any case focusing speed is largely limited by the camera preview frame rate.
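As a rough back-of-envelope for why the preview frame rate caps focusing speed (all numbers illustrative, based on the 1000 steps/s figure mentioned earlier in the thread):

```python
def sweep_time_seconds(z_range_steps, steps_per_second):
    """Time for one constant-speed focus sweep over the given Z range."""
    return z_range_steps / steps_per_second

def z_resolution_steps(steps_per_second, fps):
    """Smallest Z increment distinguishable during the sweep: one
    preview frame's worth of travel. A faster sweep or a slower camera
    means coarser focus sampling."""
    return steps_per_second / fps

# Illustrative: a 4000-step sweep at 1000 steps/s with a 30 fps preview.
print(sweep_time_seconds(4000, 1000))  # 4.0 s per sweep
print(z_resolution_steps(1000, 30))    # ~33 steps of travel per scored frame
```

Sweeping faster shortens the sweep but spaces the scored frames further apart in Z, so beyond some point the camera frame rate, not the mechanics, sets the limit.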