I was thinking that perhaps, to speed up the motorized scan, the system could autofocus at predetermined intervals rather than on every field… just a thought
There are certainly speed improvements possible from optimising focus during scans. I don’t think it would be wise to skip the focusing entirely, unless the magnification is low. However, currently there is a full autofocus at each step. This makes the scans robust, particularly in the raster-style scan.
There is a trade-off between fast focus procedures (a smaller focus range, predictive focusing, or skipping the unidirectional travel before focusing) and the reliability of every image actually being in focus.
Having to repeat a scan is very slow. This probably means that the scan optimisation depends a lot on the particular application.
I would assume that if the sample is very flat, there are two issues causing loss of focus: slide tilt, and the parasitic z-motion that comes from x-y motion.
If we had a reliable way of knowing where we were in the x-y plane, then it may be possible to move to the extrema of the scan to calculate slide tilt, and then scan quickly using feed-forward focus. This is very hypothetical, as it would require really good closed-loop control. But I think the steps to get there are:
- Good ways to “home” xy in the centre. Some great work on this from @siddons
- Closed loop xy motion using the camera as an encoder. I know @r.w.bowman and @JohemianKnapsody are working on this
- Closed loop z. This is harder, and I suppose it would be needed due to imperfect backlash compensation?
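To make the feed-forward idea concrete, here is a minimal sketch of the tilt-compensation step: autofocus at the extrema of the scan, fit a plane z = a·x + b·y + c to those measured heights, and then predict z at every scan position instead of refocusing. This assumes reliable x-y positioning (the whole point of the list above); the positions and numbers are purely illustrative.

```python
import numpy as np

def fit_focus_plane(points):
    """Least-squares fit of z = a*x + b*y + c to measured focus heights.

    points: iterable of (x, y, z) tuples, e.g. from autofocusing at the
    corners of the scan area. Returns the coefficients (a, b, c).
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def predict_z(coeffs, x, y):
    """Feed-forward focus height for a scan position (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c

# Illustrative slide tilted along x: z rises 10 units over 1000 steps
corners = [(0, 0, 100.0), (1000, 0, 110.0),
           (0, 1000, 100.0), (1000, 1000, 110.0)]
coeffs = fit_focus_plane(corners)
print(predict_z(coeffs, 500, 500))  # expected ≈ 105.0 at the scan midpoint
```

In practice you would send `predict_z(...)` to the z-stage before each capture; how well this works depends entirely on how repeatable the x-y positioning and backlash compensation are.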
I suppose experimenting with missing out the focus step on some scans is simple enough to implement, and may be of use in some applications. We would just need to make it clear that “here be dragons” until it is well tested.
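The "focus on some fields only" experiment could look something like the sketch below: run a full autofocus on every Nth field and just capture on the rest. The `FakeMicroscope` class is a stand-in, not a real OpenFlexure API; it only exists so the loop structure is runnable.

```python
class FakeMicroscope:
    """Placeholder for a real microscope client; counts autofocus calls."""
    def __init__(self):
        self.autofocus_calls = 0
    def move_xy(self, x, y):
        pass  # a real client would move the stage here
    def autofocus(self):
        self.autofocus_calls += 1
    def capture(self):
        return "image"

def scan_with_periodic_focus(microscope, positions, focus_every=5):
    """Scan a list of (x, y) positions, autofocusing only every Nth field."""
    images = []
    for i, (x, y) in enumerate(positions):
        microscope.move_xy(x, y)
        if i % focus_every == 0:  # full autofocus only on every Nth field
            microscope.autofocus()
        images.append(microscope.capture())
    return images
```

With `focus_every=5`, a 100-field scan would run 20 autofocus routines instead of 100, at the cost of the "here be dragons" risk that fields between focus stops drift out of focus.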
I think Julian’s covered more or less everything I was going to say, so thanks! To fill in some blanks:
Closed loop XY motion works, and is implemented for single moves already, but not extensively tested so it’s not in the graphical interface. I’d like to get this into the interface, and integrated with the stack and scan feature, in the near future.
Closed loop Z is, as you say, harder - Joe’s work on autofocus will (hopefully in the next few weeks) help us to speed up autofocus and make it more reliable, but true closed-loop Z motion will almost certainly require additional hardware.
I’m not convinced skipping focus altogether is a great plan, for all the reasons Julian just mentioned, but it should be possible to reduce the range when working with the “snake” or “spiral” scan patterns, which will save a lot of time. Also, it’s worth checking you are using the “fast” autofocus method - this is not only the fastest, it’s also often the best.
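The reduced-range idea relies on adjacent fields in a snake or spiral scan already being close to focus, so a narrow sweep around the current z should be enough. A minimal sketch, where `move_z` and `sharpness` are placeholder callables for the stage and an image sharpness metric (not the actual autofocus implementation):

```python
import numpy as np

def reduced_range_focus(move_z, sharpness, z_current, half_range=50, steps=11):
    """Coarse focus sweep over a narrow window around the current z.

    Assumes the previous field left the stage close to focus, so only
    z_current +/- half_range needs to be searched. Returns the z that
    maximised the sharpness metric, and moves the stage there.
    """
    zs = np.linspace(z_current - half_range, z_current + half_range, steps)
    scores = []
    for z in zs:
        move_z(z)
        scores.append(sharpness())
    best = zs[int(np.argmax(scores))]
    move_z(best)
    return best
```

A full-range autofocus might sweep hundreds of steps; shrinking `half_range` is where the time saving comes from, but it fails if the sample height changes by more than the window between neighbouring fields.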
If you want to experiment more with this, the easiest way to tweak the algorithm is to control the scan from a Python script or iPython notebook. @JohemianKnapsody, @samuelmcdermott or I should be able to help you set that up if you have a working familiarity with Python (or if you know a willing volunteer who does).