Phone Camera & Autofocus

I have at home a DIPLE smartphone-enabled microscope, and I have been comparing images taken with it against images taken with the Open Flexure.

Doing that, I was reminded of how fast the smartphone’s autofocus is, and I had a quick look on Quora at how it works.
I am now wondering:
Shouldn’t we try to use a smartphone camera setup, including some of the lenses and piezoelectric actuators, to dramatically increase the autofocus speed?

Smartphone cameras are made up of a few different parts: the sensor, the lenses, and the piezo actuators.

There are a few problems with using them that I see (bear in mind I am not an optics expert):

  • Hard to find phone camera assemblies on their own (not in a phone), and if you do they don’t normally come on a breakout board, so a custom board would be needed.
  • Phone camera development moves at such a pace that if we put dev work into a custom board for one, the assembly may become unavailable very soon.
  • The tightly integrated set of lenses and actuators is designed for the focal distances expected in a phone. In a microscope we are working in a very different regime when you consider focal length, depth of focus, magnification, etc.
    • For the low-cost option of just reversing and spacing the lens, the complex multi-lens system will not perform as expected.
    • For the RMS version we have been careful to consider each component in the optical path: a plan-corrected objective and an achromatic doublet. If we moved to trying to combine the objective with this complex lens system, how would this affect things such as the aberrations we are trying to minimise?

Also, I am not sure that the real delay is a lack of speed in the actuation. We have motors to move up and down. If I am not wrong, the biggest delay when tile scanning is the extra stationary measurements we make to ensure that we are in the correct focus. While the piezos may have less backlash, the whole analyse-and-move-until-you-regain-focus loop would be limited by data transfer, unless it could all be programmed on something more specialised than the Pi’s CPU.
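
To make that concrete, here is a minimal sketch of a contrast-based autofocus sweep (not the actual OpenFlexure implementation; the `stage` and `camera` objects and their methods are hypothetical placeholders). Each step has to pull a full frame off the camera before the next decision can be made, so the frame transfer, not the actuator, sets the pace of the loop:

```python
import time
import numpy as np
import cv2  # OpenCV, used here for a quick Laplacian sharpness metric

def sharpness(frame):
    """Focus metric: variance of the Laplacian (higher = sharper)."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(grey, cv2.CV_64F).var()

def autofocus(stage, camera, z_range=200, steps=21):
    """Hypothetical coarse autofocus: sweep z and keep the sharpest position."""
    positions = np.linspace(-z_range / 2, z_range / 2, steps)
    scores = []
    for z in positions:
        stage.move_z(z)              # fast, whether stepper or piezo
        time.sleep(0.05)             # settling time
        frame = camera.grab_frame()  # slow: frame transfer dominates the loop
        scores.append(sharpness(frame))
    best = positions[int(np.argmax(scores))]
    stage.move_z(best)
    return best
```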

@r.w.bowman and @JohemianKnapsody probably have more accurate information.

I think Julian is pretty much spot on.

“Remote focusing” (effectively treating the smartphone camera like an eye, and thus requiring an ocular in our optical train) is possible, but only over a fairly narrow range limited by aberrations and the range of the smartphone sensor.

Some lower-mag systems (like IOLight and Grundium) use something very similar to a smartphone lens as their objective, which means it’s small enough to actuate with a voice coil or piezo. However, for oil immersion it’s pretty hard to take that approach, hence our slower-moving objective.

The autofocus procedure takes a few seconds per position, which it would be nice to shave off. However, at the moment the slowest part is acquiring the images, because of (1) settling time and (2) transferring a high-quality image out of the GPU and onto disk. So it might help, but it wouldn’t be a 10x improvement. It would, however, require a pretty massive software, firmware, and electronics redesign that we don’t currently have the skills or resources for. If you do, and you fancy implementing a smartphone-style imaging unit (one that doesn’t depend on disassembling consumer electronic devices for which there’s no spec sheet), it would be very useful to lots of people!
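
If anyone wants to check where the time actually goes on their own setup, a rough per-position timing breakdown could be instrumented along these lines (the `stage`, `camera` and `save_image` names below are placeholders, not the real microscope API):

```python
import time

def timed(label, fn, timings):
    """Run fn() and record how long it took under `label`."""
    t0 = time.perf_counter()
    result = fn()
    timings[label] = time.perf_counter() - t0
    return result

def acquire_one_position(stage, camera, save_image, position, path):
    """Hypothetical per-tile acquisition with a timing breakdown."""
    timings = {}
    timed("move", lambda: stage.move_to(position), timings)
    timed("settle", lambda: time.sleep(0.5), timings)          # vibration settling
    image = timed("capture+transfer", camera.capture_full_res, timings)
    timed("save", lambda: save_image(image, path), timings)    # write to disk
    print({k: f"{v:.2f} s" for k, v in timings.items()})
    return image
```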

Thanks for the very clear answer.

So what I understand is that, to decrease the scanning time, we are mainly limited by settling time (mechanical stability) and image capture + transfer time.
To solve these, using a smartphone-camera-like setup would be nice because it can be fast (increased control over mechanical movement → decreased settling time), but it is not practical for high-magnification setups.
To decrease the acquisition time, I thus assume that adding some sort of movement damping might help with the settling time, and that using a higher-performance GPU device would help with the image acquisition (capture + save).

Am I going in the right direction here?

Ensuring everything’s mechanically stable and eliminating any resonances definitely helps to reduce settling time. Moving to a more powerful embedded computer could shave some time off image saving too, but that’s a fairly major step; the Raspberry Pi has a lot of advantages (especially around accessibility), so the improvement would have to be pretty significant to justify a platform change.

The “low hanging fruit” is probably parallelism; the actual image capture is quick, it’s the transfer of data that is slow. If we can move the stage while the data is transferring, we can potentially speed things up by a factor of 2, without increasing hardware complexity. Currently, we move the stage, wait for it to settle, take an image, wait for it to transfer, then move the stage again. If we started moving the stage immediately after taking the image, we could save a chunk of time - but there are some complications around knowing exactly when that happens.
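
As a rough sketch of that idea (with hypothetical `stage`/`camera` objects and method names, not the real server code), the slow transfer-and-save step could run in a background worker so the stage is already moving to the next tile while the previous one is still being written:

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_and_save(camera, path):
    """Pull the full-resolution data off the camera and write it to disk.

    Both steps are slow, but neither needs the stage to be stationary,
    so they can overlap with the next stage move.
    """
    image = camera.retrieve_full_res()   # placeholder: the slow data transfer
    image.save(path)                     # placeholder: the slow disk write

def scan(stage, camera, positions):
    """Hypothetical tile scan that overlaps stage motion with data transfer."""
    with ThreadPoolExecutor(max_workers=1) as worker:
        pending = None
        for i, pos in enumerate(positions):
            stage.move_to(pos)           # runs while the last tile is saving
            stage.wait_until_settled()   # placeholder settling step
            if pending is not None:
                pending.result()         # previous data must be off the sensor
            camera.trigger_capture()     # quick: just the exposure, stage still
            pending = worker.submit(transfer_and_save, camera,
                                     f"tile_{i:04d}.jpg")
        if pending is not None:
            pending.result()             # flush the final tile
```

The complication mentioned above shows up as the `pending.result()` call before re-arming the camera: you need to know the previous tile’s data is safely off the sensor before taking the next exposure.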