Fast z-stack recording

Is there a way to perform a fast z-stack (similar to autofocus, so no stops in between) but retrieve the images and relative z-steps?

@JohemianKnapsody will know better than me. But I believe that the fast autofocus is just using the MJPEG video stream, and an estimate of the step position based on time. So I don’t think you would get the best quality z-stack. I know Joe is working on a much faster stack and scan that doesn’t autofocus before z-stacks.

Hey @j.stirling, thanks for your answer. I am not that concerned about image quality, but about quickly recording data at different focus positions. So I would be really curious about a good way to jump into the code and write the MJPEG images and estimated z-steps to the file system for later use :slight_smile:

@JohemianKnapsody How easy is it to do this?

Writing to file is a lot of the slow part, particularly on a Pi.

The stream is available to remote clients, where saving could be a lot quicker, provided the web stream does not drop frames. For the z-position in steps you would need to estimate it by tagging the start time of the move. I believe everything runs at 1000 steps per second. However, do you want absolute step position, or something more like position relative to focus? The most focussed image you can determine from your recording; the number of steps away from that position can then be estimated from the timestamps.
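
To make that concrete, here is a minimal sketch of the timestamp arithmetic, assuming a constant stage speed of 1000 steps per second and that you have logged the wall-clock time at which the move started (the function and variable names here are just placeholders, not part of the server or client API):

```python
STEPS_PER_SECOND = 1000  # assumed default stage speed


def estimate_z_steps(frame_times, move_start_time, steps_per_second=STEPS_PER_SECOND):
    """Estimate each frame's z offset (in steps) relative to the start of the move."""
    return [(t - move_start_time) * steps_per_second for t in frame_times]


def steps_from_focus(frame_times, move_start_time, sharpest_index):
    """Re-express the offsets relative to the sharpest frame (found afterwards,
    e.g. with a sharpness metric on the decoded images)."""
    offsets = estimate_z_steps(frame_times, move_start_time)
    return [z - offsets[sharpest_index] for z in offsets]
```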

Yes, the autofocus estimates z based on timestamps; you should be able to see the code in the "autofocus" extension. I think I added code to the python client that captures from the image stream, but it's not terribly well documented. That might be the easiest place to try.

V3 uses the newer camera library and does a better job of capturing to RAM, but doing what you describe nicely would need an extension to the server code. Running on the server is mostly helpful because it makes it easy to get consistent times between the stage and camera. If you don’t mind a few milliseconds of error (equating to a few steps, as William says it’s 1000 steps per second by default), making the moves from a python script and capturing from the stream is probably the quickest way to do this.
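
As a rough illustration of that client-side approach, here is a sketch that saves raw JPEG frames and their arrival times from the MJPEG stream while a move runs. The stream URL and the way the move is triggered are assumptions on my part; check your server's API documentation for the actual endpoints:

```python
import time

import requests

# Assumed stream URL; adjust host/port/path for your server version.
STREAM_URL = "http://microscope.local:5000/api/v2/streams/mjpeg"


def record_stream(url, duration_s, out_prefix="frame"):
    """Save each JPEG frame from an MJPEG stream to disk and return arrival times."""
    timestamps = []
    buf = b""
    start = time.time()
    with requests.get(url, stream=True, timeout=5) as r:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=4096):
            buf += chunk
            # A JPEG frame starts with the 0xFFD8 marker and ends with 0xFFD9.
            soi = buf.find(b"\xff\xd8")
            if soi != -1:
                eoi = buf.find(b"\xff\xd9", soi + 2)
                if eoi != -1:
                    timestamps.append(time.time())
                    with open(f"{out_prefix}_{len(timestamps):04d}.jpg", "wb") as f:
                        f.write(buf[soi:eoi + 2])
                    buf = buf[eoi + 2:]
            if time.time() - start > duration_s:
                break
    return timestamps


# Usage sketch: note the move start time, trigger the z move (via the python
# client or the REST API - not shown here), then record for the move duration.
# move_start_time = time.time()
# start_z_move(...)  # hypothetical: however you trigger the move
# frame_times = record_stream(STREAM_URL, duration_s=2.0)
```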

R

Is using V3 something that you recommend? It's not yet the recommended release, is it?

So for on-server changes, I could adapt the focus_rel function and then retrieve self.camera.stream.frames to access the image data? How would you convert these frames into an RGB numpy array?

The data in the stream is only ever in JPEG format (MJPEG is just a sequence of JPEG images, and it comes out of the GPU already compressed). So you'd need to load each frame using Pillow or similar and convert it. You're not likely to be able to do that in real time on the Pi; I'd recommend saving the JPEGs and re-inflating them afterwards.
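
For the conversion step, a minimal sketch using Pillow and numpy, assuming each frame is raw JPEG bytes (e.g. one of the files saved earlier):

```python
import io

import numpy as np
from PIL import Image


def jpeg_bytes_to_rgb_array(jpeg_bytes):
    """Decode a single JPEG frame into an (H, W, 3) uint8 RGB numpy array."""
    image = Image.open(io.BytesIO(jpeg_bytes))
    return np.asarray(image.convert("RGB"))


# Example: re-inflate a frame saved earlier
# with open("frame_0001.jpg", "rb") as f:
#     rgb = jpeg_bytes_to_rgb_array(f.read())
```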

V3 is still very much a development image, but it's reaching the stage where some early users/testers/guinea pigs would be very useful!

Thanks, then I will get into the recording there and try to export the JPEG stream :slight_smile:
