I have written a Python script for a time-lapse experiment. While testing it I found a problem with pictures collected at specific positions; it seems the microscope has some accuracy issues there. Let me explain with an example: the attached image sequence belongs to position X=24000, Y=12000. Using Fiji (ImageJ) I was expecting to see no movement in the background, but as you can see there is some movement, which means something is not going well. The movement is small, but enough to complicate analysis during real experiments (e.g. visualising protein synthesis over a certain period of time).
Could you help me solve this problem (if it is feasible)? Could it be a hardware or software issue?
I am using the OpenFlexure Microscope v6 (no Delta Stage)
Thank you so much!
How long is the timelapse, and what is the magnification? Is the microscope stationary at this position, or is it taking pictures at more than one different position at each time point?
The microscope is remarkably stable in position if it is left stationary, but it does creep a little. A one-piece 3D printed flexure translation stage for open-source microscopy: Review of Scientific Instruments: Vol 87, No 2 (scitation.org) describes an earlier version of the microscope, but it includes a measurement of the drift over a few days. Sometimes there is a more rapid drift in a ‘settling’ period just after the microscope has been moved to its fixed position. If you move the microscope and return to the same position there are other possible positioning errors, such as missed steps and backlash.
No system will be totally stable, but the movement here is relatively small and there are a couple of ways you could correct for it.

In timelapses that I have done I have needed to autofocus at each time step. That is the same type of drift, but in z rather than x-y, and the microscope has a built-in focus algorithm to get back to the right place. You could have an algorithm to correct in x-y by taking the correlation of the new image with the old one, which tells you where to move to, and then move the motors (the basic processes for this exist in the stage calibration routine).

Alternatively, in your image processing you could cut out a smaller region of interest from wherever it is in each image, again by an image matching/location process. That would be easier, as long as the movement is not too far across the whole image. It would not be quite as good if you want to see small changes, as there may be focus change, distortion or lighting change as your region of interest moves across the field of view. Moving the stage to keep your region of interest in the same place keeps all those things as similar as possible over time, but would take a bit more to implement.
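As a minimal sketch of the correlation idea (this is an illustration in plain NumPy, not the microscope's actual calibration code): phase correlation of two frames gives the pixel shift between them, which you could then convert to motor steps.

```python
import numpy as np

def xy_shift(reference, moved):
    """Estimate the (dy, dx) pixel shift that maps `reference` onto
    `moved`, using FFT phase correlation."""
    # Cross-power spectrum of the two frames
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    # The correlation peak sits at the shift between the two frames
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    size = np.array(corr.shape)
    # Peaks past the midpoint wrap around to negative shifts
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak  # in pixels; convert to motor steps with your stage calibration
```

This works best when most of the frame is static background; in practice you would low-pass filter or window the images first to reduce edge artefacts.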
About your questions:
- The time lapse was two hours.
- Magnification is the maximum provided by the Raspberry Pi camera (I am not sure of the specific number).
- The microscope was taking pictures at more than one position at each time point. To be specific, I made a loop to take pictures at three different positions over 2 hours; the attached image is from one of those three positions (the images from the other positions show the same ‘drift’).
Thank you so much for the feedback, I really appreciate this valuable information.
About your suggestions: I have already processed the images using a Fiji macro, but I didn’t get a good result. Could you suggest another tool that I could use? Also, could you share the code you use to correct x-y drift by correlation, or the calibration routine, so that I can check it?
Thank you so much!
The Raspberry Pi camera with the lens spacer gives a field of view similar to a 20x objective (see Comparison of a pi-camera and a 20x optical lens - Contributions - OpenFlexure Forum).
This means that the shift that you are seeing is quite large compared to the expected drift of the static stage from the paper that I linked earlier. However as you are also moving to different parts of the slide I would expect the repositioning to have larger changes than the static drift.
Image processing is not my core expertise, but what you need is basically to search for the position of a small image within a larger one. Fiji can probably do this. I am imagining that you centre the microscope field of view on the interesting features for the first set of pictures at the first time step. You then cut out the central portion of that first image as the part you are actually going to use. At the second time step you search the new picture for the area that looks most similar to the first cropped image, cut that area out to be the second image that you use, and so on. The details of the method depend on how much it moves, i.e. how tightly you need to crop to make sure the region is present in all images, and on whether there is enough in the image that stays the same throughout your timelapse for the matching to work. You may need deliberate markers that you can search for if your specimen is changing a lot.
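The "search for a small image within a larger one" step can be sketched in plain NumPy as a brute-force normalised cross-correlation search (for real frames you would more likely use Fiji's plugins or OpenCV's `cv2.matchTemplate`, but this shows the idea):

```python
import numpy as np

def find_template(frame, template):
    """Return the (row, col) top-left corner where `template` best
    matches inside `frame`, by normalised cross-correlation.
    Brute force: fine for small search areas, slow for big frames."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Once you have the best-match corner for each frame, cropping the same-sized region at that corner gives you a stabilised stack.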
Possibly this Fiji / ImageJ plug-in might work on the final timelapse frames: Image Stabilizer (imagej.net) or Register Virtual Stack Slices (imagej.net)
I see, now I understand the drift during experiments better.
I will try image processing with the tools that you shared with me.
As usual, @WilliamW has done most of the answer for me
That does look like more drift than I’d expect over two hours, though it’s worth asking whether it was at room temperature or in an incubator.
If you are moving the stage, it is worth making sure you always approach the position from the same direction - i.e. each time you move, deliberately go to the desired position minus 300 steps in X, Y, Z, then make a move of [300, 300, 300]. That ensures that any mechanical backlash is compensated for. This used to be built into the software, but in some situations it got annoying and confusing, so it’s now off by default.