Changed the objective to a different 100x; it’s in focus now. Video below.
I’m currently writing a consumer for the Moonraker API so the OFM server can talk to the board. I’m about halfway there - I think reporting position is done, with absolute/relative moves next. I might need to figure out how to calibrate distances from Klipper → OFM server. Once I have a proof of concept I’ll upload it to the Glia GitHub, and once I’m happy with the code I’ll create a PR to the Server GitLab.
Is that autofocus after each step? It looks as though the focus move has to be relatively slow - I assume so the video feed can keep up and show the focus.
There are issues with the autofocus - it kept driving until it either unscrewed the Z axis completely, or forced the stage together when I swapped the Z direction. Possible reasons: no stage mapping, or the fact that my client doesn’t support interrupted moves, since it’s just a REST client hitting endpoints at the moment. There might also be some issue with absolute positioning compared with how Sangaboard handles it. For Klipper I just used G91 and G90 for relative/absolute, but I saw in sangaboard.py that there is some sort of calculation done for every move. Will probably investigate more over the weekend or next week.
Code currently here:
It doesn’t handle HTTP statuses, let alone anything else, isn’t remotely ready for production, and doesn’t have auth. But if someone wants to mess around with it and make one for themselves, I included everything that’s needed - the printer.cfg, the OFM config, and the handling class. I ran this on Trixie full desktop and used KIAUH to set up Klipper + Moonraker + Fluidd (Fluidd isn’t necessary, but nice to have for debugging/movement if the OFM server doesn’t build). Used Fluidd to edit the relevant cfgs for the printer and Moonraker. Sadly I cannot run Crowsnest concurrently with the OFM software, since the stream only works for one frontend - the other one crashes.
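For anyone wanting a starting point, here is a minimal sketch of such a Moonraker REST client. The address and the two endpoints are Moonraker’s documented defaults; the function names are mine, not the actual Glia code:

```python
import json
import urllib.parse
import urllib.request

MOONRAKER = "http://localhost:7125"  # assumed default Moonraker address


def relative_move_script(dx=0.0, dy=0.0, dz=0.0, feedrate=300) -> str:
    """Build a relative move: G91 switches to relative positioning,
    G90 restores absolute mode afterwards."""
    return f"G91\nG1 X{dx} Y{dy} Z{dz} F{feedrate}\nG90"


def run_gcode(script: str, base: str = MOONRAKER) -> dict:
    """POST a G-code script to Moonraker's /printer/gcode/script endpoint."""
    url = base + "/printer/gcode/script?" + urllib.parse.urlencode({"script": script})
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def get_position(base: str = MOONRAKER) -> list:
    """Query the current G-code position (X, Y, Z, E) via /printer/objects/query."""
    url = base + "/printer/objects/query?" + urllib.parse.urlencode(
        {"gcode_move": "gcode_position"})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["result"]["status"]["gcode_move"]["gcode_position"]
```

No error handling or auth, same as described above - just enough to move the stage and read back where Klipper thinks it is.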
You have highlighted a little bug in the UI there!
The numbers do not add up - much less than one pixel per step should give very many steps per field of view, not 30. I shall raise an issue on the repository. It is probably a mistake in the wording describing the variable in the UI, but it might be a mistake in the calculation.
It might be because I don’t send steps to the microscope - I send position in mm. This is also something to consider: the cartesian printer setup allows me to add gearing ratios to the motors, along with steps per full rotation. If I knew the exact “gearing ratio” the microscope has built into the flexures, I could have it precisely move “3 micrometers to the left”, for instance - not counting backlash, which with the gearing is a small problem.
This is most likely also the reason why autofocus goes crazy - it moves 250 steps per slice, which then becomes 250 mm. The “30” to move a full field of view seems correct for X and Y, for instance (I set that as the manual step size in settings and it worked). I’m slowly removing all the references to the hardcoded step size/length, or rather changing them to ~30, which should be plenty there.
Also, the relative moves in the app act differently than I’d expect. When I pass a relative move of 30 in Z to Klipper, it correctly moves up the small amount. But in the Server, it seems to pass a huge value (as in - not really a relative move coordinate?). I still need to debug what the actual value is and what’s causing it.
Edit: now that I think of it, I had issues with moving it 0.1 at a time somewhere? I’ll need to check what that was, but maybe I’m getting cut off on decimal numbers, since the Server assumes steps instead.
Steps are always sent as integers from the server to the motor controller, I believe. I think internally the server keeps track of part steps to avoid cumulative rounding errors. It would be best to stick to something that can be an integer - multiplying your numbers by 1000 to work in microns instead of mm probably gives values that still make sense as integers.
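A minimal sketch of that idea - the class name and API here are mine for illustration, not the server’s actual implementation:

```python
def mm_to_microns(mm: float) -> int:
    """Convert mm to integer microns - a unit that survives being made integer."""
    return round(mm * 1000)


class StepAccumulator:
    """Carry the fractional remainder between moves so repeated small moves
    don't accumulate rounding error (the part-step tracking described above)."""

    def __init__(self, microns_per_step: float):
        self.microns_per_step = microns_per_step
        self.remainder = 0.0

    def steps_for(self, microns: float) -> int:
        """Integer steps for this move; the rounding leftover is kept
        and folded into the next move."""
        total = microns / self.microns_per_step + self.remainder
        steps = round(total)
        self.remainder = total - steps
        return steps
```

With, say, 3 µm per step, three successive 1 µm requests yield 0, 1 and 0 steps - one step for 3 µm total, instead of three rounded-to-zero moves.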
The calculation for microscope movement from motor movement comes from:
- Motor steps per revolution of the small gear.
- ×2 standard gearing of the motor gear to the actuator gear.
- 0.5 mm pitch of the M3 screw.
- 7/4 lever ratio of (horizontal movement of the stage top)/(vertical movement of the actuator nut).
For the z focus it is the same calculation, but with a lever ratio of 1.
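Plugging in numbers as a worked example - the 4096 half-steps per revolution for a 28BYJ-48-style motor is my assumption, not from the thread; the other factors are from the list above:

```python
MOTOR_STEPS_PER_REV = 4096   # assumed: 28BYJ-48 half-stepping (not from the post)
GEAR_RATIO = 2               # motor gear -> actuator gear
SCREW_PITCH_MM = 0.5         # M3 thread pitch
LEVER_RATIO_XY = 7 / 4       # stage movement / actuator nut movement
LEVER_RATIO_Z = 1            # z focus: same calculation, lever ratio 1

steps_per_actuator_rev = MOTOR_STEPS_PER_REV * GEAR_RATIO

# microns of stage travel per motor step
xy_um_per_step = SCREW_PITCH_MM / steps_per_actuator_rev * LEVER_RATIO_XY * 1000
z_um_per_step = SCREW_PITCH_MM / steps_per_actuator_rev * LEVER_RATIO_Z * 1000

print(f"XY: {xy_um_per_step:.3f} um/step, Z: {z_um_per_step:.3f} um/step")
```

Under those assumptions this gives roughly 0.107 µm per step in XY and 0.061 µm per step in Z - the kind of number you would feed into Klipper’s gearing configuration as a “rotation distance” equivalent.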
Starting to test scanning - encountering a few issues though. The main one: after ~20 pics taken, the Z stepper seems to get stuck. It starts clicking, the power supply switches to constant current - supplying 1 A at 7 V - and everything dies. Usually it runs at 24 V / 0.8 A and is fine. Restarting works, but I think maybe the driver is overheating (even though it shouldn’t? Apparently it can handle 2 A easily, with 2.8 A peak).
Second issue is server-software related - I have the steps for autofocus set to 200 right now, which is a reasonable range, but there’s a check that disables autofocus entirely, saying the range is too small. It autofocuses every so often though, so I don’t know what that is about.
I tried messing around with the gearing ratios. Setting too high a ratio (more steps) slows the stage down a lot, to 28BYJ speeds. I currently have it in an okay spot for resolution, but have encountered the autofocus issue above, and a fit issue in stage mapping - the backlash calculation has a check that throws out any error factor above 0.1 (my stage mapping usually sits around 0.9). Had to brute-force the stage mapping a bunch; it worked out in the end.
Edit: calibration matrix below
The z-steps minimum limit is hard-coded based on standard motor performance - in particular, it needs to be larger than any backlash. If you are starting from the alpha-4 from December, it is probably in more than one place. You will see in the repository that there is work to refactor scanning, focussing etc. into a more adaptable structure.
For focussing there is also a physical speed limit from the focusing method: the data size of frames in the mjpeg stream is used as the sharpness metric, and that stream is 30fps. So on z, your step size, step rate and gearing must be small enough that you take many frames through the depth of field of your lens.
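To put rough numbers on that constraint - the 1 µm depth of field for a 100x objective and the 5-frame minimum below are illustrative assumptions, not values from the server:

```python
STREAM_FPS = 30           # mjpeg stream rate used as the sharpness metric
DOF_UM = 1.0              # assumed depth of field for a 100x objective
MIN_FRAMES_IN_DOF = 5     # assumed minimum sharpness samples through focus

# Fastest z sweep that still captures MIN_FRAMES_IN_DOF frames
# while the sample passes through the depth of field:
max_z_speed_um_s = STREAM_FPS * DOF_UM / MIN_FRAMES_IN_DOF

print(f"max z speed ~ {max_z_speed_um_s:.1f} um/s")
```

Under those assumptions the sweep tops out at about 6 µm/s - however fast the gearing allows the axis to move, the 30 fps stream caps how fast focus can usefully be searched.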
For CSM there is a sequence of moves from very small to large. This accounts for the different fields of view at different magnifications, and makes sure that the step is not too large at high magnification, but also gets noticeable movement at low magnification. With your fast gearing you might need to bring all of these numbers down so that you don’t go too far on small moves.
Finally, for all of the processes relying on image correlations or sharpness, there are some built in settling times after moves to make sure that the stage is still. These are adjusted to be appropriate for the standard setup.
Are you still using 100x? The field of view and depth of field are both the smallest for the highest magnification, which will be hardest.
Yes, this is 100x. I want to get 100x working, since just as you said - it’s the hardest and slowest to scan the entire sample.
Regarding focusing - I’m actually going to speed it up; I had it going slower than it can. It’s the slowest part of the process now. I also get additional “settle” time from adding an M400 G-code command, which means “wait for moves to finish”. I had issues with stage mapping not going backwards, since the backwards command was sent before the previous move finished, especially when the RPi got hot. Now it works even if everything else is lagging like crazy. I also don’t think I’m close to getting Z too fast for settling. An additional benefit of using Klipper is that I can use input shaping to get rid of vibrations during start/stop. It is a guessing game though, since I don’t have an accelerometer on the stage. Maybe setting one up for calibration wouldn’t be a bad idea? I could brute-force it too, but it would be a long and annoying process - I have it in the ballpark right now, but getting it exact is an issue. It would help with getting Z moving faster.
I think direct drive is the way to go though - it should get rid of backlash entirely, since the biggest problem now is the two printed gears.
Edit: also, I’m specifically working around every issue by changing only the stage class and the printer.cfg config - nothing else in the Server - to get something that’s easy to swap in by changing the class, like it’s done with the HQ camera.
I let the scan run today. Got a 1.1 GB zip file out of it. The scan failed due to a timeout though, and stitching failed with a “key error”. Shame, since it was going well. Downloading the zip right now. Is it possible to stitch it somewhere else?
I can see that file in the folder and I can open it too.
Edit2: I’ve previously posted a second scan from my first HQ “traditional” microscope, scanning in the same environment (although with 1000 more images), at 60x. Links to the pretty graphs included here:
Not sure what is happening with your scan in the bottom left? As those are the input positions, I assume something has changed with the scan planner, or your stage isn’t going where the scan planner asked (but that should cause an error).
It sounds like you have quite a number of accumulated code changes on your branch; I’m not sure where it is branched from. We limited the autofocus dz range due to conflicts with the backlash compensation.
We should be generalising how we do backlash compensation and then tidying up some of the autofocus code in this dev cycle which may help.
I love the idea of having input shaping. I think an accelerometer would be needed for tuning. In z it will depend quite heavily on the mass of the objective, so a lightweight accelerometer chip, or some sort of non-contact detection, would be needed.
No idea what’s happening there. I can upload the scan if that’d help with debugging? I did get a “low voltage” warning on the RPi, which is weird since I’m running this off of a constant voltage power supply. Had those issues with power earlier though, maybe that’s part of it, or overheating.
I’ll check the file dates, but I’d bet the code is from ~20th of January. Do you think it’d be worth it to update?
This is a scan that failed with a timeout, so maybe the data is corrupted or something?
There seem to be very tiny 2 cm × 1 cm boards with accelerometers on board. Also saw some with a USB-C connection. I’ll check tomorrow.
If you have an accelerometer with a USB-C connection, the cable is likely to affect the motion a lot on the microscope. A USB connection would be useful if you have both the accelerometer and Pi in your case, but when the accelerometer is on the microscope stage and the Pi is not, you need very light and flexible wires connecting the two. Probably bringing the wires down from a clamp quite high above would be best for free movement.
Still getting scanning failures with “timed out” as the reason. It keeps scanning, then switches the view from direct-from-camera to a cropped view, throws a timeout message (without a long time between movements), and tries to stitch whatever is there.
I’ve modelled up a quick tray so I can screw the SKR Pico on top of the RPi and not have to rely on a pad that led to overheating, or it being a pain to move. File for the tray included. The microscope still doesn’t have a USB-C PD dummy board to draw the required power; I’m running it off a lab power supply. I’ll need to fix the thickness around the HDMI port, since I lost video on the touchscreen above. Not a big deal - I was running it off my PC anyway.
What’s hilarious is the addressable RGB on the SKR Pico lighting up the bottom of the microscope like a tacky JDM car filled with neons. The light can be turned off, though I don’t think it reaches the sample.
I don’t think it achieved much in three hours of work, seeing how much of it was already done 2 hours in (or even sooner?). I noticed with my past scans that it would bounce from one spot to another repeatedly, taking photos of the same spot multiple times. Yet again we see the lower left-hand corner being “weird”. I also set up a fan to blow at the RPi + Pico, so it’s not a thermal issue as I previously thought.
It is probably something to do with backlash, missed steps or some other factor that makes the predicted position from steps a poor indicator of the actual position.
If you are on the settings from the previous post, then the 0.2 overlap might be an issue.
You probably need to do some motion tests to see the repeatability of moves - not just backlash, but repeatability over small moves, over large moves, and possibly over a long time period (the motors might sag?). That might show up the issue.
If motion looks ok, then do some small scans, but look really carefully: what do you expect the moves to be, and what do they actually appear to be from the images?
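One way to script such a repeatability test through Klipper - a hypothetical helper following the G91/M400 approach discussed earlier in the thread:

```python
def repeatability_test_gcode(axis: str = "X", distance_mm: float = 1.0,
                             cycles: int = 20, feedrate: int = 300) -> str:
    """Generate G-code that moves out and back `cycles` times.
    M400 after each move makes Klipper finish it before accepting the next
    command, so lag can't merge or reorder the moves."""
    lines = ["G91"]  # relative positioning
    for _ in range(cycles):
        lines += [f"G1 {axis}{distance_mm} F{feedrate}", "M400",
                  f"G1 {axis}{-distance_mm} F{feedrate}", "M400"]
    lines.append("G90")  # restore absolute positioning
    return "\n".join(lines)
```

Run it at several distances, and again after the system has been powered for a while; if the image drifts between cycles even though the commanded positions return to the start, steps are being lost somewhere between the driver and the stage.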