As the software counterpart to v7 of the hardware, we are planning for the next major release. This won’t be as dramatic an overhaul as v7 of the microscope, but will have some substantial new features and may have breaking API changes. Currently, the plan is to focus on reliability and security:
Plugins will be much easier to install and manage, which should mean more functionality is easy to access.
The microscope configuration will be set at start-up by a configuration file, and clear error messages will tell you what’s wrong if it is not consistent with your hardware. This should eliminate the confusion that results from the microscope quietly starting up with an emulated stage, for example.
We’ll make provision for some level of authentication on the HTTP interface, which makes the microscope much safer to use on larger networks.
It would be good to hear from anyone with ideas on what should be addressed. This is a particularly good moment to think about changes that can’t easily be made backwards-compatible. So, if there are architectural changes you’d like to see or features that have been annoying you for ages, by all means post here. Our current thinking is that we’ll work on the features above and release an alpha as soon as we can, even if it’s not feature-complete. That way, we can get some feedback on the new features while we work on other things.
A less exciting but no less important point is that a number of API changes will probably be needed in order to tidy up the code - e.g. the arguments to the “scan” action will be reorganised, so they can be passed around the microscope server more cleanly. Refactoring the code to make it cleaner, more modular, more testable, and generally more reliable will probably end up being the bulk of the work, though I guess if we do it well, it will be hard for most people using the software to notice!
Late, but hopefully not too late. Taking my first steps here (server v2.10.1, Wi-Fi connected to the Raspberry Pi/OFM). A few thoughts and ideas from the point of view of a non-developer and non-expert in microscopy:
OF Connect Navigation:
It is not possible to cancel movements if, for example, you enter numbers that are too large (which happened to me by accident). I assume that once the data is sent to the motor board, there is not much one can do other than pull the plug.
Hence, could the calibration estimate the min/max of each axis’ screw travel? I mean, move x/y/z to the highest and lowest points (by visual inspection) and store those limits. Out-of-limit numbers could then be caught before being sent to the motor.
The arbitrary units do not mean anything to me. I have read a few threads about step/mm/rotation calculations. I would prefer to see real dimensions, or ideally to be able to select any of the three units in OF Connect.
Related: the ability to project a scale bar. I’m aware that there is no one-size-fits-all solution, but few people will have a gauge, so a real scaling factor could be entered, or estimated from the geometry (you know how many µm correspond to motor steps and rotations, which as far as I understand should be independent of the objective/camera in use).
Maybe something I missed: recording a video. It could be a performance hit, though. My workaround is recording the video stream on the client side (Win10).
Gallery:
It gets crowded very quickly. What about dragging and dropping the thumbnails to create folders on the backend?
General remark: I’m poking in the dark about where to fix the objective. The manual says close to the slide in one place and, elsewhere, not touching the slide. What is the recommendation? Again, something I may have missed in the (different) instructions. That said, the manual has much improved and the wiki style is great. It may be worth moving all the documentation there; this would also allow selecting between different versions (see the Blender manual or others).
For your first point, the stage calibration routine measures how many steps there are per pixel on the image, which makes the movement in steps more meaningful. It is not a distance, but at least it relates to something physical.
The steps per pixel from stage calibration, together with the stage geometry would give an uncalibrated scale estimate of pixels per mm on the image. This is potentially very useful and possible. Would you be able to put them into an Issue in the project Gitlab? Issues · OpenFlexure / openflexure-microscope-server · GitLab. A proper calibration routine would be better, but needs a calibration artefact. I have tried a blank CD-R, which is a reasonably common item with a reasonably well defined spacing. It is rather hard to image well enough unfortunately.
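To make that concrete, here is a rough back-of-the-envelope version of that estimate in Python. All the numbers are purely illustrative (not measured values), and the steps-per-mm figure would have to come from your own stage geometry:

# Uncalibrated scale estimate: combine the stage calibration (steps per pixel)
# with the stage geometry (steps per mm of travel). All numbers are illustrative.
steps_per_pixel = 0.8      # hypothetical result from the stage calibration routine
steps_per_mm = 16000       # hypothetical figure from leadscrew pitch, gearing and microstepping

pixels_per_mm = steps_per_mm / steps_per_pixel   # 20000 px/mm in this example
um_per_pixel = 1000.0 / pixels_per_mm            # 0.05 um per pixel

print(f"Estimated scale: {um_per_pixel:.3f} um per pixel")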
Thanks for these, all really useful bits of feedback, and many of them things I’d love to fix.
Cancelling movements is possible with the next generation of Sangaboard firmware (runs on the same Arduino-compatible hardware). Support for this in the Python code will definitely come at some point, and a nice button in the interface will follow. I agree it’s really useful to be able to abort a move before either you have to pull the plug, or you crash the objective into something!
Estimating and storing max/min positions is possible, but it is the sort of thing that goes wrong in annoying ways. It would be good to fix this, and perhaps adding endstops would be a solution - this one probably falls into the “yes we’d like to fix it but we’re not quite sure when” category. Software limits that you can manually set are probably the easiest fix, and that’s something that can happen on a more reasonable timescale. Probably one for 3.something rather than the initial v3 though.
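As a rough illustration of what manually-set software limits could look like (these names are hypothetical, not the actual server API), the check would simply reject a requested position outside a user-defined range before anything is sent to the motor board:

# Hypothetical sketch of user-set software limits; not the real server code.
SOFT_LIMITS = {"x": (-20000, 20000), "y": (-20000, 20000), "z": (-4000, 4000)}

def check_move(requested):
    """Raise before sending a move if any axis target is outside the user-set limits."""
    for axis, target in requested.items():
        low, high = SOFT_LIMITS[axis]
        if not (low <= target <= high):
            raise ValueError(f"{axis} target {target} is outside limits ({low}, {high})")

check_move({"x": 50000})   # raises ValueError instead of crashing the stage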
As @WilliamW says, we can already calibrate X/Y steps to pixels. Estimating the size of steps is possible, as is measuring the magnification. I’m slightly nervous about using calibration factors that aren’t verified by a measurement though; we’d need to think through carefully how we make it clear what the provenance of any reported distances is. That’s particularly an issue for any future medical usage, because there are whole sets of considerations that apply to anything that can produce numerical measurements.
Projecting a scale bar ought to be possible, if you’ve got a calibration artifact. It can be estimated even if you’ve not, but again it would be sensible to be very clear what calibration (or estimate) has been used.
There’s an extension to record video, it’s basic but should work. Recording the video stream on the client side is a very good option, especially if your client is a more powerful computer than the embedded Pi!
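For anyone who wants to try the client-side route, here is a minimal sketch using OpenCV on the client PC. The MJPEG stream URL is an assumption (check your server’s API docs for the actual path), and the frame rate and duration are illustrative:

# Record the microscope's MJPEG preview stream on the client side.
# The stream URL below is an assumption; adjust it to match your server.
import cv2

stream_url = "http://microscope.local:5000/api/v2/streams/mjpeg"
cap = cv2.VideoCapture(stream_url)

writer = None
for _ in range(300):                      # roughly 30 s at 10 fps (illustrative)
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("recording.avi", cv2.VideoWriter_fourcc(*"XVID"), 10.0, (w, h))
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()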
I would love to swap the built-in gallery for a more fully featured solution; I think Joel was very aware of its limitations when he put it together, and I think it would be a bad use of our resources to put too much more work into writing something that already exists elsewhere. Anyone who wants to help integrating a gallery solution would be very welcome…
The latest version of v7, which we hope will become the first beta, features a hard stop for the objective holder, so there is now no ambiguity about where to put it. I will be curious to see whether this is useful consistency, or whether it removes adjustability that was helpful.
Version selection for the documentation is something I’d like; again, it’s just a matter of finding the dev time to do it.
Sorry to revive such an old thread. Happy to surface in a separate one if preferred.
Cancelling movements is possible with the next generation of Sangaboard firmware (runs on the same Arduino-compatible hardware). Support for this in the Python code will definitely come at some point, and a nice button in the interface will follow. I agree it’s really useful to be able to abort a move before either you have to pull the plug, or you crash the objective into something!
This is something we’ve run into, especially for large scans. Calling m.scan(...) for something that might run for 10+ minutes typically leads to us just rebooting the entire device.
Projecting a scale bar ought to be possible, if you’ve got a calibration artifact. It can be estimated even if you’ve not, but again it would be sensible to be very clear what calibration (or estimate) has been used.
Any updated recommendations on this? We are fine with having a calibration artifact, though it would be nice to reuse existing infrastructure if there’s a function available. If it doesn’t exist in one of the software stacks, we can implement a workaround.
Calling m.scan(...) for something that might run for 10+ minutes typically leads to us just rebooting the entire device.
I think this is a slightly different issue as that will involve multiple moves. The firmware-related part is the ability to abort a single move commanded to the Sangaboard, that part can be used via the Stop Stage button in the Sangaboard extension, but I haven’t tested it with scans. An alternative to fully rebooting would probably be to restart the software, which can be done via the About tab in the web interface.
Useful update: in v3 of the software (which I’m working to get ready to release a bit more widely - currently there are a few packaging-related hacks) there is much, much better support for aborting actions. Specifically:
Stage moves can be aborted (by sending an HTTP DELETE request to the corresponding Action Invocation; see the sketch after this list).
Any Action that makes use of the stage may also be aborted, with no extra code. This works by raising an exception in the move call.
Most actions started in the GUI will now display an “abort” button while they are running. This includes moving the stage and running sample scans.
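Here is a rough sketch of what aborting a move over HTTP might look like. The endpoint path, the action name and the "href" field in the response are assumptions on my part - the authoritative schema is whatever http://microscope.local:5000/docs/ reports on your build:

# Sketch: start a long stage move, then cancel it by DELETE-ing the Action Invocation.
# The paths, the action name and the "href" field are assumptions; check /docs/ on your server.
from urllib.parse import urljoin
import requests

BASE = "http://microscope.local:5000"

# Start a (deliberately long) relative move; the server replies with an invocation record.
response = requests.post(urljoin(BASE, "/stage/move_relative"), json={"x": 10000, "y": 0, "z": 0})
invocation = response.json()

# Abort it by sending DELETE to the invocation's own URL.
requests.delete(urljoin(BASE, invocation["href"]))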
I appreciate this isn’t immediately helpful to anyone not running the developer build of v3. My top software priority is getting this to the stage where it may be shared more widely, and I’ll be sure to announce on the forum when that’s possible.
I’m wondering what the expectations are timewise on a v3 release.
It would be really nice to have something to play with, even in its current functional state, without needing the technical know-how to get the current developer build up and running. That way a wider group (myself included!) can give feedback etc.
There’s one build there from 2024, feel free to test it out (though I’m not 100% sure it’s working yet). I really hope I can at least get a prerelease out before Christmas, as I’m going on parental leave for a few months after that…
Gave the image a try. Of course there are some things missing and some quirks, but overall it looks really promising, and the new functionality like cancelling a movement is highly appreciated. Looking forward to the final release!
That was impressively quick! Thanks for confirming it works for you
My current priority is getting the code ready for testing with blood smears in Tanzania in a couple of weeks, then a big push on code quality, testing, and robustness. I’m hoping all the work we’ve put into LabThings-FastAPI over the last couple of years will make the v3 server codebase much more manageable, and hopefully enable more people to get involved. In particular, it will make swapping out the stage and camera much easier, and remove hard dependencies on Raspberry Pi-specific libraries (if you use an alternative camera), so there’s potential to use it a lot more widely.
For what it’s worth, the Python client code now lives in the Python package labthings-fastapi, and you can get a client object with:
from labthings_fastapi.client import ThingClient
stage = ThingClient.from_url("http://microscope.local:5000/stage/")
camera = ThingClient.from_url("http://microscope.local:5000/camera/")
Each unit of functionality is now considered a Thing, so you end up with one for the stage, one for the camera, one for autofocus, etc. etc. - the intention is to split up the API and make it a bit more manageable. Nice API docs are in the pipeline, but http://microscope.local:5000/docs/ will give a list of all the things you can do, and hopefully the mapping from entries there to the Python client objects is relatively simple…
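As a quick illustration of what that looks like in practice, actions on each Thing turn into methods on the client object. The method names below (move_relative, grab_jpeg) are guesses for illustration - the real names are whatever the /docs/ page lists for your build:

# Illustrative only: the action names here are guesses, check /docs/ for the real ones.
stage.move_relative(x=1000, y=0, z=0)   # actions on the "stage" Thing become methods
image_bytes = camera.grab_jpeg()        # likewise for the "camera" Thing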
@zealkarel the v3 software is still very developmental. At present it does not show clear errors when hardware fails to load. It is also tied to the latest version of the Sangaboard firmware on a Sangaboard v0.5. If you have no motor controller, or a different version, then the microscope server will not start, although the process will not terminate so it may appear that the server is running. Similarly, any issue with the camera will cause it to fail silently.
My Sangaboard firmware is not updated, so I have the same kind of symptoms on v3 that you have. I remain on v2.10.