As the software counterpart to v7 of the hardware, we are planning for the next major release. This won’t be as dramatic an overhaul as v7 of the microscope, but will have some substantial new features and may have breaking API changes. Currently, the plan is to focus on reliability and security:
Plugins will be much easier to install and manage, which should mean more functionality is easy to access.
The microscope configuration will be set at start-up by a configuration file, and clear error messages will tell you what’s wrong if it is not consistent with your hardware. This should eliminate the confusion that results from the microscope quietly starting up with an emulated stage, for example.
We’ll make provision for some level of authentication on the HTTP interface, which makes the microscope much safer to use on larger networks.
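To illustrate the start-up configuration idea, here is a minimal sketch of what checking a parsed config against detected hardware could look like. The keys, values, and error wording are purely illustrative assumptions, not the real OpenFlexure server API:

```python
def validate_config(config, detected_hardware):
    """Compare a parsed configuration dict against detected hardware.

    Returns a list of human-readable error messages; an empty list means OK.
    config: dict loaded from the (hypothetical) start-up configuration file.
    detected_hardware: dict of what was actually found at start-up.
    """
    errors = []
    # Refuse to fall back silently to an emulated stage.
    if config.get("stage") == "sangaboard" and not detected_hardware.get("sangaboard"):
        errors.append(
            "Config expects a Sangaboard stage, but none was detected. "
            "Refusing to start with a silently emulated stage."
        )
    if config.get("camera") not in ("picamera", "emulated"):
        errors.append(f"Unknown camera type: {config.get('camera')!r}")
    return errors

config = {"stage": "sangaboard", "camera": "picamera"}
detected = {"sangaboard": False}  # e.g. motor board unplugged
for problem in validate_config(config, detected):
    print("Configuration error:", problem)
```

The point is that a mismatch produces a clear, actionable message at start-up rather than a microscope that quietly runs in a degraded mode.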
It would be good to hear from anyone with ideas on what should be addressed. This is a particularly good moment to think about changes that can’t easily be made backwards-compatible. So, if there are architectural changes you’d like to see or features that have been annoying you for ages, by all means post here. Our current thinking is that we’ll work on the features above and release an alpha as soon as we can, even if it’s not feature-complete. That way, we can get some feedback on the new features while we work on other things.
A less exciting but no less important point is that a number of API changes will probably be needed in order to tidy up the code - e.g. the arguments to the “scan” action will be reorganised, so they can be passed around the microscope server more cleanly. Refactoring the code to make it cleaner, more modular, more testable, and generally more reliable will probably end up being the bulk of the work, though I guess if we do it well, it will be hard for most people using the software to notice!
Late, but hopefully not too late. I'm taking my first steps here (server v2.10.1, Wi-Fi connected to the Raspberry Pi/OFM). A few thoughts and ideas from the point of view of a non-developer and non-expert in microscopy:
OF Connect Navigation:
It is not possible to cancel movements, e.g. after entering too large a number by accident (this happened to me). I assume that once the data is sent to the motor board, there is not much one can do other than pull the plug.
Hence, could the calibration estimate the min/max travel of each axis' screw? I mean, move X/Y/Z to the highest and lowest points (by visual inspection) and store those limits. Out-of-limit numbers could then be caught before being sent to the motor.
The arbitrary units do not tell me anything. I have read a few threads about step/mm/rotation calculations. I would prefer to see real dimensions, or ideally to be able to select any of the three in OpenFlexure Connect.
Related: the ability to project a scale bar. I'm aware that there is no one-size-fits-all, and few people will have a calibration gauge, so a real scaling factor could be entered manually, or estimated from the geometry (you know how many µm correspond to motor steps and rotations, which as far as I understand should be independent of the objective/camera in use).
Maybe something I missed: video recording. It could be a performance hit, though. My workaround is to record the video stream on the client side (Win10).
The gallery gets crowded very quickly. What about dragging and dropping the thumbnails to create folders on the backend?
General remark: I'm poking in the dark about where to fix the objective. The manual says close to the slide in one place, and elsewhere, not touching the slide. What is the recommendation? Again, something I may have missed in the (different) instructions. That said, the manual has improved a lot, and the wiki style is great. It might be worth moving all the documentation there; this would also allow selecting between different versions (see the Blender manual, among others).
For your first point, the stage calibration routine measures how many steps there are per pixel on the image, which makes the movement in steps more meaningful. It is not a distance, but at least it relates to something physical.
The steps per pixel from stage calibration, together with the stage geometry, would give an uncalibrated estimate of the image scale in pixels per mm. This is potentially very useful and quite possible. Would you be able to put these ideas into an issue on the project GitLab (Issues · OpenFlexure / openflexure-microscope-server · GitLab)? A proper calibration routine would be better, but it needs a calibration artefact. I have tried a blank CD-R, which is a reasonably common item with a reasonably well-defined spacing, but unfortunately it is rather hard to image well enough.
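As a sketch of that uncalibrated estimate: the geometry gives an approximate travel per motor step, and multiplying by the measured steps per pixel gives a scale. All the numbers below are assumptions for illustration, not measured values for any particular microscope:

```python
def mm_per_step(screw_pitch_mm=0.5, steps_per_rev=4096, lever_reduction=6.25):
    """Estimate stage travel per motor step from the geometry.

    screw_pitch_mm: leadscrew travel per revolution (assumed M3 pitch, 0.5 mm)
    steps_per_rev: motor steps per output revolution (28BYJ-48-style, assumed)
    lever_reduction: flexure lever reduction ratio (assumed)
    """
    return screw_pitch_mm / steps_per_rev / lever_reduction

def mm_per_pixel(steps_per_pixel, step_size_mm):
    """Combine the calibration result (steps per pixel) with the geometry."""
    return steps_per_pixel * step_size_mm

step = mm_per_step()
scale = mm_per_pixel(steps_per_pixel=2.0, step_size_mm=step)
print(f"~{scale * 1000:.3f} um per pixel (uncalibrated estimate)")
```

The provenance matters, as discussed below: this is an estimate from nominal geometry, not a measurement, so anything displayed from it should be labelled accordingly.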
Thanks for these, all really useful bits of feedback, and many of them things I’d love to fix.
Cancelling movements is possible with the next generation of Sangaboard firmware (runs on the same Arduino-compatible hardware). Support for this in the Python code will definitely come at some point, and a nice button in the interface will follow. I agree it’s really useful to be able to abort a move before either you have to pull the plug, or you crash the objective into something!
Estimating and storing max/min positions is possible, but it is the sort of thing that goes wrong in annoying ways. It would be good to fix this, and perhaps adding endstops would be a solution, but this one probably falls into the "yes, we'd like to fix it, but we're not quite sure when" category. Software limits that you can set manually are probably the easiest fix, and that is something that could happen on a more reasonable timescale. Probably one for 3.something rather than the initial v3, though.
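A minimal sketch of what user-settable software limits could look like, checking a requested relative move before anything is sent to the motor board. The class and method names are hypothetical, not the real server's stage API:

```python
class SoftLimits:
    """User-settable travel limits for a three-axis stage, in motor steps."""

    def __init__(self, lower, upper):
        self.lower = lower  # (x, y, z) minimum allowed position
        self.upper = upper  # (x, y, z) maximum allowed position

    def check_move(self, position, move):
        """Reject a relative move whose target leaves the allowed volume.

        Returns the target position if the move is acceptable,
        raises ValueError otherwise.
        """
        target = tuple(p + m for p, m in zip(position, move))
        for axis, (t, lo, hi) in enumerate(zip(target, self.lower, self.upper)):
            if not lo <= t <= hi:
                raise ValueError(
                    f"Move rejected: axis {'xyz'[axis]} target {t} "
                    f"is outside limits [{lo}, {hi}]"
                )
        return target

limits = SoftLimits(lower=(-5000, -5000, -2000), upper=(5000, 5000, 2000))
print(limits.check_move((0, 0, 0), (100, 200, -50)))  # within limits
try:
    limits.check_move((0, 0, 1900), (0, 0, 500))  # would exceed the z limit
except ValueError as e:
    print(e)
```

Because the check happens before the command reaches the firmware, a typo in the move distance produces an error message rather than a stage crash.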
As @WilliamW says, we can already calibrate X/Y steps to pixels. Estimating the size of steps is possible, as is measuring the magnification. I’m slightly nervous about using calibration factors that aren’t verified by a measurement though; we’d need to think through carefully how we make it clear what the provenance of any reported distances is. That’s particularly an issue for any future medical usage, because there are whole sets of considerations that apply to anything that can produce numerical measurements.
Projecting a scale bar ought to be possible, if you’ve got a calibration artifact. It can be estimated even if you’ve not, but again it would be sensible to be very clear what calibration (or estimate) has been used.
There’s an extension to record video; it’s basic but should work. Recording the video stream on the client side is a very good option, especially if your client is a more powerful computer than the embedded Pi!
I would love to swap the built-in gallery for a more fully featured solution; I think Joel was very aware of its limitations when he put it together, and I think it would be a bad use of our resources to put too much more work into writing something that already exists elsewhere. Anyone who wants to help integrating a gallery solution would be very welcome…
The latest version of v7, which we hope will become the first beta, features a hard stop for the objective holder, so there is now no ambiguity about where to put it. I will be curious to see whether this is a useful consistency, or whether it removes adjustability that was helpful.
Version selection for the documentation is something I’d also like; again, it’s just a matter of finding the dev time to do it.