It has been a long time since the last server release, and it will be a while longer before we have a stable, full-featured release. That said, we are delighted to announce this alpha release.
This release is mostly focused on sample scanning. Some other features may be incomplete or missing.
The scanning interface is significantly improved, with automatic path planning and background detect
Stitching runs automatically on the Raspberry Pi
There is a live preview of the stitched image (appearing after 4-5 images)
A built-in viewer for complete scans
A dedicated shutdown tab, so shutdown isn’t hidden in settings
Currently the only SD card version is the “lite” image, so it must be controlled over the network.
Currently there is no image gallery. We would love feedback on whether this needs to be reimplemented.
Currently extensions don't work, as the underlying framework has changed.
The option to scan at higher resolution is gone (but will return very soon!)
There are many more changes, but most of them are very low-level: an updated OS, a move to the new PiCamera software stack, and a new underlying server framework.
Getting started with this alpha release
Burn the SD card as normal with Raspberry Pi Imager
Connect your microscope to the network (either by configuring WiFi in Imager, or via Ethernet)
Follow the first calibration wizard
Take the sample out and run background detect. The scan will fail with default settings if you skip this step; we are going to add it to the calibration.
From the scan tab, start a scan and wait for it to capture a few images; the live preview should appear.
Once you are happy, cancel the scan and let it perform a final stitch of the images.
Open the "Scan List" tab, and you should see your scan with a thumbnail.
Select "Show Stitched Scan" to get an interactive, zoomable view of your scan.
Feedback
As this is an alpha pre-release, we expect there to be some issues. Let us know what you think, the good and the bad.
Are there things you really like?
Are there things you miss from v2?
Are there bugs?
Are sections of the UI confusing?
Let us know here, and we will triage reports into issues on GitLab (or you can open issues directly on GitLab, but please search to see whether an issue has already been created)
Hi,
I managed to make it work using a Nano motor board. I'm using a Windows machine and managed to write the hex file using AVRDUDESS. Here's how it works:
First, plug your Nano into the computer using a USB cable, then open Device Manager and find its port.
Then open AVRDUDESS: set the programmer to Arduino (green highlight), select Arduino Nano from the presets (blue highlight), choose the COM port you found in Device Manager (red highlight), set the baud rate to 115200 (yellow highlight), select the nano.hex file (purple highlight), and hit Program.
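If you prefer the command line, the same flash can be done with plain avrdude (which AVRDUDESS wraps). The COM port and hex path below are examples only; substitute the port you found in Device Manager and the path to your nano.hex.

```shell
# Flash nano.hex to an Arduino Nano over its serial bootloader.
# COM3 is an example port; Nanos with the old bootloader may need
# -b 57600 instead of 115200.
avrdude -c arduino -p atmega328p -P COM3 -b 115200 -U flash:w:nano.hex:i
```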
Apologies, that was somehow a bug on YouTube, I’ve never had this happen before. I’ve reuploaded the video.
But it might also be that I did something wrong. I just took the image file and flashed it onto the SD card using the Raspberry Pi Flasher, then put it back into my Raspberry Pi.
And I have another question: is this yellow tint normal? Because before, with the previous version, everything looked very white on my setup.
So the problem you have is not finding images? Is this the same if you ask for more images, and does the scan path look sensible - moving along the hair?
There are a number of errors in that log, so it could be a deeper problem.
I think the scan is tilted. I suspected that the geometry of the stage was wrong and tried to change between delta and cartesian, but couldn't change it from the settings.
Is there a way to correct this?
I just wanted to test it again, so I turned my microscope back on and noticed that my configuration, specifically the full auto calibration, is not being saved. It runs every time I start the microscope, so maybe that’s related?
Anyway, I tried the slide scan again, because as shown in the video it didn't work at all before: after one image it stopped and gave an error, and it didn't take multiple images. But now, all of a sudden, it's working again.
I think I need to run another test and completely reinstall the firmware and then check again to see if it behaves the same way.
But the problem that my configuration is not being saved still remains. Every time I restart now, I have to run the auto calibration again.
This is probably correct. The cartesian axes of the stage are in the direction of the motors, at 45° to the front/back axis of the microscope. If your slide was positioned straight with respect to the sample clips, it is at 45° to the camera and the xy axes of the stage.
This is a hangover from the purely manual early OpenFlexure Microscopes, where it is more logical to have the axes of the knobs the same as the axes of the camera. We are considering other options in the hardware, but that would be after v7.0.0
It does look like it is being partially saved, because you don't have the pink/green effect of a missing lens shading table. But the white balance looks off; maybe remove the sample and run calibration again, or just re-run the white balance.
Do you have a screenshot of what is saying the calibration isn’t saved? If this happens again are you able to download the log file and upload it here?
Thanks for flagging this @SphaeroX, can confirm it’s nothing wrong with your firmware, there’s a problem with our settings which we’re aiming to fix before beta version 2.
Does clicking the Save All Settings button in the settings tab fix the problem for now? We’re not planning on that being the permanent fix!
Unfortunately, the Save All Settings button does not work either. What I also noticed is that as soon as I open the interface, my browser shows four errors. I have attached a screenshot for you.
In the screenshot, you can only see the labels for the Sample ID x,y,z — so this is not directly related to the actual issue.
I have one more question. Can I already use a Raspberry Pi Camera Module 3 (the 12-megapixel version) with this alpha version? If so, I might order one and try mounting it.
Hi, new user here…
I am currently using v3 alpha, and the slide-options in the GUI are great.
What I find missing is a download button for just the stitched hi-res version of the image. Especially when operating over WiFi, downloading a multi-gigabyte archive (per slide) gets old pretty fast…
Thanks @zeus, we’ve made an issue on Gitlab to fix this. I’ll let you know when we make the change, and we’ll welcome any feedback on whether the software makes it obvious how to choose what to download!
Will z-scanning be available again in the future? And what will "pyramidal TIFFs" actually do? Or is it the same thing? As far as I understand, these are for different resolutions only, not different z-heights.
If the path planning detects fully enclosed empty areas, it would be cool to have the option to fill them somehow: with low-res scans, AI-generated background, or just by rescanning them properly. For sharing scans it would be much nicer if the scan had no holes; most likely you would close the holes yourself in GIMP/Photoshop afterwards with the same strategy anyway, if you intend to share the scans. I would prefer proper rescanning of missing but fully enclosed tiles, as this would also cover non-background tiles that are sometimes missing without any obvious reason. Some "try to rescan missing tiles" option before the final stitching may also be desirable. Just for aesthetic reasons: the background border around the sample should be bigger (by one tile). It is unavoidable anyway, so in my opinion it should be bigger, because there is often more content to be scanned if you just look one tile further outward. Edit: I somehow overlooked the option "Detect and Skip Empty Fields", but I have the feeling it is not quite the same…
The ability to combine/stitch multiple scans together, if the sample is somewhat bigger.
Better hints as to how motor steps translate into actually scanned area. When a slide only gets scanned partway, I have no idea whether the motion range of the flexure has been exceeded or whether it just stops because of the configuration set when starting the scan. If you want to limit the scanning range, you usually want to do that in mm. What is the default scannable area of a v7 high-res build, anyway? I did not find any information regarding this…
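A simple conversion helper would give a rough feel for how steps map to distance. The steps-per-mm value below is purely illustrative (it depends on the build and motors), so it would need to be calibrated against a stage micrometer rather than trusted as-is:

```python
STEPS_PER_MM = 2000  # illustrative only -- measure this for your own stage

def steps_to_mm(steps, steps_per_mm=STEPS_PER_MM):
    """Convert a motor-step count to millimetres of stage travel."""
    return steps / steps_per_mm

def mm_to_steps(mm, steps_per_mm=STEPS_PER_MM):
    """Convert a desired travel in millimetres to a motor-step count."""
    return round(mm * steps_per_mm)

# With the illustrative value above, a 1000-step x/y jog is 0.5 mm:
print(steps_to_mm(1000))  # 0.5
```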
The configuration/starting conditions of the scan should be saved as metadata with the scan (which already seems to be the case with scan_data.json), so that it is easily reproducible and can be shown as meta-information in the gallery. Maybe even a "start a new scan with these parameters" option would be nice, especially when different scans should have the same "format".
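Since scan_data.json already records the settings, reusing them could be as simple as reading the file back. The key names below ("grid", "step_size") are hypothetical placeholders; an actual scan_data.json from the scope should be inspected for the real field names:

```python
import json

def load_scan_settings(path):
    """Read saved scan settings back from a scan_data.json file.
    The keys used here ("grid", "step_size") are hypothetical --
    check a real file to see what the server actually writes."""
    with open(path) as f:
        data = json.load(f)
    return {
        "grid": data.get("grid"),
        "step_size": data.get("step_size"),
    }
```

A "repeat this scan" button could then feed the returned values straight back into a new scan request.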
Some arrow buttons in the navigate pane that move the flexure by a fixed amount (like 1k for x/y and 100 for z, or even fully configurable) would be helpful, to pan around manually without having to enter numerical values. Think of something like the manual-movement UI of the 3D printer host software "Pronterface" (even though this is a very dated piece of code…).
What does "Images in Stack to Capture" do? Is it the resolution steps of the pyramidal TIFF?
How does tolerance in background detect work?
regarding image gallery:
It would be very nice to have an image gallery again, even though I found the one in the stable release very buggy (it managed to put different scans with the same filename into the same archive, which virtually no filesystem supports, and very unexpected things happen on extraction depending on the extraction tool, filesystem and operating system…).
Maybe the approach that PrusaSlicer took, where every option has an explanatory tooltip (and/or a link to the respective documentation), may be feasible here too. There are just many things that don't map to real things in my head, and most of them could be addressed very easily that way. I just finished my build a few days ago, but there are some spots where I feel like I have a powerful tool and am just missing the right page of the manual to unlock much more potential…
And one more general thing (not only tied to this release):
I think the output folder for all scans should be exposed as a Samba share. This would improve usability on all OSes drastically, and you could easily build some cool features on top of it (think of nightly active-pull backup solutions for a whole fleet of scopes, or a master node that collects scans from the scopes so that everything is available in one place in a single gallery on an additional Pi or mini-PC that just runs OpenFlexure as a "viewer" for collected scans). Many devices have limited functionality but can automatically read from or write to Samba shares because it is easy to implement (like multi-function printers/scanners and such).
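A minimal smb.conf share for the scans folder might look like the fragment below. The path and user name are assumptions, not the scope's actual defaults; they would need to be adjusted to wherever the server really stores its scans.

```ini
; Hypothetical Samba share for OpenFlexure scan output.
; Path and user are examples -- adjust for your install.
[openflexure-scans]
   path = /var/openflexure/data
   browseable = yes
   read only = yes
   guest ok = no
   valid users = pi
```

Keeping the share read-only would let backup jobs and gallery machines pull scans without any risk of corrupting a scan in progress.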