An alpha release of the v3 server

That actually makes complete sense: of course the kinematic orientation flips in both axes when moving the camera to the other side. I swapped the motors as well and did a recalibration.

Calibration Data before:

Current CSM Matrix: [ [ 4.5, -0.049 ], [ -0.003, -4.545 ] ]
Pixels per motor step: 0.026
Full field of view: [ 21, 16 ] motor steps

and after:

Current CSM Matrix: [ [ 0.051, -4.395 ], [ 4.597, -0.002 ] ]
Pixels per motor step: 4.496
Full field of view: [ 3687, 2770 ] motor steps

Ideally, the calibration routine should, before anything else, ask about the current axis configuration so it can flip all axes accordingly.
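For what it's worth, the swap is visible in the numbers above: the before matrix is diagonal-dominant, while afterwards the off-diagonal terms dominate, meaning image-x now maps mostly onto stage-y. A hypothetical sketch of a check the calibration routine could run (not the actual OpenFlexure code; the function name is made up):

```python
# Hypothetical sketch, not the actual OpenFlexure calibration code.
# A 2x2 camera stage mapping (CSM) matrix whose off-diagonal terms
# dominate means image-x maps mostly to stage-y, i.e. swapped axes.

def axes_look_swapped(csm):
    """Return True if the off-diagonal terms of a 2x2 CSM matrix
    dominate the diagonal, i.e. the axes appear swapped."""
    (a, b), (c, d) = csm
    return abs(b) + abs(c) > abs(a) + abs(d)

before = [[4.5, -0.049], [-0.003, -4.545]]   # values from the post
after = [[0.051, -4.395], [4.597, -0.002]]

print(axes_look_swapped(before))  # False: diagonal dominates
print(axes_look_swapped(after))   # True: off-diagonal dominates
```

A check like this could let the software warn (or compensate) automatically rather than requiring the user to declare the axis configuration up front.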

A very interesting side note: in all combinations of the x and y motors swapped, with the "inverted" checkbox for x and y ticked or not, "double-click to goto" worked without first moving in the wrong direction, and without calibration. I don't even know how that is possible…

PS:

first scan after swapping x and y looks promising:


Thanks @zealkarel, that is a bug, but one we should be able to sort in software. We should be able to detect during CSM that it is moving in the wrong axis. I haven't looked in detail at how we do CSM, but clearly there is a built-in assumption as to which axis is x and which is y.

Thanks both for this. I’ll make sure it gets an issue in GitLab.


There is a hardware update in the pipeline which rotates the camera on the Upright. That was started in order to address cable routing, but it also addresses axis flipping.

It would be best if the software can cope anyway. As you say, the click-to-move just uses the camera stage mapping matrix and so it always works.

On Gitlab, existing related Issues are:

Software:


Hardware:

But scanning also uses camera stage mapping in v3, as we move in "image coordinates". So I think that if scanning is going wrong then click-to-move should also be wrong, which implies that something gets confused in CSM in v3.

Edit: No, I think I am wrong here. I think that smart scan actually only takes the stage-x component of the image-x vector. So if the camera is rotated, as on the Upright, the image-x vector might be [2, 400], and then it moves the x motor 2 steps rather than the y motor 400 steps.
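A toy illustration of that suspected behaviour, using the [2, 400] example from this post (this is not the real scan code, just the arithmetic of the bug):

```python
# Toy illustration of the suspected smart-scan bug; not the real code.
# The image-x vector says where one image-x step moves the stage. With
# the camera rotated ~90 degrees it points almost entirely along stage-y.
image_x_vector = [2, 400]  # [stage-x steps, stage-y steps] per image-x step

# Suspected buggy behaviour: keep only the stage-x component, so the
# x motor moves 2 steps and the y motor does not move at all.
buggy_move = [image_x_vector[0], 0]

# Intended behaviour: move along the full vector in stage space, so the
# large stage-y component is not discarded.
correct_move = list(image_x_vector)

print(buggy_move)    # [2, 0]
print(correct_move)  # [2, 400]
```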


Hi @j.stirling, we are currently testing the v3 server, but have encountered some issues. Specifically, we are unable to connect to the microscope (please refer to the image below for details). Could you please advise on how to resolve this? Thank you very much for your assistance.

Urm, weird error. I'm surprised how little info is being logged. The issue is that the microscope server won't boot; the connection to the microscope is fine.

Can you confirm that this is on a Pi4B using our SD card image from this thread?

Have you tested the hardware with another image? Something in the back of my mind is saying that we see error 3 when the camera is disconnected.

We should make an issue to go through common hardware issues that we can induce, like the camera not being connected, and see what errors are raised. We probably cannot say "Your camera is unplugged", as other things might cause error 3. But we could create a message like:

"A system error has been raised; this is most likely due to a hardware connection error. Try checking your camera cables."
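A minimal sketch of that kind of hedged error message. The error-code set and the wording are taken from this thread; the function and everything else here are assumptions, not the server's actual API:

```python
# Sketch of the hedged error message suggested above. Error code 3 and
# the wording come from this thread; nothing here is real server code.
LIKELY_HARDWARE_ERRORS = {3}  # error 3 reportedly seen with the camera unplugged

def friendly_message(error_code):
    """Map a low-level system error code to a hedged, actionable message,
    without over-claiming a specific cause."""
    if error_code in LIKELY_HARDWARE_ERRORS:
        return ("A system error has been raised; this is most likely due to "
                "a hardware connection error. Try checking your camera cables.")
    return f"An unexpected system error ({error_code}) was raised."

print(friendly_message(3))
```

The point of the hedge is that the same low-level error can have several causes, so the message suggests the most likely fix without asserting it.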


Hello, I tried the alpha version from October 13, but I wanted to ask if the automatic focus stacking feature has been removed. I ran both Stack and Scan, but the images weren’t automatically merged. When I download the ZIP file, I only get the individual images, not the complete stacked sheet. I also can’t find that option in Slide Scan. Maybe I’m just missing something, but I wanted to ask.

I don’t think the OpenFlexure software has ever combined the images taken in a z-stack. The old scan in v2 just took images at z positions for a grid of xy locations and put them all in a folder.

The v3 scanning, I think, just stitches the most focused image at each xy position. Any extra z-positions are just images in the folder, as before.
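"Most focused image at each xy position" can be sketched with a simple sharpness score. This is a generic illustration of the idea, not the OpenFlexure implementation:

```python
# Generic illustration of picking the sharpest image from a z-stack;
# not the OpenFlexure implementation. Images here are grayscale
# lists of pixel rows, to keep the sketch dependency-free.

def sharpness(image):
    """Sum of squared horizontal neighbour differences: a crude focus
    measure that grows with edge contrast, so sharper images score higher."""
    return sum(
        (row[i + 1] - row[i]) ** 2
        for row in image
        for i in range(len(row) - 1)
    )

def pick_sharpest(z_stack):
    """Return the image in the stack with the highest focus score."""
    return max(z_stack, key=sharpness)

blurry = [[10, 11, 12], [11, 12, 13]]  # gentle gradients, low contrast
sharp = [[0, 255, 0], [255, 0, 255]]   # strong edges, high contrast

print(pick_sharpest([blurry, sharp]) is sharp)  # True
```

Real implementations typically use a variance-of-Laplacian or similar filter on the actual image arrays, but the selection logic is the same: score each z-position and keep the winner for stitching.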

Sorry, I meant the stitch feature, but unfortunately I can’t find it in the current version.

That is in the v3.0.0-alpha3. (Although I have not yet got v3 on any of my microscopes..)

We have rebuilt scanning from scratch, so the stacking-like behaviour, where it moves in small steps, is now used to test focus; we don't yet have a way to save the multiple images it exposes.

We hope to in the future, but we have found that we need to be very careful about the data we keep in memory to make sure the microscope is stable during scanning.

But yes, taking multiple images in a stack will return. We should be able to stitch, but we will always stitch with just the sharpest image; we don't do any 3D stitching or focus merging.

Oh, so you don’t get a live preview during scanning? What happens when you click the Download JPEG button in the scan list?


Damn, I'm so sorry! I flashed the wrong image (server v2). :frowning:


Ah, no worries. Alpha 4 will hopefully be out in a week or two. We are quite excited for it.


Hi all, just to share that v3 alpha 4 is now available!

Highlights include improved white balance, more thorough scanning of sample edges, and faster image stitching. It’s still in pre-release, but early data looks good to us and we’d like to hear what everyone else thinks!


I am a little late to the v3 party, but glad to finally be here! :tada:

I’ve now had a few hours with v3 and here are some questions and thoughts, most of which have probably been raised or noted.

First of all, it is a really nice experience! Great work! Calibration feels a lot smoother, especially the stage calibration. Being able to change focus during calibration is nice, and all the extra info during stage calibration gives nice feedback on what is happening.

  • Regarding connecting via web browser: I've just used the default v3 img (alpha-4) installation on my SD cards, and it seems that the default browser address is "http://raspberrypi.local:5000". Maybe it's worth mentioning in the handbook too? I hate to lobby for them, but I guess it is of use to Apple users, since opening a connection can be an obstacle there.

  • I would vote to re-introduce the Gallery. I like the tag system in the gallery, but maybe there should also be a way to selectively download images, rather than having to rely on tags to select files for zip download.

  • It would be nice to have support for some kind of hand controller or joystick. Maybe in the future something that is compatible with basic joystick modules, e.g. the Arduino Modulino Joystick?

Here’s one of my scans: a marine sample from the west coast of Sweden (Skagerrak).

Thanks again for distributing these amazing technologies!


Great image!

Gallery functionality is on the roadmap, though I think not in the next alpha release.

@JohemianKnapsody has done some work towards making the SNES client work with v3, which allows you to use a controller input. It is a gamepad rather than a joystick, but should show what can be done.


Thanks @Pelle.

Just to check: did you use Raspberry Pi Imager? And if so, did you set your hostname? It should be http://hostname.local:5000, so I assume raspberrypi is your hostname.

On the topic of the gallery.

Alpha 5 is less than a month away. It doesn't quite bring back the gallery, but it lays a lot of the groundwork by generalising how we do scan management and scan data management. This should mean that in alpha 6, a couple of months later, it will be possible to capture single images into the same list (which will become the new gallery).

I think the alpha 6 gallery will not be fully featured, but it will be a welcome return. Once we have it, we will want to do some feature scoping to really understand how people want to group and manage data, so that we can build in that extra functionality in a way that works best for everyone.

And for controllers and joysticks

Again, we are not quite at the point where it will work out of the box in alpha 5. But we have managed to introduce much, much smoother keyboard motion for all axes, which will mean that once we get the controllers back they will be much better than before.

The current snes-client relies way too heavily on some very creaky code. The lower-level C code I wrote in a rush in Tanzania in 2018. And much of the "client" was written in a branch called "jungle-hack", because when we were in Panama I had a damaged Raspberry Pi camera cable, and somehow when we loaded up the UI the camera stream caused it to lock up, but I was able to do a low-level preview without it crashing. So the SNES client was mashed up quickly to allow us to take some data.

When we reintroduce it there is going to be a fair bit of tidying and modification needed to make sure we can package it properly into a more maintainable structure. It will come, it just takes a bit longer when we do it properly :grinning_cat_with_smiling_eyes:


Thanks for your reply @j.stirling

I used Raspberry Pi Imager v1.9.6. My fault is that I did no "customisation of the settings", choosing "NO" to keep it as simple as possible.

I tried it again just now: burnt an img (alpha 4) and plugged it into an OFM. After connecting the microscope, the first thing I visited was "http://raspberrypi.local:5000", which brought me to the interface, while the other options didn't. So yeah, on an untouched img the hostname is set to "raspberrypi". It is my own fault for just running the default img settings. The handbook does suggest clicking EDIT SETTINGS and "set a username, password and hostname, which will be the name of your microscope" (I guess for security reasons). I should follow the handbook :upside_down_face:

(Versions and gallery) Thanks for the alpha 5/6 update; exciting to hear about the eventual extra gallery functionality, looking forward to it all!

(Controllers and joystick) Aaah, so the movements have gotten smoother; it does feel a lot nicer (I first thought it was my brain getting smoother). Nice that these features will transfer to future controllers too. Thanks for sharing the story of how the SNES client came about :smiley: