ML and Segmentation w/ OpenFlexure

Hey all, I’m new here.

I recently got an OpenFlexure microscope and this is the coolest thing ever! My professional background is in machine learning and embedded software for medical diagnostic equipment. This microscope is my opportunity to explore some new things, and I appreciate the work that’s gone into it!

So over the past couple of weeks I have been building some tooling for microscopy and computer vision. I'm working on a model to perform accurate pixel-wise segmentation on high resolution images (>4K, like the ones that get stitched together into a whole-slide scan). The use case I envision is automation and assisted diagnostics. For example, one of the datasets I plan to build is for segmenting blood cells that have the appearance of spherocytes, target cells, etc.
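
Roughly, the tiling approach I have in mind for the big images looks something like this (a minimal sketch, with a stub standing in for the real model):

```python
import numpy as np

TILE = 512   # tile edge in pixels
STEP = 448   # stride; the 64 px overlap hides seam artifacts at tile edges

def segment_tile(tile: np.ndarray) -> np.ndarray:
    """Stand-in for the real model: returns a per-pixel class map."""
    return np.zeros(tile.shape[:2], dtype=np.uint8)

def segment_large_image(img: np.ndarray) -> np.ndarray:
    """Segment an arbitrarily large image by running the model tile-by-tile."""
    h, w = img.shape[:2]
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, STEP):
        for x in range(0, w, STEP):
            # Clamp so edge tiles stay full-sized instead of shrinking
            y0 = min(y, max(h - TILE, 0))
            x0 = min(x, max(w - TILE, 0))
            tile = img[y0:y0 + TILE, x0:x0 + TILE]
            out[y0:y0 + TILE, x0:x0 + TILE] = segment_tile(tile)
    return out
```

A real version would blend or crop the overlapping strips rather than just overwriting them, but this shows the shape of it.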

I was curious if anyone else has been working on ML stuff for this device. Also, have there been any talks of hardware acceleration for deep learning extensions? The only option for the Pi 4 (that I know of) is the Coral USB accelerator. It'd be cool to see what an Nvidia board could do if it were attached to this device, or perhaps a Pi 5 with the Hailo AI Kit (which integrates with the Pi and the Pi camera).


Integration of the OpenFlexure software for a Pi 5 or Jetson would be great, but is a bit of a way off at present.
However, the microscope is a web server, so whether or not you run segmentation or other algorithms on the server itself may not make much difference. Most microscopy does not need low latency, so transferring images over wired Ethernet to something else for processing is probably not the bottleneck?
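
For example, something along these lines with the Python client (I'm writing the method names from memory, so treat it as a sketch and check the client docs):

```python
# Grab a frame from the scope over the network and process it on another machine.
# Assumes the openflexure-microscope-client package; method names from memory.
import numpy as np
import openflexure_microscope_client as ofmc

microscope = ofmc.MicroscopeClient("microscope.local")  # your scope's hostname
image = np.array(microscope.grab_image())  # capture happens server-side

# ...run segmentation on the workstation's CPU/GPU here...
print(image.shape)
```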

I definitely agree with you. Sending images over the network, even high-res ones, has a relatively low cost for encoding and transmitting. The application I'm working on is more for automation, so nothing really needs to be live-streamed or realtime. So I could even just use the CPU to do inference on the Pi if I really wanted everything running on the scope (even though it might take 10-30 seconds per frame).
I am thinking mostly from a product standpoint: having everything you need on a single piece of hardware would be cool. No need to configure two devices to talk to each other, or to set up a cloud service, etc. I think what I'm going to do is provide an option to run inference on the Pi 4 on the scope itself, or opt in to using a separate device on the network, as in the sketch below.
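
Roughly, I'm picturing that opt-in as a single backend switch, something like this (all names here are hypothetical, just to show the shape):

```python
# Hypothetical sketch of the opt-in: choose where inference runs at startup.
class LocalSegmenter:
    """Run the model on the Pi's own CPU: slow, but fully self-contained."""
    def segment(self, image):
        ...  # e.g. tflite-runtime or onnxruntime on the Pi's CPU

class RemoteSegmenter:
    """Ship frames to another machine on the network for inference."""
    def __init__(self, host: str):
        self.host = host
    def segment(self, image):
        ...  # e.g. POST the frame to an inference server at self.host

def make_segmenter(backend: str = "local"):
    # "local" keeps everything on the scope; "remote" opts in to a second device
    if backend == "local":
        return LocalSegmenter()
    return RemoteSegmenter(host="workstation.local")  # hypothetical hostname
```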
