We didn’t have the time to play much with soils this time, but hopefully soon… We also received a lot of interest in the microscopes for several other applications (i.e. pollen/honey analysis).
Regards
Nano
Hi @nanocastro thanks for posting this - you guys have had a very successful workshop I think! I had a look at the gitlab - my ignorance of Spanish means I’m probably not that useful, but I think there might have been a bit of discussion on v6.1.5 vs v7? I’d definitely recommend v7, even though it’s still in alpha it’s already much better than v6. I am hoping to reach feature-freeze as soon as I can (weeks-months hopefully) at which point we will polish the English instructions - that’s probably as good a point as we’ll get to base a translation off.
If it’s useful for me to chime in on the GitLab I’m happy to - though I think I might need someone to make me an account? Also very happy to respond to issues here or on the openflexure gitlab if that’s easier.
I entirely agree, V7 is generally much better and is a pretty complete version even though it is labelled as alpha.
However, it is not quite so suitable for a couple of the particular builds that I see were in the workshop. First, the C270 webcam basic optics will not fit a V7 body; they require the lower height of a V6.1.5 without the slide riser. Making this work in V7 is an aim, but it is not a trivial change. Second, the V7 is greatly improved as a motorised microscope, but some of those improvements are not useful if it is used manually. The generally improved robustness helps, and the base is much easier to use with a Raspberry Pi and Pi camera, but if you don’t need the motor cable tidies and the Raspberry Pi, it is all slightly bigger and takes longer to print, particularly the base.
Hi Richard, thanks for the feedback and recommendation. Indeed, as @WilliamW says, the reason for using v6 is the possibility of using the Logitech webcam. In my case I’m building an OpenFlexure for a small private enterprise, and when presenting the options and calculating budgets, building the v7 can be as expensive as, or more expensive than, buying a traditional microscope of acceptable quality. And while we understand the benefits of open hardware, the cost weighs heavily on people’s decisions. From the experiences in the reGOSH residency I think that the v6 with a webcam and professional objectives can fulfil its purpose well, and can serve as a first approach and experience before making the v7. In any case, if there were the possibility of using the v7 with a webcam, I think it would be very useful for our situation (Argentina / Latin America).
Hi all
Yes, v6 seems more suitable for the webcam version, and for many purposes it is good enough and cheaper than the RPi versions, as @topogarcia says.
However, it would be nice to improve the colors in the image (i.e. the blue background)… Would it be possible to access and process the video stream to achieve this? @r.w.bowman @WilliamW what do you think?
Maybe we can propose this to a PhD student here…
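For what it’s worth, here is a minimal sketch of the kind of thing a student could start from, assuming the C270 shows up as a standard UVC device (index 0 here) and opencv-python is installed. The gray-world white balance is just one simple way to pull back a blue cast, not anything from the OpenFlexure software:

```python
# Minimal sketch: grab frames from a USB webcam (e.g. the C270) with OpenCV
# and apply a simple gray-world white balance to reduce a blue colour cast.
# Assumes the camera is the first video device (index 0).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Gray-world assumption: scale each channel so its mean matches the
    # overall mean brightness, which pulls a blue-tinted image back to neutral.
    b, g, r = cv2.split(frame.astype(np.float32))
    mean_all = (b.mean() + g.mean() + r.mean()) / 3.0
    b *= mean_all / max(b.mean(), 1e-6)
    g *= mean_all / max(g.mean(), 1e-6)
    r *= mean_all / max(r.mean(), 1e-6)
    balanced = cv2.merge([b, g, r]).clip(0, 255).astype(np.uint8)

    cv2.imshow("raw", frame)
    cv2.imshow("white-balanced", balanced)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A proper fix would probably use a calibration image rather than the gray-world trick, but this is enough to test whether processing the webcam stream gets us usable colors.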
I have only got as far as not being able to unscrew the lens from my C270 for my webcam build. I have no idea what kind of video processing you could do easily.
Hi all,
In regard to the C270 version, we will be working with some students on a simple GUI that can control the motorized stage and show the image. This will just be a proof of concept, and we will develop the app in Processing because I have experience doing something similar.
I know it is possible to do real-time image processing, like edge detection and tracking, for instance.
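As an example (in Python with OpenCV rather than Processing, and only a sketch assuming the webcam is device 0), a real-time edge-detection loop is only a few lines:

```python
# Minimal sketch of real-time edge detection on a webcam stream with OpenCV.
# Assumes opencv-python is installed and the webcam is video device 0.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # thresholds are just a starting point
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```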
It would be great to develop a motorized OFM version that works with the C270 and uses the OFM software. How about that?
One of my intentions for the next major release of the OFM software is to make it more modular, so that it should be able to run on non-Raspberry Pi platforms and support more cameras. That is currently waiting on someone (most likely me, but I’d be delighted to have help) completing it: there is quite a lot of the underlying architecture implemented already in an MR:
In principle, it’s not too huge a job to implement support for the C270 and include some image processing in there to e.g. fix the white balance/vignetting, but this would need to be done in Python. Also, I’m afraid most of the existing calibration code would need fairly extensive modification, because it’s quite Pi-specific. So, definitely doable, but not necessarily trivial.
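Just to illustrate the rough shape of it (this is a made-up sketch, not the actual camera interface from that MR, and the class and method names are hypothetical), a minimal USB-webcam wrapper with a flat-field correction for vignetting could look something like this:

```python
# Illustrative sketch only: a hypothetical USB-webcam wrapper with flat-field
# (vignetting) correction. The class and methods are invented for this example
# and are NOT the OpenFlexure software's actual camera interface.
import cv2
import numpy as np

class WebcamCamera:
    def __init__(self, device_index=0):
        self._cap = cv2.VideoCapture(device_index)
        self._flatfield = None  # per-pixel gain map, built during calibration

    def calibrate_flatfield(self, n_frames=20):
        """Average frames of a blank field of view to estimate vignetting."""
        frames = []
        for _ in range(n_frames):
            ok, frame = self._cap.read()
            if ok:
                frames.append(frame.astype(np.float32))
        if not frames:
            raise RuntimeError("No frames captured for flat-field calibration")
        mean = np.mean(frames, axis=0)
        # Normalise so the brightest region has gain 1; darker corners get gain > 1.
        self._flatfield = mean.max() / np.clip(mean, 1.0, None)

    def grab(self):
        """Return a frame with flat-field correction applied (if calibrated)."""
        ok, frame = self._cap.read()
        if not ok:
            raise RuntimeError("Failed to read frame from webcam")
        if self._flatfield is not None:
            frame = np.clip(frame * self._flatfield, 0, 255).astype(np.uint8)
        return frame

    def close(self):
        self._cap.release()
```

The real work is less in capturing frames than in fitting something like this behind the existing camera/calibration interfaces, which is where the Pi-specific assumptions come in.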