Hi everyone! First post here. I’m building the high-resolution microscope v7. OK, here’s the thing: I bought a bunch of lenses for the illumination (here are the lenses I have: https://it.aliexpress.com/item/1005007275456146.html). They fit perfectly and rest in the arm, no issues. But I don’t have the illumination PCB.
I’m an electronics person, so no problem soldering the LED and so on. But I’m new to microscopy.
I read here and there that the illumination spot must be as uniform as possible.
I saw a nice hack for diffusing the LED light (Diffusing LED light hack) – BRILLIANT! But I don’t like sanding things.
Since I have a lot of lenses and they’re so cheap, I ran some experiments, and I ended up gluing some paper onto the flat part of the lens. It looks like it’s working, but I don’t know if my guess is correct or not. I’m waiting for the Sangaboard because I don’t want to build the stepper motor driver on my own (yes, I’m lazy).
So meanwhile I’m building things.
Here are my findings:
Putting the LED inside the hole in the arm and lighting it gave a bad result! The light is clearly far from uniform (as expected). So, as mentioned before, I glued some thin paper onto the flat side of the lens…
and then the result turned into this:
Does this result have some kind of meaning?
Is it what’s needed for a microscope to work?
Let me know how stupid or smart I am! (And no worries, I like criticism – it’s the way I learn faster)
Thank you everyone for your time and effort on this project! ^^
Hi @giorgioFS. There is some information about the purpose of the illumination in the Knowledge Base section of the assembly instructions. To get a good image you need to have light coming onto your sample over the whole field of view at all angles. A piece of paper on the lens will do that. It is not the usual place for the diffuser because it will send light everywhere, which will reduce the illumination intensity where you want it, and also increase the possibility of scattered light by illuminating where you don’t need it.
Have you made the LED board yourself, or are you using the LED workaround? The workaround assembly page suggests using PTFE tape (as used in plumbing) on the LED instead of the diffuser. This is effective. You just need to make sure the tape is not stretched too much, as it can develop distinct lines or cracks along its length.
We have recently found that the recommended polypropylene sheet for the diffuser is damaged by the light over long use. PTFE sheet looks as though it might be a better option, but we have not tried it yet.
Hi @WilliamW ,
thanks for the quick and detailed answer.
I read about the PTFE and everything, but until now I didn’t fully understand why the exit face of the lens isn’t an ideal place for the diffuser.
I’m closer to electronics engineering, so I decided on the LED workaround. But as mentioned before, I’m not much into microscopy, so I’m here to learn too. (Sorry for some shy questions.)
One last, then I’ll stop bothering, I promise ^^
Does the light color matter? (From visible red to visible blue and everything in between.)
Because a single NeoPixel LED is easy to control with a Raspberry Pi, and it’s cheap. I read that someone is using the “NeoPixel ring” and having some trouble with it, but what about installing a single NeoPixel LED in the LED socket? Is controlled color shifting useful during observation?
A single colour of light can be helpful in some specific cases where there is information in a particular colour, but not usually. With a colour camera you already have red, green and blue colour channels available. The colour-changing LED is also just RGB. There is no actual yellow light, for example, just red and green together that appear yellow to us.
Where it could help (with a lot of additional work…) is in the detail of colours around the edges of the image. The camera colour channels are not as separated as we would like, see Flat-Field and Colour Correction for the Raspberry Pi Camera Module | Journal of Open Hardware. Making images with pure red, green and blue illumination would mean that you know the actual colour, whichever image colour channel it is recorded in. That either gives you the colour unmixing matrix, or you can simply recombine the separate images into a proper colour image…
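To make the recombination idea concrete, here is a minimal NumPy sketch: take three monochrome frames captured under pure red, green and blue illumination and stack them into the R, G, B planes of one colour image. The frame arrays are faked with constant values for illustration; in a real setup they would come from the camera.

```python
import numpy as np

# Three monochrome captures, one per illumination colour.
# Faked here as small constant arrays; real frames would come from the camera.
frame_red = np.full((4, 4), 200, dtype=np.uint8)
frame_green = np.full((4, 4), 150, dtype=np.uint8)
frame_blue = np.full((4, 4), 50, dtype=np.uint8)

# Because each frame was lit with a single known colour, the true colour of
# every pixel is known regardless of sensor channel cross-talk: just stack
# the frames into the R, G, B planes of one image.
colour_image = np.dstack([frame_red, frame_green, frame_blue])

print(colour_image.shape)  # (4, 4, 3)
```

This sidesteps the channel cross-talk problem entirely, at the cost of needing three sequential exposures per image.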
Awesome! Thank you! Now enjoy your weekend, and sorry again for being so silly. Lots of information on where to start googling! Have a great day! ^^
And thank you SO SO SO MUCH!
Just to add to @WilliamW’s comment: the NeoPixel RGB LEDs don’t have all three colours in the same place. This likely means the colour of the illumination isn’t constant across the field of view, which isn’t ideal.
The diffuser is probably best placed next to the LED rather than next to the lens, because it will minimise stray light and improve image quality. Using paper instead of the specified PP sheet may well be fine, I’d just make sure to check you don’t see any structure in the illumination before calibrating the camera.
Does using a photography diffuser cut down to size make any sense? That’s what I used for my HQ build, but I don’t know enough about microscopy to tell whether it’s fine or whether it introduces artifacts. If it works, it might be an interesting and cheap alternative?
Also beware that the basic NeoPixels can have a fairly low PWM frequency (400 Hz), so depending on your imaging settings you might encounter some flicker.
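A rough way to estimate whether 400 Hz PWM will show up as flicker is to compare the exposure time with the PWM period: the more full PWM cycles fit inside one exposure, the less the partial cycle at the edges matters. A back-of-the-envelope sketch (the function and numbers are illustrative, not from any library):

```python
# Worst-case frame-to-frame brightness variation from PWM flicker: if the
# exposure spans n PWM cycles, the integrated light can vary by up to
# roughly one partial cycle out of n.
def pwm_flicker_fraction(exposure_s: float, pwm_hz: float) -> float:
    cycles = exposure_s * pwm_hz      # PWM cycles per exposure
    return min(1.0, 1.0 / cycles)     # rough worst-case fractional variation

# 10 ms exposure at 400 Hz PWM -> 4 cycles -> up to ~25% variation
print(round(pwm_flicker_fraction(0.010, 400), 3))  # 0.25

# 100 ms exposure -> 40 cycles -> ~2.5%, much less visible
print(round(pwm_flicker_fraction(0.100, 400), 3))  # 0.025
```

So long exposures average the flicker out; it is short exposures (or rolling-shutter banding) where a 400 Hz PWM LED is most likely to cause trouble.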
A small note regarding color. Depending on what’s being imaged, a “monochromatic” LED, lens and camera can be helpful in some cases (e.g. imaging mechanical features). I use a 450 nm LED, a flat-field lens corrected for that one wavelength, and a mono Sony 462 chip for imaging 1-micron particles. I’ve been very happy with the resolution (judging from Thorlabs standard patterns) and with not having to worry about chromatic aberrations.
Does that sensor work nicely with a Raspberry Pi, or do you also need a juicier computer at that point? Nice monochrome camera options are always helpful to know about, but I’ve not explored sensors beyond the official Raspberry Pi modules for use on the Pi.