For context, let me quickly present my research topic: it is academic research focused on microorganism detection (using deep learning) so that farmers can monitor their soil health. I have come across this wonderful project, which is both accessible and customizable.
The microscope was built without major issues, but with some tweaks: the motor workaround, the illumination workaround and an infinity-corrected 40x objective. Nonetheless, we decided to try removing the Raspberry Pi to reduce both the cost and the Raspberry Pi sourcing problems. To do this, I use an Arducam B0196 (same sensor as the Pi Camera v2) plugged directly into my laptop, as well as the Arduino Nano and an external supply for the LED and motors.
To connect the B0196, I first referred to the Assembly Instructions, but that module does not work for the high-resolution configuration (no objective). I simply tried replacing the Pi camera with the Arducam and it works. So, I would like to modify the STL file “optics_picamera_2_rms_f50d13.stl” so that it better suits the other camera. Therefore, my question is: is the original CAD file (a .sldprt, .f3z, …) available somewhere, so that it can be modified easily?
For the software, I modified the new v3 server version and it works surprisingly well. “ofm_config_manual.json” was configured by instantiating an OpenCVCamera; I also changed the port of the stage and it works. But some features are missing, like the auto white balance calibration. To alleviate the problem, I modified opencv.py to include manual parameters for the B0196 such as width, height, brightness, contrast, white balance, etc. So what I am wondering is: are the auto-calibration settings embedded on the Pi Camera v2 chip, so that I have no way of doing this myself? Or is there a way I can make it work, even if it is in software? Because apart from the white balance, everything is fine, I just have funny colours on my images :))
OFM uses OpenSCAD, which uses code to describe a solid model, from which an STL can be exported. You can adjust that yourself, though there is a learning curve.
Hi @oreille, that sounds like a great project. It is really nice that you have been able to use the stage as well. It would be great to see how you did that.
You seem to be proficient in the software, so the architecture of an OpenSCAD file will probably make sense to you. It is just a program. In the repository you will find the high level file rms_optics_module.scad. This calls a module that builds the 3D shape for a set of parameters passed to the module. OpenSCAD can render the 3D shape, and export a .stl file. The parameters define the objective type (infinity or not), parfocal distance, whether it has a beam splitter for reflection mode and the camera type. The camera type determines which mount is placed at the bottom of the module to fit the camera, and how high it needs to be to put the sensor in the right place for the image. Because I have already built the module that defines a camera mount for the Arducam B0196 - to use with the low-cost optics configuration - it is also available to the high resolution module. We just do not build the high resolution optics version by default, and I have not checked that it actually works! It took me a while to find what I had actually called the camera type.
If you set the camera type as:
CAMERA = "arducam_b0196";
in the file rms_optics_module.scad and then build it with OpenSCAD, it should give you the optics module that you need.
For the software @j.stirling made some improvements to the way a USB camera can be selected recently in !487. It means that you do not have to manually change the config file to select the correct camera on your system.
Colour correction on the Pi camera uses the fact that the initial image processing pipeline for the camera runs in the GPU on the Pi, and we have access to the settings for that pipeline. For any USB camera on the UVC standard, like the Arducam, all of the processing into a video stream is done on the board before being sent over USB. In principle it would be possible for the board to let you send settings to its internal image processing pipeline, but usually the pipeline is fixed. You might not even be able to set the gain and white balance, which will be necessary for colour correction and for scanning. All of this means that colour correction and lens shading will need to be done as post-processing of the received stream in software, probably with OpenCV. Working out how that can be done, and how effective it would be, is not currently in the main short-term plan for the software.
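As an illustration of the kind of post-processing that would be involved, here is a minimal gray-world white balance sketch in NumPy. This is illustrative only, not part of the OpenFlexure software; it scales each colour channel so the channel means match, which is a rough software substitute for the ISP's auto white balance:

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance on an RGB uint8 image.

    Scales each channel so its mean matches the overall mean,
    assuming the scene averages to grey.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    balanced = img * gains  # per-channel gains broadcast over pixels
    return np.clip(balanced, 0, 255).astype(np.uint8)

# A synthetic image with a green cast: after correction the
# channel means come out equal.
tinted = np.zeros((4, 4, 3), np.uint8)
tinted[..., 0] = 100   # R
tinted[..., 1] = 160   # G
tinted[..., 2] = 90    # B
out = gray_world_balance(tinted)
print(out.reshape(-1, 3).mean(axis=0))
```

Gray-world is a crude assumption for microscopy (the field of view is often dominated by one colour), so in practice you would calibrate the gains on a blank background instead, but the mechanics of applying per-channel gains are the same.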
I had no prior experience with OpenSCAD (I usually use SolidWorks), but it is great in the present case! The .stl file you produced is actually just what I needed. I will print it, test it, and post here whether it indeed works; but I do not see any reason it wouldn't.
I can confirm that the generated STL file for the Arducam mount fits perfectly (I said the Arducam was a B0196; it is actually a B0292, the only difference being the auto-focus). I also redesigned the electronic_drawer, as I only need three sockets: the camera cable, the Arduino Nano connection and the external supply for the motors. I did it in SolidWorks, as I do not know OpenSCAD nearly well enough; tell me if it is still of interest for me to publish it here.
For the calibration, it is pretty restricted: the ONLY two parameters that communicate with the ISP directly are the exposure and gamma. Note that both of those are integers (from 0 down to -11 for the exposure), which means they are just knobs on a black box. From there, I made an auto-calibration which finds the best exposure for a specified brightness, and a white balance giving approximately equal colour intensities across the image. The other parameters can be changed manually (contrast, gain, etc.).
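The exposure search itself is simple. Here is a sketch of the idea; `capture` and `set_exposure` are stand-ins for the real camera interface (e.g. wrapping `cap.read()` and `cap.set(cv2.CAP_PROP_EXPOSURE, e)`), and the demo uses a fake camera whose brightness halves per exposure step:

```python
import numpy as np

def calibrate_exposure(capture, set_exposure, target=120,
                       exposures=range(0, -12, -1)):
    """Pick the integer exposure whose mean frame brightness
    is closest to the target value."""
    best_e, best_err = None, float("inf")
    for e in exposures:
        set_exposure(e)
        frame = capture()
        err = abs(float(frame.mean()) - target)
        if err < best_err:
            best_e, best_err = e, err
    return best_e

# Demo with a fake camera: brightness = 255 * 2^exposure.
state = {"e": 0}
fake_set = lambda e: state.update(e=e)
fake_capture = lambda: np.full((8, 8), 255 * 2.0 ** state["e"])
print(calibrate_exposure(fake_capture, fake_set, target=120))  # -> -1
```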
And my problem is a colour gradient across the image, with purple/pink at the edges and green in the centre; see the image below. From Color dots in the image - #5 by MetallicaSPA, my issue looks like that post, but I do not have access to flat-field correction; could that be the problem? Any idea how I could resolve it? Disregard the shading on the image, it seems that dust has compromised the sensor.
As you do not have access to the ISP, you will need to process the images in software to correct for the lens shading. Each colour needs a different lens shading table, which will then give a uniform “white” background (actually grey, because white would mean the image is saturated). Once you are doing a software correction anyway, you should also be able to include correction of the colour mixing.
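The per-channel correction amounts to a flat-field division. A minimal NumPy sketch (illustrative, not from the OpenFlexure codebase): capture a blank-background frame `flat` in the same conditions, then divide each channel of your images by its normalised shading profile:

```python
import numpy as np

def flat_field_correct(img, flat):
    """Correct lens shading using a flat-field reference image.

    `flat` is a frame of a blank background captured in the same
    illumination conditions. Each channel is divided by its normalised
    shading profile, so an evenly lit scene comes out uniform (grey).
    """
    img = img.astype(np.float64)
    flat = flat.astype(np.float64)
    # Per-pixel, per-channel gain: channel mean / flat pixel value.
    gains = flat.mean(axis=(0, 1), keepdims=True) / np.maximum(flat, 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Synthetic shading, darker at the edges; correcting the flat frame
# by itself gives a perfectly uniform result.
y, x = np.mgrid[0:16, 0:16]
falloff = 1.0 - 0.4 * ((x - 8) ** 2 + (y - 8) ** 2) / 128.0
flat = (180 * falloff)[..., None].repeat(3, axis=2)
out = flat_field_correct(flat, flat)
print(out.std())  # -> 0.0
```

Because the green/magenta gradient you show differs between channels, applying this separately to R, G and B (as the code does) is what flattens the colour cast, not just the brightness.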
I think live correction would involve WebGL, which would be some work, and it wouldn't fix captured images. We could probably record the LST and then fix captured images manually. It will be less good than doing it in the ISP, as the full pipeline and compression will already have happened and we will have to guess the gamma curve.
Just for synthesis' sake, and to be sure I understood correctly: each photodiode is equipped with a lenslet (a small lens) that focuses the light onto the diode. Because rays arrive obliquely towards the edges of the sensor, chief ray angle compensation is implemented: the lenslets are spaced slightly closer together, shifting them towards the centre to compensate for the oblique (non-normal) rays. But this poses a problem in our set-up, because our specific optics create ray angles not accounted for in that compensation. As a result, we get vignetting (lower brightness towards the edges) and colour crosstalk.
To account for vignetting: take a picture of a completely white background under the same microscopy conditions and get 2D intensity heatmaps to then correct each channel. What I do not get is this: “While vignetting requires only one scalar parameter per channel,” quoted from the paper. I thought it was a 2D image that would correct vignetting; am I missing something?
For colour aberration, a lens shading table is applied at ISP level which comes from calibrating on a white background as well.
To get this straight: as of now, the background calibration allows the ISP to correct for both issues, but not for colour crosstalk? Does calibration usually carry over from one session to the next?
Finally, can I simply put a white sheet under the microscope, defocus it slightly, and calculate my matrices from there, to then apply them in post-processing? Has this been tested? For colour crosstalk, has the solution discussed in Focusing and Chromatic Aberration Issues - #9 by WilliamW been tested, i.e. taking multiple photos of the same scene slightly shifted to measure correlation matrices between the colour channels? This might be a little out of scope for my project, but I am still interested in knowing if someone has attempted it.
It is a single scalar per channel at each pixel, in contrast to the mixing, which is a matrix applied to the pixel in this channel together with the same pixel in the other channels.
In practice the vignetting/shading does not need to be saved at each pixel; it is done on a coarse grid, maybe 64×64 points or even 16×16, and interpolated for the individual pixels.
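To make the coarse-grid idea concrete, here is a small NumPy sketch (illustrative only) that bilinearly expands a per-channel gain grid up to full resolution, so each pixel still ends up with one scalar gain per channel:

```python
import numpy as np

def expand_shading_table(table, shape):
    """Bilinearly interpolate a coarse per-channel gain grid
    (e.g. 16x16xC) up to the full sensor resolution (h, w)."""
    h, w = shape
    gh, gw = table.shape[:2]
    ys = np.linspace(0, gh - 1, h)
    xs = np.linspace(0, gw - 1, w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    fy = (ys - y0)[:, None, None]   # fractional row position
    fx = (xs - x0)[None, :, None]   # fractional column position
    top = table[y0][:, x0] * (1 - fx) + table[y0][:, x1] * fx
    bot = table[y1][:, x0] * (1 - fx) + table[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

# A 2x2 grid (single channel) expanded to 4x4:
# the corners keep the grid values, interior is interpolated.
grid = np.array([[[1.0], [2.0]], [[3.0], [4.0]]])  # shape (2, 2, 1)
full = expand_shading_table(grid, (4, 4))
print(full[..., 0])
```

The corrected image is then just `image * full` per channel, which is why the storage cost stays tiny even for a full-resolution sensor.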
As of now, meaning in the standard OpenFlexure v2 or v3 software with the Pi Camera v2? Yes. The calibration remains valid as long as you do not change the illumination relative to the lens and camera. You point the lens at a blank area (not white paper). There is an automatic adjustment of gain and exposure to get a bright but not saturated base image, which is used to calculate the lens shading.
Colour unmixing requires much more information, but the matrix calculated in the paper is pretty effective when applied to other Pi Camera v2 units. It should be similar for the same sensor on the Arducam board.
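Applying such a matrix in software is cheap once you have it. A sketch of the mechanics in NumPy; note the matrix values below are made up for illustration, not the calibrated ones from the paper:

```python
import numpy as np

def unmix_colours(img, M):
    """Apply a 3x3 colour unmixing matrix to every pixel of an RGB image.

    Each output channel is a linear combination of the three input
    channels at that pixel; M comes from a separate calibration.
    """
    flat = img.reshape(-1, 3).astype(np.float64)
    out = flat @ M.T  # one matrix multiply per pixel, vectorised
    return np.clip(out, 0, 255).astype(np.uint8).reshape(img.shape)

# Illustrative crosstalk-reversal matrix: boosts each channel and
# subtracts a little of the other two (values are made up).
M = np.array([[ 1.2, -0.1, -0.1],
              [-0.1,  1.2, -0.1],
              [-0.1, -0.1,  1.2]])
img = np.full((2, 2, 3), 100, np.uint8)
print(unmix_colours(img, M))  # -> all 100, since each row of M sums to 1
```

Since the rows of this example matrix sum to 1, a neutral grey pixel passes through unchanged, which is a property you would generally want the calibrated matrix to have too.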