Arducam B0196 on high resolution v7 microscope

Hello all !

For context, let me quickly present my research topic: it is academic research focusing on micro-organism detection (using deep learning) so that farmers can monitor their soil health. I came across this wonderful project, which is both accessible and customizable.

The microscope was built without major issues, but with some tweaks: the motor workaround, the illumination workaround and an infinity-corrected 40x objective. Nonetheless, we decided to try removing the Raspberry Pi, to reduce both the cost and the problems with RPi sourcing. To do this, I use an Arducam B0196 (same sensor as the Pi Camera v2) plugged directly into my laptop, together with the Arduino Nano and an external supply for the LED and motors.

To connect the B0196, I first referred to the Assembly Instructions, but that module is not intended for the high-resolution version (it has no objective). I simply tried replacing the Pi camera with the Arducam, and it works. So, I would like to modify the STL file “optics_picamera_2_rms_f50d13.stl” so that it better suits the other camera. Hence my question: is the original CAD file (a .sldprt, .f3z, …) available somewhere, so that it can be modified easily?

For the software, I modified the new v3 server version and it works surprisingly well. I configured “ofm_config_manual.json” to instantiate an OpenCVCamera, and I also changed the port of the stage, which works. But some features are missing, like the automatic white-balance calibration. To alleviate the problem, I modified opencv.py to include manual parameters for the B0196 such as width, height, brightness, contrast, white_balance, etc. So, what I am wondering is: are the auto-calibration settings embedded on the Pi Camera v2 chip, so that I have no way of doing this myself? Or is there a way I can make it work, even if only in software? Apart from the white balance everything is fine, I just have funny colours on my images :))

Thank you for your ideas :wink:

OFM uses OpenSCAD, which uses code to describe a solid model, from which an STL can be exported. You can adjust that code yourself, though there is a learning curve.

Hi @oreille, that sounds like a great project. It is really nice that you have been able to use the stage as well. It would be great to see how you did that.

First for the hardware
All of the source code is on our GitLab repository OpenFlexure / openflexure-microscope · GitLab. We use OpenSCAD for the parts: the .scad design files are in the repository, and are built into .stl shapes. The reasons for this are outlined in Modifying OpenFlexure: Where to start? - #2 by WilliamW. A consequence of this choice is that no other CAD file types (such as .sldprt) are available.

You seem to be proficient in software, so the architecture of an OpenSCAD file will probably make sense to you: it is just a program. In the repository you will find the high-level file rms_optics_module.scad. This calls a module that builds the 3D shape for a set of parameters passed to it. OpenSCAD can render the 3D shape and export a .stl file. The parameters define the objective type (infinity-corrected or not), the parfocal distance, whether there is a beam splitter for reflection mode, and the camera type. The camera type determines which mount is placed at the bottom of the module to fit the camera, and how high that mount needs to be to put the sensor in the right place for the image. Because I had already built the module that defines a camera mount for the Arducam B0196 - for the low-cost optics configuration - it is also available to the high-resolution module. We just do not build the high-resolution optics version by default, and I have not checked that it actually works! It took me a while to find what I had actually called the camera type.
If you set the camera type as:

CAMERA = "arducam_b0196";

in file rms_optics_module.scad and then build it with OpenSCAD it should give you the optics module that you need.

It looks right at least:

For the software
@j.stirling made some improvements to the way a USB camera can be selected recently in !487. It means that you do not have to manually change the config file to select the correct camera on your system.

Colour correction on the Pi camera relies on the fact that the initial image-processing pipeline for the camera runs in the GPU on the Pi, and we have access to the settings for that pipeline. For any USB camera on the UVC standard, like the Arducam, all of the processing into a video stream is done on the board before being sent over USB. In principle the board could allow you to send settings to its internal image-processing pipeline, but usually the pipeline is fixed. You might not even be able to set the gain and white balance, which are necessary for colour correction and for scanning. All of this means that colour correction and lens shading will need to be done as post-processing of the received stream in software - probably with OpenCV. Working out how that can be done, and how effective it would be, is not currently in the main short-term plan for the software.
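To illustrate what such software post-processing could look like, here is a minimal gray-world white-balance sketch in NumPy. The function name is mine and this is not part of the OpenFlexure server - just one possible starting point:

```python
import numpy as np

def gray_world_balance(img):
    """Scale each colour channel so its mean matches the overall mean.

    img: array of shape (H, W, 3) in the 0-255 range. Returns a float
    array in the same range, clipped to avoid overflow.
    """
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean per R, G, B
    gray = channel_means.mean()                      # target grey level
    gains = gray / channel_means                     # per-channel gain
    return np.clip(img * gains, 0, 255)
```

This is the simplest possible white-balance heuristic (it assumes the scene averages to grey); it would only be a first step before proper lens-shading and colour-mixing correction.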

Hi @WilliamW ,

Thank you very much for the comprehensive reply !

I had no prior experience with OpenSCAD (I usually use SolidWorks), but it is great in the present case! The .stl file you produced is actually just what I needed. I will print it, test it, and post here if it indeed works; I do not see any reason it wouldn’t.

I looked into the Arducam pipeline, and it does not actually seem to be completely fixed. While it does not offer full ISP control like the Pi camera, some parameters can be changed in real time, notably gain, exposure, white balance and others. So, I integrated this into the UI for easy tweaking, and it works, but it needs manual tweaking. See the picture below. More info in the B0196 datasheet: https://www.uctronics.com/download/Amazon/B0196_IMX219_8MP__UVC_Camera_Datasheet.pdf?srsltid=AfmBOoq2u8mKbMbzQkEF71dKjBCMPQrYl4oQx4aB2rP1HmsjKwtbJdWr

Next, I will try to build some kind of auto-calibration on top of those parameters, together with post-processing, and keep you posted!

Thanks again :))

Hello again all !

Just some follow up from the previous messages.

I can confirm that the generated STL file for the Arducam mount fits perfectly (I said the Arducam was a B0196; it is actually a B0292, the only difference being the auto-focus). I also redesigned the electronic_drawer, as I only need three sockets: the camera cable, the Arduino Nano connection and the external supply for the motors. I did it in SolidWorks, as I do not know OpenSCAD nearly well enough; tell me if it would still be of interest for me to publish it here.

For the calibration, it is pretty restricted: the ONLY two parameters that communicate with the ISP directly are the exposure and the gamma. Note that both of these are integers (from 0 down to -11 for the exposure), which means they are just knobs on a black box. From there, I made an auto-calibration which finds the best exposure for a specified brightness, and a white balance that roughly equalises the colour-channel intensities across the image. The other parameters can be changed manually (contrast, gain, etc.).
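As a sketch of the exposure search: with only twelve integer settings, a simple scan is enough. The `capture_brightness` callback below is a stand-in for grabbing a frame at a given exposure setting and measuring its mean brightness (the names are illustrative, not my actual code):

```python
def calibrate_exposure(capture_brightness, target, lo=-11, hi=0):
    """Pick the integer exposure whose measured brightness is closest
    to `target`.

    capture_brightness: callable taking an exposure setting and
    returning the mean image brightness at that setting.
    """
    best_exp, best_err = lo, float("inf")
    for exp in range(lo, hi + 1):  # only 12 settings, so scan them all
        err = abs(capture_brightness(exp) - target)
        if err < best_err:
            best_exp, best_err = exp, err
    return best_exp
```

Because the exposure knob is a black box, an exhaustive scan is safer than assuming a nice monotonic response and bisecting.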

Here is my microscope :

My remaining problem is a colour gradient across the image, with purple/pink at the edges and green in the centre; see the image below. From Color dots in the image - #5 by MetallicaSPA, it looks similar to that post, but I do not have access to flat-field correction; could that be the problem? Any idea how I could resolve it? Disregard the shading in the image; it seems that dust has got onto the sensor.

Best regards

The colour shading is discussed in Focusing and Chromatic Aberration Issues - #9 by WilliamW . The link to the paper in @j.stirling’s post tells you why it happens.
(Edit: the journal has moved their pages, so the link in that thread does not work. https://doi.org/10.5334/joh.20 should be a stable link)

As you do not have access to the ISP, you will need to process the images in software to correct for the lens shading. Each colour needs a different lens-shading table, which will then give a uniform ‘white’ background (actually grey, because pure white would mean the image is saturated). Once you are doing a software correction anyway, you should also be able to include correction of the colour mixing.
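A minimal per-channel flat-field correction along these lines could look like the following NumPy sketch (the `flat` image is the blank-background capture; function and variable names are illustrative):

```python
import numpy as np

def flat_field_correct(img, flat):
    """Divide each channel by its normalised flat-field image.

    img, flat: arrays of shape (H, W, 3). The flat is normalised so
    that its per-channel mean is 1, which keeps the corrected
    brightness comparable to the input.
    """
    img = img.astype(np.float64)
    flat = flat.astype(np.float64)
    flat = flat / flat.reshape(-1, 3).mean(axis=0)  # per-channel mean -> 1
    eps = 1e-6                                      # avoid division by zero
    return np.clip(img / (flat + eps), 0, 255)
```

Note this corrects each channel's shading independently; it does not undo colour mixing between channels, which needs the unmixing matrix discussed above.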

I think live correction would involve WebGL, which would be some work, and it wouldn’t fix captured images. We could probably record the LST and then fix captured images manually. It will be less good than doing it in the ISP, as the full pipeline and compression will already have happened and we will have to guess the gamma curve.

Thank you so much for both of your replies @WilliamW , @j.stirling .

Just for synthesis’ sake, and to be sure I understood correctly: each photodiode is equipped with a lenslet - a small lens - that focuses the light onto the diode. Because rays towards the edge of the sensor arrive obliquely, chief ray angle compensation is implemented: the lenslets are progressively offset towards the sensor centre to compensate for the oblique (non-normal) rays. But this poses a problem in our set-up, because our specific optics produce ray angles that the compensation was not designed for. As a result, we get vignetting - reduced brightness towards the edges - and colour crosstalk.

To account for vignetting: take a picture of a uniform white background under the same microscopy conditions, and derive a 2D intensity heatmap with which to correct each channel. What I do not get is this: “While vignetting requires only one scalar parameter per channel,” quoted from the paper - I thought it was a 2D image that would correct vignetting; am I missing something?

For the colour aberration, a lens-shading table is applied at the ISP level, which also comes from calibrating on a white background.

To get this straight: as of now, the background calibration allows the ISP to correct for both issues, but not for colour crosstalk? And does the calibration usually carry over from one session to the next?

Finally, can I simply put a white sheet under the microscope, defocus it slightly, and calculate my matrices from there, to then apply them in post-processing? Has this been tested? For colour crosstalk, has the solution discussed in Focusing and Chromatic Aberration Issues - #9 by WilliamW been tested, i.e. taking multiple photos of the same scene slightly shifted, to measure correlation matrices between the colour channels? This might be a little out of scope for my project, but I am still interested in knowing whether someone has attempted it.

Have a great week-end

It is a single scalar per channel at each pixel, in contrast to the mixing, which is a matrix combining the pixel in this channel with the same pixel in the other channels.
In practice the vignetting/shading does not need to be saved for each pixel; it is stored on a coarse grid, maybe 64×64 points or even 16×16, and interpolated for the individual pixels.
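That coarse-grid idea can be sketched in NumPy as follows: store gains on a small grid and expand them to full resolution at correction time. This sketch uses nearest-neighbour lookup for brevity; a real implementation would interpolate bilinearly for smooth gains.

```python
import numpy as np

def expand_gain_table(table, shape):
    """Expand a coarse (gh, gw) gain table to a full-resolution (H, W)
    gain map using nearest-neighbour lookup.
    """
    h, w = shape
    gh, gw = table.shape
    rows = np.arange(h) * gh // h  # map each pixel row to a grid row
    cols = np.arange(w) * gw // w  # map each pixel column to a grid column
    return table[np.ix_(rows, cols)]
```

Storing, say, a 64×64 grid of floats per channel instead of a full 3280×2464 map is what makes the lens-shading table cheap to save and apply.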

As of now, meaning in the standard OpenFlexure v2 or v3 software with the Pi Camera v2? Yes. The calibration remains valid as long as you do not change the illumination relative to the lens and camera. You point the lens at a blank area (not white paper). There is an automatic adjustment of gain and exposure to get a bright but not saturated base image, which is used to calculate the lens shading.

Colour unmixing requires much more information, but the matrix calculated in the paper is pretty effective when applied to other Pi Camera v2 units. It should be similar for the same sensor on the Arducam board.
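Applying such an unmixing matrix in post-processing is just a 3×3 matrix multiply per pixel. A NumPy sketch (the matrix values in any real use would come from calibration or the paper; nothing here is the actual published matrix):

```python
import numpy as np

def unmix_colours(img, M):
    """Apply a 3x3 channel-unmixing matrix M to every pixel.

    img: (H, W, 3) array. Each output channel is a linear combination
    of the three input channels at the same pixel:
    out[..., i] = sum_j M[i, j] * img[..., j]
    """
    out = img.astype(np.float64) @ M.T  # broadcasts over all pixels
    return np.clip(out, 0, 255)
```

In practice this would be applied per pixel after the per-channel lens-shading correction, since the mixing strength itself varies across the sensor.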

Thank you for your response,

This is invaluable information. I will test the blank slide, try the software correction, and respond to this thread with the results once done.

Have a nice one !