Lens shading correction for Raspberry Pi Camera

Good day,

I was wondering if anyone had seen a research paper titled “Flat-Field and Colour Correction for the Raspberry Pi Camera Module”? https://openhardware.metajnl.com/articles/10.5334/joh.20/ This paper does a great job of explaining why vignetting occurs when we swap out the stock lens.

I’m facing exactly this lens shading problem with my Sony 8MP IMX219 camera, which has had its stock lens replaced with a zoom lens. The image below is supposed to be totally white.

I was hoping to follow the solution described in the paper to correct the lens shading problem, but I am totally at a loss about where to start due to my limited knowledge of computer vision and Python.

Problems I’m facing after reading the research paper:

  1. The paper discusses the different steps taken for white image normalisation, colour balance, colour response and spatially varying colour-unmixing. Are these steps already provided as a Python script that I can just integrate into my video stream processing steps?
  2. How do I import libcamera into my Python script?
  3. Can someone guide me through the implementation of each correction step, in a detailed step by step manner?

I’m grateful for any help that can be rendered.

Thank you.

Hi @Zhang, thanks for posting.

The paper is accompanied by a code archive, which includes scripts to acquire the calibration images and perform all the processing steps. However, this focuses on full matrix unmixing to correct everything, and only works with raw images (so is pretty slow).

Probably the most useful code actually lives inside the microscope software here:

That file can be used as a stand-alone module (it doesn’t depend on the microscope server software). The code at the end of the file (in the __name__ == "__main__" block) is out of date, but the example code at the top is correct. There are functions provided that:

  • Auto-expose (adjusting exposure time and gain) so that the image is reasonably bright
  • Adjust the white balance gains so the colours are neutral in the middle.
  • Set the lens shading table (that will remove the pink tint at the edges)

You should be able to drop that file into your project and import the relevant functions. Then, assuming you have picamerax (you can install it using pip install picamerax), you should be able to run:

import picamerax
import recalibrate_utils as ru

picamera = picamerax.PiCamera()
lst = ru.lst_from_camera(picamera)
picamera.lens_shading_table = lst

Run that code while the camera is pointing at a uniform white target - the image should now be uniform and white.

This code also freezes the camera settings - it will no longer auto-expose or change its white balance. That should be what you need.

It is important to mention that the colour response will not be uniform: you will probably see a decrease in saturation at the edges of the image. That’s what the more complicated correction described in the paper is designed to fix, but for many applications it’s not a big issue.

I hope that helps! The only thing I’ve not covered is how to save and restore the camera settings from a file. The OpenFlexure Microscope software does do this, but if you are not using it, there are simpler ways to manage it!

Hi @rwb27, your solution totally works.

The above image is what I’m getting after implementing your solution.
The lens shading issue was resolved, with slight discoloration at the bottom edges.
Not a big issue as I can crop out the affected region.

def adjust_shutter_and_gain_from_raw(
    camera: PiCamera,
    target_white_level: int = 700,
    max_iterations: int = 20,
Setting target_white_level to 700 causes the image to be over-exposed.
I reduced the level to 500, and the calibrated white image changed to a light shade of grey. However, the images captured are pretty true to their original colours.

Q1. If I fix the target_white_level value, will I get the same “white” level every time the system restarts?
My project requires me to measure the R/G and B/G ratios of different samples.
If I don’t get the same starting point every time the system restarts, it’s going to be a problem comparing the ratios between different samples.
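For context, the ratio measurement I have in mind looks something like this (just a sketch with NumPy; the function name and the per-channel averaging are my own choices):

```python
import numpy as np

def channel_ratios(rgb_image):
    """Mean R/G and B/G ratios over an (H, W, 3) image array."""
    rgb = np.asarray(rgb_image, dtype=float)
    r = rgb[..., 0].mean()
    g = rgb[..., 1].mean()
    b = rgb[..., 2].mean()
    return r / g, b / g

# Example on a synthetic frame:
frame = np.zeros((4, 4, 3))
frame[..., 0] = 100  # R
frame[..., 1] = 200  # G
frame[..., 2] = 50   # B
print(channel_ratios(frame))  # (0.5, 0.25)
```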

Q2. Can you share how to save and restore the camera settings?
Q3. Will I get the same starting “white” level if I restore the saved camera settings?

I think it all depends on what you mean by “the same”, unfortunately!

If you re-run the auto-adjustment procedure, the white value you get should be the same, i.e. the raw image should give you a value of about 700 in the brightest colour channel at its brightest point, and the processed image should be uniform and white.

However, if your illumination changes, it will try to compensate - so for example if your image is less bright tomorrow, it will be using a larger exposure time to make it appear the same. Similarly, if the light is greener when you calibrate, the white balance will change to make the image appear white again.

Loading and saving the camera settings means the relationship between the raw values collected by the camera, and the processed images you look at, will stay the same. However, if your illumination is inconsistent, you will then notice those inconsistencies.

Personally I think I’d try to do both - I would always check the uniform white image before a measurement (and probably record it along with the measurement) but I’d use the saved camera settings for consistency. That way, you can correct for any small variations in your illumination, but you don’t have to worry about the camera settings being different. It will also start up much faster. If you’re being thorough, it’s a good idea to check that the calibration image is uniform, white, and not saturated - if it doesn’t satisfy any of those criteria, probably something has changed and you do need to recalibrate.
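If it helps, that kind of sanity check can be sketched in a few lines of NumPy. The function name and the thresholds below are my own illustration, not part of the paper's code - tune them for your camera and illumination:

```python
import numpy as np

def calibration_image_ok(rgb_image, saturation_level=255,
                         uniformity_tol=0.1, white_tol=0.05):
    """Rough check that a calibration image is uniform, white, and unsaturated."""
    rgb = np.asarray(rgb_image, dtype=float)
    # Not saturated: no pixel should hit the top of the range.
    if rgb.max() >= saturation_level:
        return False
    # Uniform: brightness shouldn't vary too much across the image.
    brightness = rgb.mean(axis=-1)
    if (brightness.max() - brightness.min()) > uniformity_tol * brightness.mean():
        return False
    # White: the three channel means should be close to each other.
    means = rgb.reshape(-1, 3).mean(axis=0)
    if (means.max() - means.min()) > white_tol * means.mean():
        return False
    return True
```

If any of the three checks fails, that's a cue to recalibrate rather than trust the saved settings.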

The code that the OFM uses to load and save the camera settings is a bit complicated, as it’s integrated into a larger mechanism. However, I did write some simple functions to save/restore settings from a YAML file, as part of the code accompanying the CRA compensation paper. You should just be able to copy/paste those two functions into your code, provided you import yaml at the top of the file.