Lens shading correction for Raspberry Pi Camera

Good day,

I was wondering if anyone had seen a research paper titled “Flat-Field and Colour Correction for the Raspberry Pi Camera Module”? https://openhardware.metajnl.com/articles/10.5334/joh.20/ This paper does a great job of explaining why vignetting occurs when we swap out the stock lens.

I’m facing exactly this lens shading problem with my Sony 8MP IMX219 camera, which had its stock lens replaced with a zoom lens. The image below is supposed to be totally white.

I was hoping to follow the solution stated in the paper to correct the lens shading problem, but I am totally at a loss as to where to start, due to my limited knowledge of computer vision and Python.

Problems I’m facing after reading the research paper:

  1. The paper discusses the different steps taken for white image normalisation, colour balance, colour response and spatially varying colour-unmixing. Are these steps already provided as a Python script that I can just integrate into my video stream processing steps?
  2. How do I import libcamera into my Python script?
  3. Can someone guide me through the implementation of each correction step, in a detailed step by step manner?

I’m grateful for any help that can be rendered.

Thank you.

Hi @Zhang, thanks for posting.

The paper is accompanied by a code archive, which includes scripts to acquire the calibration images and perform all the processing steps. However, this focuses on full matrix unmixing to correct everything, and only works with raw images (so is pretty slow).

Probably the most useful code actually lives inside the microscope software, in recalibrate_utils.py.

That file can be used as a stand-alone module (it doesn’t depend on the microscope server software). The code at the end of the file (in the __name__ == "__main__" block) is out of date, but the example code at the top is correct. There are functions provided that:

  • Auto-expose (adjusting exposure time and gain) so that the image is reasonably bright
  • Adjust the white balance gains so the colours are neutral in the middle
  • Set the lens shading table (that will remove the pink tint at the edges)

You should be able to drop that file into your project and import the relevant functions. Then, assuming you have picamerax (you can install it using pip install picamerax), you should be able to run:

import picamerax
import recalibrate_utils as ru

picamera = picamerax.PiCamera()

ru.adjust_shutter_and_gain_from_raw(picamera)  # set exposure time and gain
ru.adjust_white_balance_from_raw(picamera)     # fix the white balance gains
lst = ru.lst_from_camera(picamera)             # measure a lens shading table
picamera.lens_shading_table = lst              # apply it to the camera

Run that code while the camera is pointing at a uniform white target - the image should now be uniform and white.

This code also freezes the camera settings - it will no longer auto-expose or change its white balance. That should be what you need.
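
For what it’s worth, if you later want the automatic algorithms back, the standard picamera properties (which picamerax inherits) should do it - a minimal sketch:

picamera.shutter_speed = 0       # 0 means “auto” exposure time
picamera.exposure_mode = "auto"  # re-enable automatic gain/exposure
picamera.awb_mode = "auto"       # re-enable automatic white balance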

It is important to mention that the colour response will not be uniform: you will probably see a decrease in saturation at the edges of the image. That’s what the more complicated correction described in the paper is designed to fix, but for many applications it’s not a big issue.

I hope that helps! The only thing I’ve not covered is how to save and restore the camera settings from a file. The OpenFlexure Microscope software does do this, but if you are not using it, there are simpler ways to manage it!

Hi @rwb27, your solution totally works.


The above image is what I’m getting after implementing your solution.
The lens shading issue was resolved, with slight discoloration at the bottom edges.
Not a big issue as I can crop out the affected region.

"
def adjust_shutter_and_gain_from_raw(
    camera: PiCamera,
    target_white_level: int = 700,
    max_iterations: int = 20,
"
Setting the target_white_level at 700 causes the image to be over-exposed.
I reduced the level to 500, and the calibrated white image changed to a light shade of grey. However, the images captured are pretty true to their original colors.
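
In other words, I now run:

ru.adjust_shutter_and_gain_from_raw(picamera, target_white_level=500)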

Q1. If I fix the target_white_level value, will I get the same “white” level every time the system restarts?
My project requires me to measure the R/G and B/G ratios of different samples.
If I don’t get the same starting point every time the system restarts, that’s going to be a problem when comparing the ratios between different samples.

Q2. Can you share how to save and restore the camera settings?
Q3. Will I get the same starting “white” level if I restore the saved camera settings?

I think it all depends on what you mean by “the same”, unfortunately!

If you re-run the auto-adjustment procedure, the white value you get should be the same, i.e. the raw image should give you a value of about 700 in the brightest colour channel at its brightest point, and the processed image should be uniform and white.
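
If you want to check that number yourself, something like this quick sketch (my own, not from the paper’s scripts - it assumes picamerax’s array module mirrors picamera’s) will print the brightest raw value in each colour channel:

import numpy as np
import picamerax
import picamerax.array

with picamerax.PiCamera() as camera:
    with picamerax.array.PiBayerArray(camera) as raw:
        camera.capture(raw, "jpeg", bayer=True)  # grab the raw Bayer data
        rgb = raw.demosaic()  # (height, width, 3) array of raw values
        for name, channel in zip("RGB", np.moveaxis(rgb, -1, 0)):
            print(name, int(channel.max()))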

However, if your illumination changes, it will try to compensate - so for example if your image is less bright tomorrow, it will be using a larger exposure time to make it appear the same. Similarly, if the light is greener when you calibrate, the white balance will change to make the image appear white again.

Loading and saving the camera settings means the relationship between the raw values collected by the camera, and the processed images you look at, will stay the same. However, if your illumination is inconsistent, you will then notice those inconsistencies.

Personally I think I’d try to do both - I would always check the uniform white image before a measurement (and probably record it along with the measurement) but I’d use the saved camera settings for consistency. That way, you can correct for any small variations in your illumination, but you don’t have to worry about the camera settings being different. It will also start up much faster. If you’re being thorough, it’s a good idea to check that the calibration image is uniform, white, and not saturated - if it doesn’t satisfy any of those criteria, probably something has changed and you do need to recalibrate.
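
To make that concrete, a rough sketch of such a check might look like the function below - this is my illustration rather than anything from recalibrate_utils, and the thresholds are guesses you would tune for your setup (the per-pixel minimum is noise-sensitive; averaging over blocks would be more robust):

import numpy as np
import picamerax.array

def calibration_image_ok(camera, saturation_limit=250, tolerance=0.1):
    """Rough check that a capture is uniform, white and not saturated."""
    with picamerax.array.PiRGBArray(camera) as output:
        camera.capture(output, format="rgb")
        img = output.array.astype(float)
    if img.max() >= saturation_limit:
        return False  # saturated somewhere in the frame
    channel_means = img.mean(axis=(0, 1))
    if channel_means.max() / channel_means.min() > 1 + tolerance:
        return False  # colour channels unbalanced, i.e. not white
    brightness = img.mean(axis=2)
    if brightness.min() < (1 - tolerance) * brightness.max():
        return False  # not uniform enough across the frame
    return True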

The code that the OFM uses to load and save the camera settings is a bit complicated, as it’s integrated into a larger mechanism. However, I did write some simple functions to save/restore settings from a YAML file, as part of the code accompanying the CRA compensation paper. You should just be able to copy/paste those two functions into your code, provided you import yaml at the top of the file.
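
In case that repository moves, here is a minimal sketch of what such functions can look like - my illustration, not a verbatim copy of the paper’s code. It assumes picamerax returns the lens shading table as a uint8 numpy array; note too that picamera exposes the analogue/digital gains read-only, so a truly complete restore is a little more involved:

import yaml
import numpy as np

def save_settings(camera, path):
    # Record the settings the calibration fixed; awb_gains are
    # Fractions, so convert them to floats for YAML.
    settings = {
        "shutter_speed": camera.shutter_speed,
        "awb_gains": [float(g) for g in camera.awb_gains],
        "lens_shading_table": camera.lens_shading_table.tolist(),
    }
    with open(path, "w") as f:
        yaml.dump(settings, f)

def restore_settings(camera, path):
    with open(path) as f:
        settings = yaml.safe_load(f)
    # Turn the automatic algorithms off before applying fixed values.
    camera.awb_mode = "off"
    camera.exposure_mode = "off"
    camera.awb_gains = tuple(settings["awb_gains"])
    camera.shutter_speed = settings["shutter_speed"]
    camera.lens_shading_table = np.array(
        settings["lens_shading_table"], dtype=np.uint8
    )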

Hi @rwb27,

Part 1-
I’m breaking up my post into parts as I can’t post more than one image per post as a new user.

I ran into another issue, and was wondering if there’s any way I can adjust the camera ISO setting after implementing “picamera.lens_shading_table = lst”. Reason being, I’d like the captured images to be brighter.

Below is an image without the lens_shading_table solution. You can see that the bands display a different colour towards the sides due to lens shading.

Part 2-
This resulted in the stitched image displaying alternating bands of greenish and pinkish colors, which is wrong.

Part 3-
However, I’m able to adjust the brightness, and thus can detect keypoints needed for stitching to work.

Part 4 -
Using your solution, I’m able to resolve the lens shading issue, but the resultant image is much darker, and very few, sometimes zero, keypoints are detected.

Is there any way I can increase the ISO or the shutter speed for a brighter image?

Part 5 -
Very sorry, I missed out an important point.
The lens shading correction was done under white light.
After lens shading had been corrected, the coral specimen images were captured under UV light.

The UV light reflected off the coral specimen wasn’t very strong, and the previous “bright” image of the specimen was obtained by setting the shutter speed low and the ISO high.

Does the auto gain and shutter speed button help? It is in the settings, under camera (I think). That might still struggle if your image is actually very dark and the exposure cannot be increased enough. I also don’t know how well it copes with a sample with as much black as yours.

Hi WilliamW,

After implementing the lens shading correction, the camera parameters such as auto gain and shutter speed seem locked. I tried to adjust the brightness, but any minor adjustment causes the on-screen image to go either full white or full black.

I’m able to change the ISO and shutter speed now to get a brighter image during image capture.
But when I switch back to the default ISO and shutter speed (iso = 0 and shutter_speed = 0), the initial white balance value changes.

Just wondering if there’s any way to change the ISO and shutter speed only during image capture, and return to the default settings afterwards with the original lens shading correction intact?

Hi @Zhang, if I remember correctly you’re not actually using the OpenFlexure Microscope software, but you are using the LST calibration code from the calibration plugin.

If that’s the case, it should be relatively simple to readjust the gain/exposure settings after calibrating. If you run the calibration procedure as you are currently doing, switch to the UV illumination, and then re-run ru.adjust_shutter_and_gain_from_raw(picamera), it should result in a higher gain and brighter images. As this doesn’t re-enable anything automatic, it won’t change the white balance. If you run ru.adjust_white_balance_from_raw(picamera) once, that will update the white balance but then keep it constant for all future images.

Hi @rwb27,

You are right. I’m only using the LST calibration code, and not OpenFlexure Microscope software.

The first step of my code is running the LST calibration under white light, and saving the settings.
ru.adjust_shutter_and_gain_from_raw(picamera)
ru.adjust_white_balance_from_raw(picamera)
lst = ru.lst_from_camera(picamera)
picamera.lens_shading_table = lst
ru.save_settings(picamera, output_path)

Then I switch to UV illumination and readjust the gain/exposure settings.
ru.adjust_shutter_and_gain_from_raw(picamera)
Running ru.adjust_shutter_and_gain_from_raw(picamera) after switching to UV illumination did result in brighter images, which allowed keypoints to be detected.

After getting my images in UV, I switch back to white light and restore the lst settings saved earlier.
ru.restore_settings(picamera, output_path)

There are three issues with this arrangement.

  1. Under UV illumination, vignetting is still present.

    Vignetting was less obvious in a single captured image.
    But when the images were stitched, the vignetting effect was more obvious. It resulted in alternating bands of dark and light violet across the specimen.
  2. Under white light, the image after restore_settings is much brighter than the image from when the LST calibration was first done.
  3. After restore_settings, camera.capture can’t capture the image shown on screen. Only a full black image is captured.

Is there any way I can resolve these issues?

  1. I think this is the much more subtle vignetting due to the camera’s CRA compensation - you don’t lose brightness towards the edges, but you do lose saturation (because of some crosstalk between the colour channels). If you can take the right calibration images, it is possible to correct this, at least to some extent, and I wrote a paper about it with accompanying code.

  2. That’s what I’d expect - your settings for UV illumination will have a longer exposure than the settings for white light.

  3. Are you saying you can see the video preview, but the captured images are black? This does sometimes happen with the Pi camera - one thing to check is whether you have increased your “GPU memory split” to 128 or 256MB. If you don’t have enough graphics memory available, it often fails in somewhat hard-to-debug ways.

Hi @rwb27,

I’m using a PTZ 8MP camera from Arducam.
The base camera is a Sony 8MP IMX219 camera, which I think is the standard v2 RPi camera.

  1. In this case, I assume the calibration will have to be done together with the PTZ lens.

  2. I refer to your earlier post, Lens shading correction for Raspberry Pi Camera - #13 by r.w.bowman. Before switching back to white light, I restore the saved settings, which should give me the same exposure as when the first LST calibration was done. So how is it that the exposure settings for UV light get “passed over” to the white light illumination?

  3. Yes, I can see the video preview, but the captured images are black. I’m running this project on an RPi 4, and I have not touched any settings on the GPU. Could you please elaborate on how to increase the graphics memory?

  1. Yes, you should do the calibration together with whatever lens you’re using, as it will definitely make a difference.

  2. You don’t just need to restore the LST settings; you need to restore all the settings. I have observed that sometimes it is necessary to shut down and restart the camera rather than just change the settings, but I have never managed to systematically figure out what those circumstances are.

  3. You can use sudo raspi-config to edit the GPU memory split, or you can add a line to config.txt in /boot/ - see the example below. Hopefully some web searches will give you more specific instructions :slight_smile:
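
For reference, the relevant line in /boot/config.txt looks like this (256 is just an example value; you need to reboot for it to take effect):

# in /boot/config.txt
gpu_mem=256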

Hi @rwb27,

I tried to follow the measure_color_response.py code from GitHub, but with some modifications.
I didn’t build the jig, but rather printed out the coloured test sheet to be used for the colour response calibration. My idea was to first test out the colour response under normal image capturing conditions, and this is the preview image that I got.

From the above image, I don’t know what conclusion to draw, but I think the vignetting effect is still prominent in the white image.
There is also an output_camera_settings.yaml file which was generated.
I tried to load this file into my project before the PiCamera is initiated, and the vignetting effect is very bad in the white image.

I wonder what I did wrong. Does measure_color_response help to calibrate the images under white, red, green, blue and black to produce a calibrated LST?
If I’m on the wrong path, what should I do to calibrate the LST such that the vignetting effect is minimised for red, green and blue images?

That’s strange. The set of graphs looks as I’d expect: it always plots from the raw data, so you should see the vignetting. The second plot is odd - it looks like the lens shading is doing nothing. It is possible that the YAML file is using a flat lens shading table, which is correct for calibration but wrong for measurement. I think there should be some documentation on readthedocs that explains how to use it to generate an LST (though this LST is unlikely to be any better than the one you are generating already).
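
If you want to check for that, you could compare the loaded table against a flat one, and re-apply your calibrated table if needed - a small sketch of mine (as far as I remember, picamerax uses uint8 gain values where 32 corresponds to a gain of 1.0, but do check that against your version):

import numpy as np

current = picamera.lens_shading_table
# A flat table has every cell at unity gain (32 = 1.0x in picamerax’s
# format, if my recollection is right).
if np.all(current == 32):
    print("The loaded lens shading table is flat")
    # ...in which case, re-apply the table you calibrated earlier, e.g.:
    # picamera.lens_shading_table = lst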