Help with fluorescent imaging

I am working with a fluorescent dye that is excited by UV and emits in the red. The dye is oxygen sensitive: it fluoresces more strongly in reduced-oxygen environments. I have a single UV LED mounted to the side of a 10x objective. In a reduced-oxygen environment I can see the red signal, but I’m having trouble imaging the fluorescent beads.

I have tried using the standard visible illumination and turning on the UV at the same time, with some success, but not the best. A green LED was maybe a little better. With just the UV light I don’t see much, and the image is not stable. I assume there is a way to increase the exposure time? I tried changing some settings and looked at past comments, but no luck. I also tried using the raw .png images in the red channel, but again I couldn’t see a way to increase the exposure time. I have also tried 3 UV LEDs around the objective; I can see better fluorescence, but can’t figure out a way to take an image. Any help would be appreciated.

Hey @DougKoebler, maybe you can check out the integration of openUC2 with the OpenFlexure?

There are some people working on it: Optical filter cubes - openUC2 (improved) by LIBRE hub | Free STL model download | Printables.com

There is also an adapter for cube=>OFM. Maybe worth a check? UV excitation plus an orange filter from Lee is typically a good first start to explore, e.g. with highlighter ink. For anything beyond that, I can probably get decent-quality filters from various sources.

I have been unable to image any fluorescence through the OpenFlexure software. I tried just increasing the exposure time; maybe I need to expose for much longer?

I am currently looking at getting the raw .png images using a Python program where I set the exposure time. It’s not working yet, but it’s close. In this program, you set the exposure time and analog gain to get raw BGGR images. If I get the program working I will post it here.

I looked at the openUC2 project, very interesting. I am planning to build up a filter-cube system on OpenFlexure. But for a low-cost system, just using an external UV LED and looking at the red channel works on my Nikon TE300. The dye I am using excites in UV and emits in red, so I would like to see it work on the OpenFlexure system with no filters.

Have you had a look at the Fluorescence optics for V7? thread?

I did see that and read the thread through, but I didn’t see any settings. So I just tried to increase the exposure; probably I just didn’t go high enough, will try again!

I have a second SD card running Raspbian. I ran Thonny with the program below (with help from ChatGPT). This program lets you set an exposure time on the camera. So I focus in OpenFlexure, then shut down and boot from the second SD card to run Thonny. I can save the raw PNG pixel values for the B, G1, G2, and R channels, along with a combined G1 & G2 and the full Bayer image. I take an image, then look at the red channel to see if I have a good exposure. (I would like to add a viewer to be able to adjust the exposure, but that’s not working yet; I’m having difficulties with a library.)

This one works for now.

Doug

```python
import time

import numpy as np
from picamera2 import Picamera2
from PIL import Image


def unpack(raw_bytes: np.ndarray):
    """
    Convert 2D uint8 Bayer data with shape (H, W*2) to 10-bit uint16 (H, W).
    Each pair of bytes encodes one 10-bit pixel.
    """
    h, w_bytes = raw_bytes.shape
    if w_bytes % 2 != 0:
        raise ValueError("Expected even number of bytes per row")
    w_pixels = w_bytes // 2
    raw_words = raw_bytes.reshape(h, w_pixels, 2)
    raw_10bit = (raw_words[:, :, 1].astype(np.uint16) << 8) | raw_words[:, :, 0].astype(np.uint16)
    return (raw_10bit & 0x03FF) << 6  # Scale to 16-bit


def extract_bayer_channel(image: np.ndarray, channel: str) -> np.ndarray:
    if channel == 'blue':
        return image[0::2, 0::2]
    elif channel == 'green1':  # Green on blue rows
        return image[0::2, 1::2]
    elif channel == 'green2':  # Green on red rows
        return image[1::2, 0::2]
    elif channel == 'red':
        return image[1::2, 1::2]
    else:
        raise ValueError("Channel must be one of: 'red', 'green1', 'green2', 'blue'.")


def interleave_green_channels(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """
    Combine Green1 and Green2 into a full-resolution green channel image.
    g1: Green pixels from even rows (Green1)
    g2: Green pixels from odd rows (Green2)
    """
    h, w = g1.shape  # g1 and g2 should be the same shape
    green_full = np.zeros((h * 2, w * 2), dtype=np.uint16)
    green_full[0::2, 1::2] = g1  # Green1 at even rows, odd columns
    green_full[1::2, 0::2] = g2  # Green2 at odd rows, even columns
    return green_full


def save_image(array: np.ndarray, filename: str):
    img = Image.fromarray(array, mode='I;16')
    img.save(filename, format='PNG')
    print(f"Saved: {filename}")


def main():
    picam2 = Picamera2()

    # Configure the camera for a still image with RAW 10-bit Bayer format
    capture_config = picam2.create_still_configuration(raw={"format": 'SBGGR10'})
    picam2.configure(capture_config)

    # Set manual exposure and gain
    exposure_time_us = 1000000  # 1 second exposure (adjust for fluorescence)
    analog_gain = 4.0           # Analog gain (1.0 to 16.0 typically)

    # Turn off automatic exposure and white balance
    picam2.set_controls({
        "ExposureTime": exposure_time_us,
        "AnalogueGain": analog_gain,
        "AeEnable": False,
        "AwbEnable": False
    })

    # Start the camera and give it time to apply settings
    picam2.start()
    time.sleep(2)

    # Capture the Bayer image
    raw_data = picam2.capture_array("raw")
    data = unpack(raw_data)

    # Save the full Bayer image
    save_image(data, "bayer_preserved.png")

    # Extract individual color channels
    g1 = extract_bayer_channel(data, 'green1')
    g2 = extract_bayer_channel(data, 'green2')
    red = extract_bayer_channel(data, 'red')
    blue = extract_bayer_channel(data, 'blue')

    save_image(g1, "green1.png")
    save_image(g2, "green2.png")
    save_image(red, "red.png")
    save_image(blue, "blue.png")

    # Create a full-resolution green image
    green_full = interleave_green_channels(g1, g2)
    save_image(green_full, "green_full_res.png")


if __name__ == '__main__':
    main()
```
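The byte-unpacking step can be sanity-checked without any camera hardware by round-tripping synthetic data: pack known 10-bit pixel values as little-endian 16-bit words, run them through an `unpack` mirroring the one in the script, and confirm the values come back. A minimal sketch:

```python
import numpy as np

def unpack(raw_bytes):
    """Convert (H, W*2) uint8 little-endian word data to 10-bit values scaled to 16-bit."""
    h, w_bytes = raw_bytes.shape
    words = raw_bytes.reshape(h, w_bytes // 2, 2)
    raw_10bit = (words[:, :, 1].astype(np.uint16) << 8) | words[:, :, 0].astype(np.uint16)
    return (raw_10bit & 0x03FF) << 6

# Synthetic 10-bit pixel values, packed as little-endian byte pairs
pixels = np.array([[0, 1, 512, 1023]], dtype=np.uint16)
packed = np.zeros((1, 8), dtype=np.uint8)
packed[0, 0::2] = (pixels & 0xFF).astype(np.uint8)  # low bytes
packed[0, 1::2] = (pixels >> 8).astype(np.uint8)    # high bytes

out = unpack(packed)
# Shifting back down by 6 bits should recover the original 10-bit values
assert np.array_equal(out >> 6, pixels)
```

If the real captures look scrambled, this kind of check helps separate a packing/format mismatch (e.g. a packed CSI2 raw stream instead of two bytes per pixel) from an exposure problem.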

Here are some images of the red channel at an exposure of 1000 µs (where 1,000,000 µs = 1 second):

10um beads at 10x, express more fluorescence in reduced oxygen

Green LED no UV

Green LED with UV (single LED) in air

UV only (single LED) in air
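Without a live viewer, one way to land on a usable exposure is to bracket: capture at several exposure times and keep the one whose red-channel mean is closest to a target brightness, discarding near-saturated frames. A sketch of just the selection logic (the capture loop and channel means would come from the script above; the numbers here are made-up examples, not measurements):

```python
def pick_exposure(exposures_us, red_means, target=0.5 * 65535):
    """Return the exposure (in µs) whose red-channel mean is closest to
    `target` on the 16-bit scale, skipping near-saturated captures."""
    best = None
    for exp, mean in zip(exposures_us, red_means):
        if mean > 0.95 * 65535:  # clipped highlights; discard this capture
            continue
        score = abs(mean - target)
        if best is None or score < best[0]:
            best = (score, exp)
    # If everything saturated, fall back to the longest bracket
    return best[1] if best else max(exposures_us)

# Example bracket from 1 ms to 1 s with hypothetical red-channel means
exposures = [1_000, 10_000, 100_000, 1_000_000]
means     = [400,   4_100,  31_000,  64_900]  # the 1 s frame is nearly saturated
print(pick_exposure(exposures, means))  # -> 100000
```

Stepping exposures by factors of 10 like this quickly shows whether the dim-image problem is exposure-limited or whether the signal is simply not reaching the sensor.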


Can you post the published ex/em spectra of the dye? I am guessing the peak excitation is not in the UV, but somewhere in the green. UV can get lost in the excitation optics, might bleach the particles, might not be the optimal wavelength for excitation, etc. A closely matched visible LED, or even a laser pointer, might be more efficient for excitation. Then you just need to block the excitation light with a basic filter at the emission wavelength. Even colored glass is OK, as long as the spectra are published.


Platinum(II) tetra(pentafluorophenyl)porphyrin (PtTFPP) is commonly excited around 400–409 nm (Soret band / blue LED) and shows strong phosphorescence peaking around 650 nm (red region), crucial for oxygen sensing due to intensity quenching by oxygen. Absorption features (Q-bands) are seen near 508 nm and 541 nm, with the Soret band at ~392–396 nm.

Looks like I should also try a blue LED.