Lens Calibration in Lightburn is nonfunctional/worthless

I gave up trying to calibrate the lens while it was attached to my laser, so I put the test pattern on a 4' x 4' sheet of flat white melamine/MDF board, standing vertically against a wall, with my camera on a tripod. I have also tried using the flat brown side. I have an 8MP ELP camera module that has not worked (supposedly the same camera Lightburn is selling as the 8MP N 75°), and a new ELP 16MP 100° FOV aspherical manual-focus camera, which has ALSO failed to calibrate during this testing. The camera I am currently testing is an AverMedia Cam340 4K, 85°H/55°V FOV, with an aspherical lens (meaning distortion free, NOT a fisheye). The only reason I'm testing this camera so extensively is that the 8MP and 16MP cameras had even lower capture recognition rates.

This is the 3rd time through my testing to get a solid, repeatable lens calibration with a specific camera. I have been testing this for almost 7 hours today.

I got so frustrated I wrote myself a program to track how many times I’ve clicked the ‘Capture Image’ button in Lightburn trying to get my camera lens calibrated. Right now the counter is on 2843 clicks… with ZERO pattern recognitions beyond the first image.

I have the correct test pattern with the round dots offset: 4 rows of 6 dots alternating with 4 rows of 5 dots. I have tested the camera from 15 cm to 1 meter from the target, even putting tape lines on my monitor over the live camera view at one point to guarantee the calibration target properly fills the center cell of the 3x3 grid, as others have said is required; that places the camera about 38 cm from the target. I even have external fill lighting set up to provide even illumination so the camera won't see any washed-out hotspots or shadows on the target.
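(For anyone comparing their printout: the layout described above matches the usual offset circle-grid style used by common calibration toolkits such as OpenCV; whether Lightburn uses exactly that library is an assumption on my part. A minimal sketch of the ideal dot centers for the 4x6 / 4x5 offset pattern, with a made-up dot spacing:)

```python
# Sketch: ideal dot centers for an offset circle grid of
# 4 rows of 6 dots alternating with 4 rows of 5 dots (44 dots total).
# `spacing` is a hypothetical center-to-center distance in mm, for
# illustration only; it is not a value taken from Lightburn's pattern.

def grid_points(spacing=25.0):
    points = []
    for row in range(8):  # 8 rows, alternating 6-dot and 5-dot rows
        dots = 6 if row % 2 == 0 else 5
        # odd rows are shifted half a spacing so their dots sit
        # between the dots of the rows above and below
        x_offset = 0.0 if row % 2 == 0 else spacing / 2
        for col in range(dots):
            points.append((x_offset + col * spacing, row * spacing))
    return points

pts = grid_points()
print(len(pts))  # 44 ideal dot centers
```

If a printed page produces a different dot count or a symmetric (non-offset) layout, it is the wrong pattern and recognition will never succeed.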

In my second round of testing, before my current setup, I tried changing orientation and fisheye settings, and I got some odd results over roughly 1000 captures, give or take a couple hundred. I have sequential screenshots of Lightburn giving me a recognition score of 0.68, then 8888.0 (with no image), then a clear capture image with "No Pattern Recognized," then a 9.61 with a badly fisheye-distorted image, and then a recognition score of 85047208.00 with a solid gray capture image. Really? Should a score like that even be possible when the goal is something under 0.9?
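(Scores in the 0.x to single-digit range look like an RMS reprojection error in pixels, which is the standard quality metric calibration libraries report; that Lightburn's score is exactly this is an assumption. A sketch of how such a score is computed from detected vs. model-predicted dot positions:)

```python
import math

def rms_error(detected, reprojected):
    """RMS distance in pixels between detected dot centers and the
    positions the fitted lens model predicts for them. Lower is better;
    a good calibration lands well under 1 px."""
    sq = [(dx - rx) ** 2 + (dy - ry) ** 2
          for (dx, dy), (rx, ry) in zip(detected, reprojected)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative values, not real capture data:
detected    = [(100.0, 100.0), (150.0, 100.0), (200.0, 100.0)]
reprojected = [(100.3, 100.1), (149.8, 100.2), (200.1,  99.7)]
print(round(rms_error(detected, reprojected), 2))  # 0.31
```

On that reading, 85047208.00 is astronomically far from any plausible pixel error, which supports the conclusion that the fit was run on garbage input rather than on a real detection.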

Maybe half a dozen times, the "recognized" image was INVERTED from the live camera view: when the paper target was physically inverted on my board, the capture image came out right side up, and when the target was right side up, the capture image was inverted and gave me a score in the 1.xx range.

I've set this same camera to fisheye, and I'll get small bursts of recognitions with a capture image. Those captured images sometimes look like a perfectly flat photo (with abysmal recognition scores), and sometimes look like they came from a 360° camera with a score in the 1's and 2's.

Whatever capture recognition processing is being done also does this random fisheye warping, to a much lesser extent, in the standard lens setting. Maybe 5% of the captured image, around the borders, is curved, sometimes but not always. It's as if every time you click "Capture Image" it uses a different variation of image processing and recognition, and it swings wildly in what it changes instead of making incremental shifts.

In the VERY few instances where I got an acceptable score below 0.9 for the center capture, I spent DOZENS of attempts on the second capture position with zero recognitions, so I started the process over.

Camera lens calibration should NOT be this unreliable. Camera lens FOVs are not infinitely variable here. You should be able to tell Lightburn that you have a camera with a resolution of X and an FOV of Y, and it should be able to tell you "set your camera to approximately this height," then adjust which recognition algorithm it's using so it isn't making attempts on garbage where every image capture is distorted in new LSD-inspired ways, leaving Lightburn unable to find dots that even 20-year-old OCR software could find.
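(The "tell me the height" calculation being asked for is plain trigonometry. A sketch under illustrative assumptions: a rectilinear lens, a known horizontal FOV, a known pattern width, and the rule of thumb that the pattern should fill roughly a third of the frame. None of the figures are Lightburn values.)

```python
import math

def suggested_distance(fov_h_deg, pattern_width_m, frame_fraction=1/3):
    """Distance at which a pattern of the given width fills
    `frame_fraction` of the horizontal frame, for a rectilinear
    lens with horizontal FOV `fov_h_deg`."""
    half_fov = math.radians(fov_h_deg) / 2
    # frame width at distance d is 2 * d * tan(half_fov);
    # solve pattern_width = frame_fraction * frame_width for d
    return pattern_width_m / (frame_fraction * 2 * math.tan(half_fov))

# Example: 85 deg horizontal FOV and a 20 cm wide dot pattern
d = suggested_distance(85, 0.20)
print(f"{d * 100:.0f} cm")  # roughly 33 cm
```

For an 85° camera and a ~20 cm pattern this lands around 33 cm, in the same ballpark as the ~38 cm I found by taping guide lines on the monitor, so the software clearly has enough information to suggest a mounting height up front.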

Ideally the distance from calibration pattern to camera should match the planned distance from lens to material, and the pattern should be in focus. However, this setup should not prevent the capture from actually occurring, although at that size it may be a bit cumbersome to move the camera into the appropriate position and orientation relative to the calibration pattern.

Maybe I'm missing it, but how does that make this distortion free?

If you capture a photo of something rectangular that nearly fills the frame, do all sides come out perfectly straight?

Typically the pattern should appear slightly smaller than a single cell of the 3x3 grid.

From what I've seen, lighting is the single biggest factor in calibration success. You want enough light that there are no visible shadows, but it should be very soft and even, with no bright spots. Think of a bright overcast day.

The other major factor is having no distractions within the camera frame, ideally a plain, neutral background.

These figures indicate to me there's something fundamentally wrong with the setup. You should not need that many captures, and you should be getting significantly better values.

Ignore the resulting captured image entirely; rely on the score alone to judge the quality of a capture.

Unless you have a genuinely distortion-free lens, you should stay on fisheye.

Can you provide some screenshots of the capture process? There may be something else going on.

A normal round lens will start showing pincushion or barrel distortion toward the edges of the image. Aspherical lenses are made specifically to prevent this distortion and are used in rectilinear optics to keep straight lines straight. Fisheye lenses have VERY short focal lengths and very wide FOVs that produce spherical distortion growing stronger the further you get from center, with no optical correction for it. There should be NO need to use the fisheye setting for anything under 110° FOV unless your camera actually has a fisheye lens on it, which is VERY easy to identify just by turning on the camera and looking at the preview.
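(The two lens behaviors described above correspond to two different radius mappings. A sketch comparing a simple Brown-style radial model, which covers barrel/pincushion on rectilinear lenses, with the equidistant fisheye projection; the coefficients are made up for illustration, not measured from any of these cameras.)

```python
import math

def brown_radial(r, k1=-0.05, k2=0.0):
    """Rectilinear lens with mild barrel distortion (k1 < 0).
    The error grows with r^2, so it only shows near the frame edges."""
    return r * (1 + k1 * r**2 + k2 * r**4)

def equidistant_fisheye(theta, f=1.0):
    """Fisheye: image radius is proportional to the incoming ray angle,
    so straight lines bow across the whole frame, not just the edges."""
    return f * theta

# Near the center (small r) barrel distortion is negligible...
print(brown_radial(0.1))   # ~0.09995
# ...but at the edge of the frame it pulls points visibly inward.
print(brown_radial(1.0))   # 0.95
```

That difference is exactly why the fisheye preview is so easy to spot by eye: the rectilinear model only bends the outer few percent of the frame, while the fisheye mapping bends everything off-center.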

My first sets of tests were done on a camera frame mounted over my laser bed, with height adjustment from 25 cm (10") up to 50 cm (19.75"), with and without the honeycomb, and with both the standard and fisheye settings for all 3 cameras. I even pulled the honeycomb out of my laser bed and tried it directly on the chipboard underneath.
Recognition on the camera mount was even worse than on my test board because of lighting and shadows caused by the laser's frame, so I moved the calibration to a flat board off the laser, with nothing else present in the camera's view. Rounds two and three of my testing were on both the neutral tan side of my board and on the white side, because other users have posted quite good success filling the view with plain white posterboard or sheets of paper. I can also adjust the brightness and color tone on my fill lights from 3000K to 5500K (yellow incandescent to white sunlight), and none of them shine directly on the surface; they're in diffuser boxes.

In Lightburn's online documentation, there is an image with a big green circle on it saying the calibration image should roughly fit in that circle, which is larger than the center cell of a 3x3 grid. Other troubleshooting posts on the forum say it should roughly fill that center cell. This is why I put the camera on a tripod: I can position the camera so the calibration image properly fills the recommended area, and get an accurate mounting distance from the camera to the test pattern.

I moved everything back to my laser and remounted the camera over the bed at 38 cm (15"), re-centered everything, and when I restarted Lightburn, the camera preview started up zoomed in on the calibration page so closely that the dot pattern filled the preview window, as if the camera were at less than half that distance. I had to restart Lightburn multiple times before the preview reset to the proper image coming from the camera.

It's almost as if Lightburn is expecting a square image from the camera because my burn area is square, so it distorts the image to match and then can't recognize the dot pattern as a result. I noticed Lightburn's "Camera recommendations" page lists dimensions with ratios for the camera FOV, but I haven't found any place to input similar information when a different camera is in use. Their presets for the 5MP and 8MP cameras are visible at the very beginning if I select the CAM340, but if I swap over to my 8MP or 16MP camera modules, those presets disappear entirely.

I agree, I should be getting significantly better results.

Can you post screenshots of the calibration process?

There aren't any options in lens calibration other than selecting the camera, selecting the lens type, and then starting captures. I went back through the documentation online and saw that it was done using Lightburn, and the images they use are all from an actual fisheye lens, so you can see the round distortion in every image from their specific camera.

I did receive a message from a user on another forum saying Lightburn has two different capture methods that can be selected. I didn't find this anywhere in the documentation, and it took me a while to find it. Lightburn was on Custom; I changed it to Default. This apparently made things better, and I managed to get through the first 4 captures consistently (no lower than 0.3, but still under 0.9 and good enough to pass), but not the last 4 corner captures.

I did run into a handful of other users having the same issue, where Lightburn wouldn't recognize the test pattern, and I ran across one suggestion to trim the calibration page down so that there was only about 1/2" of white space around the dots, instead of the 1"+ that would normally be there from printing it at 100% size, or to scale up the test pattern to fill more of the page. I didn't even know this was a viable option, but it worked. I trimmed the page down and managed to get through the lens calibration, and then the alignment test, with no issues. I'll probably have to re-run it to try to improve the recognition, but I'm backing up the prefs.ini file first!

This has now taken me multiple days of attempts to get it working, and it’s still not working right.

Now I'm having the same wonderful problem a few other people have posted about, where starting up Lightburn causes the camera image and overlay to be zoomed by about 300%, and the only way to reset it is to repeatedly close and re-open Lightburn.

Custom, if it works with your camera, is generally going to be the preferred option. The most obvious benefit would appear in the resolution of the capture. Has there been a change to the captured resolution? If the resolution is the same, then it's not as big a deal.

Note that you can save just the camera calibration by right-clicking on the Camera Control window.

If you want any feedback, then post screenshots of your process.