Bed camera calibration ignored

I have a 16MP Arducam over a 1.6m x 1m bed.
It's a varifocal zoom lens, so I adjusted the FOV to include all the sides with minimal extra margin.
The lid hinge is about a foot in from the end of the bed, so the camera cannot see all the way to the back. It naturally has a significant portion of visible space on top and/or bottom that is beyond the bed or blocked by the lid hinge. I can adjust the camera tilt if necessary.

This camera has significant barrel distortion, so I selected the fisheye lens option.
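For intuition, barrel distortion is usually modeled as a radial polynomial: points get pulled toward the optical center by a factor that grows with distance from it, which is why the error is worst at the edges of the frame. A minimal sketch with a single-coefficient model (the k1 value is made up for illustration, not from any real lens):

```python
import math

def distort(x, y, k1=-0.15):
    """Apply a one-coefficient radial (barrel) distortion model.

    x, y are normalized image coordinates with (0, 0) at the
    optical center. A negative k1 pulls points toward the center,
    which is what barrel distortion looks like on the sensor.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

def displacement(x, y, k1=-0.15):
    """How far (in normalized units) a point moves under distortion."""
    dx, dy = distort(x, y, k1)
    return math.hypot(dx - x, dy - y)

# Displacement grows roughly as r^3, so samples near the FOV
# edge are distorted far more than samples near the center.
edge_vs_center = displacement(0.9, 0.0) / displacement(0.2, 0.0)
```

Real calibrators (OpenCV's fisheye model, which LightBurn's fisheye option presumably resembles) fit several such coefficients from the detected pattern corners, but the shape of the problem is the same.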

I used the april9 image with white posterboard covering the honeycomb.

It graded OK and got valid images in the preview window, but this didn't actually apply it. The updated overlay is still fisheyed. In fact, the Align Image step that immediately follows the calibration still shows the fully uncorrected image.

what’s going on?

But the image is not flat.


Hmm. I quit LB and did "run as administrator".

Looks like it lacked permissions to save to the directory, but did not surface the error response?

My result was not that great. I made a couple of fiducials about 2/3 of the way to opposing corners, which aligned perfectly, then tried some fiducials elsewhere, but they're off by as much as 10mm.

Would it help to make a bigger april9 test image card? The 8x11 is pretty small relative to the bed overall.

Like I say, the camera FOV doesn't cover all the way to the bed's top and bottom, so the test card shots went near the FOV limits on left and right, but vertically they were far from the FOV limit.

Actually, the errors I'm seeing are much larger in X than in Y. Is taking the shots near the FOV edge, where the distortion is highest, making it worse overall?

Good question. I asked something similar at LBX of our camera guru @JediJeremy, and the response he gave at the time (as I understood it) suggested it is better to keep the calibration process within this zone in green:

Can you show a preview image from the Camera Control window of what the camera is currently "seeing" of your work area, with your calibration card in the view at the distance you were collecting calibration samples? And what version of LightBurn are you using?

Found a bug!

If you do the cal routine and update the overlay, you DON'T get the new compensation.

You have to deselect the camera and reselect it.

Then it's using the new compensation.

OK, with april9 blown up WAY bigger, like 1/3 of the bed size, so with 9 shots it covers all of the bed...

This compensation is MUCH better than I've ever seen! It's practically perfect, like 1mm of error!!


You can, and should, engrave the posterboard with the laser itself, each sheet covering 1/3 of the height and width of the bed. It's not the blackest of blacks, but it works.

This is more accurate than printing. A printer's travel along the paper's length is measured by feed roller rotations and an estimate of the paper's diameter on the roller, and the paper can skew a bit. I would not call a printer "inaccurate"; it's plenty accurate for most purposes. But the gantry laser steps out exact coords, with little source of error at any length. Linear rail straightness, I guess.

I have a 1600x1000 bed, so I needed 9 blank posterboards and one engraved with april9.

I did notice that LB's capture sequence seems tailored to be the least convenient possible: it maximizes the amount of shuffling that must be done for each shot. "Convenient" would be adjacent positions, like "center, front center, then the next 7 clockwise." The OpenCV algorithm may need the card to keep jumping as far as possible like this, though.

Then it occurred to me: why am I even doing this 9x over? I could engrave sheets of posterboard, with a little margin for overlapping their neighbors, and have just one image to shoot. This would save a lot of time.
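The tiling itself is just arithmetic: stretch each tile slightly so the overlaps are shared between neighbors and the grid still lands exactly on the bed edges. A hypothetical helper (the 20mm overlap is a made-up placeholder):

```python
def tile_origins(bed_w, bed_h, overlap=20.0, nx=3, ny=3):
    """Origins (x, y) for an nx-by-ny grid of engraved pattern tiles
    that cover a bed_w x bed_h bed, with `overlap` mm shared between
    adjacent tiles. Returns ((tile_w, tile_h), [origins...])."""
    tile_w = (bed_w + (nx - 1) * overlap) / nx
    tile_h = (bed_h + (ny - 1) * overlap) / ny
    origins = [(i * (tile_w - overlap), j * (tile_h - overlap))
               for j in range(ny) for i in range(nx)]
    return (tile_w, tile_h), origins

# 3x3 tiles for a 1600x1000 bed: each tile is ~546.7 x 346.7 mm,
# and the last tile's far edge lands exactly on the bed edge.
(size_w, size_h), spots = tile_origins(1600, 1000)
```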

And the detection might actually be simpler: with the FOV full of alignment geometry, it's not confused about what to do with areas lacking alignment data, which could end up warping the whole shot.

Better yet, we could get rid of alignment variation between the 9 sheets by just taping them into one huge 1600x1000 sheet and lasering all 9 patterns at once.

There's another advantage there: until you move that sheet, it's not just flattening info, it's an absolute scale, skew, and offset reference. The laser's coordinate system sets every vector.

It would be fairly quick to get good repeatability just by adding a reference line to align against when putting an old sheet back on. Or just do red-dot alignment like Print and Cut.
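A two-dot alignment like Print and Cut boils down to recovering a rotation plus translation from two reference points. A rough sketch of that math (not LightBurn's actual code, just the geometry):

```python
import math

def two_point_transform(a1, a2, b1, b2):
    """Rigid transform (rotation + translation) that maps reference
    points a1, a2 (design coords) onto b1, b2 (measured coords),
    like a two-dot Print-and-Cut alignment. Returns a mapping function."""
    ang_a = math.atan2(a2[1] - a1[1], a2[0] - a1[0])
    ang_b = math.atan2(b2[1] - b1[1], b2[0] - b1[0])
    theta = ang_b - ang_a                      # rotation between the frames
    c, s = math.cos(theta), math.sin(theta)
    tx = b1[0] - (c * a1[0] - s * a1[1])       # translation that pins a1 on b1
    ty = b1[1] - (s * a1[0] + c * a1[1])

    def apply(p):
        x, y = p
        return (c * x - s * y + tx, s * x + c * y + ty)
    return apply

# A sheet re-taped rotated 90 degrees and shifted: map design
# coords onto where the two measured dots actually landed.
to_bed = two_point_transform((0, 0), (100, 0), (10, 5), (10, 105))
```

Note this recovers rotation and offset only; a two-point fit can't correct scale independently in X and Y, which is why the full camera calibration still matters.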


I’m trying to install an Arducam 64MP now. This one is impressive!

The camera has to plug into interface hardware like a Raspberry Pi 4/5, and you install software to stream it to the PC. Ethernet is supposed to offer the lowest latency, an issue when we’re talking about an imager of this size.

The RPi seems like an expensive middle component, but it offers something pretty useful: low-level control over the camera's autofocus lens and AGC. It could even do the flattening on the RPi and offload that from LB. There's a lot of potential with the OpenCV library.

The single biggest problem I found with using a bed camera is the camera's internal AGC (automatic gain control), which sums up the brightness level of the whole field of view. It probably weights the influence of the image center higher, too.

This is bad because most of my FOV in normal use is our jet-black steel honeycomb. So it sees 90% jet black and 10% light plywood, and turns the gain up to make the different shades and detail of the black honeycomb show up. But that raises the gain WAY too high for the plywood, which can appear washed out. The edges will be blurry, and lines engraved in it would be lost.

You're probably thinking "just adjust your lighting." It has little effect: more light or less light affects the brightness of honeycomb and stock equally. If the light-colored work is totally washed out by over-AGC and you dim the room lights, the camera again sees black (but even dimmer) honeycomb on 90% of the FOV, raises the AGC to counter the lower light as per its design principle, and the light-colored work on 10% of the FOV is once again washed out just the same. Shading with a neutral-density filter gives a similar result.

Adjusting the brightness/contrast in LB was of little use; the information just isn't there in the stream. An overexposed, washed-out image of the work is at or near 100% white across the whole piece and a small margin around it, and the original edges are lost. There's nothing for LB to recover. And the regular USB cameras I've used do not allow the PC to command the AGC to the "right" point.

I really wouldn't want autofocus normally, since I'd expect it to erroneously lock onto the honeycomb if it's most of the FOV. Or it could lock onto the gantry. But since the RPi has low-level hardware control, it can disable the autofocus and lock it at an arbitrary setting.

This would be the best, and it seems achievable: OpenCV software running on the RPi could try to recognize honeycomb areas. On my laser it would be easy; that stuff is really black, even after being pressure-washed.

Once identified as honeycomb, that space would be assigned zero weight for AGC and autofocus adjustment. So it will normalize the AGC for just "the work" and let the honeycomb fall out of the new dynamic range. It would be lost in blacker blackness. But fine, don't care.
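The zero-weighting idea can be sketched in a few lines: threshold out the near-black pixels, then compute gain from only the remaining "work" pixels. All the numbers here are illustrative, not from any real camera:

```python
def honeycomb_mask(pixels, threshold=0.08):
    """Naive honeycomb detector: any pixel darker than `threshold`
    (grayscale 0..1) is assumed to be the jet-black honeycomb."""
    return [p < threshold for p in pixels]

def masked_gain(pixels, mask, target=0.5):
    """AGC gain computed only from pixels NOT flagged as honeycomb.

    Returns the gain that would bring the mean brightness of the
    remaining "work" pixels to `target`.
    """
    work = [p for p, is_comb in zip(pixels, mask) if not is_comb]
    if not work:
        return 1.0
    mean = sum(work) / len(work)
    return target / max(mean, 1e-6)

# A frame that is 90% honeycomb (0.02) and 10% plywood (0.55):
# whole-frame AGC would need gain ~6.8 and clip the plywood to
# pure white, while the masked gain barely touches it.
frame = [0.02] * 90 + [0.55] * 10
gain = masked_gain(frame, honeycomb_mask(frame))
```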

Autofocus could be helpful for tweaking the focus. Again, the RPi has direct control, so the range of focus it tries can be limited. You'd want to lock its range to the camera's distance to where your material's top surface is in focus, plus or minus a couple of centimeters.
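The range-limiting policy is simple to express. On the Pi, picamera2/libcamera exposes a manual lens-position control in diopters; the clamp below is a hypothetical sketch of the policy with made-up numbers, not picamera2 code:

```python
def clamp_focus(requested, nominal, tolerance):
    """Clamp an autofocus request to nominal +/- tolerance.

    Units are diopters (1 / distance in meters). For a camera about
    0.8 m above the bed, nominal is roughly 1 / 0.8 = 1.25; a couple
    of centimeters of stock height only moves that a few hundredths.
    """
    return min(max(requested, nominal - tolerance), nominal + tolerance)

# If autofocus tries to lock onto the much-closer gantry, the
# request gets pulled back into the allowed band around the bed.
locked = clamp_focus(requested=3.3, nominal=1.25, tolerance=0.1)
```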

I see the fundamental element here is being able to detect honeycomb vs “work”.

I also made an A4 paper pattern assembled from printed marks, so no more measurement errors. Even so, 30 cm across is still too small. I think there must be software under Windows to run all this, or even combine two, three, or four cameras, or use a Kinect module for distance measurement, or an ultrasound or laser module for telemetry, etc. I should look into it. For now I achieve 0.1 mm precision with the 16MP USB cam (autofocus, 120° FOV), which is much more precise than the good old centering methods.

0.1mm? NICE!!

Did you check for consistency all over? With smaller test cards, it seemed to accurately locate the cards, use them as reference points, and be precise where the pattern was found. But with large gaps between the 9 placements of smaller cards, it has to make wild guesses about how to flatten there, without any actual reference data. So it aligned well near where the card features were placed, but warped between those spots.

I got fooled a few times thinking I had it precise. I fine-tuned with resize and realigned XY against 4 points I had just cut and shot, only to find it could be perfectly aligned for those 4 points alone, but the rest of the bed, not so much. It got as bad as 3-4mm in the worst spots.
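One way to avoid being fooled: cut a grid of marks across the whole bed and compare commanded vs. camera-measured positions at every one, not just 4 corners. A hypothetical checker (coordinates in mm, data made up):

```python
import math

def worst_residual(commanded, measured):
    """Largest distance (mm) between where the laser was told to
    mark and where the camera overlay says that mark landed."""
    return max(math.hypot(cx - mx, cy - my)
               for (cx, cy), (mx, my) in zip(commanded, measured))

# Four corner points can look perfect while a mid-bed point is off.
commanded = [(0, 0), (1600, 0), (0, 1000), (1600, 1000), (800, 500)]
measured  = [(0, 0), (1600, 0), (0, 1000), (1600, 1000), (803, 502)]
err = worst_residual(commanded, measured)
```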

I probably noticed it more than most because of the 1600mm x 1000mm bed size.

With the card cut from a large piece of board on the laser itself and placed in 9 places, huge april9 features blanketed the whole work area. Almost nothing was far from one of the many detected april9 corners.

This is interesting reading for me, especially because I have also dealt a lot with the LightBurn camera system.
As I see it, image-information processing within LB is not optimal, or better said, good results are very difficult to achieve.
It starts with Lens Calibration, which sometimes gives random results, mostly because the autoexposure does not work satisfactorily. As is rightly stated above, honeycomb is a special challenge for LB's image engine. If, for example, I do not cover my entire honeycomb, I get useless results.
Right now I have a 4MP camera with an 85-degree lens; the result/resolution is not the best. Before that I used a 90-degree 5MP camera which was actually better suited to my 400x600 laser, but it has been sent back to be examined for exposure errors.
In the meantime I have found that it is not the camera but the software itself that is the "problem". The resolution that other software achieves with the same camera, LB just cannot match.

Because I love challenges and sometimes refuse to accept a "no", I have experimented a lot with my cameras. My conclusion is that if a few things are fixed in LB, all adjustments and camera work in LB will become much easier and more precise.
To show what I think is possible, the picture shows the precision achieved, all over the table. It is just not achievable for people who do not have the passion or patience that a few geeks have.

To achieve this result I adjusted all angles of the camera mount relative to the displayed image in LB; for example, I had to change the angle of the camera's X axis by 1.5 degrees. I ran many, many calibrations until I was satisfied. Every time I had a deviation of more than 0.5 mm in fine adjustment, I started a new lens and camera calibration. (I am now at X+0.2 and Y+0.2.)
The deviation is currently less than 0.1mm across my entire machine bed! The "larger" deviations seen in the picture are related to the very poor image quality my camera has under certain lighting conditions; there, placing the target marker becomes pure guesswork.

Ps. I used the old black point map for calibration because it works best on my old Linux system.
(picture taken with an iPhone)

The good news is that LightBurn is fully aware of the problem and is working intensively on much-improved camera software. :+1:


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.