Camera Alignment Accuracy

Thanks for the useful/meaningful contribution

Can you elaborate on this?

I’m trying to work out your approach based on what you’re saying and the laser in your profile. Are you planning to physically place the laser over the material, engrave, move to the next area, engrave, move, engrave, etc., until you’ve done this 8 times to cover the full area of the material?

Print and Cut absolutely can be used for this and has a reasonable chance to get you sub-millimeter accuracy although possibly not the .2 mm you’re aiming for. It’s certainly going to be tighter than the camera. I’d suggest you experiment with it before writing it off. I’ve used it to cut out a design in 9 jobs of a 3x3 grid. My tolerances there were closer to 1mm, however.

Check out this video:
Cutting a single project larger than your laser (pass-through version) - YouTube

Another small thing to add (I don’t know if it was mentioned explicitly before): the accuracy is heavily dependent on the camera image source. If you want higher accuracy, try a higher-resolution camera without a wide-angle lens. There are industrial cameras that might provide a better result (taking into account that your use case involves expensive material, you should consider having fitting equipment as well).

A 4K UHD camera has 3840×2160 pixels (after lens calibration it is even less, but let’s neglect that here). If you cover an area of 600×400 mm, you have about 6.4 pixels per mm. So the theoretical absolute maximum resolution would be about 0.16 mm (roughly 0.31 mm taking the Nyquist–Shannon sampling theorem into account). This is only a theoretical value, since you will never manage to cover the workspace perfectly with the camera without losing a single pixel, with no distortion, no lens correction, and a perfect viewing angle.

To realistically achieve a resolution of 0.1 mm, you would need 20 pixels per mm (Nyquist–Shannon). This gives 12,000×8,000 pixels under ideal circumstances (for 600×400 mm). Let’s add another 10% to compensate for camera positioning errors, lens correction, etc. Then we have roughly 13,200×8,800 pixels, which is about 116 megapixels. Try to find such a camera, and I guess you would need a quite powerful PC to handle that, too.
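For anyone who wants to replay this arithmetic, here is a minimal sketch in Python. The Nyquist factor of 2 and the 10% margin are the assumptions stated in this post, not universal constants:

```python
# A minimal sketch of the resolution arithmetic. The Nyquist factor of 2
# and the 10% safety margin are the assumptions stated in the post.

def pixels_per_mm(px, mm):
    """Sampling density along one axis of the camera's field of view."""
    return px / mm

def required_sensor(mm_w, mm_h, target_mm, margin=0.10):
    """Sensor width/height in pixels (and megapixels) needed to resolve
    target_mm, doubling the rate per Nyquist-Shannon, plus a margin."""
    rate = 2 / target_mm                 # two samples per target feature
    w = mm_w * rate * (1 + margin)
    h = mm_h * rate * (1 + margin)
    return w, h, w * h / 1e6

# 4K UHD (3840 px) across 600 mm: 6.4 px/mm, i.e. about 0.16 mm per pixel
print(pixels_per_mm(3840, 600))          # 6.4

# 0.1 mm over 600x400 mm: ~13200 x 8800 px, ~116 MP
w, h, mp = required_sensor(600, 400, 0.1)
print(round(w), round(h), round(mp))     # 13200 8800 116
```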

So I guess it’s less about blaming the algorithm; the hardware needs to fit as well.

I am a little confused about the different sizes of machines/projects we are talking about.

I would think that a head-mounted camera would be a better solution for your project requirements.
There may be industrial solutions for machines of this size. How high is your camera located above the machine bed, and what angle does your camera lens have?

No, Print & Cut cannot be used, as explained below:
Print & Cut relies on the absolute coordinates of the Registration Marks; when the laser engraver is moved to a different position to cover a different area of the stone, the Registration Marks no longer make sense.

I don’t see what you’re missing

Assuming you’ve watched the video, I have to assume I have a fundamental misunderstanding of what you’re trying to do, but I can’t surmise what based on what you’ve stated so far. Let me know where I’m missing it. If you haven’t watched the video, then please do so, as you may be pigeonholing the capabilities of the feature.

Here’s what I understand:

  1. You have a large stone material, much larger than the working area of your laser (an xTool D1 Pro, based on your profile)
  2. It will take 8 separate jobs of the laser to cover the full area of the stone in some sort of grid arrangement
  3. Between jobs, you will have to lift the machine and physically relocate it to an adjacent non-engraved area of stone
  4. Because you are doing separate jobs, you are looking for a method by which you can reliably align one job to the next within .2 mm of accuracy

If this is not correct let me know where my understanding is off.

If this is all correct, I don’t see why Print and Cut is immediately out of consideration, as it can handle the basic workflow. The .2 mm accuracy is a question mark; with fine-tuning of the equipment and a workflow specific to this, it may be possible. The most difficult part from an accuracy perspective is beam-to-target alignment.

I have already covered this. Below is a short summary, please see previous posts, for details.

First of all, the alignment area is 200×200 mm, not 600×400 mm, as specified in the relevant posts; second of all, 1/0.1 equals 10, not 20.
That changes your numbers by a factor of 12; an 8MP camera should be sufficient, and 5MP good/close enough.

Regardless of the resolution of the camera, the image captured by Lightburn, compared to captures under Linux/Firefox/…, seems to be:

  • Skewed
  • Of poor quality (a bit fuzzy, low resolution)
  • Poor brightness

I haven’t said a word about the algorithm, only about the process/workflow of the calibration/alignment; you should not confuse the two.

Yes, the calibration/alignment process is not tight/controlled/streamlined enough to guarantee repeatable/accurate/predictable results.
(Accuracy is not the only criterion; the process should also be predictable and repeatable.)

If the input (data/image) is of poor quality, and the process is loose, then one cannot expect clean/reliable results (i.e. calibration/alignment), regardless of the camera.

The dimensions of the photo/stone are 72×36 inches; the dimensions of the bed that I am using to test the camera are 430×400 mm; eventually, the bed size would be about 37×17 inches (xTool D1 Pro with extensions).

Thanks for the suggestion, but I’m walking away from this solution as

  • The Lightburn capture is poor, as explained previously
  • The calibration/alignment process is unreliable/unrepeatable

The implementation of the camera feature does not seem mature and robust enough.

I have started considering a different approach

Print & Cut relies on the absolute coordinates of the Registration Marks; when the laser engraver is moved to a different position to cover a different area of the stone, the Registration Marks no longer make sense, as their absolute coordinates would/may be different.

The registration marks are registered at the start of the Print and Cut workflow. This establishes position in real space vs. the virtual workspace, and thus your design. You use multiple sets of registration marks; which set is used for each job depends on which portion of the image you want to burn.
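To make the two-point idea concrete, here is a hypothetical sketch of the underlying geometry (this is NOT LightBurn’s actual implementation; the function names and sample coordinates are made up for illustration). Two design/measured point pairs pin down a translation, rotation, and uniform scale, which is also why two points alone cannot capture skew:

```python
import math

# Hypothetical illustration of two-point registration (NOT LightBurn's
# actual code): two point pairs determine translation, rotation, and
# uniform scale; detecting skew would require more points.

def two_point_transform(d1, d2, m1, m2):
    """Map design coords to machine coords from two point pairs
    (d1->m1, d2->m2). Returns (scale, angle_rad, tx, ty)."""
    scale = (math.hypot(m2[0] - m1[0], m2[1] - m1[1]) /
             math.hypot(d2[0] - d1[0], d2[1] - d1[1]))
    angle = (math.atan2(m2[1] - m1[1], m2[0] - m1[0]) -
             math.atan2(d2[1] - d1[1], d2[0] - d1[0]))
    c, s = math.cos(angle), math.sin(angle)
    # Choose the translation so that d1 maps exactly onto m1.
    tx = m1[0] - scale * (c * d1[0] - s * d1[1])
    ty = m1[1] - scale * (s * d1[0] + c * d1[1])
    return scale, angle, tx, ty

def apply_transform(p, scale, angle, tx, ty):
    """Apply the similarity transform to a design point p."""
    c, s = math.cos(angle), math.sin(angle)
    return (scale * (c * p[0] - s * p[1]) + tx,
            scale * (s * p[0] + c * p[1]) + ty)

# Marks designed at (0,0) and (100,0); the machine measured them shifted
# 10 mm in x and rotated very slightly.
params = two_point_transform((0, 0), (100, 0), (10, 0), (109.998, 0.5))
print(apply_transform((100, 0), *params))  # lands on the measured mark
```

Any local distortion or non-uniform stretch between the two marks is invisible to this model, which is the limit being debated in this thread.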

I understand the statement that you’re making but it seems to be too narrowly confined.

Did you review my workflow? What step was off?

What’s your take on the video?

15k worth of stone and you’re trying to rely on horrible software calibration to do it? Engineer a solution; isn’t that what you said you are, an engineer?
Figure out a way to physically index your machine along this giant slab of stone. There are a ton of physical solutions that will allow you to move on rather than sit here treading water.
Bust out your engineer’s ruler and perfectly square and center raised pockets to fit the feet of your xTool, allowing you to move the machine with minimal variance.
Install slide rails on / by the stone with indexing where needed.
Buy a machine that can fit your giant 15k slab of stone.
Outsource your job to someone with the physical capability to do it.
Design a shrink ray to make the stone fit then engrave, though you then have to calculate for expansion later of course.

That last one’s a joke, of course, but as I’ve commented since the beginning, you’re beating a dead horse and arguing semantics when there are very straightforward mechanical solutions that can get your project on its way.

You haven’t shared the dimensions of your project, what your workflow will be, or what the hold-ups are, beyond the camera tool not being good enough (obviously!) for your application. The more data you provide the forum with, the more tools those attempting to engineer a solution for you will have at their disposal to accomplish it.

I’m not saying that the process/workflow described in the video is off or incomplete; the video seems quite articulate and detailed. I’m simply saying that, contrary to what you seem to believe, Print & Cut is not a good/best fit for my Use Case, as coordinates change.

To add, I will simply quote others:

Both Print & Cut and the camera are too approximate, but the camera may allow tricks that Print & Cut may/does not.

In any case, I came to the conclusion that I should consider a different approach; Lightburn does not provide enough accuracy/repeatability/predictability. In other words, there’s no well-defined SLA.

You misunderstand. I’m not saying it’s a best or even good fit. I’m saying it might fit and accommodates the workflow. And for sure closer than using the camera. I interpreted what you were saying to be that it doesn’t work at all.

As stated earlier you’re better off coming up with a solution that allows you to do this in a single go. However, I was trying to work within your stated constraints.

Did you review my response to that post? I explained why it’s not apples to apples and why you’re likely to get closer tolerances. Narrowing the variables in the process would allow you to get even tighter. For example, by not allowing for rotation. If you can get alignment without having to change both axes that should make things tighter.

In any case, let us know what you come up with as a superior solution.

I don’t want to continue the discussion on this point; I think it won’t lead anywhere. I just wanted to clarify that your statement above is wrong, so that it is explained correctly for the next readers of this thread.
1/0.1 equals 10, that’s right, but to achieve a detection accuracy of a specific resolution, you need TWICE the sampling rate (= 20 pixels per mm). That’s the Nyquist–Shannon theorem I mentioned. It’s not enough to reduce the workspace; you also need to physically place the camera so that it sees only 200×200 mm, not a pixel more. So, for a 200×200 mm workspace, you need a 16MP camera sensor under absolutely perfect conditions (which you will never have). In reality, it will be a 20MP sensor, I think.
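A quick numeric check of those figures (the ~10% per-axis slack used for the real-world estimate is an assumed allowance for positioning and lens-correction losses, not a measured value):

```python
# Sensor sizing for 0.1 mm over a camera view of exactly 200x200 mm,
# with the Nyquist factor of 2. The 10% per-axis slack is an assumption.
target_mm = 0.1
rate = 2 / target_mm                  # 20 samples per mm
side_px = 200 * rate                  # 4000 px per side
ideal_mp = side_px ** 2 / 1e6         # 16 MP under perfect framing
real_mp = (side_px * 1.1) ** 2 / 1e6  # ~19.4 MP, i.e. roughly a 20 MP sensor
print(round(ideal_mp), round(real_mp, 1))  # 16 19.4
```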

I’m a computer science engineer and did a PhD in robotics including robotic/computer vision, that’s why I’m quite confident that my approach is correct :slight_smile:

Anyway, that won’t help with your problem, so you can ignore this message, I just wanted to be certain things are correctly explained :slight_smile:

Sure but how does that apply here? :wink:

:slight_smile: Just wanted to mention that I have dealt A LOT with image registration, object recognition, and object tracking, so I’m VERY certain that an 8MP camera is not enough. And I learned that from (painful) experience and a lot of science :slight_smile:

It’s good to know that we have a few points in common; in any case, that is not an argument in favor of one idea/approach or another.
Furthermore, that doesn’t/shouldn’t mean much, as in any other profession, different software/computer engineers have different levels of technical understanding, creativity, capability, talent, skill, etc.

The resolution of an 8MP camera is 3264×2448, enough for 0.1 mm accuracy when dealing with a 200×200 mm area, especially when the objects are simple shapes that can be easily vectorized (even 5MP may be sufficient???)
You’ll be surprised what creativity and technical excellence can achieve in software development.
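For what it’s worth, the 8MP figure can be checked under both sampling conventions; the disagreement in this thread is precisely which factor applies. This sketch assumes the sensor’s shorter axis spans the full 200 mm:

```python
# An 8 MP sensor (3264x2448) framing a 200x200 mm area. The shorter
# sensor axis is the limiting one. Whether one pixel per feature is
# enough, or two (Nyquist), is exactly the point under debate here.
sensor_px = (3264, 2448)
area_mm = 200.0
ppm = min(sensor_px) / area_mm   # 12.24 px/mm on the worst axis
res_one_sample = 1 / ppm         # ~0.082 mm if one pixel per feature suffices
res_nyquist = 2 / ppm            # ~0.163 mm with the Nyquist factor of 2
print(round(res_one_sample, 3), round(res_nyquist, 3))  # 0.082 0.163
```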

Though both approaches seem to suffer from a lack of accuracy, the camera remains the preferable approach/solution, as it allows fine-tuning and testing using more elaborate patterns (see bernd.dk’s Test Pattern and the evaluation/assessment process).
(For The Record: I have considered the Print & Cut approach before considering the camera approach, I have nothing against Print & Cut)

Yes, I have read.
Print & Cut is based on a limited pattern that is even simpler than the simple Test Pattern I have used, i.e. the matrix of squares. The accuracy of Print & Cut can only be lower than that of the camera approach.
I don’t see how a simple pattern based on two points is “likely to get closer tolerances”; it can’t even detect a “two-dimensional skew”??? For some reason, I constantly get the impression that you may be underestimating the challenge???

Of course. Why would I complicate my life if doing it “in a single go” were an option??? I haven’t reacted to that statement earlier in order to stay focused; that is a different Use Case, a different situation, a different topic/discussion.

OK, I’ll do my best.

In the meantime, I believe that there are a few obvious shortcomings in the calibration/alignment process/workflow (“low-hanging fruit”, if you prefer) that Lightburn should address:

  1. The camera capture in the Camera Control window should be improved; the quality is too poor (I have provided enough details in previous posts).
    If the Calibration/Alignment workflows depend on such a poor capture, then the process can never be accurate.
  2. The Calibration Process should be tightly controlled: the word “roughly” should disappear from “Place the ‘circle patterns’ in…”; there should be minimal room for discrepancies.
    As of today, the Standard Deviation of scores is huge; it ranges from below 0.2 to above 1. That is an indication that the process/workflow is not tight/repeatable enough; the process should not allow such variance.
  3. During the Alignment Process, I have often noticed that the captured image is skewed and the patterns therefore distorted; that may make the information provided by the four marks less accurate.
    I think that:
  • The process should be further automated
  • To avoid ambiguity/errors, each pattern (1, 2, 3, 4) should be captured and presented separately to users, in an order defined by the workflow (not the user)
  • The user should be allowed to fine-tune the position of the crosshair/mark; undoing/redoing is not enough (as I have said above, the patterns often looked skewed, so it’s hard to tell if the position of the mark is accurate)
  • A different pattern may improve the accuracy of the information provided by the four marks???

Of course, the above may not be enough and includes only a few low-hanging fruits; further enhancements may be possible based on the internals of Lightburn.

It seems to me we’re fundamentally solving for different things. You’re trying to state what doesn’t work as you expect today and how you’d like it to change to work for your use case. I’m trying to devise a mechanism that will work with the available tools to meet the limited stated conditions that you have.

Are you familiar with how Print and Cut functions? It doesn’t rely on a calibration mechanism. The statement about lower accuracy than the camera is surprising and telling.

Have you experimented with Print and Cut at all? Please do so. It will give you a better understanding of how it works.

Is there a reason this would be necessary?

If I am, it’s because I haven’t heard anything about what you’re doing or the workflow that would enlighten me about the challenge. Please clarify.

Because solving for the single-go offering may be easier than alternative routes.

You describe a problem you have in a thread that has been “inactive” for almost a month.
Wouldn’t it be better to start a new thread with your specific problem?