Dot width correction?

Hi Jack
Sorry, I forgot to add info for the elephant. It was a bas-relief picture from an internet search that I processed through imagR. It was done on a 60 watt OMT red and black CO2 laser in pass-through mode. The speed was 125 mm/s and I had 45% maximum and 13% minimum power settings.

Not really, no - the amount of play time I get has dramatically reduced. LightBurn is now 20 people, with 200k+ users, feature demands from vendors, numerous internal projects going on, etc, etc. I just don't get large blocks of time to monkey with things these days. Part of the reason for staffing up is to improve that, but for now it's costing time, not freeing it up.


OK, got a method incoming. Total common sense and objective.

But I'm seeing something really interesting- LB's Jarvis is coming up flawed, and only slightly improves with dot width correction. In contrast, basic "hacktoning" is MUCH more accurate. Will write up soon.

Measure of popularity, drawbacks of success…

We're here to encourage you, plus I just purchased the multi-year deal on your software. :crazy_face:

:smile_cat:

Hi, what is this multi-year deal you speak of? I don't see it anywhere.

If you renew before your license expires, you will receive an extra two months.

Slightly exaggerated the time … :crazy_face:

:smile_cat:

And I thought I could renew and get five years for only $9.99!

Welcome. Not sure where you got this information about renewal and pricing, but it is incorrect. :slight_smile:

This is worth review:

The quoted text says your key expires a year from when you bought it.

Is this correct? I thought it expired a year from when you register your software with it in order to accommodate people who buy a license when buying a laser but then need to wait for the laser and possibly the key to arrive.

That's not correct, it's actually from the moment it's first activated on a computer. I've corrected the text.


@LightBurn -
The gradient exists within one radius inside the endpoints, just as it exists outside.
In burning wood, it might (or might not) result in a shading gradient response, sure.
In anodizing, burning off paint, or etching glass, it's more of a threshold between removal and no removal. But that doesn't change the situation- the amount of burn is constant past one radius inside the endpoints, but there's a continuous gradient from one radius inside to one radius outside.

On a threshold burn (anodization removal), the energy exposure that causes the burn can occur anywhere along that gradient. So it can occur in the region one radius inside from the endpoint as well as outside- so it's possible to need negative dot width compensation.
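To make that concrete, here's a minimal 1D sketch of the endpoint gradient- entirely my own simplified model (uniform "top-hat" spot, overlap-based exposure), not anything pulled from LB:

```python
import numpy as np

# Assumptions (mine, for illustration): a uniform "top-hat" spot of radius r,
# a commanded dash from x=0 to x=L, and exposure = fraction of the spot
# overlapping the commanded dash at each position.
r = 0.09   # spot radius in mm (e.g. a 0.18 mm burn width)
L = 1.0    # commanded dash length in mm
xs = np.linspace(-0.3, 0.5, 801)   # positions near the left endpoint, 1 um steps

def exposure(x):
    # Overlap of the spot [x - r, x + r] with the commanded dash [0, L],
    # normalized so exposure is 1.0 deep inside the dash.
    overlap = max(0.0, min(x + r, L) - max(x - r, 0.0))
    return overlap / (2 * r)

profile = np.array([exposure(x) for x in xs])

# On a threshold material, the mark edge sits wherever exposure crosses the
# threshold. A low threshold puts the edge outside the commanded endpoint, a
# high threshold puts it inside -- which is why the needed dot width
# compensation can come out negative.
threshold = 0.7
edge = xs[np.argmax(profile >= threshold)]
print(f"mark edge at {edge:+.3f} mm relative to the commanded endpoint")
```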

I'm really thinking the DWC isn't the best answer here- we ultimately need to be able to just give a calibration curve. It's not that complicated- with some simple programming, this makes much more sense and is easier to use.
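Roughly what I mean by a calibration curve- measure a few points on the actual material and interpolate. This is just a sketch with made-up numbers, not a proposal for the exact interface:

```python
import numpy as np

# Measure the shade a few power settings actually produce on the target
# material, then interpolate to pick the power for any requested shade.
# The numbers below are placeholders, not real measurements.
measured_shade = [0.00, 0.10, 0.35, 0.60, 0.80, 1.00]   # observed darkness (0..1)
measured_power = [0.00, 0.13, 0.20, 0.28, 0.36, 0.45]   # power fraction that produced it

def power_for_shade(target):
    # Linear interpolation through the measured points; a spline or a denser
    # test grid would work just as well.
    return float(np.interp(target, measured_shade, measured_power))

print(power_for_shade(0.5))   # power needed for a 50% shade on this material
```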

I think we're on the same page that people are using way too high of a DPI, but I don't think we're seeing the same thing. People are using it to try to overcome a fundamental shortcoming in LB that is actually easier to fix than you realize.

People are using line intervals smaller than their actual burn width, and it shows objectively better resolution- until the deeper shades, when the dashes from one line collide with the dashes from the next line, and whether we're threshold burning or shade burning, the response goes badly nonlinear. So people turn the brightness up or make other changes to simply remove the darker shades from the design. This yields better results than the "correct" way to do it, but it's actually a complicated hack that needs to be tuned for each photo- some photos don't contain a lot of darker shades to begin with, and wouldn't need their blacker areas lightened to avoid the dash collisions.

But the primary reason they're getting improvements by hacking the DPI is because LB is currently stuck on square pixels (a dash whose width equals the LI). This is not the greatest implementation, and I think it's based on the above misunderstanding- the burn is NOT constant within one radius inside the endpoints; the gradient extends one radius inside as well as one radius outside. A dot CAN be effectively smaller, better, higher resolution than this, and absolutely should be.

Greater horizontal resolution is possible, and people are demanding it; in fact they're already widely doing it, just in a hacked and fundamentally misunderstood way.

When you see a machine that has a 0.18mm burn width, and thus the "correct" setting is a 0.18mm LI (141 DPI), yet people are setting it to something like 280 DPI, this actually does improve the image, because LB's square pixel paradigm is only capable of issuing 0.18mm long dash commands. The command is not the physical marking result- whether diode, DC CO2, or RF CO2, and whether this is a gradient material or a threshold material, the dash mark on the material CAN be smaller than the command.
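For anyone checking the numbers, the LI/DPI conversion is just this:

```python
# Quick check of the figures above: line interval (mm) <-> DPI.
MM_PER_INCH = 25.4

def dpi_from_li(li_mm):
    return MM_PER_INCH / li_mm

def li_from_dpi(dpi):
    return MM_PER_INCH / dpi

print(round(dpi_from_li(0.18)))      # ~141 DPI for a 0.18 mm line interval
print(round(li_from_dpi(280), 3))    # ~0.091 mm line interval at 280 DPI
```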

People are widely doing this by hacking the LI (DPI). I suspect that more people are doing this hack with LB than using it as intended, and getting better results (though at the cost of complex and inconsistent setups), and it's because of a very fixable problem inside LB. The dithering algorithms need to be able to work in rectangular, not square, pixels that produce dash commands shorter than the LI (which, again, lots of people are already doing, just by breaking the LI along the way). The material WILL respond.
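In rough code terms, "rectangular pixels" just means sampling the image more finely along the scan axis than the line interval before dithering. This is only my illustration of the idea- the names and factors are mine, not LightBurn's actual pipeline:

```python
import numpy as np
from PIL import Image

# Sample the image more finely along the scan axis than the line interval,
# so dash commands can be shorter than the LI.
def resample_for_scan(img, width_mm, height_mm, line_interval_mm, x_step_mm):
    rows = round(height_mm / line_interval_mm)   # one sample row per scan line
    cols = round(width_mm / x_step_mm)           # finer sampling along the scan axis
    return np.asarray(img.convert("L").resize((cols, rows), Image.LANCZOS))

# e.g. keep a 0.18 mm line interval but sample in 0.045 mm horizontal steps,
# then dither each row and emit dashes in 0.045 mm increments:
# gray = resample_for_scan(Image.open("photo.png"), 100, 80, 0.18, 0.045)
```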

But that's actually only a stopgap measure. Bottom line is all these algorithms- Stucki, Jarvis, etc.- were designed for a different technology, and are actually NOT the right method for lasering on any of these machines or materials. The laser does not have discrete horizontal pixels- square or rectangular- so this is not the method to be using.

I've seen enough of the transport format to see what's going on, and this is easily fixable and will implement on any controller. All along I thought LB was stuck with this because of machine limits, but that's not the case at all. Basically all these dithering algorithms can go away.

I can see I need to start another thread on this. What we need will implement easily with probably 3 parameters. There are two notably different modes- one if you have sufficiently controlled backlash all across the length of the axis (I've noted that doesn't seem to always be possible), and one if you don't.

The toning solution already dramatically improves the effective horizontal resolution no matter what. The next option is to offset lines or not.

Because if you have enough precision after trying to remove backlash, we can halve the LI and do offset rows that avoid collisions. This will effectively double the vertical resolution for the most part, albeit at the cost of doubling runtime.
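To show the shape of what I mean by offset rows (the full algorithm will be in the new thread, so treat this as illustration only): rows at half the normal interval, with every other row's dash grid shifted by half a dash so dashes interleave instead of stacking.

```python
# A sketch of a brick-style offset-row layout; exact details are still to be
# written up, so this is only a guess at the layout, not the final method.
def row_layout(height_mm, line_interval_mm, dash_mm):
    rows = []
    y, i = 0.0, 0
    while y <= height_mm + 1e-9:
        phase = dash_mm / 2.0 if i % 2 else 0.0   # brick-style horizontal offset
        rows.append((round(y, 4), phase))
        y += line_interval_mm / 2.0               # halved interval
        i += 1
    return rows

for y, phase in row_layout(0.72, 0.18, 0.18)[:6]:
    print(f"y = {y} mm, dash grid offset {phase} mm")
```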

Will start a new thread explaining the algorithm in more specific detail, with some microscope shots of why this is necessary and what it helps. Also, really important- this is simpler to support than all these dithering modes and the additional features being tried to fix their problems. The goal these methods are chasing, but could never actually reach optimally, is right here, and it's dead simple!

So does this feature apply to the threshold mode as well? I engrave rubber stamps and the beam width limits the detail possible in the design - thin lines and small details are basically destroyed when engraving the negative image. With LaserCAD I had to correct for it in the image before importing it. Does this setting only affect the horizontal width?

It does, and it only affects horizontal width, not vertical. It sounds like what you're after is more like a Kerf offset, but for engraving, not cutting. I've considered adding that option as well.

I do not require 15 paragraphs of effusive word salad littered with condescension telling me how ridiculously simple all of this is. I need a solution that works across the range of machines we support, as well as one that works for a very broad range of user skill sets and input data.

The code to resample images gets really complicated when the user has their image rotated, resized, sheared, or some combination of those. Adding non-uniform sampling into the mix makes it that much more difficult to do the image transforms, so I need to rewrite the code that handles that first, and then I can potentially do something about the forced square aspect sampling.
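For context, the basic shape of what that resampling has to do is generic inverse mapping- for every point on the scan grid, map back through the inverse of the image's placement transform and sample the source. This is textbook resampling, not our actual code:

```python
import numpy as np

def sample_transformed(src, transform, grid_points):
    # transform is the 3x3 affine placing the image (rotation/scale/shear);
    # its inverse maps device-space sample points back into image space.
    inv = np.linalg.inv(transform)
    h, w = src.shape
    out = np.zeros(len(grid_points))
    for i, (x, y) in enumerate(grid_points):
        u, v, _ = inv @ np.array([x, y, 1.0])     # image-space coordinates
        if 0 <= u < w - 1 and 0 <= v < h - 1:     # bilinear sample, in bounds only
            u0, v0 = int(u), int(v)
            fu, fv = u - u0, v - v0
            out[i] = ((1 - fu) * (1 - fv) * src[v0, u0]
                      + fu * (1 - fv) * src[v0, u0 + 1]
                      + (1 - fu) * fv * src[v0 + 1, u0]
                      + fu * fv * src[v0 + 1, u0 + 1])
    return out
```

Now add non-square sampling steps, transparency, and pixel-accurate alignment with vectors drawn over the image, and it gets complicated quickly.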

As I mentioned before, LightBurn originally did not impose this limitation, and real-world user data convinced me to change that, because far too many people didn't understand how it worked or how to use it, and kept choking their systems with a fire-hose of command data, in addition to all the problems getting the resampling right that I mentioned above.

Also of note is that all of this resampling has to be accurate, so vectors drawn over images align correctly, and it has to handle transparency if present, so there are a number of different code paths to be written.


Yes, I think it's effectively a Kerf offset. I add a stroke around the black positive image before creating the negative for engraving. Doing it graphically requires starting with a black image on a transparent background rather than black and white.
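For anyone wanting to do the same thing with a plain black-and-white image instead of graphically, something like this works- the filter size is a placeholder (it depends on spot size and image DPI) and the file names are just examples:

```python
from PIL import Image, ImageFilter

# Grow ("stroke") the black positive by roughly the beam radius, then invert
# to get the negative for stamp engraving.
positive = Image.open("stamp.png").convert("L")
grown = positive.filter(ImageFilter.MinFilter(5))    # expand the dark (black) areas
negative = Image.eval(grown, lambda p: 255 - p)      # invert to the negative
negative.save("stamp_negative.png")
```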

@LightBurn Granted! It DOES work with ANY laser. Proven here to work on a W6- like any raster detail, it's limited by the abysmal on-off time (<=1 ms) with DC-excited tubes, but it's cranking out amazing detail at 100mm/s for a 0.25mm spot size. I sat down with a microscope to do a pretty thorough analysis of tube bandwidth and dynamic response to understand the process with real-world data.
