@LightBurn -
The gradient exists within one beam radius inside the endpoints, just as it exists outside them.
In burning wood, it might (or might not) result in a shading gradient response, sure.
In anodizing, burning off paint, or etching glass, it's more of a threshold: removed vs. not. But that doesn't change the situation- the amount of burn is only constant past one radius inside the endpoints, and it's a continuous gradient from one radius inside to one radius outside.
On a threshold burn (anodization removal), the exposure level that causes the burn can be crossed anywhere along that gradient. So the edge of the mark can land in the region one radius inside the endpoint as well as outside it- which is why it's possible to need negative dot width compensation.
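To make that concrete, here's a rough sketch (not LightBurn's code; the spot size, dash length, and threshold are made-up numbers, and I'm assuming a Gaussian spot) that convolves a commanded dash with the spot and shows the exposure ramping over roughly one radius on either side of each endpoint:

```python
import numpy as np

# Illustration only: exposure along a scan line is roughly the commanded
# dash (a rectangle of "laser on") convolved with the beam spot profile.
spot_radius = 0.09    # mm, assumed beam radius (0.18 mm spot) - made-up value
dash_len    = 1.0     # mm, commanded dash length - made-up value
step        = 0.001   # mm

x  = np.arange(-0.5, dash_len + 0.5, step)             # positions along the scan
on = ((x >= 0) & (x <= dash_len)).astype(float)        # commanded power (on/off)

sigma  = spot_radius / 2                               # rough Gaussian spot width
k_x    = np.arange(-3 * spot_radius, 3 * spot_radius, step)
kernel = np.exp(-0.5 * (k_x / sigma) ** 2)
kernel /= kernel.sum()
exposure = np.convolve(on, kernel, mode="same")        # delivered energy profile

# Exposure is constant only well inside the endpoints; it ramps over roughly
# one radius inside AND one radius outside each endpoint.
threshold = 0.7       # a threshold material marks wherever exposure > threshold
marked = x[exposure > threshold]
print(f"mark runs {marked.min():.3f}..{marked.max():.3f} mm "
      f"vs command 0.000..{dash_len:.3f} mm")
```

Depending on where the material's threshold sits on that ramp, the real mark can end short of the commanded endpoint or past it, so the needed compensation can go either direction.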
I'm really thinking DWC isn't the best answer here- ultimately we need to be able to just give it a calibration curve. It's not that complicated- with some simple programming this makes much more sense and is easier to use.
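Something as simple as a measured table, inverted by interpolation, would do it. A minimal sketch (the numbers and the command_for name are placeholders I made up, not anything in LB):

```python
import numpy as np

# Hypothetical calibration data (made-up numbers): commanded dash length vs
# the mark actually measured on the material.
commanded_mm = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
measured_mm  = np.array([0.02, 0.07, 0.18, 0.39, 0.80])

def command_for(target_mark_mm: float) -> float:
    """Invert the calibration curve: the command that yields this mark length."""
    return float(np.interp(target_mark_mm, measured_mm, commanded_mm))

print(command_for(0.10))   # command needed to actually get a 0.10 mm mark
```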
I think we're on the same page that people are using way too high a DPI, but I don't think we're seeing the same thing. People are using it to try to overcome a fundamental shortcoming in LB that is actually easier to fix than you realize.
People are using line intervals smaller than their actual burn width, and it shows objectively better resolution- until the deeper shades, where the dashes from one line collide with the dashes from the next line, and whether we're threshold burning or shade burning, the result goes badly nonlinear. So people turn the brightness up or make other changes to simply remove the darker shades from the design. This yields better results than the "correct" way to do it, but it's really a complicated hack that has to be tuned for each photo- some photos don't contain many darker shades to begin with and wouldn't need the blacker areas lightened to avoid the dash collisions.
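To put rough numbers on the collision (all made-up example values, and treating dash placement in adjacent rows as independent is only a crude approximation):

```python
# Made-up numbers showing why deep shades go nonlinear when the line
# interval is smaller than the real burn width.
burn_width = 0.18   # mm, vertical extent of each mark
li         = 0.09   # mm, line interval actually being used (~280 DPI)

overlap = burn_width - li          # how far each row bleeds into the next
print(f"each row bleeds {overlap:.2f} mm into its neighbor")

# At low coverage the dithered dashes in adjacent rows rarely line up, so the
# bleed mostly lands on blank material. Near full coverage (dark shades) almost
# every bleed lands on an existing mark, those areas get roughly double the
# intended energy, and the tone response collapses.
for coverage in (0.2, 0.5, 0.9):
    double_hit = coverage * coverage   # crude independence assumption
    print(f"coverage {coverage:.0%}: ~{double_hit:.0%} of the overlap band double-burned")
```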
But the primary reason they're getting improvements by hacking the DPI is that LB is currently stuck on square pixels (a dash whose length equals the LI). This is not the greatest implementation, and I think it's based on the misunderstanding above- the burn is NOT constant within one radius inside the endpoints; the gradient exists inside, not just outside. A dot CAN be effectively smaller, better, higher resolution than this, and absolutely should be.
Greater horizontal resolution is possible, and people are demanding it- in fact they're already widely doing it, just in a hacked and fundamentally misunderstood way.
When a machine has a 0.18mm burn width, and thus "correct" is a 0.18mm LI (141 DPI), yet people are setting it to something like 280 DPI, this actually does improve the image, because LB's square pixel paradigm is only capable of issuing 0.18mm long dash commands. But the command is not the physical marking result- whether diode, DC CO2, or RF CO2, and whether the material responds as a gradient or a threshold, the dash that actually marks the material CAN be smaller than the command.
People are widely doing this by hacking the LI (DPI). I suspect more people are using LB this way than as intended, and getting better results (at the cost of complex and inconsistent tuning), and it's because of a very fixable problem inside LB. The dithering algorithms need to be able to work in rectangular, not square, pixels- dash commands shorter than the LI (which, again, lots of people are already achieving, just by breaking the LI along the way). The material WILL respond.
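As a sketch of what I mean by rectangular pixels / toning a row (just my own illustration, not LB's code; row_to_dashes and the cell size are made up):

```python
import numpy as np

def row_to_dashes(gray_row, cell_mm):
    """Toy 'toning' of one scan row: one dash per horizontal cell, with the
    dash length proportional to darkness instead of being a fixed square
    pixel the size of the LI. Returns (start_mm, length_mm) pairs."""
    dashes = []
    for i, g in enumerate(gray_row):              # g: 0.0 = white, 1.0 = black
        length = g * cell_mm                      # rectangular "pixel": the dash
        if length > 0:                            # can be far shorter than the LI
            start = i * cell_mm + (cell_mm - length) / 2   # center it in the cell
            dashes.append((round(start, 3), round(length, 3)))
    return dashes

# Example: a 5-cell row fading from light to dark, 0.18 mm cells
print(row_to_dashes(np.array([0.1, 0.3, 0.5, 0.7, 0.9]), 0.18))
```

The line interval stays wherever the optics say it should be; only the commanded dash length inside each horizontal cell changes with the shade.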
But that's actually only a stopgap measure. Bottom line is that all these algorithms- Stucki, Jarvis, etc.- were designed for a different technology, and are actually NOT the right method for lasering on any of these machines or materials. The laser does not have discrete horizontal pixels- square or rectangular- so this is not the method to be using.
I've seen enough of the transport format to see what's going on, and this is easily fixable and will work on any controller. All along I thought LB was stuck with this because of machine limits, but that's not the case at all. Basically, all these dithering algorithms can go away.
I can see I need to start another thread on this. What we need will implement easily, with probably 3 parameters. There are two notably different modes- one if you have sufficiently controlled backlash all across the length of the axis (I've noted that doesn't always seem to be possible), and one if you don't.
The toning solution already dramatically improves the effective horizontal resolution no matter what. The next option is to offset lines or not.
Because if you have enough precision after trying to remove backlash, we can halve the LI and do offset rows that avoid collisions. This will effectively double the vertical resolution for the most part, albeit at the cost of doubling runtime.
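Roughly like this (a toy illustration of the row layout only; the cell size is a made-up number):

```python
# Toy illustration of the offset-row layout: odd rows shift their dash grid
# by half a cell so marks interleave between rows instead of stacking.
cell_mm = 0.18

def cell_start(row, col):
    offset = cell_mm / 2 if row % 2 else 0.0   # odd rows shifted half a cell
    return round(col * cell_mm + offset, 3)

for row in range(4):
    print(row, [cell_start(row, c) for c in range(4)])
```

Even rows keep the normal grid; odd rows shift half a cell, so the dashes from adjacent rows land between each other rather than on top of each other.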
I'll start a new thread explaining the algorithm in more specific detail, with some microscope shots showing why this is necessary and what it helps. Also, really important- this is simpler to support than all these dithering modes plus the additional features needed to patch up their problems. The goal those methods are chasing, but could never actually reach optimally, is right here, and it's dead simple!