Can I recreate this great raster effect?

OK. They’re two different machines, and while I believe ULS’s halftone scheme is excellent, the ULS images aren’t going to stand up here because the Line Interval is flawed and not readily controllable. We can get something like 500, 333, 250, 125 lines/in; I can’t dial it in much beyond that. I’ve also probably got the backlash set suboptimally on the Jarvis dither; I had limited time hopping between machines and it was the best I could do at the time. Also, I was on the X100, which might be doing something different, since it appears to commit to repeating the same line twice. I think the PLS660 was one line, but its PC was down at the time.

For this, I’ve got small text in (I think) 75%, 50% and 25% grey, and a gradient bar. The case I’m making is that ULS’s non-random, variable-length-dash burn guarantees a determinate feature, so even a few raster lines make the letter readable. No one solution is perfect: a Jarvis dither, or any square-pixel solution, while excellent in its own way, cannot represent detail the way the continuously-variable-width-dash halftoning used by ULS can.


As we create grey text, the lighter the shade, the lower the probability of any pixel being black, and at some point there are too few black pixels to compose a recognizable feature. At a distance, of course, the pixels visually blend and create a great result. But small features composed of only a few raster lines and/or of narrow width cannot be discerned.

Let me describe it this way. Each pixel is a binary choice: fire or don’t. Any square-pixel method has the horizontal pixel resolution equal to the line interval (there are intermediate concepts of non-square pixels where the horizontal resolution can be greater than the line interval). Say you have a feature that is a 45-degree diagonal line in a 5x5 grid of pixels, with a stroke 2 pixels wide. If it is a 100% black line, or Threshold is used, you get a recognizable black line, but our goal here is to represent shades too. If the line is 50% or 25% grey, and/or aliasing resamples the original bitmap’s pixels across the raster lines so that one black pixel becomes 2-4 grey-shaded pixels, the occupied pixels only have a probability of firing, and the resulting noise makes the feature indiscernible.
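
To make that concrete, here’s a minimal Python sketch (my own illustration, nothing to do with LB’s actual dither code) of a 2-pixel-wide, 50% grey diagonal in a 5x5 grid under probability-based dithering; the fired pattern comes out as scattered noise rather than a line:

```python
import numpy as np

rng = np.random.default_rng()

# 5x5 grid, 0.0 = white, 1.0 = black; a 2-px-wide 45-degree diagonal stroke at 50% grey
grid = np.zeros((5, 5))
for r in range(5):
    for c in range(5):
        if 0 <= (c - r) <= 1:      # cells the diagonal stroke passes through (~9 cells)
            grid[r, c] = 0.5

# Random dither: each cell fires with probability equal to its darkness
fired = rng.random(grid.shape) < grid
print(fired.astype(int))           # typically about half the stroke cells fire, in a scattered pattern
```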


Here’s ULS. We still have a vertical line interval, and an equal horizontal interval of START points, but unlike square-pixel solutions we do not make a binary burn/no-burn decision based on probability or pattern. The element is a dash of continuously variable length, created by varying the on-time from 0% to 100% within each dash element. This means a diagonal line of even 3x3 intervals of 25% grey would likely be recognizable.
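
Here’s a small sketch of how I understand that scheme (my reading of it, not ULS’s actual code): start points at a fixed horizontal interval, dash length proportional to the cell’s darkness, odd lines offset by half a cell.

```python
def line_to_dashes(gray_cells, interval_mm, line_index):
    """gray_cells: darkness 0.0..1.0 per cell along one raster line.
    Returns (start_mm, end_mm) laser-on spans for that line."""
    offset = (interval_mm / 2.0) if (line_index % 2) else 0.0   # offset-square pattern
    dashes = []
    for i, g in enumerate(gray_cells):
        start = offset + i * interval_mm
        length = g * interval_mm        # 0% grey -> no dash, 100% -> full cell
        if length > 0:
            dashes.append((start, start + length))
    return dashes

# A 25% grey diagonal still produces a short but *guaranteed* dash in every cell it crosses
print(line_to_dashes([0.25, 0.25, 0.25], 0.1, line_index=1))
```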

The patterning does have a lower threshold: you start with pure white, and at some point you either have to burn a dot or not. So very light shades may create a pulse so brief that we don’t get any color change in the target, and no feature is created. And its offset-square pattern of start points is apparent, which may or may not be aesthetically desirable. I’m not saying it’s perfect, but it can represent considerably smaller detail for a given line interval.

Could you still achieve better resolution by using a shorter-focus engraving lens with a tighter spot and a tighter line interval? Of course. Or could you reduce the power for the same lens so the spot size is slightly smaller, and tighten the line interval to match? Also true, but not the point. Those methods take proportionately much longer machine times because they run more lines. The metric here is how small a feature can be, in terms of raster lines, and still be effectively represented. On that metric, ULS’s variable-length-dash halftone scheme does much better than any sort of square-pixel random dithering.

The case for non-square pixels like this variable-width-dash halftoning is bigger than that. If you do a “deep” burn, one that actually has depth similar to or deeper than the line interval distance, then we have a structural problem. Some people would stop and dismiss me right there with “you’re just doing the raster wrong”, but hear me out. It’s just different.

The problem is that the resulting unburned spots will simply burn up and/or break off due to their thinness under certain conditions. Those problem conditions can effectively be circumvented within the variable-width-dash offset pattern, but not in other schemes unless they kill the resolution by composing each pixel element out of multiple horizontal lines.

If you have the backlash set correctly and the line interval is “correct” for the spot size, so lines do not overlap, halftoning makes a more structurally sound feature because it has guaranteed width. If you keep the grey level below 50% black, then one line’s dash burn will not overlap with the offset dash on the next raster line, and structurally that is a very important threshold. If you can meet that criterion, certain effects are possible that just do not work with other methods. Going through a presentation on structural burning is maybe a big thing for another day. But it’s quite a powerful option to have working down to single-raster-line features.

Oz-

Upon reflection in a nice long shower, I see a long-term strategy which may serve us all much better. I have lots of ideas of how to best exploit laser properties to try different raster concepts that go significantly past this.

I’d have to rewrite Lightburn to even try them. I don’t want to. Even if the case for this technique is solid (which it is, IMHO), you can’t write up and test code for every theory I want to explore (totally understand that).

So, can we just open this up to a Python API where users can create or import arbitrary raster methods? LB passes it the original bitmap, the Python script offers sliders and passes back the result, with lines resampled according to the new Line Interval, and LB just displays whatever it gets back, for better or worse.

This gives unlimited capability if we are not locked into square pixels. The only capability we really need is a horizontal resolution of up to 256x the vertical resolution (the line interval), plus the ability to display that and zoom in. Maybe 256x is impossible to resolve on the laser anyway, and/or would be painfully slow due to too much data, but it’s totally fair for LB development to say “not our problem that it’s slow, that’s what you asked for and that’s what happens”. And maybe it doesn’t need to go THAT far in horizontal resolution; we could maybe do everything possible with 16x. Ideally, LB can technically support up to 256x horizontally, and even if that’s hard to show on screen, or slow or buggy to send to the Ruida, the Python author can simply work at 16x or whatever performs well.
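
To be concrete about what I’m asking for, here’s a hypothetical interface sketch in Python. Every name in it is made up by me; nothing here is an existing LB API:

```python
import numpy as np

def raster_plugin(src_image: np.ndarray,          # original grayscale bitmap, 0..255
                  height_mm: float, width_mm: float,
                  line_interval_mm: float,
                  h_multiplier: int,               # e.g. 16; 256 is the ceiling argued for above
                  params: dict) -> np.ndarray:     # slider values chosen by the user
    """Return a non-square-pixel bitmap: one row per raster line, h_multiplier
    horizontal pixels per line interval. LB would display and send it as-is."""
    lines = int(round(height_mm / line_interval_mm))
    px_per_line = int(round(width_mm / line_interval_mm)) * h_multiplier
    out = np.zeros((lines, px_per_line), dtype=np.uint8)
    # ... the author's halftoning / dithering of choice goes here ...
    return out

# e.g. a 10mm x 10mm image at 0.25mm line interval and 16x horizontal multiplier
print(raster_plugin(np.zeros((64, 64), np.uint8), 10.0, 10.0, 0.25, 16, {}).shape)  # (40, 640)
```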

You move the sliders, it reruns the Python, which passes back a new bitmap. I think maybe we need to stop with the “recalculate with every change in sliders/fields” strategy; the lag is already there. Just a “recalc” button beside the parameters. Leave it up to the Python author to work that out if we can. If you leave the Preview window with the settings and appearance you want, then resize or otherwise alter the image, well, LB’s gonna do what it does: it recalcs (based on whatever the Python does) outside the Preview window before it sends data to the laser. That’s the user’s problem; they need to Preview again after a resize.

This would be totally effective in a way that external processing of bitmaps followed by “passthrough” is not. Because you can’t resize and tinker with the image inside the project without breaking a passthrough image’s strategy, the processing needs to live inside LB.

Then we can roll ahead with some really good progress, unlimited, without loading down LB’s devplate any further.

I am currently only seeing this in terms of rastering along a single axis, X or Y. Diagonals… maybe it could still work, but in any case I currently don’t see that as a high priority at all.

I guess this means LB overall would have to call the Python image code outside the Preview or Image Settings to recalc whenever the image is resized, based on existing slider settings. That’s totally fine. It may also mean that once we deviate from square pixels, the appearance of the processed image on the main workspace may be less than accurate. Also fine, as long as the Preview makes a good effort to show it.

One further thing: a similar strategy for Fill. Our makerspace has many users, and many have asked how to make Fill do patterns. The explanation I could offer was that you’d Convert to Bitmap, then… well, then you have a thing that I see no way to edit without saving it as a bitmap file, editing it with an external tool, and somehow reloading it back into the LB project in the right place spatially. I couldn’t give them a viable method. If we just had user-generated Python Fill processors that create the Fill patterns, that would unlock unlimited capabilities with no further effort from LB dev.

This could be amazing. Say you click Fill, and on the pulldown menu there’s an option for MyFill1. LB could either pass the vectors of the closed shapes to Fill, or just start from a line interval, generate 1-bit bitmap data of the shape, and pass that to the user’s script, which returns a textured bitmap. But the result need not be a simple texture masked with the filled shape. If the user wants to write a sophisticated script, it can use the shape’s edges and process them any way the author wants: for example, taking the shape of the block letter “A”, detecting the corners, and making a grayscale bitmap that does what rubber stamps do, beveling the edges inward to give it a prismatic look.

For that, it would make sense that LB not sample it into a raster, but just pass the vectors for the selected object(s) being filled, offer sliders for line interval and power at a minimum plus whatever parameters the Python code needs, and get back a bitmap that by default burns as grayscale. Maybe the Python only returns 1-bit black-and-white bitmap data, in which case it’ll effectively behave like a Threshold; but if it has shades, those burn too.
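
As an example of the “sophisticated script” case, here’s a hedged sketch of a rubber-stamp-style bevel fill. It works from a 1-bit mask of the shape rather than vectors, the function name and inputs are my own assumptions, and it leans on scipy’s distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def stamp_bevel_fill(mask: np.ndarray, px_per_mm: float, bevel_mm: float = 1.0) -> np.ndarray:
    """mask: 1 inside the shape, 0 outside. Returns a 0..255 grayscale fill whose
    shade ramps from 0 at every edge up to 255 deep inside the shape (how that maps
    to power depends on how the layer's grayscale mode is set up)."""
    dist_px = distance_transform_edt(mask)            # distance to the nearest edge, in pixels
    ramp = np.clip(dist_px / (bevel_mm * px_per_mm), 0.0, 1.0)
    return (ramp * 255 * mask).astype(np.uint8)

# Quick check on a 3mm x 3mm square at 10 px/mm: the interior reaches full shade
mask = np.zeros((40, 40), dtype=np.uint8); mask[5:35, 5:35] = 1
print(stamp_bevel_fill(mask, px_per_mm=10.0).max())   # 255
```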

That leads me back to the square-pixel problem, because a bitmap format is traditionally square pixels. Well, that’s not a big problem: the Python script just needs to pass back a scaling factor. E.g., we Fill a 10mm square object with a 0.15mm line interval. LB passes 4 vector lines, and the script returns a rectangular bitmap of 67 vertical lines, each 200 pixels wide, with a 3:1 scaling factor on the width. The final result is that the laser burns 67 lines at a 0.15mm interval, and each line’s data has 200 pixels that might turn the laser on/off, rendered across 10mm of physical space. Perfect!
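
Worked out as a quick sketch (hypothetical helper, just to show the arithmetic):

```python
def fill_bitmap_dimensions(width_mm, height_mm, line_interval_mm, h_scale):
    lines = round(height_mm / line_interval_mm)             # 10 / 0.15      -> 67 raster lines
    px_wide = round(width_mm * h_scale / line_interval_mm)  # 10 * 3 / 0.15  -> 200 px across the width
    return lines, px_wide

print(fill_bitmap_dimensions(10, 10, 0.15, 3))   # (67, 200): 67 lines at 0.15mm, 200 px spread over 10mm
```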

Maybe it’ll just be a block Fill, maybe polka dots, maybe waves, maybe an offset “brick” texture, but maybe the Python script does something more sophisticated that is not merely using the shape as a mask but has to react to the shape’s edges and corners, such as a Mario block:
[image: mario_block]

And it could do something neat when passed something non-rectangular, or a closed vector shape to Fill like the letter “A”. Or maybe the vector features will break the Python script and just create junk bitmaps. Point being, LB dev isn’t involved in that; it just gets a bitmap to scale, place in the design, and send to the Ruida.

Actually, can we go one step further and allow the external tool to also return new arbitrary vectors, which may lie outside the original vectors given to it? Then it can not only do its own Fill+Line, but the Line placement can compensate for Offset, or do some weird stylized thing in whatever direction the Python author wants to take off in. Again, the bottom line is that LB only has to pass it along to the laser.

I suspect half the reason your text is unreadable is that you haven’t adjusted your scanning offsets - you’re getting doubled-up results, meaning that the smaller versions of the text look much blurrier than they would otherwise, and that’s going to compromise every result you get from your machine.

I’m not saying that’s going to fix everything, but it’s certainly going to make for a better comparison. :slight_smile:

I do agree I see a backlash problem. I thought I had the offset set, but maybe not well enough. It’s degrading the result, but it doesn’t change the conclusion.

Bottom line: say we have a diagonal line 5 line intervals high and wide. It’s 75% black on a 25% black background. In the ULS halftoning method, that would be discernible: the dashes in the cells the line crosses will be longer than in the cells it doesn’t cross. In fact, even a 3x3 or 2x2 diagonal would technically be discernible.

But in a random dithering method, the “noise” smooths out the image as a whole, and the detail is not determinate at this level; it’s just noise. The diagonal line would be unlikely to be apparent on a 5x5 line-interval grid. Evaluated at that level, it could be anything: a mirror-image diagonal, a circle, a dot, a horizontal or vertical line.

“Newsprint” does a similar thing; if it’s randomized, it’s only a decision of how to represent the minor pixels within a cell. But the cell in newsprint is relatively large, something like 4-5 line intervals. That has its benefits, newsprint can look great, but no one solution is best for all applications.

Oz, how do you feel about the suggestion to open this up to user-generated code, like Python scripts? I think it would enable a lot of amazing possibilities on an ongoing basis for relatively little one-time dev effort. It doesn’t seem like it would compromise LB’s proprietary IP work, either.

It’s feasible, but low priority - we’re busy adding support for galvo systems, and have a few vendor requested features being worked on as well.

There’s more to it than just “give me pixels” - Orientation of the source image and the scanning angle affect the generated output, as does the scale, and some controllers are fussy about the generated data - Ruida has rules you have to follow if you’re scanning an image. It’s likely not trivial for me to just pass you an image you can monkey with and have you generate the vectors to be sent.

Of that, I would say scanning angles not aligned with X or Y, and GRBL controllers that can’t handle the throughput, would be the low-priority problems. I know we can raster at an angle, but no one I know among our scores of makerspace users has ever felt the need to try it, or even asked.

As for controllers that can’t handle the bandwidth: if it’s a lower-end controller, it shouldn’t be expected to have the same capabilities. As long as the existing Image/Fill still exists, this shouldn’t create an issue.

I’m quite interested- what are Ruida’s rules on raster data? If I had more info I would be happy to rethink and tailor the request to something with optimum feasibility.

First thought: I did describe ULS’s halftoning method as non-square pixels with the horizontal resolution being 256x the vertical (the line interval), but that number isn’t arbitrary, and there’s clearly no point in asking for arbitrary capability.

That is, say again that the line interval is 0.100mm (mostly because it’s a round number). In ULS’s method, the dashes begin at 0.100mm intervals and odd lines are offset by 0.050mm. If they’re truly representing 256 shades of gray (a common number, but not necessarily the case), then the endpoint varies in very fine steps. But that just requires the numeric precision to specify the endpoint of that same dash: if it really were 256 shades, the dash would extend another 0.000390625mm per grayscale step. BUT, the number of start and end points for each dash is no different from what LightBurn is sending now.
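
The arithmetic, spelled out:

```python
line_interval_mm = 0.100
shades = 256
step_mm = line_interval_mm / shades
print(step_mm)   # 0.000390625 mm of extra dash length per grayscale step -- still one start/end per dash
```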

And I’m not fixated on getting 256 possible shades within a dash width. That’s just the maximum that grayscale images commonly contain, and it’s probably pointless anyway, as I doubt any laser could represent that level of detail in a burn. A horizontal resolution of 16x the line interval would probably be plenty.

I floated some of this and the makerspace is SUPER excited about opening up new options for Fill in particular that this might make possible.

No one has ever rotated an image? It’s the exact same mechanism that handles both.

I was thinking of raster angle as something that mechanically uses the X and Y axes together. Sorry, I may have misunderstood there.

@LightBurn OK, I’ve thought this through and tested with passthrough mode. I think we can get this going well enough with minimal effort.

Currently, I can specify Passthrough on any image. The current LB implementation allows a line interval as fine as 0.02mm. Since we use square pixels, that means the software and hardware interface already handle a 0.02mm resolution: the Preview window handles it, and it transfers OK to a controller at anything down to that.

All LB needs to do is pass the image to the plug-in along with the target number of lines, and it can reuse the Passthrough code almost entirely past that point. As the very simplest effort, the plug-in is limited to an integer multiplier of horizontal resolution to line interval, so LB’s Passthrough and all its code is still seeing square pixels anyway.

E.g., the design imports a 640x640 bitmap. The user scales it and ends up with a 50mm x 50mm image; the layer is Image with a custom plugin. We hit Adjust Image and LB asks for the line interval and resolution multiplier. We select a 0.150mm line interval and a multiplier of up to 7 (because 0.150mm divided by 7 is still above 0.02mm), plus whatever slider(s) the plugin needs to do its job. This works out to a 2331x333 pixel image, and that’s what the plugin returns for LB to display.
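
The numbers from that example, worked out (nothing here is LB’s actual API, just the arithmetic):

```python
image_mm = 50.0
line_interval_mm = 0.150
multiplier = 7                                              # 0.150 / 7 = 0.0214mm, still coarser than 0.02mm
lines = round(image_mm / line_interval_mm)                  # 333 raster lines
px_per_line = lines * multiplier                            # 2331 pixels across each line
print(lines, px_per_line, line_interval_mm / multiplier)    # 333 2331 0.0214...
```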

So here’s the easy part: the right side of Adjust Image is now JUST the Passthrough code for a 0.02mm line interval (thus a 0.02mm horizontal resolution), except that it graphically repeats each line 7x to stretch it for the display window; otherwise it’s just the same square pixels. So it will appear as a square image again.

When it actually sends data to the Ruida, it sends the 2331 x 333 image with a 0.150mm line interval: the same thing it displayed in the Image Preview, just without repeating lines to square out the display.

I’d say “this is easy!” but I know nothing is “easy”. I can say this is looking like a pretty lean effort for a very high return.

The way to handle custom Fill textures with a “lean” code mod- I need to think about that.

So I think the Custom Fill solution is very similar. The Fill mechanism instead reuses the Convert To Bitmap code under the hood and creates a bitmap based on the requested line interval. It passes this to the user-generated plugin which renders a texture or does whatever it’s going to do and returns a modified bitmap, and LB’s code just treats it as a Passthrough Image from there.

I thought for a while about “OK, so LB has a font here, a capital ‘B’. Does the plugin get vectors or a bitmap? If it’s a bitmap, the user’s code is more difficult, as it may need to detect edges for some concepts.” But conveying vectors to a plugin and getting a bitmap back sounds like the complicated part. Naw. Just pass a bitmap of what needs to be filled as a 1-bit black-and-white image, at the target line interval and a horizontal resolution multiplier capped at a 0.02mm product resolution. If the plugin draws outside the lines within the specified rectangular bitmap and screws up the Fill, that’s not LB’s problem, as long as the plugin returns a bitmap with the same x and y pixel counts.
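
A hedged sketch of what such a fill plugin might look like under that scheme; the function name, parameters, and the simple checker texture are all my own invention, not anything LightBurn defines:

```python
import numpy as np

def my_fill(mask: np.ndarray, line_interval_mm: float, h_multiplier: int, params: dict) -> np.ndarray:
    """mask: 1-bit shape raster, one row per raster line, h_multiplier px per line interval.
    Returns a bitmap with the same dimensions, which LB would treat as a Passthrough image."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cell = int(params.get("cell_px", 8))
    # non-square texture cells: h_multiplier times wider in pixels so they come out square on the work
    checker = ((yy // cell) + (xx // (cell * h_multiplier))) % 2 == 0
    return (checker & (mask > 0)).astype(np.uint8) * 255       # same h x w as the mask, as required

mask = np.ones((20, 140), dtype=np.uint8)                      # a filled 20-line, 140-px-wide region
print(my_fill(mask, 0.15, 7, {"cell_px": 2}).shape)            # (20, 140)
```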

The Fill’s “Fill All Shapes At Once”, “Fill Groups At Once”, “Fill Shapes Individually” could change that from a single call to the plugin to multiple calls and multiple bitmaps. That makes sense.

The only additional thing here is that LB won’t easily be able to display the user Fill texture without additional code. That code would have to dynamically call the plugin to regenerate the bitmap whenever something in the layer is rotated or resized (but not simply moved), so there is something to display. This sounds simple enough, and actually, while nice, it isn’t necessary right away, because the Preview window can at least hint at what the texture is going to look like.

Hacking something awful into the Ruida through LightBurn demonstrates great value here! Even with relatively coarse line intervals, photo engraving can be much, MUCH higher quality. So much so that I’m now seeing all the dithering methods out there as obsolete.
This explains a LOT about the paradoxes of doing the “right thing”, matching line interval to burned line size and following the rules, where breaking that rule by reducing the line interval actually does notably improve resolution until a certain shade of black is exceeded and the rendering breaks, leading to a lot of fiddling and strange conclusions about how to handle it. I now see that none of it needs to happen that way.

This dutifully follows good line interval practices, or close to it. The shading effect is reliable at a distance. This is 0.250mm line interval, and 42mmx31mm actual size.

This is “hacktoning”, and it creates much better horizontal resolution. Same line interval, same speed, same image size. Vastly better image resolution and totally reliable rendering properties. The results floored everyone who saw it! It renders shades and small features MUCH better.

Individual eyelashes and wrinkles now resolve. Again, same line interval and image size- and 0.250mm is not a fine engraving lens.

What I had to do with LB and the Ruida to accomplish this isn’t “pretty”. This will take a bit of explaining, and a while to wrap your head around how it was hacked in.

Have you tested this on glass-tube CO2, diode, and your RF, or just the RF?

I’ve been thinking about the varying line length dither, and I can likely make it work, but I have a bunch of tasks piled up ahead of it that need doing first.

Hello Oz!

Here’s what I did:
Normally there are at least 2 modulators here. First, you’ve got the 20kHz PWM to modulate the beam power; call that “primary”. Then, in any sort of dithering/halftoning, the beam is modulated on/off at a much lower rate as a secondary modulation. LB is currently committed to square pixels, so the spatial quantum is the line interval, which also means there’s a time quantum of the line interval divided by the raster speed. I’m avoiding the term “PWM period” for that secondary modulation because by nature it’s not a repeating period, except “sort of” in halftoning mode. It’s a resolution, a pixel.

Grayscale, however, uses some other format; I do not understand how it’s implemented. LB is passing analog values and the Ruida does something with them, but I don’t know what. For example, I could take a 1024x1024 image and shrink it down so it’s only 2mm tall with a 0.2mm line interval, thus only 10 raster lines total. I don’t know how much data LB transmits to the Ruida; the work started with a horizontal line of 1024 grayscale pixel values, but it won’t burn like that within 2mm.

So, what is going on here? Well, first I changed the Image mode to grayscale. But this is a hack, and this is not grayscale.

The trick was exploiting the “PWM override is persistent across all other layers” bug to set the PRIMARY modulator to 4kHz. Normally that’s so low it would “break” things, but here it’s intentional. The raster speed is 1500mm/s and the line interval is 0.25mm, roughly equal to my spot diameter. So the primary modulator is now varying with the grayscale level on a 4kHz time period, which works out to a 0.375mm horizontal spatial period, larger than the spot size, so we’ve officially crossed into different territory here.
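
The arithmetic behind that, spelled out:

```python
speed_mm_s = 1500.0
pwm_hz = 4000.0
spatial_period_mm = speed_mm_s / pwm_hz   # 0.375mm per PWM cycle -- wider than the ~0.25mm spot,
print(spatial_period_mm)                  # so each cycle becomes a visible variable-length dash
```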

The laser will PWM on/off and make dash marks of varying length at this period. This yields a stunning new level of detail by adding horizontal resolution without changing the line interval. Shrinking the line interval below the spot size kinda breaks dithering/halftoning strategies and directly increases the runtime needed. We didn’t do that; we just massively increased the horizontal resolution.

So your head may be spinning as to what this is. Here’s the catch though: like I say, it’s a hack, and an ugly one. There is no longer any primary beam modulation. It is not possible. We faked it and used the primary modulator for the secondary modulation.

That is, when it fires to make a dot at the 4kHz period, it fires at 100% power. It can’t do anything else; there’s no high-frequency (20kHz) primary modulator to hold it back. I’m on RF-CO2 here, so there’s no analog current trim pot on the supply either. All I can do to limit the amount of burn is keep the raster velocity up. It’s a brutal mess doing beautiful things.

So I’m trying the same or a similar hack on a DC-excited tube, and I certainly see a difference: they simply can’t extinguish the arc nearly as fast. I’m trying to nail down what the impulse profile of a DC-excited tube actually is. I can guarantee a diode laser CAN switch quite fast, but it’s up to the driver hardware to actually switch the power that fast.

So, at least for RF and diode, what this needs is to leave the ~20kHz primary modulator controlling the power like a sane system, and then add a secondary modulation at a frequency that creates a spatial PWM period only a few spot-sizes wide, with that width finely divisible into as many as 256 lengths.

This actually does NOT appear to be more data to the Ruida, but I don’t know; this is why I was begging for info on the transport format. I’m starting from a wild assumption that a pixel has a start and an end point. If so, with square pixels and a 0.25mm line interval, dithering can come out as tight as one start point and one end point per 0.25mm.

This is actually LESS. For example, we could set this to a 0.4mm spatial period. Now there’s only one start and one end point per 0.4mm. The start point is fixed and the end point is finely variable, but there are actually fewer points per line. The 20kHz primary modulation is not in the transport format AFAIK; that is, the image data is not 20kHz on/off coordinates, the Ruida hardware handles that.

But you may also note: “isn’t there a problem here? The modulation starts at the same point!” LOL, yes, it kind of does. And I’m not clear on whether the Ruida syncs the start of its 20kHz primary mod to the start of a line, but I think so. With the backlash tuned “correctly”, I have had it sync, and it creates a hot mess of tiny vertical banding. So… get this… I went into Device Settings and deliberately added an incorrect backlash so the left-going and right-going lines DON’T line up. I still see “moire” patterns, which are apparent in the above image if you zoom in enough.

Which gave me another thought: in theory, the odd lines should be offset by one half of the halftoning period. But backlash isn’t that predictable, and lines can shift enough to line up in phase and produce vertical banding.

So, I would propose another feature: random noise in the halftoning’s horizontal interval. There’s still roughly one dash, one start/stop pair, per roughly 0.4mm. The shade of the bitmap’s cell still controls the % length of the dash, but the horizontal period can randomly vary from 0.3mm to 0.5mm. Then there’s no significant syncing between lines, and neither vertical banding nor moire patterns can form.
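
A small sketch of the jittered-dash idea (purely my own illustration, not an existing LB feature):

```python
import random

def jittered_dashes(gray_cells, nominal_mm=0.4, jitter_mm=0.1):
    """gray_cells: darkness 0.0..1.0 per cell along one raster line."""
    x = 0.0
    dashes = []
    for g in gray_cells:
        period = nominal_mm + random.uniform(-jitter_mm, jitter_mm)  # 0.3..0.5mm, uncorrelated line to line
        if g > 0:
            dashes.append((x, x + g * period))    # dash length still tracks the cell's shade
        x += period
    return dashes

print(jittered_dashes([0.25, 0.5, 0.75]))
```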

Ah, one further feature: for technical reasons, this effect worked best when the max duty on the horizontal interval was capped at about 55%, so having that as a field really could help with control. As long-winded as this was, it’d be even longer to go through why that duty needs to be capped, so I’m going to leave it at that for now unless someone wants to grill me on that part.

I’m not sure if I’m following you 100%, but I ran my machine’s PWM set at 1kHz.

50% at 500mm/s looked normal, at 1000mm/s I could see it lase on and off…

Did you try going below 4kHz?

I’m not sure I follow you very closely after you reset it back to 20kHz.

The response of the LPS has been a question for me since I started this trip. I don’t really have enough voltage generated across my HV meter to get a good handle on what it’s doing.

Currently looking for resistors so I can tap it at about a volt per kV. 30 volts would be an easy read instead of the small µV signal I’m trying to see now.

Mine seems to work well, but it’s a 44 watt tube with a 60 watt supply.

I have an LPWM1 female connector hanging off the Ruida, so I can trigger the scope off it. What I really need is a way to determine when the tube actually starts lasing.

:smile_cat:

I’ve never been 100% sure whether the slow on/off times of DC-excited tubes are because the arc takes longer to turn off, or because the power supply’s high-voltage DC can’t change that fast, like if it goes through a capacitor.
The Ruida LPWM will do something like 100kHz if asked, but the power supply or tube can’t switch on/off that fast.

It’s likely that that long DC arc itself is the problem though.

Say, what supply did you use? I tried this on a ZYE, and I don’t think I could see pulses until 0.5kHz at 1000mm/sec. Actually, let me take some pics later.

I have no doubt, dsp are generally pretty fast.

When the power is shut off, the arc has to stop, doesn’t it? How can it continue to be ionized with no power?

I suspect with the type of transformers and circuits they use, it would take some time to get the voltage generated/shut down. We know there is some type of ‘storage’ or capacity mechanism involved.

Some people have been ‘lit up’ a few hours after the power was removed.

I have an HV meter, so mine ‘bleeds’ off relatively quickly. I can’t see it move after I power it off, but it’s a 600 MΩ path to ground.

I think a PWM of 1kHz will have a period (or cycle) of 1ms. If you are moving at 1000mm/s you should cover 1mm during that period or cycle. If the laser is on for only half the period, the burned line should be half of that 1mm, or 0.5mm.
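
Working that arithmetic out explicitly (just the numbers, no claim about any particular machine):

```python
speed_mm_s = 1000.0
pwm_hz = 1000.0
period_s = 1.0 / pwm_hz               # 0.001 s (1 ms) per cycle
mm_per_cycle = speed_mm_s * period_s  # 1.0 mm travelled per cycle
dash_mm = 0.5 * mm_per_cycle          # 0.5 mm burned at 50% duty
print(period_s, mm_per_cycle, dash_mm)
```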

Needed my magnifying lens and used a microscope on part of it.

Maybe my math is off… But I remember what it looked like…

I don’t think I was running it at 1500mm/s…

:smile_cat:

It could be that the supply has a capacitor after the high-voltage flyback that takes milliseconds to drain, so the tube’s current won’t stop instantly once the PWM stops. And it has to have a capacitor AFAIK: the flyback probably runs somewhere around 15kHz-50kHz and produces AC, so an HV diode is used to create pulsed DC, and a capacitor is placed after the diode to make a smooth DC source. Also, an HV cap’s actual capacitance is imprecise; it would be natural to install a larger cap with some margin to avoid coming up short if the temperature is different, the capacitance drops with age, or there’s lot-to-lot variation in capacitor manufacturing.
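
As a rough, assumed-numbers sanity check of the capacitor idea (every value here is an illustrative guess, not measured from any real supply):

```python
cap_F = 1e-9          # ~1 nF of output filter capacitance (assumed)
volts = 20000.0       # ~20 kV across the tube (assumed)
tube_A = 0.020        # ~20 mA of tube current (assumed)
decay_s = cap_F * volts / tube_A   # t = C*V/I, crude constant-current discharge estimate
print(decay_s)        # ~0.001 s, the same order as the observed ~1 ms turn-off tail
```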

Or any sort of supply filtering could do it. That actually makes sense: a high-voltage flyback runs via feedback, where its own current triggers the next switching cycle, and that cycle may not start or stop instantly.

But I think it’s more fundamental: the tube’s arc may not stop instantly once the current ceases. Even with no external source of energy, the arc still contains plasma in an excited state. The “laser” process starts by putting gas molecules in an excited state, but to be a laser, a molecule emits a photon when stimulated by a passing photon, adding its own photon with the same wavelength, phase, and direction. If it isn’t stimulated but spontaneously emits photons after being excited, that’s possible too; that would just be like a neon sign.

There will be some time period where the current is cut off but the gas is still excited, until photons happen by and stimulate the emission of in-phase photons. It sounds like, as the remaining population of excited molecules decays, the rate of that decline will also fall along with the emitted beam energy, because the probability of an excited gas molecule seeing a photon decreases along with the beam intensity.

This will slow the total extinction of the beam power after the current is removed, but I don’t know whether the scale of that effect accounts for the ~1 millisecond decay in beam power.

This also raises the question of why RF CO2 turns off and on nearly instantly, given it’s the same gas mix. The supply being electrically unable to stop the current instantly, by design, would be one good explanation. But I’m also open to the idea that the gas’s energy won’t stop immediately once the supply current stops.

The power supply spec does say its response time is “<=1ms”. At 1ms, it wouldn’t be able to resolve much past about 500Hz: 1ms on, 1ms off.