Personal hackathon devlog

Howdy.

I’ve alluded to a project I’ve been tinkering with for a while. Now is the time to finalise the goals, and since I’m off work until next year, time to hack at it!

There are a few frustrations in Lightburn when doing non-standard and experimental stuff. I’m not knocking the devs; it’s a commercial program with support (and a fantastic community!), and supporting weird requests and implementing them costs time and money.

The goal of this project is to implement an (eventually) open-source galvo laser controller on an ESP32-S3. Why an ESP32? They’re cheap, and I have a whole bunch of different ESP32 dev boards sitting around. Why galvos? Because I have a whole bunch of ’em too! Plus, I’m sick of paying $$$ for laser controllers.

The S3 specifically has a dual-core LX7 processor running at 240 MHz, which should be plenty to implement the layers required.

One core will be dedicated to running the scanhead. The other will be for, well, everything else.
The XY2-100 protocol is pretty darn resilient when it comes to speed variations; the standard does specify 2 MHz, but most implementations seem pretty flexible about the absolute clock speed. I’ll certainly be aiming for 2 MHz though.
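
For the curious, the bit layout is simple enough to show in a few lines. Here’s a rough sketch of packing one channel frame in Python (the real implementation will be C on the ESP32); it follows the commonly documented 20-bit layout, but parity conventions vary between write-ups, so treat it as something to verify against a real scanhead:

```python
def xy2_100_frame(position: int) -> int:
    """Pack one XY2-100 channel frame: 3 control bits (0b001 = 16-bit
    position data), 16 data bits MSB-first, then 1 parity bit.
    Parity is computed as even parity over the first 19 bits here;
    check this against your scanhead before trusting it."""
    assert 0 <= position <= 0xFFFF
    frame = (0b001 << 17) | (position << 1)  # parity slot left at bit 0
    parity = bin(frame).count("1") & 1       # set so the total 1-count is even
    return frame | parity

# Centre of field: one 20-bit word per channel, clocked out MSB-first
# at (nominally) 2 MHz alongside the clock and sync lines.
x_word = xy2_100_frame(0x8000)
```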

The ESP32-S3 has built-in Wi-Fi and Bluetooth (BLE 5), but when these are in active use they hog various parts of the microcontroller and reduce the resources available for time-critical tasks. As such, the plan is to leave them disabled; depending on how development goes, there may be enough spare resources to re-enable them, which would open up all sorts of other possibilities.

I won’t be using Arduino or ESP-IDF for this, as porting to other boards in those environments is cumbersome and time-consuming. This project will be built on the Zephyr RTOS ecosystem. Bringing up new board support in Zephyr mostly amounts to writing a devicetree file, and most of the time that’s not even necessary, as there are 800+ supported boards already! Zephyr’s hardware-abstraction philosophy allows great flexibility in the overall software design.

Whilst most galvos are pretty darn standardised in their pins, behaviour, and protocol support, the laser sources can be quite different. I mentioned in another thread that I’ve found various documented protocols for JPT, SPI, Raycus, etc. I’ve also come across a few undocumented ones, including the one in my UV source. The vendor software is hot garbage; I’ve already had to poke around inside it to fix a bug that prevented it from launching unless the machine had a specific Chinese font installed and Chinese set as the system default language! Thankfully, the protocol seems pretty well thought out, and the software is written in .NET, so dnSpy happily spits out enough source code to reimplement it without untangling some nightmarish soup of unnamed functions and variables, or busting out Ghidra or IDA Pro. I was also dreading staring at USB packet dumps and writing a dissector.

The other side of this is that there are different pinouts for different sources at one end, and variations between BJJCZ/BSL/other controllers at the other. They all have different drivers, software support, compatibility, etc. There are certainly similarities in pinouts, but they are geared more toward analog/parallel control rather than implementations of the vendor’s internal control scheme. Given that the manual for my UV source is, uhhh… “lacking” (it only describes 3 pins), I intend to implement the full serial protocol as a library. This “bit” can be considered another protocol layer, and will be abstracted to allow implementing others, e.g. JPT and Raycus.

The only open-source driver I know of for any galvo controller board is galvoplotter, which draws on the work in balor and is compatible with the common JCZ boards. It also gives enough information in its implementation to write (yet another) abstraction layer in Zephyr. What this means is that the initial design will implement the other side of the JCZ protocol, as currently defined by galvoplotter, as a starting point. The ESP32 can then appear to a host running a JCZ driver to be a JCZ board, which means instant Lightburn compatibility :slight_smile: This can initially be a straight pass-through of USB in to USB out, but it also allows for modifying packets on the wire. This isn’t the final form of the design, but it does give a nice “out” in case my scanhead control code foo is weak, as I can still use the existing board to talk to the scanhead and use the microcontroller to set source parameters.
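
To make the pass-through idea concrete, here’s a rough host-side sketch of the shuttling involved, using pyusb. The VID/PID and endpoint addresses are placeholders (the real values live in the galvoplotter sources), and the eventual firmware will do this on-device rather than on a host:

```python
import usb.core

# Placeholder IDs; substitute the real ones from the galvoplotter sources.
VID, PID = 0x1234, 0x5678
EP_OUT, EP_IN = 0x02, 0x81

dev = usb.core.find(idVendor=VID, idProduct=PID)
if dev is None:
    raise RuntimeError("board not found")
dev.set_configuration()

def passthrough(packet: bytes) -> bytes:
    """Forward a host packet to the board and return the reply verbatim.
    A 'modify on the wire' hook would inspect/rewrite `packet` here."""
    dev.write(EP_OUT, packet)
    return bytes(dev.read(EP_IN, 64, timeout=1000))
```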

So at the bottom layer, we have one microcontroller core driving the XY2-100 signals and another talking to the laser source and the host. We get initial Lightburn compatibility by spoofing the JCZ protocol. That’s the primary goal, but what isn’t there yet is the hackability.

Onto the other software layer. This is where Lightburn compatibility starts to break. As mentioned in previous threads regarding Z control, the way Lightburn batches things for the JCZ driver makes certain things difficult to implement. I won’t dwell on it here, but there is another way. I want Z control so I can do 2.5D bubblegram stuff. (The next hackathon is going to be building a tri-axis galvo for true 3D stuff, but I digress.) I also want to generate the bubblegram stuff from 3D data, so this layer has to be fully 3D from the ground up. Similarly, having several 3D scanners integrated into my various workflows, I want to experiment with more direct 3D control. These may require protocol additions, and may start to break things.

What all this means is writing something like Lightburn, but in 3D. This bit may end up closed source for a while. One option is implementing it in Grasshopper 2 for Rhino, because of its exceptional 3D engine and integrated Python. But the Rhino license fee is pretty out there for most, and the learning curve can be brutal. Plus it’s Windows-only, so getting the galvoplotter driver to work is a whole other thing.

The other option is implementing in Godot 4. Turns out, game engines and 3D laser path planning software are a great match! The downside is that the 3D format support is limited compared to Rhino, although that can be said of pretty much everything. (I just checked; Rhino 8 supports 72 different formats!) But as an engine it is very well equipped to deal with 3D data, and it’s well documented and open source. I have a bit of a codebase established in Godot for .lbrn2 file support. It was written in the first iteration of this project, but it’ll need a refactor, as I can smell the terrible code from here!

Since there are several use cases for how this could all go together, and they’ll vary for different people with different setups, I’m trying to make the workflow modular. As such, I’ve decided on a node graph approach, because a) that’s the core paradigm of Godot and Grasshopper, and b) it fits well with the level of parameterization a laser workflow has. Plus, it allows for essentially infinite layers: I got some cool gradients on titanium by smoothly varying MOPA parameters 800 times over a few cm.
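
As a flavour of what a single node could do, here’s a hypothetical sketch of that titanium gradient: smoothly sweeping a set of MOPA parameters across N bands. All names and ranges here are invented for illustration, not my actual settings:

```python
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def mopa_gradient(steps: int = 800):
    """Yield one (hypothetical) parameter set per band, linearly swept
    between two endpoint settings. Real nodes would expose the endpoints
    and the easing curve as graph inputs."""
    start = dict(frequency_khz=100.0, pulse_width_ns=200.0, power_pct=40.0)
    end = dict(frequency_khz=600.0, pulse_width_ns=4.0, power_pct=70.0)
    for i in range(steps):
        t = i / (steps - 1)
        yield {k: lerp(start[k], end[k], t) for k in start}

# 800 parameter sets, one per thin band across a few cm of titanium.
bands = list(mopa_gradient(800))
```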

I haven’t set the final scope of the software in stone, but what it isn’t is a complete Lightburn replacement. There will be no X/Y support, no G-Code, and no direct design/CAD tooling. It’s galvo-first, 3D-first software for doing weird things! Plus some secret-sauce MOPA things…

Anyway, enough typing for now. Watch this space.


Really interesting project! I won’t be able to contribute much since my days of software engineering are mostly over, but at least I have a JCZ-board galvo (Monport GPro 30W) and some ESP32-S3s (at least I think the MKS DLC32 Max is using one). If this can be of any use, let me know. :grinning_face:

That is too bad! G-Code is capable of full 3D motion (tapered spiral), and the latest controllers can likely handle the speed requirements. If you could GRBL a galvo, you would have a market killer.

Update 1: I did precisely nothing mentioned in the plan. I ended up abusing RK-Cad to make bubblegrams. This is a story of despair and AutoHotkey abuse.

Turns out RK-Cad is not the best-written piece of software in the world. There are some really annoying bugs, but it does have a few features I would love in Lightburn, and they enabled bubblegrams to work without paying $1000+ for the inner carving license. The axis move function and the delayer come in handy, as does the axisentitiesmark command.

The first one allows you to add arbitrary axis move commands into a job, and there is a variant that can do two axes simultaneously. axisentitiesmark lets you specify a file and a template and do an arbitrary number of cuts per axis distance. Exactly what’s needed for a bubblegram! Or so I thought.

Turns out that axisentitiesmark has a pretty bad memory footprint for anything more than a few layers. With 152 layers, the entirety of the 64 GB of RAM on this machine was swallowed up for 10+ minutes. I suspect the implementation loads the file in question once for each entry and each sub-entry within it, i.e., if you had 5 layers and wanted 5 cuts, it could load the file 5 times for cut 1, 5 times for cut 2, etc. So for 152 layers at 1 cut per layer, it would load the file 23,104 times. Given a file size of ~3 MB, that’s ~69 GB of RAM (instead of, you know, 3 MB). I did successfully use this to make a couple of test bubblegrams, but it’s a non-starter for anything more than a hundred layers or so, let alone the 1k+ target I’m going for.
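
The back-of-envelope behind that suspicion, for anyone who wants to check my maths:

```python
layers, file_mb = 152, 3
loads = layers * layers  # suspected: the file is reloaded per entry, per sub-entry
print(loads)             # 23104 loads of the same ~3 MB file
print(loads * file_mb)   # ~69000 MB, i.e. ~69 GB resident instead of ~3 MB
```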

Onto the other axis command. It’s essentially a meta-layer: when the job reaches it, it moves an arbitrary axis by a relative amount (or to an absolute position). So: layer 1 of the bubblegram, insert axis move, layer 2, axis move; rinse, repeat 1000 times. Except there are some truly ridiculous UI issues that mean, in certain (frequent) circumstances, you touch something and the entire layer order gets juggled. So now layer 1 is layer 275, and at some point you have 15 axis moves in a row.

The other issue is that you can only import one vector at a time. And given the previous issue, you have to import one layer, add one axis move, import another layer, add another axis move. Not sane or doable by hand for 1000 layers. AutoHotkey to the rescue! I’ve got it down to about 0.6 seconds per layer now. Keeping the test files to ~250 layers has kept me sane, as 3 minutes is more tolerable than 10 minutes of watching the mouse move around on its own. The script is not exactly well written, but I’ll clean it up and post it somewhere if there’s any interest.

Now to talk about file prep. The vector formats supported by RK-Cad are .ai, .plt (the old HP plotter format), .dxf, .dwg, .rkq (RK’s internal format), and .jww. I had to Google the last one: it’s the internal format of the Japanese CAD software JW_CAD. None of these are native 3D formats, not that that was expected. The internal carving module supports STL files, but again, this exercise was about not spending $1000+. So to go from a 3D file to a bubblegram, we first have to chop it into layers. I used the excellent Kiri:Moto as a slicer. There are a whole pile of options, but in this case I’m using 3D stack mode, as it helps with the alignment problem. Whilst it can directly output DWG, that’s not going to work in this case: RK-Cad, when importing a DXF, lumps everything onto one layer, with no way to ungroup. Not that that would help, due to the other issues above. The other output options are G-Code and SVG. I went with the latter, as I know it to be more readable.

So now we have to split each SVG file into its respective Z-layers. But SVG isn’t a format supported by RK-Cad, despite it using Qt5 and shipping Qt5Svg.dll. PLT and JWW are out the window for being too obscure, and .ai doesn’t seem to have any open-source libraries around, which leaves DWG and DXF. DXF is at least documented, so I went with that. I wrote a quick Python program to bring in the SVG, extract each polyline/path per Z height, and output a DXF with the equivalent LWPOLYLINE in it.
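
The script boils down to something like the sketch below, using ezdxf for the output side. The parsing half depends entirely on how Kiri:Moto structures its SVG (the `data-z` attribute here is a made-up stand-in for however the Z height is actually encoded), so treat that part as pseudo-real:

```python
import xml.etree.ElementTree as ET
import ezdxf

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_polylines(svg_path):
    """Yield (z_height, [(x, y), ...]) per polyline. The Z extraction is
    a placeholder; adjust it to match the slicer's actual output."""
    root = ET.parse(svg_path).getroot()
    for elem in root.iter(SVG_NS + "polyline"):
        z = float(elem.get("data-z", "0"))  # hypothetical attribute name
        pts = [tuple(map(float, p.split(",")))
               for p in elem.get("points", "").split()]
        if pts:
            yield z, pts

def write_layer_dxf(points, out_path):
    doc = ezdxf.new()
    # Entities go into model space explicitly; RK-Cad's parser only seems
    # to pick up model-space geometry (more on that below).
    doc.modelspace().add_lwpolyline(points)
    doc.saveas(out_path)

for z, pts in svg_polylines("layers.svg"):
    write_layer_dxf(pts, f"layer_{z:.3f}.dxf")
```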

Another thing about DXF is its model_space vs paper_space distinction. And how RK-Cad implements its DXF parser. And how it took an insane amount of time to get things to line up when split across 100 DXF files. But it mostly works now, except for when it doesn’t.

Anyway, the method works. It’s janky as hell and takes way longer than it should, but it’s good enough for now! All this has driven me even further toward implementing what I set out to do in the first post, as this is only the most basic style of bubblegram. If I want to hatch each layer, I have to change the AutoHotkey script (which increases the file prep time per layer to 1.3 seconds) and regenerate the file, rather than clicking a single button. And if I want to do something with Z moves on my MOPA, I’d either have to build another MOPA or switch its control panel out for an RK one.

In summary, the software that can, with a lot of fiddling, create bubblegrams is so bad that I’m now determined to write my own.

Since you mentioned it, it does look like there’s an existing GRBL port for Zephyr. It’s something I’ll consider, but scope creep is a killer. On the other hand, it’s a proven, well-developed piece of software, so it may be a saving in the long run.


Everyone likes a picture, so have a Jab helmet (or 3) in bubblegram form. Still getting some settings dialed in, but it’s starting to look good! The cylindrical test piece is distorting them a bit, but I have more on hand.

FWIW, I’m relatively close to having Z-axis support done for LightBurn, and if you have a board that looks like a JCZ controller but can implement your own commands, and provide a unique ID for LightBurn to identify the board, it wouldn’t be difficult on our side to implement commands for Z moves streamed as part of the job.

If the lift required on our side is only adding a couple commands, I might be game to try it.

Having said that, we don’t have bubblegram or full 3D model support available yet, and likely won’t for a while.


That’d be great! I’ve made the first steps toward getting this working, by having an arbitrary ESP32 board present itself as an LMCv2 board over USB.

The other projects have progressed at varying rates too. After the frustration of the AutoHotkey stuff with RK-Cad, I decided to try to implement native support for .rkq file parsing and generation. How hard could it be? It’s just a slightly modified DXF file, right? Turns out I was way, way, WAY off with that! All sorts of security and integrity checks, nested structures, and variable-length fields. I did manage to figure out how the format works in general, but successfully implementing even basic RKQ generation essentially requires reimplementing 80% of RK-Cad. It’s arguably as complex as the PDF spec, albeit without any documentation whatsoever. For the time being, I’m going to leave this part alone, as it’s a massive time sink. It has got me close to understanding the inner workings of the RKQ-LM-441 board, but also made me far less enthusiastic about implementing a driver for it, as the tie-ins to the RK-Cad code are deep. It is essentially a serialized QObject stream.

Where I did have some success was writing code to generate bubblegrams from 3D files. The approach I took was implementing a full physics simulation of how a galvo laser works. It’s far from performant, but it’s certainly a cleaner workflow than the previously described approach. Fine control over the slicing algorithm, plus adaptations specific to different fill strategies, makes good quality easily achievable. I even wrote one algorithm to account for a fixed-speed Z move, which may allow some testing sooner rather than later for the Lightburn stuff.
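
The tolerance classification at the heart of it is conceptually simple. Here’s a sketch of the idea using trimesh; the real version lives inside the simulation loop, and the tolerance value and file name here are made up:

```python
import numpy as np
import trimesh

def classify_points(mesh: trimesh.Trimesh, points: np.ndarray, tol: float = 0.05):
    """Return each point's distance to the mesh surface plus a keep mask.
    Points within `tol` land in the green-to-yellow band; the rest get
    flagged as red jump moves and dropped from the export."""
    _, distance, _ = trimesh.proximity.closest_point(mesh, points)
    return distance, distance <= tol

mesh = trimesh.load("helmet.obj")  # placeholder file name
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
dist, keep = classify_points(mesh, pts)
```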

Edit: sorry for the vid quality; forgot you can’t upload MP4s here, and Imgur isn’t playing nice.

Sliced in my program, from an OBJ mesh. Red lines are jump moves, or moves that fall outside the tolerance to the mesh surface; green → yellow shows how closely the lines conform to the mesh surface. Right now, only green lines are exported.


Rendered in the Lightburn preview, with all cut optimizations off to preserve the order generated by my algo. That seems to work, as the jump moves and the starting point per layer appear to be preserved.



Another update. Got a partial implementation running on Zephyr’s native_sim. That was quite a coaster ride! The current state of developing on Zephyr’s USB “Next” stack is sketchy at best, but the fiddly bits are mostly hashed out. The problem I’m having is getting the code to run fast enough in the sim to hit the tight timing required for genuine board emulation, but I suspect it will improve significantly once it’s on a physical board. Plus, the overhead of usbip middlemanning the connection can’t be helping, and emulating a host controller stack is terribly slow compared to an actual, physical host controller. Currently I can start up my fake board in the emulator, pass it through WSL, and present it as an LMC board. Lightburn can recognize the board and will do the first part of the handshake.

The next steps are fleshing out the implemented commands to cover a lot more of the instruction set. I think I’ll take a break from this side of it for a couple of days and work on a test implementation of the XY2-100 side of things. I am considering using one of the Colorlight boards I have for a CNC project, but adding yet another architecture and toolchain into the mix may be too much. Plus FPGAs are no fun! Scope creep is real and is the enemy.

I may even turn on a laser today! The plan is to make some bubblegram Xmas ornaments. I have two Xmas parties to attend, so the dev efforts may slide a bit, or end up a bit more rum-infused than hoped.


Brief update: fully implemented an RKQ encoder. Implemented multiple point cloud strategies. Shifted the UI to the node-based design. Great strides in getting the full JCZ simulation going.

Now time to fill my belly with ham and rum!


Another update. Implemented 70+ nodes. Time to feature freeze, clean out redundancy, start writing unit tests, and aim for an alpha release in the next week. There are a couple of regressions to investigate, but the current pre-tidy-up build loads without any compiler errors or warnings. Got some cool image segmentation workflows. Also implemented some cool algorithms for MOPA colour space mapping that I’ll focus on for the alpha 2 release.


So much for the feature freeze. There are now 144 nodes. I just asked myself why I was futzing with kerning code at 2 am; I guess midnight is a good enough time for an actual freeze. Oh, and it looks like out-of-the-box cross-platform support for JCZ boards will make the alpha, once you have a working Python install. LBRN2 import and export are implemented too, but some of the things that can be achieved are only possible without Lightburn’s layer count limitation. You want circles in your material test? No problem, you can export to lbrn. You want 10,000 circles? Unless you want to reload Lightburn 300+ times, the inbuilt driver will have to do.
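
For the curious, the material test generator is really just a parameter sweep. A minimal, format-agnostic sketch (names and ranges invented for illustration), with the export step, to lbrn or to the inbuilt driver, being where the layer count question bites:

```python
def material_test(cols: int = 100, rows: int = 100, pitch: float = 2.0):
    """Yield one circle per grid cell, sweeping two parameters across the
    two axes: 100 x 100 = 10,000 circles, each wanting its own layer."""
    for i in range(cols):
        for j in range(rows):
            yield dict(
                x=i * pitch, y=j * pitch, radius=pitch * 0.4,
                power_pct=10 + 80 * i / (cols - 1),
                speed_mm_s=100 + 1900 * j / (rows - 1),
            )

circles = list(material_test())  # 10,000 entries
```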


Got a couple more features in under the wire (albeit stretching that wire to its absolute limits, given the whole world had already passed into 2026).

The big one is GPU acceleration of vectorizing and depth-map slicing. On my middling 3-year-old laptop, I can slice around 150 layers a second. It’s currently capped at 8096 layers, but that’s just an arbitrary cutoff for now.
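
Depth-map slicing is embarrassingly parallel, which is why it maps so well to the GPU. Here’s a CPU sketch of the per-layer operation using scikit-image; the GPU version is, in effect, this thresholding run once per layer in a shader:

```python
import numpy as np
from skimage import measure

def slice_depth_map(depth: np.ndarray, n_layers: int):
    """Threshold a depth map at n_layers heights and trace the
    iso-contours; each contour set becomes one layer's vector paths."""
    for level in np.linspace(depth.min(), depth.max(), n_layers):
        yield level, measure.find_contours(depth, level)

depth = np.random.rand(512, 512)  # stand-in for a real depth map
for z, contours in slice_depth_map(depth, 150):
    pass  # hand each layer's contours off to path planning
```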

Getting the GPU abstraction in place also got the boilerplate for an experimental MOPA-specific hatching algorithm done in time; now I’m just “fixing bugs” in it. (lol)

Getting ready to test the USB driver on a real machine now, as the simulated connection is working as expected on Linux and Windows. Will be looking for some testers soon, but I’ll wait till the devs are back to get permission first.

Gonna put together some clips of the UI/workflow and post them shortly.


Success!! I just made my first mark entirely using the software. It was just a square, but it represents a validation of the entire approach. Looks like a busy weekend ahead, as I’ve spent the last couple of weeks building a toolbox full of fun tools without actually having tried them in reality. The payoff for meticulously defining an architecture and protocol at the start, and only modifying them when absolutely necessary, is that this is looking like a pretty robust solution for communicating with JCZ boards independently.

Christmas morning for me is tomorrow because I get to play with all the toys!

(or in a few hours when you see another post from me, you know I couldn’t wait!)
