Laser stuttering at high speed; can the GRBL buffer size be increased?

I run an xTool D1 Pro with a MacBook Pro on macOS 13.1. I’ve been running into a problem where the laser starts to stutter when engraving a larger dithered image at high speed. I traced this down to inadequate throughput over USB, though I’m not exactly sure what is causing it.

I could work around it by saving the G-code and then sending it directly to the laser with a simple Python script that batches up many commands. So I think it’s a latency/buffering issue.

I have selected the “Buffered” transfer mode in the device settings, but where is the buffer size specified? I think it needs to be increased. Or maybe there is something wrong with my macOS setup or my D1.

Remember that the laser controller is running GRBL on an 8 bit microcontroller teleported from the mid-1990s: in round numbers, it has no memory and no speed. The controller’s serial input buffer is fixed, because the hardware has only 2 kB of RAM.

The character-counting streaming protocol is how to keep the buffer as full as it can be.

Note that the character-counting method requires pre-verifying the G-Code file to eliminate parsing errors before sending it.

Within those limitations, you can pick any two of:

  • Large image
  • High resolution
  • High speed

GRBL works well within its limitations!

It doesn’t take away from the point, but note that the xTool D1 runs a 32-bit MCU. I believe it’s an ESP32, but don’t take that as gospel.

Image data is particularly high volume.
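To put rough numbers on that (the dot spacing and average line length here are my guesses, not measured values), the serial link is marginal at the speeds in this thread:

```python
# Back-of-envelope throughput estimate; dot_mm and bytes_per_line are guesses.
speed_mm_s = 163           # engraving speed where stuttering appears
dot_mm = 0.1               # assumed dither dot spacing
bytes_per_line = 15        # assumed average G-code line length, incl. newline

commands_per_s = speed_mm_s / dot_mm              # 1630 commands/s
demand_bytes_s = commands_per_s * bytes_per_line  # 24450 bytes/s needed
capacity_bytes_s = 230400 / 10                    # ~23040 bytes/s at 8N1

print(demand_bytes_s, capacity_bytes_s)
```

Under those assumptions the required data rate slightly exceeds the raw 230400-baud capacity, so any per-command round-trip latency on top of that makes underruns likely.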

Does the stuttering go away if you slow down the engraving?

No, I think it’s a USB serial driver buffering/latency/IO issue. I don’t have Python 2, so I can’t run that repo’s antique code. Here’s the crude script that works:

import serial
import sys

ser = serial.Serial("/dev/cu.usbserial-14110", 230400)

buffer = ''
count = 0

for line in open(sys.argv[1]):
    # strip comments and blank lines
    line = line.split(';')[0].strip()
    if line == '':
        continue

    # flush the batch before it outgrows ~1 kB or 32 commands
    if len(buffer) + len(line) + 2 > 1024 or count >= 32:
        ser.write(bytes(buffer, 'ascii'))
        # read the "ok\n"-s
        ser.read(count*3)
        buffer = ''
        count = 0

    buffer += line + '\r\n'
    count += 1

# send the final partial batch
ser.write(bytes(buffer, 'ascii'))
ser.read(count*3)

Yeah, it goes away (or is not perceivable) if I slow it down: 110mm/s is fine, while at 163mm/s it’s very pronounced. Also, there is no stuttering when I use xTool’s Wi-Fi G-code upload feature, or my script.

I assume you mean at any speed?

Does the upload feature stream the code or does it get transferred to the SD card?

Note that for normal GRBL implementations, LightBurn uses the message available at connection time to determine buffer size. The xTool implementation is far from standard, so I’m not sure if it works the same way.
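For reference, stock GRBL announces itself at connect time with a banner like `Grbl 1.1f ['$' for help]`. A host can parse that banner to decide what to assume about the firmware; the buffer-size guess below is purely illustrative, not LightBurn’s actual logic:

```python
import re

def parse_grbl_banner(banner):
    """Return (major, minor, letter) from a GRBL startup banner, or None."""
    m = re.match(r"Grbl (\d+)\.(\d+)([a-z]?)", banner)
    if not m:
        return None
    return (int(m.group(1)), int(m.group(2)), m.group(3))

def rx_buffer_guess(banner):
    # Illustrative only: stock 8-bit GRBL builds default to a 128-byte
    # RX buffer; a non-GRBL banner gives us nothing to go on.
    return 128 if parse_grbl_banner(banner) else None
```

A controller that never sends a recognizable banner (as a heavily customized firmware might not) would leave the host guessing, which fits the “far from standard” caveat above.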

Post from Oz about target buffer size:

163mm/s is where I see the stuttering. Before today I only ran at 110mm/s or lower and didn’t notice it.

xTool’s Creative Space appears to upload the data to the SD card, which is then executed, so it does not stream. Creative Space also stutters when using the USB connection and engraving at high speed.

Curious. I believe XCS doesn’t even use the GRBL firmware, in favor of their own proprietary implementation. So it’s something inherent to the serial connection.

I’m wondering why your Python script would perform any differently; from looking at it, I don’t see any provisions for buffering. If this is happening only at high speed, I’m wondering if the buffer is getting underrun, so the controller has to wait on commands, which causes the stuttering. It seems like your script is running as fast as it can manage; I wonder in that case if you’re potentially overrunning the buffer at times.

On a related note, grayscale should be even more information dense. Do you have issues at even lower speeds with grayscale?

Yeah, that’s what I thought. It appears to me that the machine is waiting for commands, and that causes it to slow down to a momentary stop. My hypothesis was that LightBurn waits for responses to only a very limited number of commands at a time, and due to the high latency of the USB serial interface the buffer, as you said, gets underrun.

With that in mind I wrote this script and managed to improve the performance. The script is very crude and has some problems, but it works in principle. Mind you, I have no prior experience with G-code.

I have not tried grayscale yet as my medium does not really work with it.

Now I wondered if I maybe just have a bad cable. I tried a different one. Did not help.

Maybe OSX has a very bad usb serial driver and that is the limiting factor.

Device information shows:

Product ID: 0x7523
Vendor ID: 0x1a86

xTool uses an Arduino-style platform, and that vendor/product ID is a WCH CH340 USB-to-serial chip.

I haven’t seen anything to indicate this. What is this information based on? I was almost certain they used an Espressif solution.

I’m almost certain I’ve seen others describe similar behavior. I think it was chalked up to a firmware issue in those cases but can’t be certain.

G-code clustering is meant to alleviate some of these issues, but it requires support in the firmware to work.
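Clustering packs the power values for several pixels into a single move. A related, firmware-agnostic reduction (sketched here with an assumed dot spacing) is simply merging runs of consecutive pixels that share the same power into one longer `G1` move, which cuts the command count for dithered images considerably:

```python
def merge_runs(pixels, dot_mm=0.1):
    """Collapse runs of equal-power pixels into single G1 moves.

    pixels: laser power values, one per dot along a scan line.
    dot_mm is an assumed dot spacing; output X positions are absolute.
    """
    lines = []
    x = 0.0
    i = 0
    while i < len(pixels):
        j = i
        # extend the run while the power stays the same
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1
        x += (j - i) * dot_mm
        lines.append("G1 X%.2f S%d" % (x, pixels[i]))
        i = j
    return lines
```

Dithered output alternates between full power and off, so runs are common; six pixels in the example below collapse to three moves.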

Maybe I’m mistaken. When I had the D1 Pro connect to my Wi-Fi access point, it reported itself as arduino-esp32-something, if I remember correctly. I renamed it and can’t find the old name anymore. Checking the MAC address, it identifies as Espressif. This is confusing.

I improved my script:

DEVICE = '/dev/cu.usbserial-14110'
BAUDRATE = 230400
PARALLEL_COUNT = 32

import serial
import sys
import subprocess
import os

if 'darwin' in sys.platform:
    print("Running 'caffeinate' on MacOSX to prevent the system from sleeping")
    subprocess.Popen('caffeinate -w %d' % (os.getpid(),), shell = True)

ser = serial.Serial(DEVICE, BAUDRATE)
count = 0

def read_response(allow_pending):
    global count
    while ser.in_waiting > 0 or count > allow_pending:
        line = ser.readline().decode('ascii').strip()
        if line.strip() == 'ok':
            count -= 1
            if count < 0:
                print(line)
        else:
            print(line)
            assert not line.startswith('err')
            # sometimes I get "okok". what does this mean?!? newline goes missing/off by one bug? buffer overrun and missing data?
            if line.count('ok')*2 == len(line):
                count -= line.count('ok')

for line in open(sys.argv[1]):
    line = line.strip()
    if line == '' or line[0] == ';':
        continue
    ser.write(bytes(line + '\r\n', 'ascii'))
    count += 1
    read_response(PARALLEL_COUNT)

read_response(0)

edit: running caffeinate to prevent macbook from going to sleep; increased parallel statements to 32


Am I missing something here, or isn’t 163mm/s a very high speed? That is 9,780mm/min, which seems very fast for that laser.

Great question. xTool advertises the D1 Pro as engraving at 400mm/s (24000mm/min).

Looking for the answer led me to their FAQ that has an item about what to do when the laser “shakes”: What to do when the laser shakes during processing?

  1. Set the speed to below 105 mm/s and try again.
  2. Do not select the Grayscale as Bitmap mode when it is a bitmap file you are engraving.
  3. Switch to a Wi-Fi connection if it was a USB connection when the problem occurred.

I just found this. Apparently they are aware of the problem and recommend reducing the speed to <105mm/s. That’s just great /s.

But I think I will be using my workaround script. When you push the boundaries, the boundaries push back. The proof is in the pudding: if my script works, then it works. I still need to test it properly to see if there is any loss, but I’m pretty confident it is fine.

USB serial is not a COM port. And they appear to be using a hardware/software platform with much larger buffers. I can tell this from the start, when the laser begins to move: it moves fine for 3-4 entire lines of my work before the buffer is underrun. That is a lot of data, probably 100x more than my current parallel-push code keeps in flight.

I’ve tested it now on a larger piece. It appears to be working properly, aside from randomly exiting, which turned out to be caused by my MacBook going to sleep. Quite funny actually; it took me quite a while to figure this one out :wink:

I caffeinated the script to prevent OSX terminating the burn with sleep. Looks good.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.