Is there a way to obtain a log or other programmatic output from LB to monitor variable text (CSV) marking activity?

We are marking product housings (tens of thousands of them) using a BJJCZ-based UV galvo laser and a turntable holding 8 units so that parts can be loaded and unloaded somewhat continuously while the marking goes on. The two “variable text” QR codes use fields from a CSV file. I made another posting about some issues with the QR codes here if you want to see what I am talking about:

It is very important to us that no two housings get marked with the same unique information row in the CSV. Since it takes several sessions / days to mark 10k+ units, and there could be errors / restarts, we have to be very careful that operators do not restart marking at the wrong CSV index and end up marking multiple housings with the same serial numbers. In addition, everything else that happens in this factory is also logged in our databases, so I would like to keep records of what serials have been marked and when for other reasons as well.

So my question is - is there some kind of log output or connection I can make to LB that can provide me with real-time information about what is being / has been marked, e.g. what variable text is being used or what index in the CSV it is at currently? All I can find is an option to turn on a debug log, but I haven’t looked yet to see if it contains what I need.

I also haven’t looked yet to see if LB writes the current index to a registry entry or other file (I am using Linux) so that it doesn’t lose track of where it is when the app crashes or there is a power loss. That could be another way to infer what indexes are being marked if sampled frequently.

To add to this, I just tested on my macOS machine (LightBurn 1.7.08 - I can’t move to LB 2.0 because of the Linux EOL, and we run only Linux at the factory), and it looks like the current CSV index only gets written out in the document when you save or close it - it does not seem to be stored anywhere else. This is bad news: if the LB app closes suddenly due to a crash or power failure, you lose track of where you are in the marking. All the more reason to have a log for this.
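
For reference, here is roughly how I pulled the index out of the saved file to check this. The element path (LightBurnProject > VariableText > Current, with the index in the Value attribute) is just what I see in my own 1.7.08 files, so treat it as an assumption rather than a documented format:

#!/usr/bin/env python3
# Quick spot-check: print the variable text index stored in a saved .lbrn2 file.
# Assumes the index lives in a <Current Value="..."/> element under VariableText
# (that is what my 1.7.08 files look like - not a documented format).
import sys
import xml.etree.ElementTree as ET

root = ET.parse(sys.argv[1]).getroot()
current = root.find('.//Current')
if current is not None and 'Value' in current.attrib:
    print(f"Stored CSV index: {current.attrib['Value']}")
else:
    print("Could not find a Current/Value element in this file")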

No such logs at this point. I suggest creating a Feature Request to extend/expand the current ‘Save Job Log’ to include the additional data of interest, then posting the link back here for others to support and vote up. :slight_smile:

Thanks. I have created an imperfect solution to keep my factory going in the meantime. This demo Python program monitors your .lbrn2 file (and its backup, which gets written every 2 minutes while LB is running) for changes in the variable text index into your CSV file. If you assume that this index changes only because you have successfully burned something, this does the job, although only with 2-minute time resolution on what is actually going on. Whenever you save the .lbrn2 file, or save when exiting LightBurn, it updates more immediately, so in the end you do get accurate tracking of this index. Not as good as getting a positive message from LB that a certain index has been marked, but it works:

#!/usr/bin/env python3
"""
LightBurn CSV Monitor - Monitors LightBurn project files for variable text index changes.

Usage:
    lbcsvmon.py <filename>
    lbcsvmon.py -h | --help

Arguments:
    <filename>    Path to the LightBurn project file (.lbrn2)

Options:
    -h --help     Show this help message
"""

import os
import sys
import time
import xml.etree.ElementTree as ET
from datetime import datetime
from pathlib import Path
from docopt import docopt  # third-party package: pip install docopt


def read_lightburn_value(filepath):
    """Read the Current Value from a LightBurn project file."""
    try:
        tree = ET.parse(filepath)
        root = tree.getroot()
        
        # Navigate to LightBurnProject > VariableText > Current
        current_elem = root.find('.//Current')
        if current_elem is not None and 'Value' in current_elem.attrib:
            return int(current_elem.attrib['Value'])
        else:
            print(f"Warning: Could not find Current Value in {filepath}")
            return None
    except ET.ParseError as e:
        print(f"Error parsing XML file {filepath}: {e}")
        return None
    except FileNotFoundError:
        print(f"Error: File {filepath} not found")
        return None
    except Exception as e:
        print(f"Error reading file {filepath}: {e}")
        return None


def get_backup_filename(original_filename):
    """Generate backup filename by replacing extension with _backup.lbrn2"""
    path = Path(original_filename)
    return str(path.with_suffix('')) + '_backup.lbrn2'


def monitor_file_changes(original_filepath, backup_filepath, initial_value, initial_timestamp):
    """Monitor both original and backup files for changes and track value updates."""
    print(f"Monitoring both files for changes:")
    print(f"  Original: {original_filepath}")
    print(f"  Backup: {backup_filepath}")
    print("Press Ctrl+C to stop monitoring")
    
    # Store initial reading
    readings = [(initial_value, initial_timestamp)]
    last_modified_original = 0
    last_modified_backup = 0
    
    # Get initial modification times if files exist
    if os.path.exists(original_filepath):
        last_modified_original = os.path.getmtime(original_filepath)
    if os.path.exists(backup_filepath):
        last_modified_backup = os.path.getmtime(backup_filepath)
    
    try:
        while True:
            # Check original file for changes
            if os.path.exists(original_filepath):
                current_modified = os.path.getmtime(original_filepath)
                
                if current_modified > last_modified_original:
                    last_modified_original = current_modified
                    
                    # Wait in case the file is still being written
                    if current_modified > datetime.now().timestamp() - 1:
                        time.sleep(1)
                    
                    # Read the new value from original file
                    new_value = read_lightburn_value(original_filepath)
                    if new_value is not None:
                        timestamp = datetime.now()
                        readings.append((new_value, timestamp))
                        
                        print(f"[{timestamp.strftime('%Y-%m-%d %H:%M:%S')}] Original file changed - New value: {new_value}")
                    else:
                        print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] Failed to read value from original file")
            
            # Check backup file for changes
            if os.path.exists(backup_filepath):
                current_modified = os.path.getmtime(backup_filepath)
                
                if current_modified > last_modified_backup:
                    last_modified_backup = current_modified
                    
                    # Wait in case the file is still being written
                    if current_modified > datetime.now().timestamp() - 1:
                        time.sleep(1)
                    
                    # Read the new value from backup file
                    new_value = read_lightburn_value(backup_filepath)
                    if new_value is not None:
                        timestamp = datetime.now()
                        readings.append((new_value, timestamp))
                        
                        print(f"[{timestamp.strftime('%Y-%m-%d %H:%M:%S')}] Backup file changed - New value: {new_value}")
                    else:
                        print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] Failed to read value from backup file")
            
            # Wait before checking again
            time.sleep(0.5)
                
    except KeyboardInterrupt:
        print("\nMonitoring stopped by user")
        print(f"\nTotal readings captured: {len(readings)}")
        print("All readings:")
        for value, timestamp in readings:
            print(f"  {timestamp.strftime('%Y-%m-%d %H:%M:%S')}: {value}")


def main():
    """Main function to run the LightBurn CSV Monitor."""
    args = docopt(__doc__)
    filename = args['<filename>']
    
    # Check if the input file exists
    if not os.path.exists(filename):
        print(f"Error: File '{filename}' does not exist")
        sys.exit(1)
    
    print(f"Reading initial value from: {filename}")
    
    # Read initial value
    initial_value = read_lightburn_value(filename)
    if initial_value is None:
        print("Failed to read initial value. Exiting.")
        sys.exit(1)
    
    initial_timestamp = datetime.now()
    print(f"[{initial_timestamp.strftime('%Y-%m-%d %H:%M:%S')}] Initial value: {initial_value}")
    
    # Generate backup filename
    backup_filename = get_backup_filename(filename)
    print(f"Backup file will be monitored: {backup_filename}")
    
    # Start monitoring both files
    monitor_file_changes(filename, backup_filename, initial_value, initial_timestamp)


if __name__ == "__main__":
    main()


Pardon me for jumping in, but to me, the low-tech solution would be to check the serial number of the last one completed and restart the project from there.


I had considered that, but these end up in a bulk bin of about 10,000 housings. Unless someone is smart enough to always leave the last marked unit on the turntable, this will not work. I also want to let them know more immediately if they are starting at an index in the file that has already been marked. It is very easy to mess with that CSV file index and not be aware of the change. I record in a database table what was marked and when. Our markings are fairly complex, so it takes about 15 sec to do the markings nicely right now, although with some tuning it could be 10 sec. With the 2-minute backup file interval, this means I am within about 8-12 units of real time, which isn’t too bad.
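
In case it helps anyone doing the same thing, here is a minimal sketch of how each observed index could be pushed into a local SQLite table instead of just printed by the monitor script. The table and column names here are made up for the example; our real factory schema is different:

#!/usr/bin/env python3
# Minimal sketch: record each observed CSV index in a local SQLite database.
# The 'markings' table and its columns are illustrative, not our real schema.
import sqlite3
from datetime import datetime

def record_index(db_path, csv_index, source):
    """Insert one observed index with a timestamp, creating the table if needed."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS markings ("
        "observed_at TEXT, csv_index INTEGER, source TEXT)"
    )
    conn.execute(
        "INSERT INTO markings (observed_at, csv_index, source) VALUES (?, ?, ?)",
        (datetime.now().isoformat(timespec='seconds'), csv_index, source),
    )
    conn.commit()
    conn.close()

# Example: call this wherever the monitor appends to its readings list, e.g.
# record_index('markings.db', new_value, 'backup')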

What I don’t have time to do, but is along the lines of your thinking, is to install an “always on mode” Honeywell scanner along the turntable path, post-laser. This would read each housing as it “comes off the presses”, before a person picks it up. This would make a nice log of the successful markings, but isn’t 100% reliable. Between that and monitoring the .lbrn2 files, it could be a robust solution.
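
If I ever get to the scanner idea, the logging side could be as simple as the sketch below. This assumes the scanner is configured for USB serial output and shows up as something like /dev/ttyUSB0 (both the port name and the serial settings are assumptions - I have not tested this with the Honeywell unit):

#!/usr/bin/env python3
# Sketch: log codes from an always-on scanner presented as a USB serial device.
# Port name, baud rate, and the scanner being in serial mode are all assumptions.
import serial  # third-party package: pip install pyserial
from datetime import datetime

with serial.Serial('/dev/ttyUSB0', 9600, timeout=1) as ser:
    while True:
        code = ser.readline().decode(errors='replace').strip()
        if code:  # one scanned code per line, newline-terminated
            print(f"{datetime.now().isoformat(timespec='seconds')}  scanned: {code}")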

This value can be adjusted to one minute if that is a helpful stopgap.

I can also share that I have created an internal request for further discussion around enhancements to the current logging.

Can you share how to adjust this on Win/Mac/Linux?

Absolutely, I should have shared that in my last post. Apologies. :wink:

Auto-save Interval (minutes)

Sets the frequency of auto-saves, in minutes. Setting the Auto-save Interval to 0 disables auto-saving.

Auto-saves are stored in the same location as you saved the original file, with _backup appended to their name. If you’ve never named a file, the auto-save will be in your computer’s Documents folder.

Each auto-save overwrites the previous auto-save, and when you manually save the file, the auto-saved copy is deleted.