3D slice algorithm

Hi,
I’ve been working with the depth map / 3D slice algorithm for a little while now and have noticed something that I’m trying to understand.

My goal is to understand the 3D slice algorithm better, with the end goal of optimizing the number of passes and, hopefully, defining the geometry as fully as possible.

As I understand it, the depth map is a series of values from white to black (255-0).
255 is the closest to the “camera” and 0 is the furthest.

The algorithm appears to start defining the engraved object (leaving material that defines the geometry) at a specific white value (x), which appears to be less than 255. If a pixel value is higher than that, it will not be engraved (leaving the surface unmarked), so values from x to 255 are skipped.

Conversely, it also does not start to define the geometry for some number of passes, for a reason I’ve yet to identify. It doesn’t start defining geometry until it reaches a certain level of “white” (y), and I’ve observed variation here from case to case. In a perfect world, the engraving would start at the second pass or so.

The basic questions are:

  1. Short of trial and error, has anyone been able to measure, or does anyone know, the white value at which LightBurn starts to engrave (leave material behind)?
  2. Is there a way to control how many passes occur before the 3D slice engraving starts?

(edit: apparently Ctrl+Enter posts; not what I thought it would do)

3D slice mode applies a threshold to the image, and the threshold value is based on the number of passes chosen.

Threshold mode means any value greater than or equal to the threshold value will be skipped.

The threshold goes from 255 to 0, evenly divided by the number of passes you choose. If your image contains pure white, it will always be skipped. If the image contains pure black it will always be engraved.

For a depth image, you want the image to use every available gray shade from 0 to 255, if possible. LightBurn assumes that the image uses the full range.
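
In rough code terms, that description might look like the sketch below. The skip rule and the endpoints (pure white never engraved, pure black engraved on every pass) come from the description above; the exact spacing and rounding LightBurn uses are assumptions here.

```python
# Sketch of the 3D-slice thresholding described above. The skip rule and
# endpoints are from the description; the spacing/rounding is an assumption.

def pass_thresholds(num_passes: int) -> list[float]:
    # One threshold per pass, evenly spaced from 255 down toward 0.
    return [255.0 * i / num_passes for i in range(num_passes, 0, -1)]

def passes_engraved(pixel: int, num_passes: int) -> int:
    # A pixel is skipped on a pass when its value is >= that pass's
    # threshold, so it is engraved only when it falls below the threshold.
    return sum(pixel < t for t in pass_thresholds(num_passes))

assert passes_engraved(255, 256) == 0    # pure white is always skipped
assert passes_engraved(0, 256) == 256    # pure black is engraved every pass
```

Under this model, a pixel’s final depth is proportional to the number of passes on which it is engraved.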


Thank you for your time and response; I thought that was the case. In the depth images I have worked with, I chose 256 passes, so you would think that a value of 254 or 253 would be an engraved layer, but it hasn’t demonstrated that in at least 5 cases that come to mind. It appears to be a “cluster” of values near there that causes a hiccup. I hadn’t thought of a way to measure it, but it “looked” like a washed-out white area. If I adjust the image with brightness/contrast to reduce that appearance, that “white area” will be engraved.
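
One way to measure that washed-out region, rather than judging by eye, is to histogram the near-white values; a minimal sketch using Pillow, where the filename is a placeholder:

```python
# Count how many pixels sit in the near-white band of a depth map.
# Requires Pillow; "depthmap.png" is a placeholder filename.
from PIL import Image

img = Image.open("depthmap.png").convert("L")  # 8-bit grayscale
hist = img.histogram()                         # 256 counts, index = gray value

total = sum(hist)
near_white = sum(hist[250:255])                # values 250..254
print(f"pure white (255): {hist[255] / total:.2%} of pixels")
print(f"near white (250-254): {near_white / total:.2%} of pixels")
```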

With the grayscale image algorithm, you can control the lower and upper power thresholds, and I have learned to control the image better that way. I was hoping there was a similar technique here. At this point I’m just adjusting by eye, and that’s a mixed bag of results. Hence the questions.

I’m having a look at the trace tool to see if there is a correlation (long term) that would let me see how much geometry falls in that white range, and I’m also trying Photoshop to see if a tool there can help.

When you get time, I would love to see a deeper dive into the subject matter. For example, what happens if I run 512 passes? Is each grayscale value produced twice? 1024 passes? I’ve found that 3D slice produces results of varying quality. Of course this is entirely due to the depth map used and not an LB issue. Perhaps a better understanding of how LB handles the depth map and engraving would help enhance our collective results. Another frustration (again, not an LB problem) is that the human eye is seemingly not very good at evaluating grayscale depth maps: maps that I felt would produce good results often do not, and vice versa. Perhaps a little more knowledge of the process could help? I know some of the best depth maps are being produced by the CNC wood carving programs (Aspire, etc.). Why do they get better variation in depth values? Because they are using a mesh? Enquiring minds would like to know :)


If you run 512 passes, then each slice would be run twice. If you run 384 passes, then every 2nd slice would run twice; the slices are evenly distributed through the passes chosen.
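
As a sketch of what that even distribution could look like, assuming a simple proportional mapping from passes to slices (the exact mapping LightBurn uses is not specified here):

```python
# Assumed proportional mapping of passes onto 256 depth slices.

def slice_for_pass(pass_index: int, num_passes: int, num_slices: int = 256) -> int:
    # Spread passes 0..num_passes-1 evenly over slices 0..num_slices-1.
    return pass_index * num_slices // num_passes

# 512 passes: every slice runs exactly twice.
counts = [0] * 256
for p in range(512):
    counts[slice_for_pass(p, 512)] += 1
assert all(c == 2 for c in counts)

# 384 passes: slices alternate between one run and two (every 2nd runs twice).
counts = [0] * 256
for p in range(384):
    counts[slice_for_pass(p, 384)] += 1
assert counts[::2] == [2] * 128 and counts[1::2] == [1] * 128
```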

I also have code in flight that will allow arbitrary rotation of passes, and support for 16-bit images, which would give much better depth resolution.

Aspire or similar programs get better results because they’re actually modeling real depth, not trying to infer it from lighting information, which is what Depth-R and similar tools are doing.

Generating fake depth information from a photo or rendering is always going to produce sub-optimal results, because recovering true depth that way is simply not possible. They’re making “educated guesses”, but that’s all it can ever be. Aspire actually lets you edit a depth map directly; you don’t have to start from a 3D mesh at all.


Hey Bruce, you bring up good points.
re grays v passes
I sent a note to support touching on the n = 256 vs. 512 vs. 1024 question, and the summary I got back, if I’m quoting it well enough, was that “you would get an additional pass on each layer”. Given that, it raises the question: what about n = 300? Where do those extra 44 passes get distributed? From Oz’s response above, I believe he said that the 256 grays would be divided across 300 passes. I believe him, so I’m going under that assumption.
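
Under the same assumed proportional mapping as the earlier sketch, 300 passes would give every slice one run and 44 evenly spread slices a second run:

```python
# Assumed distribution of 300 passes over 256 slices: the 44 extra passes
# land on evenly spread slices (the exact behavior is not confirmed).
counts = [0] * 256
for p in range(300):
    counts[p * 256 // 300] += 1

doubled = [s for s, c in enumerate(counts) if c == 2]
assert sum(counts) == 300 and len(doubled) == 44
print(f"{len(doubled)} slices run twice, e.g. {doubled[:6]} ...")
```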

re depth maps
I have created much higher quality depth maps using 3D models created in or imported into 3D software, and also from relatively “low res” (by that standard) topography maps and 2D image imports. The topography depth maps were the primary source of the question. Close inspection shows that such a depth map is somewhat “discrete”, not continuous grays, since that’s how the data is collected from a distance. Applying a blur in photo editing software can reduce this (a scripted version is sketched below), but trying to upscale has met with some challenges.
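
A minimal blur sketch with Pillow; the radius and filenames are placeholders to tune:

```python
# Smooth a "stepped" topography depth map with a Gaussian blur.
# Requires Pillow; radius and filenames are placeholder choices.
from PIL import Image, ImageFilter

img = Image.open("topo_depth.png").convert("L")
smoothed = img.filter(ImageFilter.GaussianBlur(radius=2))
smoothed.save("topo_depth_smoothed.png")
```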

The 3D models have translated to engraving fairly well so far; the others are a work in progress. Using the trace tool and/or photo editing software that can filter and identify the grays for you helps us understand how the algorithm will translate what it sees, as does sitting through the preview sequence. I’m not sure the “CNC” carving programs are creating “better” depth maps at this point.

Discovering a little more every day.
David

Edit: Oz replied while I was juggling “events”. Thank you for the response.

The code sounds great. I was trying to devise a depth-map grayscale layer separation to create something similar to what I think you were describing (lots of steps involved there). When can people “see it”?

And that is EXACTLY the kind of info I was looking for! Thanks! I know there are many items that require attention… but for me, you can’t put this on the “to do” list fast enough! I knew they were doing something different, and I’ve used my best efforts to duplicate the results, but it doesn’t seem possible using a simple depth image, and I’ve tried them all.

Yes, I’ve had the same experience using actual 3D models. As you can see from the developer’s response, one simply cannot achieve the same result without using a different modeling methodology.


I asked a question, but no one answered it. I make models in ArtCam and get a depth map from it. Before buying LightBurn, I engraved it in EzCad and everything was fine. When I engrave the same map in LightBurn, artifacts appear. They are not present when engraving in EzCad, nor in the 3D model. I tried re-saving and so on. At first I engraved it in grayscale mode, then tried the slicer; the result is the same.
This is a link to my post; there are photos there.
Problem with grayscale engraving - LightBurn Software Questions - LightBurn Software Forum
