If you run 512 passes, each slice is run twice. If you run 384 passes, every 2nd slice runs twice — the slices are distributed evenly across however many passes you choose.
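The distribution above can be sketched in a few lines. This is my own illustration, not the program's actual code: it assumes 256 grayscale slices (8-bit depth) and uses an integer floor-division trick (the same idea as Bresenham's line algorithm) to spread the extra passes uniformly.

```python
def passes_per_slice(num_slices, num_passes):
    """Distribute num_passes evenly across num_slices.

    Entry i of the result is how many times slice i is run.
    Integer floor division spreads any surplus passes uniformly
    instead of bunching them at one end.
    """
    return [
        ((i + 1) * num_passes) // num_slices - (i * num_passes) // num_slices
        for i in range(num_slices)
    ]

# 512 passes over 256 slices: every slice runs twice.
assert passes_per_slice(256, 512) == [2] * 256

# 384 passes over 256 slices: every 2nd slice runs twice, the rest once.
counts = passes_per_slice(256, 384)
assert set(counts) == {1, 2}
assert sum(counts) == 384
```

Any pass count between the slice count and a multiple of it falls out the same way: the twice-run slices land at evenly spaced intervals.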
I also have code in flight that will allow arbitrary rotation of passes, plus support for 16-bit images, which would give much finer depth resolution.
Aspire and similar programs get better results because they're modeling real depth, not trying to infer it from lighting information, which is what Depth-R and similar tools do.
Generating fake depth information from a photo or rendering is always going to produce sub-optimal results, because recovering true depth from a single image simply isn't possible. Those tools are making educated guesses, and that's all they can ever be. Aspire actually lets you edit a depth map directly — you don't have to start from a 3D mesh at all.