I'm on a mission… images like this without ArtCAM

I don’t really know what ArtCAM does/did, but I’m sure it was more than just making images like the one below. However, I’ve found a load of sample images prepared like this (depth maps) and tested a bunch of them in ESTLCam, and they work beautifully for 3D carves. Still, I feel like I’ve exhausted my search for a magic “click this button and voilà, you have a depth map or 3D carve that will give you solid positive carving results” solution. I’ve found oodles of the greyscale, pixel-height-based ones, which are great… if you want a lithophane. I don’t, and the lithophane ones give you mountains where you do not need mountains - people’s teeth, for example. Because they are white, they shoot to the highest height. Add some dark lips and you have a bird beak, not teeth. Not even a really good starting point, IMO.
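To make that failure mode concrete: the lithophane-style tools map pixel brightness straight to Z height. Here is a hypothetical Python sketch of that mapping - the function name, depth value, and sample pixels are all made up for illustration, not taken from any actual program:

```python
def naive_height_map(gray, max_depth=5.0):
    """Lithophane-style mapping: 2D list of 0-255 pixel values -> Z in mm.
    255 (white) stays at the material surface (Z = 0);
    0 (black) is cut down to -max_depth. Brightness IS height."""
    return [[-max_depth * (1.0 - px / 255.0) for px in row] for row in gray]

# A row of pixels across a mouth: dark lip (40), white tooth (250), dark lip (40).
row = [[40, 250, 40]]
heights = naive_height_map(row)
```

With a 5 mm depth range, the white tooth pixel ends up more than 4 mm proud of the dark lips on either side of it - the “bird beak” effect described above.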

Has anyone here found a good method for positive 3D carves that doesn’t involve spending thousands of dollars on software? I’ll keep reporting back to this thread on how I make out. I definitely possess the skills to create imagery like this on my own, but I’m thinking there has to be a better way than doing it manually. Maybe I’ll figure it out.

[attachment file=75812]


What if you made a negative image of your troublesome lithophane image?

Edit: Nevermind, I don’t think that would solve the problem, it would just change the problem.

Yup, you’re correct. And inverting a greyscale image doesn’t work either, because the eyes go black, and then the image2code kinda solutions see that as “deepest cut”.

I have a plan in mind involving the levels panel in Photoshop but it’s fall break here so I’m busy playing with the kids while trying to monitor a 3D print. There’s something my parents never said when I was a kid hey? Hahaha

Maybe. After a little while of googling, I came across this defunct website called 3dsee. You can see it here: https://web.archive.org/web/20090228100258/http://3dsee.net/Main.aspx , but it’s old and doesn’t really work. It was supposed to take photos and turn them into bump/normal maps, which should work with what you are doing. This and this are articles (from 2009) about it. This is the guy that made it; it looks like he’s a professor at Queensland University, and he either is or was commercializing it, but it’s been 10 years, so I don’t know what he’s doing. I sent him an email to see what he’s actually doing with his project and if he is considering putting it back on the internet; I’ll post an update if he responds.

Edit: I may or may not be stupid. It looks more like a 3D scanner that takes multiple images and turns them into a 3D model. Perhaps this is not very interesting or useful for your needs.

Thanks! There is/was a function in Photoshop to create bump and normal maps, but they still aren’t quite right. It’s specifically a depth map that we need for the “Photo to Code” functions. The difference is that blacks that are already halfway up the Z depth can’t be black any longer - they need to be 50% grey at the darkest point of that part of the image. I might be able to use the Levels tool in Photoshop after meticulously cutting the subject up into bits and pieces to set the max black/white for each layer of the image as I determine it.

Wish me luck!

I guess if there was an “app for that”, everyone would be doing it, right?

There is a program called CamBam that I use. It has a heightmap generator in it and gives you about 40 free sessions of use before you buy. If you never shut it down, it still counts as the same session…

I purchased it when I was working on my other routers, and it also lets you do some CAD design basics.


Thanks Mike. I’ll have a look at it.

It seems to function much the same as the others (teeth and the whites of eyes jump to the material surface). I’ve built a sample ramp graphic that I think confirms what I already understood: I need to decide in Photoshop which layer of my image stays near the material surface, and the details within it must fall within the depth range of just that layer, plus or minus one or two steps up or down. If, for example, the eyes go to black in the corners, I need to bring that up to the upper end of the greyscale (20-40%, I’m guessing).

Which means I literally need to cut photos into individual elements (eyes, nose, cheeks, hair etc…). Still working on this…
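For what it’s worth, the per-element Levels trick being described here is straightforward to sketch in Python. Everything below is hypothetical illustration - the function name and the 102-153 output band (roughly the 40-60% grey range on a 0-255 scale) are made-up values you would tune per element, not anything from Photoshop or a carving tool:

```python
def remap_levels(gray, lo_out, hi_out):
    """Photoshop-output-levels-style remap: rescale a region's 0-255
    pixel values into the band [lo_out, hi_out], so its darkest pixel
    never falls below lo_out (i.e. never reads as 'deepest cut')."""
    lo_in = min(min(row) for row in gray)
    hi_in = max(max(row) for row in gray)
    span = max(hi_in - lo_in, 1)  # avoid dividing by zero on flat regions
    return [[lo_out + (px - lo_in) * (hi_out - lo_out) // span for px in row]
            for row in gray]

# An eye region with blacks in the corners: lift the whole thing into a
# mid-grey band instead of letting any pixel hit 0 (full depth).
eye = [[0, 180, 30], [200, 255, 90]]
lifted = remap_levels(eye, 102, 153)
```

The cut-up-into-elements step is still manual; this only shows the arithmetic each masked piece would go through.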

I wish I could find a better preview method but I’m making my way through. Parts of this image have been blown out, parts flipped to negative, parts within brought back to positive but blown out more still…

[attachment file=75936]
[attachment file=75937]

Ah crap… and now that I focus on the “next area to look into”, I see her cheeks need to be flipped back to positive… What did I say earlier? If it were easy, everyone would be doing it? Hahaha… sigh.

Well, it’s not far off, but it’s not terribly close either. The eye and cheek masses look okay - I need to find a way to get better detail in the eyes: a finer bit, or a third pass with some drawn details just a step or two darker than the surrounding greys. The lips did just what they were supposed to do - unfortunately not what my brain was thinking, but definitely how the image was set up. I’m kinda impressed how the stray strands of hair crossing the face popped up and held.

This is 15 cm x 15 cm. The roughing pass took only 25 min - it would have been quicker without the margin applied. The finish pass is only going to take about an hour. 1/8” ball mills.

[attachment file=75952]

The problem is that depth maps are generated from actual 3D models, not 2D images. Most applications that create cut paths from 2D images are making them for lithophanes, which look weird when not backlit. For a lithophane, the darker parts will be thicker and the lighter parts thinner, so less or more light gets through the material accordingly.
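The lithophane mapping described above is essentially one line of arithmetic: darker pixel, thicker material. A hypothetical sketch - the 0.6-3.0 mm thickness range is just a plausible-looking choice for illustration, not a value from any particular tool:

```python
def lithophane_thickness(gray, t_min=0.6, t_max=3.0):
    """2D list of 0-255 pixel values -> material thickness in mm.
    Black (0) gets the full t_max thickness (blocks the most light);
    white (255) gets the thin t_min floor (lets the most light through)."""
    return [[t_min + (1.0 - px / 255.0) * (t_max - t_min) for px in row]
            for row in gray]
```

Which is exactly why these tools fail for a positive carve: the mapping only makes sense when the result is judged by transmitted light, not by surface shape.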

I figure I’m still way shy of the midpoint here in terms of effort - the aim being to find a way to do it without having to 3D model the photograph. While that would be the easiest way to get the file I need for the cut, the modelling process is not easy. For me at least… and I’ve been at it a while - I’ve just never had to model human faces; we could always get away with just loading an image onto the model.

It sure sounds like I missed out on a gem in ArtCAM… to think, had I got off my duff and built the MPCNC just a few months sooner, I might have caught ArtCAM in time.

Or maybe ArtCAM just does what all of those others do, but a bit differently, which nets an acceptable result.

I’m not a software developer, but I think it “should” be easy to remap those pixel-depth pieces of software so they understand that if a dark area is surrounded by a lighter area, that dark area probably needs to be recessed relative to the lighter areas around it, not sent right to the lowest Z depth.
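That “recessed relative to the surroundings” idea can be prototyped by comparing each pixel to a local average instead of treating its brightness as an absolute depth. A rough Python sketch, where the function name, the radius, and the 0.5 scale factor are all made-up illustration (a real tool would want a much larger, smoother neighborhood, e.g. a wide Gaussian blur):

```python
def relative_depth(gray, radius=1, scale=0.5):
    """2D list of 0-255 values -> values judged against local context.
    Each pixel becomes its neighborhood mean plus a scaled-down offset,
    so a dark patch inside a light region reads as 'a bit recessed',
    not 'full Z depth'. Pure-Python box average, clamped at the edges."""
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            local_mean = sum(vals) / len(vals)
            row.append(local_mean + scale * (gray[y][x] - local_mean))
        out.append(row)
    return out
```

On a black pixel surrounded by light grey, this pulls the black up toward the neighborhood average instead of leaving it at 0, which is exactly the behavior being asked for - though choosing the neighborhood size and scale per feature is where the hard judgment calls would still live.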

[tangent]

We were out at our local museum earlier this week, and they have a new kids’ toy that blew my mind. Using an X-Box camera setup aimed down at a sand table, they had software that would map the hills and valleys in the sand and project an “accurate” topographical map over them, complete with green fields, waterways/rivers, and animals. When you shifted the sand around, it would rescan in real time and move the hills and animals, and if you moved where a lake once was, the water would actually “run” to the new low points. It was amazing!

[/tangent]

I think I saw a demo vid of the projector table you are talking about. How freaking cool. I would love to have one.

I’ve used an X-Box Kinect as a 3D scanner using the example software from Microsoft. It works OK, but I haven’t really tried to do anything with the models.


I scanned with the X-Box Kinect v2 for quite a long time and found that third-party software from major manufacturers of 3D scanners provides better results in terms of scan quality and accuracy of the 3D models. Try the software from Faro, Artec3D, or Scanect (personally, I used the trial version of Artec Studio).

How about converting to B/W in Photoshop, then using the alpha channel and changing the depth settings?

It doesn’t work the way you think it would. Shadows aren’t depth. That works alright for lithophanes, but not for real 3D geometry.


You would need a 3D scanner.
2D photographs are completely different information: they capture light and color and have nothing to do with form.
Imagine this…
A person in a white shirt vs. that same person in a black shirt.
Although there are some pretty cool filters that can turn 2D images into 3D, I’m not sure you could find one for this application that outputs a depth map instead of a color image.
A 3D scanner is really the best route, IMHO.

Does a “pin art” 3d scanner exist?

I couldn’t find any with a quick search, but that might work?