This video came up in my video feed this morning, and it is something I have not seen before. I’ve linked the video at the section applicable to a CNC; you can watch from the beginning for more context.
The idea is that a hologram can be created by scratching black acrylic. The linked timestamp shows Matt Brand’s work. He models 3D objects, evaluates the model to generate the scratch paths, then uses a CNC (and, I assume, a diamond drag bit) to scratch the black acrylic.
I would like to, but building the software is a steep climb for me. I’d gladly play if someone has the tools available. The biggest issue for me is figuring out the math for the “correct” arc. I found this reference paper, but it makes my brain hurt. Even with the arc figured out, there are some time-consuming problems, like choosing the sampling for the points and calculating height from the triangles in an STL.
The minimum viable product would probably start with a binary bitmap image, where the white pixels are the ones you want to trace. You would start with a constant depth, and then for each white pixel create the arc toolpath for that depth and the pixel’s x,y. That would result in the smiley face, but with the correct arc it would not have that distortion.
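The MVP could be sketched roughly like this, assuming the usual scratch-hologram convention that each point’s arc is centered on its x,y with a radius proportional to depth. The radii, angles, feeds, and Z values here are placeholders I made up, not anything from Matt Brand’s setup:

```python
# Minimal sketch of the MVP: one scratch arc per white pixel, at a
# constant depth. Arcs are emitted as short G1 segments so any
# controller can run them. All numeric values are placeholders.
import math

def arc_points(cx, cy, radius, start_deg=-120, end_deg=-60, steps=12):
    """Sample a circular arc as a list of (x, y) points."""
    pts = []
    for i in range(steps + 1):
        a = math.radians(start_deg + (end_deg - start_deg) * i / steps)
        pts.append((cx + radius * math.cos(a), cy + radius * math.sin(a)))
    return pts

def gcode_for_pixels(pixels, depth=2.0, pixel_mm=0.5, scratch_z=-0.05):
    """pixels: iterable of (px, py) white-pixel coordinates.
    depth sets the arc radius (the assumed depth-to-radius mapping)."""
    lines = ["G21", "G90"]  # mm units, absolute positioning
    for px, py in pixels:
        x, y = px * pixel_mm, py * pixel_mm
        pts = arc_points(x, y, radius=depth)
        lines.append(f"G0 X{pts[0][0]:.3f} Y{pts[0][1]:.3f}")
        lines.append(f"G1 Z{scratch_z:.3f} F100")  # drag bit into surface
        for ax, ay in pts[1:]:
            lines.append(f"G1 X{ax:.3f} Y{ay:.3f} F300")
        lines.append("G0 Z1.000")  # retract between arcs
    return "\n".join(lines)
```

For the tilted-image iteration, the only change would be computing `depth` per pixel from its row instead of using a constant.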
The next iteration would be to tilt the image, so the first row would have a different depth than the final row. For each pixel, you are again determining a single toolpath for a specific x,y,depth.
Once you have that, you could create a separate tool that would create a bitmap where the intensity of each pixel represents the depth of that point. Black pixels would not get any toolpaths. You could take some linear pattern and ray trace the 3D model to determine the depth for each point. The pattern on Matt’s image is sparse, not solid. I assume that is because there is a limit to the number of scratches a surface can handle before your eyes stop seeing the details or the material just ends up tattered. Your pattern could be a grid, or some flowing lines. Each ray trace would end up with an x,y,z, where Z would be mapped to intensity in the bitmap. Your first program can interpret those pixel intensities into a depth and create an arc toolpath. I’m guessing we could find a library to do the STL format parsing and the ray tracing for us (or else OpenGL/WebGL could). Getting to this stage would be the icing on the cake.
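The intensity-to-depth interpretation could be as simple as a linear mapping. This is one possible convention, not anything prescribed by the source; the near/far depths are placeholders:

```python
# Sketch of mapping an 8-bit pixel intensity to a scratch-arc depth.
# Assumption (mine): black (0) means "no scratch", and intensity
# scales linearly between a near and a far depth in mm.
def intensity_to_depth(intensity, z_near=1.0, z_far=10.0):
    """Return a depth in mm for a pixel, or None for black pixels,
    which get no toolpath."""
    if intensity == 0:
        return None
    return z_near + (intensity / 255.0) * (z_far - z_near)
```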
I was thinking a starting point would be a program taking as input a list of x,y,z points and generating either an SVG or a G-code toolpath directly from it. Then various pre-processing could generate the points. To begin with, the arcs are circular segments of a fixed global arc angle.
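For the SVG side of that starting point, a sketch might look like the following. I’m assuming the arc is centered on each point’s x,y with radius z and a fixed global sweep angle, which is my guess at the geometry, not the “correct” arc from the paper:

```python
# Hypothetical sketch: turn a list of (x, y, z) points into an SVG of
# fixed-angle scratch arcs, one per point. Convention assumed here:
# arc centered on (x, y), radius z, symmetric about straight "down"
# in SVG's y-down coordinates.
import math

def points_to_svg(points, arc_deg=60, width=200, height=200):
    half = math.radians(arc_deg / 2)
    paths = []
    for x, y, z in points:
        a0, a1 = math.pi / 2 - half, math.pi / 2 + half
        x0, y0 = x + z * math.cos(a0), y + z * math.sin(a0)
        x1, y1 = x + z * math.cos(a1), y + z * math.sin(a1)
        # SVG "A" command: radii, rotation, large-arc flag, sweep flag.
        paths.append(
            f'<path d="M {x0:.2f} {y0:.2f} A {z:.2f} {z:.2f} 0 0 1 '
            f'{x1:.2f} {y1:.2f}" stroke="white" fill="none"/>'
        )
    body = "\n".join(paths)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n{body}\n</svg>')
```

An SVG preview like this would also double as a cheap first pass at the simulator idea, since you can eyeball the scratch pattern before cutting.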
Then as an improvement, the arcs can be modified into non-circular curves to decrease distortion. (I’m still trying to wrap my head around this one. It feels like it might be overconstrained, and not quite possible unless you use multiple parallel curves.)
Then as a further improvement, the starting and ending angle can be specified for each point, to produce the occlusion effect.
Then as a further improvement, a simulator to predict the visual effect you would get. Or maybe this should come first.
I was also thinking it should be possible to create a sundial with this technique. To work, the viewing angle would need to be constrained with a peep-hole or something. Maybe with a webcam you could enjoy the full anachronism of using modern technology to tell time by the sun.
I think you just have to constrain the problem to a certain viewing angle, or to light coming from only one place, and be OK with some distortion in any other situation.
That was my first step. The stereo images he made were easy enough to see. Dealing with tiny scratches in black acrylic sounds frustrating. I would only want to do that once the simulator said it would work.
Having thought about it, I’m not convinced figuring out the arc is necessary. Take an object in 3D space, an axis of rotation for that object, and a viewing plane. For each point you want on an arc, rotate the object through a set of frames, and project the point onto the viewing plane. The result will be a series of points that lie on the arc. More frames in the rotation create a smoother arc, and the resulting points can be simple G1 G-code moves that scribe the arc.
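The rotate-and-project idea above can be sketched in a few lines. I’m assuming a vertical (y) rotation axis and an orthographic projection that simply drops z; both are my choices, and the sweep angle and Z values are placeholders:

```python
# Sketch of the rotate-and-project approach: instead of deriving the
# arc analytically, rotate the 3D point through a series of frames and
# project each frame orthographically onto the viewing plane (z = 0).
import math

def arc_by_rotation(point, sweep_deg=60, frames=12):
    """Return projected (x, y) samples of point = (x, y, z) rotated
    about the y axis from -sweep/2 to +sweep/2."""
    x, y, z = point
    samples = []
    for i in range(frames + 1):
        a = math.radians(-sweep_deg / 2 + sweep_deg * i / frames)
        # Rotate about the y axis, then drop z (orthographic projection).
        xr = x * math.cos(a) + z * math.sin(a)
        samples.append((xr, y))
    return samples

def samples_to_g1(samples, scratch_z=-0.05):
    """Turn the sampled arc into simple G1 moves."""
    lines = [f"G0 X{samples[0][0]:.3f} Y{samples[0][1]:.3f}",
             f"G1 Z{scratch_z:.3f}"]
    lines += [f"G1 X{sx:.3f} Y{sy:.3f}" for sx, sy in samples[1:]]
    lines.append("G0 Z1.000")
    return lines
```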
But there are other problems in reproducing something similar to Matt’s work that seem complicated to me. First, there is the point selection. It appears that he applies a monochrome texture to his objects and uses the “white” in that texture to identify the points. Second is the highlight and shadow. In many of the knots, a sweep of highlight or shadow can be seen during the rotation. Assuming this is reproduced in the CNC version, it implies he is using some sort of lighting model and varying the Z depth/pressure based on the intensity of the light at each point.
You would basically ray trace a bunch of points from different angles. If a point was visible from multiple angles, you could connect where those rays intersected the surface to form the arc. At angles where the point was not visible, you could just lift the pen to create the occlusion.
The trade-off is that it is more computationally intensive. But even if it took a few minutes (which it shouldn’t), it would be fine.
The user would have to supply a 3D model and choose a texture to map. The program could use that to generate an SVG or toolpath.
Makes sense to me. It seems solved, so I’m no longer obsessing over it. I’ve lost interest.
Matt Brand only rotates the object a limited amount. A person could select the points visible halfway through the rotation and ignore occlusions. Or, if a person wanted to deal with it, simply save the normal of the STL triangle where the point lives, and eliminate any points on the arc where the projection of the normal does not intersect the viewing plane.
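The normal-culling idea could be sketched by carrying the facet normal through the same rotation as the point and dropping any frame where it faces away from the viewer. The viewer looking down the +z axis and the y rotation axis are my assumptions here:

```python
# Sketch of occlusion via the STL facet normal: rotate the normal with
# the point, and keep a sample only when the rotated normal faces the
# viewer (assumed to look along +z). Skipped samples become pen lifts,
# producing the occlusion gap in the scratch arc.
import math

def visible_arc(point, normal, sweep_deg=60, frames=12):
    x, y, z = point
    nx, ny, nz = normal
    kept = []
    for i in range(frames + 1):
        a = math.radians(-sweep_deg / 2 + sweep_deg * i / frames)
        c, s = math.cos(a), math.sin(a)
        xr = x * c + z * s        # projected x after the y-axis rotation
        nzr = -nx * s + nz * c    # z component of the rotated normal
        if nzr > 0:               # facing the viewer: keep the sample
            kept.append((xr, y))
        # else: pen up here, creating the occlusion
    return kept
```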