Preview G-code projected onto a live camera feed

I’ve built a simple web app called camNC. It overlays CNC G-code onto a live camera image and corrects the perspective based on a given stock height. The idea is to help visualize tool paths—basically drawing the G-code right on top of the camera feed to avoid mistakes like hitting fixtures, wrong rotation, or misalignment. It also lets you jog to any point just by clicking on the camera image (thanks to a FluidNC WebUI V3 integration).

It’s still highly experimental but already quite useful, IMO. If you’d like to check it out, it’s on GitHub here:

Setup requires mounting a camera / old phone over the workspace, calibrating it through the app, and placing four ArUco markers on (or next to) the wasteboard.
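
For the curious, the marker detection side of this is standard OpenCV ArUco fare. Here’s a minimal Python sketch of the idea (the dictionary and file name are placeholder assumptions, not necessarily what camNC uses):

```python
import cv2

# Any frame from the mounted camera (placeholder path)
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Assumed marker dictionary; the app may use a different one
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# corners: one 4x2 array of pixel coordinates per marker; ids: detected marker IDs
corners, ids, _rejected = detector.detectMarkers(gray)
print(ids)  # expect the 4 IDs placed around the wasteboard
```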

Example overlays:
[image: grid overlay]

(other images moved to GitHub since I can only upload one here)

12 Likes

That’s really cool! I like seeing the LR4 in there. It’s a neat application of a camera.

What kind of computer vision are you using? Is it traditional OpenCV or some type of NN?

1 Like

Man, I really love it when people post for the first time and are like: Here, I made something fabulous, now you can use it as well…

Does it work if you do not use FluidNC? Can I overlay my .nc files as well (basically G-code with a different file extension)?

3 Likes

Thanks. This is “just” classic OpenCV stuff. Essentially: estimating camera intrinsics from chessboard calibration and extrinsics (relative to machine coordinates) from ArUco markers on the wasteboard. That makes it possible to calculate which camera sensor pixel sees a given (x, y, z) position in machine coordinates.
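
In Python/OpenCV terms the pipeline looks roughly like this (camNC itself runs in the browser, so treat this as an illustrative sketch with placeholder numbers, not the actual code):

```python
import cv2
import numpy as np

# Intrinsics: normally from cv2.calibrateCamera() on a set of chessboard views;
# placeholder values here
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)  # placeholder distortion coefficients

# Extrinsics: known marker positions in machine coordinates (placeholders)...
machine_pts = np.array([[0, 0, 0], [400, 0, 0], [400, 400, 0], [0, 400, 0]], dtype=np.float64)
# ...matched with where those markers were detected in the camera image
pixel_pts = np.array([[210, 980], [1700, 960], [1680, 120], [230, 140]], dtype=np.float64)
ok, rvec, tvec = cv2.solvePnP(machine_pts, pixel_pts, K, dist)

# With K, rvec, tvec known, any machine coordinate maps to a sensor pixel,
# e.g. x=100, y=50 at a stock height of z=18:
pixel, _ = cv2.projectPoints(np.array([[100.0, 50.0, 18.0]]), rvec, tvec, K, dist)
print(pixel.squeeze())  # the pixel that sees this machine position
```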

Internally it uses a 3D scene consisting of a simple rectangle at the stock height (which the user has to provide) and looks up the texture in the camera feed via an OpenGL shader, using regular perspective camera projection (just the reverse of what one would do to render what a perspective camera sees of a 3D scene).
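
Since the lookup surface is a plane at the stock height, the machine-plane-to-pixel mapping collapses to a homography. A CPU-side sketch of the same inverse lookup, continuing from the variables in the sketch above (the shader does this per fragment on the GPU):

```python
import cv2
import numpy as np

def plane_homography(K, rvec, tvec, stock_z):
    """Homography mapping machine-plane (x, y) at height z=stock_z to camera pixels."""
    R, _ = cv2.Rodrigues(rvec)
    t = tvec.reshape(3)
    # For points (x, y, stock_z): pixel ~ K @ (x*r1 + y*r2 + stock_z*r3 + t)
    return K @ np.column_stack((R[:, 0], R[:, 1], R[:, 2] * stock_z + t))

H = plane_homography(K, rvec, tvec, stock_z=18.0)  # stock height supplied by the user

# Rectified top-down view of a 400 x 400 mm area at 1 px/mm: for every output
# pixel (machine x, y), look up the camera pixel H @ (x, y, 1) -- exactly the
# "camera projection run backwards" described above.
top_down = cv2.warpPerspective(frame, H, (400, 400),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```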

I might experiment with some NN models later, e.g. for depth estimation. That might enable features like measuring stock dimensions automatically or “seeing through” the machine beam by reusing old pixels for hidden areas.

Technically this does not depend on FluidNC - it’s just a small integration for convenience. You could also copy & paste zeroing commands manually.
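
For anyone wiring this up themselves: the integration boils down to sending plain GRBL-style commands to the controller. A hypothetical sketch of click-to-jog over a WebSocket (host, port, and feed rate are all assumptions for your own setup):

```python
import asyncio
import websockets  # pip install websockets

async def jog_to(x: float, y: float) -> None:
    # Assumed endpoint -- adjust host/port to wherever your FluidNC listens
    async with websockets.connect("ws://fluidnc.local:81") as ws:
        # GRBL-style jog: absolute (G90), millimeters (G21), assumed feed rate
        await ws.send(f"$J=G90 G21 X{x:.3f} Y{y:.3f} F1000\n")
        # Zeroing works the same way, e.g.: await ws.send("G10 L20 P0 X0 Y0\n")

# e.g. after mapping a clicked camera pixel back to machine coordinates:
asyncio.run(jog_to(100.0, 50.0))
```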

3 Likes

Right up my alley! Thanks for the explanation. That’s good quality stuff. I have been doing that kind of computer vision for a long time and I haven’t learned how to do the NN stuff. It seems like the right solution to me. But if all you have is a hammer, everything looks like a nail. This is so close to how I would have done it that it makes me wonder if we worked together or just read the same textbook.

1 Like