Feature suggestion

Hi there!

I came across your project while looking for a handheld CNC, and I really like what you’ve implemented so far. I do, however, have two suggestions for possible improvements, and I’d love to hear your opinion on them.

1. Downward-Facing Camera for Setting Origins
I’d like to add a downward-facing camera to help set the X and Y origins. For example, it could be mounted at the top edge. A simple USB webcam or a USB microscope with a short focal length could work well for this. The goal would be to view markings on the work surface and use them to accurately set the X and Y origins.

2. Smartphone as the User Interface
I’d also like to replace the built-in screen with a smartphone connected via USB OTG to both the camera and the Teensy. The smartphone could access the webcam using the Android Camera2 API and camera HAL. (I’m less familiar with the iPhone ecosystem, but I imagine something similar could be possible there as well.)

Here’s how I imagine the workflow:

  • Mark the work surface using a ruler.

  • Move the camera roughly over the first marking (doesn’t have to be perfectly centered).

  • Select the marking on the smartphone screen and set it as X0.

  • Repeat the process to set X1, Y0, and optionally Y1.

  • Once the origin is defined, select the G-code file on the smartphone and send it to the Teensy to begin working.
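To make the origin-setting step concrete: selecting a marking on the phone screen boils down to a coordinate transform. Here's a minimal Python sketch of the idea — the camera offset, image scale, and resolution are all hypothetical values that would come from a one-time calibration, not from the actual firmware:

```python
# Sketch: map a tapped pixel on the camera image to machine X/Y.
# CAMERA_OFFSET_MM and MM_PER_PIXEL are hypothetical calibration values.

CAMERA_OFFSET_MM = (42.0, 17.5)   # camera center relative to tool center (X, Y)
MM_PER_PIXEL = 0.05               # image scale at the focused work distance
IMAGE_SIZE = (1280, 720)          # webcam resolution in pixels

def pixel_to_machine(px, py, machine_x, machine_y):
    """Machine coordinates of the work-surface point under pixel (px, py)."""
    cx, cy = IMAGE_SIZE[0] / 2, IMAGE_SIZE[1] / 2
    dx = (px - cx) * MM_PER_PIXEL          # offset from image center, in mm
    dy = (py - cy) * MM_PER_PIXEL
    # Point under the pixel = current tool position + camera offset + pixel offset
    return (machine_x + CAMERA_OFFSET_MM[0] + dx,
            machine_y + CAMERA_OFFSET_MM[1] + dy)

# Setting X0: the user taps the marking at pixel (700, 400) while the
# machine reports position (10.0, 20.0).
x0, y0 = pixel_to_machine(700, 400, 10.0, 20.0)
```

The marking doesn't need to be centered in the image, which is why the "roughly over the first marking" step above is enough.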

I’d love to hear your thoughts on this approach!

During the milling process, the smartphone would display everything that was previously shown on the Teensy's screen. So the phone would act as a UI only.

Hey @flexbex42 , really appreciate you sharing these ideas!

This definitely has some appeal, and could maybe also be used to get a glimpse of what is being cut while operating (right now everything is pretty occluded). The tricky thing is integrating a camera with the microcontroller. This is a pretty computationally expensive thing to do, which microcontrollers are generally not amazing at. I do know that there are ESP32-CAM boards that seem to work decently.

Very cool. It reminds me of a DJI drone setup, where the controller connects to your phone to display what the drone is seeing. That works for both Android and iPhone, so it should theoretically be possible here too. I don’t have personal app-development experience, but it seems like a neat path for anyone who wants a richer UI without extra hardware.

“Could maybe also be used to get a glimpse of what is being cut while operating.”

Yes, that’s actually where my original idea came from. I saw in a product video of the Shaper Origin that they have such a camera. But I would be concerned about how to keep the lens clean in that scenario. That’s why I thought it might be better to place the camera away from the cutting area.

“This is a pretty computationally expensive thing to do, which microcontrollers are generally not amazing at.”

Yes, I’m aware of these limitations. I already have an ESP32 with a camera, but I think it would be overwhelmed if it had to generate real-time pulses for the stepper motors while also operating the camera. That’s why I suggested using a smartphone instead.

“I don’t have personal app-development experience.”

I actually have some experience. I did app development a few years ago. I think I’ll create a mockup next month. I just wanted to make sure you don’t already have something similar in the pipeline, so we don’t end up developing the same feature twice.

1 Like

Recent ESP32s have parallel/DMA units that can do this with relatively low overhead. The real problem with mixing an ESP32 camera with the rest of the Compass is the low number of GPIOs (compared to a Teensy).

1 Like

Oh ok. I’m actually really new to the ESP32 world, so I’m not fully aware of its features. But it’s a good point that they don’t have enough GPIOs. What do you think of the smartphone approach? It makes building UIs relatively easy, and no new hardware has to be bought — besides the camera, of course. It could also build on a lot of existing libraries and is compatible with a wide variety of cameras.

I used to work on a little microcontroller called IOIO (pronounced like yo-yo). It would run GPIO from serial commands sent over USB. It was pretty slick. I don’t think there are many better values for robot control than a slightly used Android phone: it has a lot of built-in sensors and human interfaces, along with a very fast set of processors.

They aren’t as easy to work with reliably as microcontrollers. But they are very powerful.

2 Likes

That would be awesome! I’ve thought about phone integration before, but like I said — zero experience — so it’s just been on the backlog.

Hmm how many GPIOs does the camera require?

Builtin sensors are definitely nice… :eyes:

I like the idea of the camera as well. We could use it to solve the axis drift that I had proposed using an edge for. If I take my plywood sheet and draw ruled lines in the places where I want to keep accuracy, the machine can use those with the camera to lock in on a given axis. This would pretty much solve the large-work problem for the cost of drawing some reference-line landmarks. I remember some questions about linear vs. rotational accuracy — this solves both.
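The drift-correction idea could work roughly like this. A minimal Python sketch, assuming the camera sits over the tool, the line-detection step (e.g. with OpenCV) is done elsewhere, and all names and numbers are illustrative rather than anything from the Compass firmware:

```python
# Sketch: correct accumulated Y-axis drift from a ruled reference line.
# Assumes the camera center coincides with the tool position for simplicity.

MM_PER_PIXEL = 0.05     # hypothetical calibrated image scale
IMAGE_HEIGHT = 720      # pixel rows in the camera image

def drift_correction(detected_line_y_px, expected_machine_y, machine_y):
    """Correction (mm) to add to the controller's Y estimate.

    detected_line_y_px: pixel row where the reference line is detected
    expected_machine_y: known machine Y of the drawn reference line
    machine_y:          Y position the controller currently believes it is at
    """
    center = IMAGE_HEIGHT / 2
    # Where the camera actually sees the line, relative to image center
    observed_offset_mm = (detected_line_y_px - center) * MM_PER_PIXEL
    observed_machine_y = machine_y + observed_offset_mm
    return expected_machine_y - observed_machine_y

# The controller thinks it is at Y=100.0, the line drawn at Y=100.0 shows up
# 40 px past the image center -> 2 mm of drift to subtract.
corr = drift_correction(400, 100.0, 100.0)
```

The same one-axis check would apply to X with a vertical reference line, which is what makes linear and rotational accuracy both recoverable.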

There are definitely some UI and reference pieces to write, but it really unlocks the device.

For me, +1 on the cheap android phone interface.

There seems to be some confusion about which Android phones actually support UVC.

UVC, or Universal Video Class, is a USB standard that enables video devices like webcams to stream video to a computer using generic, built-in operating system drivers.

Since Android 14 (released in 2023) there have been some improvements on that front.
Also, OTG doesn’t work on all phones, but that only affects really old ones.

For the UVC compatibility issue, a workaround would be to use the phone’s macro lens directly. But the macro cameras of many phones are really bad, and the phone would need to be mounted horizontally at a distance of at least 10–15 cm from the work plane.
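For a rough sense of whether 10–15 cm still gives usable resolution, here's a back-of-envelope Python sketch — the field of view and sensor width are hypothetical "typical phone" values, not a specific device:

```python
import math

# Back-of-envelope: image scale of a phone camera held over the work plane.
# FOV_DEG and WIDTH_PX are hypothetical typical values for a phone camera.
FOV_DEG = 70.0        # horizontal field of view
WIDTH_PX = 4000       # horizontal sensor resolution

def mm_per_pixel(distance_mm: float) -> float:
    """Image scale on the work plane at the given camera height."""
    visible_width_mm = 2 * distance_mm * math.tan(math.radians(FOV_DEG / 2))
    return visible_width_mm / WIDTH_PX

scale_10cm = mm_per_pixel(100.0)   # roughly 0.035 mm per pixel
```

Even at 10 cm that works out to a few hundredths of a millimeter per pixel, so the optics rather than the resolution would be the limiting factor — which matches the concern about poor macro cameras.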

To be clear, they are definitely not an easy thing to convert to. Android is a huge solution space, and porting everything CAM has done to Android would set the project back by months or even years.

They are neat, and they should be leveraged whenever possible. I just want to point out that it may not be feasible in the near or mid term.

Oh, my idea was not to replace the Teensy at all — that would definitely be too much work. My idea is just to use the smartphone as an extended user interface: basically, everything that is routed to the small screen could be rerouted via OTG to be displayed on the smartphone, plus some extras. The Teensy has a dedicated library for this. But the amount of work involved is hard to estimate for an outsider.
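To illustrate the "UI only" split, here's a sketch of a trivial framed message format the Teensy could emit over USB serial and the phone app could parse. The message types and layout are made up for illustration; the real firmware would define its own protocol:

```python
# Sketch: framed serial messages for mirroring the display to a phone.
# Frame layout and message IDs are hypothetical, not from the firmware.
import struct

MSG_STATUS = 0x01   # hypothetical: a text line for the status area
MSG_POS    = 0x02   # hypothetical: current X/Y position

def encode(msg_type: int, payload: bytes) -> bytes:
    """Frame layout: [type:1][len:2 little-endian][payload][checksum:1]."""
    checksum = (msg_type + sum(payload)) & 0xFF
    return struct.pack("<BH", msg_type, len(payload)) + payload + bytes([checksum])

def decode(frame: bytes):
    """Parse a frame back into (type, payload), verifying the checksum."""
    msg_type, length = struct.unpack_from("<BH", frame)
    payload = frame[3:3 + length]
    assert frame[3 + length] == (msg_type + sum(payload)) & 0xFF, "corrupt frame"
    return msg_type, payload

# Teensy side packs a position update; phone side unpacks and renders it.
frame = encode(MSG_POS, struct.pack("<ff", 55.0, 39.5))
msg_type, payload = decode(frame)
x, y = struct.unpack("<ff", payload)
```

The point of the sketch is that the Teensy keeps doing all motion control and only streams display state, so the phone stays a passive UI.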

2 Likes