Questions about Compass

Ok nevermind. On average, the polling time for a full frame seems to be ~47.8 ms. That means with 4 sensors, you could theoretically poll at a max of about 5 Hz. Not amazing, but not terrible either for simple edge detection.
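For reference, that number is just the frame time multiplied out over the four sensors, assuming they’re read one after another rather than in parallel:

$$
f_{\text{update}} \approx \frac{1}{N \cdot t_{\text{frame}}} = \frac{1}{4 \times 47.8\ \text{ms}} \approx 5.2\ \text{Hz}
$$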

1 Like

This is looking good!

What do you think about the method of grabbing images from only the one or two sensors whose liftoff condition changed most recently, as described above?

When reading just two sensors instead of four, the speed would double.

What SPI frequency do you use to read out the sensors? It looks like the sensors support up to a 2 MHz SCLK. If you currently use a lower frequency, that would be another way to get a higher readout rate.
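In case it helps, here’s a minimal sketch of pinning the clock to 2 MHz per transaction with the standard Arduino SPI transaction API. The chip-select pin is a placeholder, and the register address (0x50, Motion_Burst) and the ~35 µs wait are what I remember from the PMW3360 datasheet, so please double-check them:

```cpp
#include <SPI.h>

const int CS_PIN = 10;  // placeholder chip-select pin

// Burst-read `len` bytes of motion data at an explicit 2 MHz, SPI mode 3.
void readMotionBurst(uint8_t *buf, size_t len) {
  SPI.beginTransaction(SPISettings(2000000, MSBFIRST, SPI_MODE3));
  digitalWrite(CS_PIN, LOW);
  SPI.transfer(0x50);        // Motion_Burst register on the PMW3360
  delayMicroseconds(35);     // wait tSRAD before clocking data out
  for (size_t i = 0; i < len; i++) {
    buf[i] = SPI.transfer(0x00);
  }
  digitalWrite(CS_PIN, HIGH);
  SPI.endTransaction();
}

void setup() {
  pinMode(CS_PIN, OUTPUT);
  digitalWrite(CS_PIN, HIGH);
  SPI.begin();
}

void loop() {
  uint8_t burst[12];
  readMotionBurst(burst, sizeof(burst));
  delay(1);
}
```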

1 Like

I have a thought regarding the absolute positioning system:

One potential approach could be to use IR triangulation in conjunction with sensor data. By integrating an IR camera and strategically placing IR beacons on the workpiece, the machine could obtain a rough estimate of its position. Subsequently, the data from the PMW sensors could be employed to refine the exact location, drawing inspiration from how the old Wii remote operates (which, even by today’s standards, has a pretty good IR camera built in).

Another possibility is to utilize one or more Time-of-Flight (ToF) sensors from ST (https://www.st.com/resource/en/datasheet/vl53l0x.pdf). By incorporating several reflection targets/sensors and implementing a suitable algorithm, it may be feasible to achieve sufficient accuracy combined with the PMW sensor data.
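Just to sketch how simple the readout side would be, assuming something like the Pololu VL53L0X Arduino library (the exact call names depend on which library you pick):

```cpp
#include <Wire.h>
#include <VL53L0X.h>  // Pololu VL53L0X library (assumed)

VL53L0X tof;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  tof.init();
  tof.setTimeout(500);
}

void loop() {
  // Single-shot range in millimeters. Expect centimeter-level jitter,
  // so this would only ever be the coarse half of the position estimate.
  uint16_t mm = tof.readRangeSingleMillimeters();
  if (!tof.timeoutOccurred()) {
    Serial.println(mm);
  }
  delay(50);
}
```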

Alternatively, we could consider using a CIS/CCD sensor—commonly found in scanners—to capture a small section of the workpiece’s surface. This could be accomplished by either mounting the sensor on the x/y axis for automated scanning or positioning it externally and using PMW sensor data to correct any distortions caused by manual movement.
For example, scanning a previously milled section of the piece and comparing it with the G-code paths can help the machine determine its exact location.

Alternatively, a scanning approach inspired by the Polargraph design could be intriguing for x/y movement. This configuration would utilize two additional stepper motors, while the existing ones would operate in dual axes, enabling a scanning mode. To stabilize the system, you would need to secure four anchors to the corners of the build plate, which could be accomplished using clamps, screws, double-sided tape, or other fastening methods.

Next, you would connect the pull cables from each motor to the anchors and position the CNC in one corner of the workpiece. By inputting the dimensions of the work area, the CNC could effectively scan the entire surface in very high quality. Placing a reference sticker on the build plate would allow the CNC to map the entire area and recalibrate as needed. With A4-sized sensors, the scanning process could be notably efficient. The CCD could maybe also be used to improve the accuracy of the PMW sensors.

In theory, the Polargraph, when combined with the standard compensation of the Compass CNC, could be utilized in milling mode to automate machine movement rather than relying on manual control. The Compass, equipped with PMW sensors, could effectively counteract the instability inherent in the Polargraph method. However, this setup would require a total of seven stepper motors.

Here are some cool projects that use a cheap CCD sensor from old scanners, to give an idea of what’s possible. Very high image quality and accuracy are achievable.

1 Like

I have tested several generations of ToF sensors from ST for another project. They work well for things like rough distance detection. I used one for detecting hand gestures, like moving my hand towards or away from the sensor to control light brightness without needing to touch anything. This worked well.

But the absolute distance reading is too imprecise and volatile to be usable for a measurement application like on the Compass. The reading jumps back and forth by a few centimeters, depending on the lighting situation and reflections. Averaging does not help here, as the problem is not regular noise but changing lighting and reflections.

Yeah, they’re pretty volatile and inaccurate (though for the power they use they’re pretty amazing; it’s crazy what’s possible today). The question is whether these sensors need to be very accurate if you combine them with the PMW sensors. The ToF sensors would give a rough estimate of where the machine is, and then the PMW sensors would have a much easier time knowing exactly where they are based on their surface scan. But yeah, that would only really be practical for tool changes, and in that case you could just put it back in roughly the same spot anyway.

Using a CCD sensor together with the PMW sensors (for stitching the scans together etc.) would be way more elegant and would give you a high quality scan of the surface.

The machine has a lot of potential. Thanks again for making it open source.

1 Like

The only issue with this is that you can’t seamlessly switch between normal navigation mode and camera mode. You have to reset the sensor, re-upload the firmware, and then start sensing again, which takes at least 250 ms. I guess one way to go about it would be to start in navigation mode and lock in roughly to the edge using the liftoff condition, then switch to camera mode with 2 sensors to get more precise homing. But I also like the idea of using at least 3 sensors to be able to home to a corner as well. I’d have to test to see how much of an impact doubling/halving the polling speed would have.
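A rough skeleton of that two-phase idea might look like the sketch below; the helper functions are hypothetical stand-ins, not anything from the current firmware:

```cpp
#include <stdint.h>

enum class Mode { Navigation, FrameCapture };
Mode mode = Mode::Navigation;

uint8_t frame[1296];  // PMW3360 raw frames are 36 x 36 pixels

// Hypothetical stubs standing in for the real sensor routines.
bool liftoffTransition(int sensor) { return false; }  // liftoff bit just changed?
void reinitForFrameCapture(int sensor) {}             // reset + firmware re-upload, ~250 ms
void grabFrame(int sensor, uint8_t *buf) {}           // read one raw surface image

void homingStep() {
  if (mode == Mode::Navigation) {
    // Rough lock: keep navigating until a liftoff transition marks the edge.
    if (liftoffTransition(0) || liftoffTransition(1)) {
      reinitForFrameCapture(0);
      reinitForFrameCapture(1);
      mode = Mode::FrameCapture;
    }
  } else {
    // Fine homing: from here on, work from raw frames of the two edge sensors.
    grabFrame(0, frame);
    grabFrame(1, frame);
  }
}
```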

Oh man, it’s been a while since I looked into this. I’m not sure exactly. I think it’s just using the default Teensy SPI settings, set using SPI.begin().

A lot of super cool ideas to explore! I thought through a few different sensing solutions when I first started this project to try and determine what would make the most sense. The trickiest part is optimizing for cost and simplicity, which often go hand-in-hand, while still maintaining an effective approach.

I wonder how much computation is required for stitching the images from those CCD sensors, i.e. whether it could be done on a standard Arduino-style controller, or if it would require a Raspberry Pi-type computer.
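For a rough sense of the cost: stitching along the scan direction mostly comes down to finding the shift between successive line reads, which a brute-force sum-of-absolute-differences search handles. A sketch of that core search (not tied to any particular CCD driver):

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdlib>

// Returns the shift of `curr` relative to `prev` (in pixels) that minimizes
// the sum of absolute differences over the overlapping samples.
int bestShift(const uint16_t *prev, const uint16_t *curr, size_t n, int maxShift) {
  int best = 0;
  uint64_t bestCost = UINT64_MAX;
  for (int s = -maxShift; s <= maxShift; ++s) {
    uint64_t cost = 0;
    for (size_t i = 0; i < n; ++i) {
      long j = static_cast<long>(i) + s;
      if (j < 0 || j >= static_cast<long>(n)) continue;
      cost += std::abs(static_cast<int>(prev[i]) - static_cast<int>(curr[j]));
    }
    if (cost < bestCost) {
      bestCost = cost;
      best = s;
    }
  }
  return best;
}
```

With an assumed ~2550-pixel line (roughly letter/A4 width at 300 dpi) and a ±64-pixel search window, that is on the order of 300k adds per line, which a Teensy can keep up with. Full 2-D stitching with rotation correction is where a Pi-class board starts to look more attractive.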

I’m glad! Open source makes it possible to have conversations like these, which is super awesome, and the best way to move the tech forward :slight_smile:

I’m using a desktop CNC with Mach3 and Aspire from Vectric.

This topic seems like a spindle camera topic. I have a spindle camera… a simple and cheap USB cam like this:
https://www.amazon.com/Endoscope-Adjustable-Semi-Rigid-Waterproof-Inspection/dp/B0CZHT2VHS/ref=sr_1_7_sspa?crid=1B52WR4NZKSCO&dib=eyJ2IjoiMSJ9.avyzkgdDXGFDPwinkKoezI_ZaVWxMAO77ffz_eDgPt_OZtXUhzfpaZpMHb_6MxZ2nhtTqfWDcHCKy6LWf_QvGUHbarDyXmJ46QLF1Mq1c12qYJ-vRd4YZziAct6fQAfqwzcEKscU36VTRwH4FtWPdw7c1lS11vc4FNTIEtcClFJR-_JMVIZQtGVU2sHo74_YLMrvfMPIE3ib1dLqMmdEKUMGPOea4NQt8kwbdVORC6U.QlWDPJTSf73eBglhTKdVXULkbnnkEq1xoOkd_FP4F7k&dib_tag=se&keywords=usb%2Bendoscope%2Bcamera&qid=1753925169&sprefix=usb%2Bendoscope%2Bcamera%2Caps%2C253&sr=8-7-spons&sp_csd=d2lkZ2V0TmFtZT1zcF9tdGY&th=1

(endoscope camera, USB)

These cameras are perfectly suited for edge finding, etc., and crosshairs can also be created with software.

The camera is fixed parallel to the spindle (router) with a bracket, so the offset is fixed.

I can operate a USB camera perfectly with a third-party plug-in on Mach3.

I don’t expect a small PCB to handle the image data and offset values of a camera with sufficient resolution.
For various reasons, spindle cameras require very clean, high-resolution images.

Therefore, it’s better to use a high-spec device like a laptop or smartphone for the spindle camera. Unfortunately, I don’t know how to integrate laptop software with the Compass CNC PCB.

Ok, probably a dumb question: I’m trying to visualise how V1 all goes together, in particular the lower layers that aren’t shown in the photos I’ve seen. Is it possible that there are 2 baseStage.stls down there, rather than the 1 mentioned in the readme?

1 Like

Here are some resources for spindle cameras:

1 Like

Yes indeed! Thank you for the catch

1 Like

Yeahhh, real-time camera processing is a bit too much for the Teensy to handle, unfortunately. That is a cool idea, though

Thanks - just FYI, as a noob, things are a bit confusing coming in. You probably need to have the website explain the two designs, and in particular explain that the “build instructions” are for a different design than git main (because that was really confusing). It doesn’t have to all be perfect all at once; just explain that it’s a work in progress and what’s going where.

Heard loud and clear! To be honest, I was not expecting such a high volume of interest right off the bat. There’s a lot of stuff I was hoping to procrastinate with (namely writing the build instructions) but I guess I’ll have to get to them sooner than expected :laughing:

4 Likes

Thanks - short term, a set of photos of the layers as you assemble up from the base plate would go a long way. I mostly figured it out by comparing where the holes go, but for the longest time I thought there were .stls missing :slight_smile: (that, and a “there are two designs; the build instructions are for the old one, not the current repo” note somewhere obvious).

2 Likes

Just found this video; it could be a fairly easy way to do positioning.

3 Likes

The interesting thing about those strings on the Goliath is that they can give you absolute positioning. That positioning doesn’t have to be great if it can be combined with the very high-rate, high-resolution optical flow sensors.

This is pretty common in autonomous cars. The inertial sensors and encoders are used for high precision, rapid updates. The GPS is used for absolute positioning despite how slow and low res it is comparatively.
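The simplest version of that fusion is a complementary-style filter: integrate the fast relative sensor, and let each slow absolute fix pull the estimate back gently instead of snapping to it. A minimal 1-D sketch (not from any autopilot or the Compass firmware; the gain is an arbitrary placeholder):

```cpp
struct FusedAxis {
  float position = 0.0f;  // current best estimate
  float gain = 0.05f;     // how strongly slow absolute fixes pull the estimate

  // Called at the fast sensor rate (optical flow / encoders) with the
  // incremental displacement since the last update.
  void addRelative(float delta) { position += delta; }

  // Called whenever a slow, noisy absolute fix arrives (GPS, strings, ...).
  // Nudging toward the fix bleeds accumulated drift off gradually.
  void addAbsolute(float fix) { position += gain * (fix - position); }
};
```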

I think the operational requirement for adding string probably makes it more trouble than it’s worth. But it is definitely interesting.

4 Likes

Like those endpoint-towers that have to be positioned at the correct height and so on.

But do you really need full absolute positioning? I’d guess that the drift of the mouse sensors over bigger areas is the real issue. You wouldn’t need full absolute positioning to fight that, just relative positioning without the drift.

So you could take two or three spools of wire mounted to the Compass. When you set up, you pull out the wire and fix it somewhere convenient; the height or exact position doesn’t matter. You then have a rectangular reference jig that you move the Compass along once. The firmware knows how big the reference jig is and how the spool lengths changed during the reference movement. It can then calculate the distances and angles to the wire endpoints, as sketched below.
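Geometrically that pans out as intersecting two circles per anchor. Assuming the reference move runs from (0, 0) to (d, 0) along one jig edge and the spool reads lengths ℓ_A and ℓ_B at the two ends, the anchor sits at

$$
x_a = \frac{d^2 + \ell_A^2 - \ell_B^2}{2d}, \qquad y_a = \pm\sqrt{\ell_A^2 - x_a^2}
$$

with the sign fixed by which side of the jig the anchor was parked on. Normal operation is then the same intersection run the other way around: known anchors, measured lengths, unknown machine position.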

I would say so for a lot of use cases, especially with bigger jobs where you want to have a closed path, or where you want to be able to cut in multiple passes, or where you’ve got tool changes and features that need to be referenced relative to one another.

It’s not an absolute requirement for those things, but will make certain workflows much easier, I’d imagine.

Oh wow that’s pretty shnazzy!

I explored the idea of using string potentiometers for positioning when I first started the project, but eventually went with optical flow to make setup easier and cut cost. String potentiometers are kinda crazy expensive (or at least from what I was able to find 2 years ago). I tried making my own with a badge reel and a rotary potentiometer, but it wasn’t amazing. I also couldn’t think of a good way to mount it, but the magnetic mounting to a bearing (?) is pretty clean. Could be a cool thing to explore.

Absolute positioning definitely is a plus, especially for bigger projects. The main hurdle is finding a way to do it simply, without exploding the cost of the machine.