My setup is MPCNC Primo kit with SKR PRO1.2 from v1eng shop.
I found that the default steps-per-mm parameter of 80 does not result in the desired movement. If I set it to 50 via the LCD screen, it's precise. I used the following formula:
new_steps_per_mm = current_steps_per_mm * (distance_it_should_have_moved / distance_it_actually_moved)
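As a quick sanity check, the formula above can be written as a small function (the 80 → 50 numbers below are the ones from this thread; the 100/160 mm test move is just an illustrative pair that produces them):

```python
def new_steps_per_mm(current_steps, commanded_mm, measured_mm):
    """Recalibrate steps/mm from a single test move.

    If a commanded move travels too far, the firmware is sending
    too many steps per mm, so we scale the current value down by
    the ratio of commanded to measured distance.
    """
    return current_steps * commanded_mm / measured_mm

# Default 80 steps/mm; commanded 100 mm but the axis moved 160 mm
print(new_steps_per_mm(80, 100, 160))  # -> 50.0
```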
When the board is restarted, this parameter (and the others, e.g. max feed rate) reverts to the default value.
I tried choosing [Save to EEPROM] but I get an [EEPROM disabled] message.
I read in similar threads that I can set it with the M500 command from software like Repetier.
Is it the right way?
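For reference, that sequence would look something like this from a G-code console (the X/Y values here are just the 50 steps/mm figure from this thread; adjust per axis as needed):

```gcode
M92 X50 Y50   ; set steps/mm for X and Y
M500          ; save settings to EEPROM
M501          ; reload from EEPROM to verify
```

Note that M500 stores to the same EEPROM the LCD option uses, so if the firmware was compiled with EEPROM support disabled, M500 will report an error as well.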
No, I have not flashed it. I figured the kit comes preflashed with the correct firmware.
I’m currently using series wiring but I ordered some endstops that I plan to add.
Noise is fine as long as it still moves the right distance. We do not use any of the advanced TMC features in CNC because quiet is less important than strong.
I’m using the current setup as an XYZ positioning system with a light camera on top. The original configuration had more than enough power to move the camera.
If I do want to reduce the noise, can I lower the driver current? Its value is 900 for the XYZ axes now. Not sure what the value was in the previous firmware.
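If the TMC drivers are wired for UART/SPI control (as they typically are on SKR Pro kits with Marlin's TMC support enabled), the run current can usually be changed at runtime rather than by recompiling; a sketch, with 500 mA as an example value:

```gcode
M906 X500 Y500 Z500  ; set driver run current (mA) per axis
M122                 ; report TMC driver status to confirm
```

Without working EEPROM this change will also reset on reboot, so it would need to be re-sent each session (e.g. from a start script in Repetier).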
On 3D printers, StealthChop is what makes them very quiet. But they switch over to SpreadCycle above a certain speed, like 100 mm/s. Our CNCs are much larger, and we often need the most torque when moving at 8 mm/s while milling, so we turned StealthChop off. You would have to enable it, set the threshold for changing to SpreadCycle above 50 mm/s, and then recompile and flash the board again.
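A sketch of what that recompile would touch in Marlin's Configuration_adv.h (exact macro names vary between Marlin versions; the 50 mm/s figure is the threshold suggested above):

```cpp
// Enable StealthChop on the motion axes
#define STEALTHCHOP_XY
#define STEALTHCHOP_Z

// Hybrid mode: StealthChop below these speeds, SpreadCycle above (mm/s)
#define HYBRID_THRESHOLD
#define X_HYBRID_THRESHOLD 50
#define Y_HYBRID_THRESHOLD 50
#define Z_HYBRID_THRESHOLD 50
```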
What kind of project are you working on? It seems neat.
Thank you all for the prompt and detailed answers!
I want to try to refrain from recompiling the firmware, but I will try available settings.
Lowering the current from 900 to 500 did not affect the sound.
I didn’t find “cool and quiet” in the settings.
The project is in computer vision. Accuracy of the computer vision algorithms depends on many factors, including lighting conditions and distance of the object from the camera.
Using the XYZ positioning system allows me to set up more accurate experiments. I can change environment conditions while positioning the camera at the exact same locations over and over again, and also automate the process.
Well, now I am even more curious. I am a software engineer in robotics (usually unmanned ground vehicles) and I have done a lot of computer vision. Mostly mono-camera work like pedestrian detection and lane or unmarked-road detection. One heroic project doing forward collision warning with a mono camera without object detection. I have done my fair share of camera calibrations. Is this for work? Proprietary stuff? I don’t want to pressure you for more details if you are worried about that.
Wow, sounds so interesting! Now it’s an even hotter field with all the machine learning stuff advancing so fast. I guess you can call it proprietary. Let’s not bore the folks who will read this thread later with camera calibrations, intrinsics, and extrinsics; I will send you a PM.
I might not understand all the “down in the weeds” elements, but if you’re able to share I would be interested in following this conversation. Maybe in a more targeted topic.
Sorry, Heffe is right: StealthChop. There is also the holding-torque multiplier; you can turn that way down as well if there is little static load. I have been bouncing between boards, drivers, and firmware lately, and it is starting to get jumbled.