Creating track from video motion tracking

Hey folks: New to this community. I’m following an instructable to build my own version of this table, and have become obsessed with the following idea. I’m a bit confused as to the overall workflow, and how my idea might fit into it.

I want to create a workflow where I can execute motion tracking on a point in a video (specifically, a double pendulum, or those used car sales windsocks lol), export that motion tracking data as a vector path, and have the marble draw out that pattern. Or that’s how I’m thinking about it, theoretically.

As best as I can tell, most of these tables use either .thr (theta-rho) or g-code files, but I have minimal experience with those, and don’t understand the basic workflow for even something like Sandify.

I know I’m starting from scratch here, so I appreciate any help that can be provided!

Examples of this:

  • Pendulus (for one hour) (1 hour of pendulus; fuzzy, but still satisfying.)

  • AppDynSys : Double pendulum : SDIC (clearer visualization, also has one of a triple pendulum.)

  • The Double Pendulum Fractal (has some description of the math, and of plotting the position on XY, which would also be a cool visualization in addition to the actual physical motion. There’s also a cool color visualization which would be neat to incorporate into the lights!)

Sorry for the boldness of the last few lines! I didn’t know markup was available, and I can’t edit!

Assuming you are using a rectangular (X/Y) implementation of a sand table like ZenXY, then all you need to do is render your SVG out to g-code using just about any CAM solution. How you deliver that g-code to the control board will depend on your build, but SD card, USB, and WiFi solutions are available.

For doing the CAM, I would recommend Lightburn. It is not free. Lightburn has been optimized for handling complex/detailed SVG files. It is designed for laser cutters and laser engravers, but it will generate g-code you can use in a rectangular sand table. I’ve imported SVG paths with over 500,000 segments without any issues.

My second choice might be Inkscape (free). I understand there is a g-code add-on (I’ve never used it), and Inkscape also does a good job with larger SVG files.

Thanks Robert! (Good name!)

I am a student at the moment, so I have access for an unknown amount of time to tools like Fusion and Adobe products. I’ve used Inkscape before, so it’s a good reminder that that works too.

Yes, the table is rectangular, so I am more comfortable with g-code simply through the familiarity that comes from 3D printing.

I would assume there’s a way to export SVG data from premiere or after effects, but I’m only finding how to import data to use for motion… Any thoughts on that?

For this particular application, Fusion is likely a non-starter. With all of its constraint logic, it does not handle complex SVG files well.

I have no knowledge of Premiere or After Effects. Sorry. If all that is available for output is DXF format, that usually works as well and is easily converted to SVG if needed. Lightburn loads DXF. Lightburn does have some sort of educational license, but I don’t know if it will work for you. If you have laser engravers on campus, you might see what software they are running.

Estlcam (used by many on this list) might work. Kiri:Moto is also worth trying. What you can use will depend on the complexity of the paths you generate.

I did a somewhat similar project to what you are doing, but my paths were generated by the equations for things like pendulum movement, not actual movement. For my use, hundreds of thousands of segments were often generated, so, while lots of programs will load and handle SVG files, few handle paths with lots of segments.

There are free tools for inkscape to trace an svg and create gcode. They are used for plotters.

Plotters have a Z axis though, so you may not be able to use that or laser tools for a sand table.

Gcode is pretty easy though:

G1 F1200 — Sets the speed for all future moves, until the speed is set again.

G1 X10.123 Y12.345 — Moves from wherever the machine was to (10.123, 12.345). You can do this again and again. It looks like you are describing points, but really you are describing line segments, from the implicit previous point to the current point.

If the machine starts at 10,10:

G1 X10 Y10
G1 X20 Y10
G1 X20 Y20
G1 X10 Y20
G1 X10 Y10

That draws a square at 10,10, 10mm wide and tall. The first move is a no-op if the machine is already at 10,10.

If you can write a script or software to determine the X,Y points you want, then you can easily spit out the G1 commands to move around between them.
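As a minimal sketch of that idea (the function name and feed rate are just illustrative, not from any particular tool):

```python
# Minimal sketch: turn a list of (x, y) points into G1 moves.
# The feed rate and point list are made-up values for illustration.

def points_to_gcode(points, feed_rate=1200):
    """Emit one G1 line per point; the machine draws segments between them."""
    lines = [f"G1 F{feed_rate}"]  # set the speed once; it applies to later moves
    for x, y in points:
        lines.append(f"G1 X{x:.3f} Y{y:.3f}")
    return "\n".join(lines)

# The 10 mm square from the example above:
square = [(10, 10), (20, 10), (20, 20), (10, 20), (10, 10)]
print(points_to_gcode(square))
```

Swap in any point source you like (tracked video data, a simulation, etc.) and the g-code side stays this simple.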

BTW, tracking something in a video is 100x harder than just simulating the motion with some math. Especially for something like a double pendulum.


@rgbrobot , sounds like an interesting project. It makes me think of the old windows screen savers. :grin:

Blender has motion tracking capabilities and you should be able to export the paths to different formats. I haven’t had a chance to play with it yet but there are a good number of videos on YT. I look forward to seeing your results.

Don’t know your background, but if you can write Python you may want to look at OpenCV, which has a lot of motion tracking functionality baked in. I’d imagine you could add a red dot to what you want to track and have it generate XY location data on where it is on the screen. You could then scale that to match your machine. The g-code generation would be trivial from there.
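The core of that “track a red dot” idea is just finding the centroid of marker-colored pixels in each frame. Here’s a self-contained sketch using a tiny hand-made grid as the frame; in a real script you’d pull frames from the video with OpenCV (`cv2.VideoCapture`) and run this per frame:

```python
# Sketch of marker tracking: find the centroid of all pixels matching a
# marker test. The "frame" here is a hand-made 5x5 grid of (r, g, b) tuples
# so the example runs without OpenCV or a video file.

def marker_centroid(frame, is_marker):
    """Return the (x, y) centroid of marker pixels, or None if none match."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if is_marker(px):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

red, black = (255, 0, 0), (0, 0, 0)
frame = [[black] * 5 for _ in range(5)]
frame[2][3] = red  # the "dot" sits at x=3, y=2
dot = marker_centroid(frame, lambda p: p[0] > 200 and p[1] < 80 and p[2] < 80)
```

Collect one centroid per frame, scale the pixel coordinates to the table’s work area, and the rest is g-code generation.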


Well @Bigchepin, looks like I was able to track the motion. The only way I’m able to see the path is using the motion paths function, but as far as I can tell, that is only diagnostic data to let the animator see where the tracked point is moving. I’ve played with this for about 4 hours now and can’t for the life of me figure out how to export this data. I think we’re on to something, but it may be a red herring.

@jeffeb3, Maybe… The only engineering I’ve had is that of an audio engineer… not terribly helpful here. I don’t know where to begin to simulate stuff like that, but developing this workflow is helpful for other motion that I may want to simulate, but can only capture via camera.

If anyone knows Blender, I’m all ears on how to export this data to SVG, either directly or via another app. I have all the data I need, even in XY form, but I’ve scoured the web and can’t find a way to export the X/Y data.

@rgbrobot , do you mind uploading the blender file so I can try to play with it when I get an opening? You may have to zip it to upload.


Just to add to the confusion… Do you like javascript and math? There’s some TensorFlow based .js libraries available that can be used to capture/track objects. Tracking will be easier if you’re able to mark objects with distinguishing color/markers. Math for interpolating motion will depend on how accurate you want to be, and the object(s) motion speed, complexity, record frame rate and playback speed. If you get this far, then generating SVG based paths will be the easy part.
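On the “SVG will be the easy part” point: once you have (x, y) samples from any of these approaches, an SVG path is just a string. A minimal sketch (dimensions and styling are placeholders; match them to your table):

```python
# Sketch: serialize a list of (x, y) points as a single SVG polyline path.
# Width/height are arbitrary placeholder units.

def points_to_svg(points, width=100, height=100):
    d = "M " + " L ".join(f"{x:.2f},{y:.2f}" for x, y in points)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n'
            f'  <path d="{d}" fill="none" stroke="black"/>\n'
            f'</svg>')

svg = points_to_svg([(10, 10), (50, 80), (90, 10)])
```

The resulting file loads in Inkscape, Lightburn, or any of the other CAM tools mentioned above.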
