Laser engraving - not really getting great results

If it was the original bitmap or PPM, then it would have a hard transition on the pixel boundary, and sloping shapes like the circle in the Japan flag would have an approximation of a circle, but each transition would be sharp, and there would be no ringing. JPEGs aren't the only format with these artifacts, BTW, but they will always have them.

If you took the Japan flag, converted it to a bitmap, and then ran it through imagetogcode, then I'm guessing it wouldn't have the same artifacts as going from SVG to JPEG to imagetogcode.
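If anyone wants to try that, here's a rough sketch of the conversion in Python with Pillow (the filenames are just placeholders, not anything from this thread):

from PIL import Image

# Open the rendered flag (ideally exported straight from the SVG to PNG,
# never through JPEG), flatten to grayscale, then hard-threshold to 1 bit
# so the edges stay sharp with no dithering or ringing.
img = Image.open("japan_flag.png")
gray = img.convert("L")
bw = gray.point(lambda p: 255 if p > 127 else 0, mode="1")
bw.save("japan_flag.bmp")  # lossless bitmap to feed imagetogcode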

Reducing the resolution could help.

Adding a white background around the whole image will eventually help (if we can get by the buffer issue), and the width of that border would have to be larger than the distance it takes Marlin to stop. That distance (for the curious) is travel_speed^2/(2*acceleration). So at 50mm/s and 400mm/s/s, it’s about 3.125mm. A little less due to jerk (a jerk of 8mm/s would make it approximately 3.05mm).
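In Python terms the arithmetic is just this (a quick sketch using the numbers above):

# Distance needed to decelerate from travel speed down to the junction
# "jerk" speed: d = (v^2 - jerk^2) / (2 * a). With jerk = 0 this is the
# plain v^2 / (2a) stopping distance.
def stopping_distance(speed_mm_s, accel_mm_s2, jerk_mm_s=0.0):
    return (speed_mm_s**2 - jerk_mm_s**2) / (2.0 * accel_mm_s2)

print(stopping_distance(50, 400))     # 3.125 mm
print(stopping_distance(50, 400, 8))  # ~3.045 mm, the "a little less due to jerk"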

I will point out that David’s original comment was that this is better than it was before (right), so this might just be icing on the cake. There’s a lot more that goes into this than I would have thought originally.

Judging by the size we see in the horse and the grey boxes, I would agree. 5mm to be safe, then maybe we can increase the speed as well.

 

The program has a switch for 1-bit color; maybe we can convince Victor to add a few more options to reduce the number of possible laser levels without impacting the picture much. 4-bit with a laser might look just as good as 8-bit?
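If we just want to eyeball whether 4 bits would be enough before asking, here's a quick way to preview it (a sketch with Pillow; the filename is a placeholder):

from PIL import Image, ImageOps

gray = Image.open("garfield.jpg").convert("L")  # 8-bit, 256 gray levels
four_bit = ImageOps.posterize(gray, 4)          # keep only the top 4 bits -> 16 levels
four_bit.save("garfield_4bit.png")              # compare this to the original by eye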

 

Well, 0.2 does look much better than 0.3…twice as many commands as 0.3, but half as many as 0.1.

The error isn’t proportional either. There’s a point when it will just work (for a particular image). Unfortunately, you won’t know unless you run it. If Marlin can see the next long move while it’s doing the last long move, then it won’t stop. If it has <3.125mm in the buffer, then it will start slowing down.

BTW, a 5mm gap would allow a max of 63mm/s speed, assuming still a 400mm/s/s acceleration.
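That's just the stopping-distance formula turned around: the fastest you can be going and still stop inside a border of width d is v = sqrt(2*a*d). Quick check:

import math

def max_speed_for_gap(gap_mm, accel_mm_s2):
    return math.sqrt(2.0 * accel_mm_s2 * gap_mm)

print(max_speed_for_gap(5, 400))  # ~63.2 mm/s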

That is great news and it fits what I have seen previously. I have a theory that I would like some help testing if someone has time. I think the reason we are seeing the pause at each block in the Shades of Grey is all the repeated F3000 commands. Hear me out on this. Every time Marlin receives an Fxxxx command it will need to clear the buffer before it starts any other movements at the new feedrate. Marlin doesn't realize the new feedrate is the same as the old feedrate, so it still pauses to clear the buffer.

Since Victor's Image to Gcode was written to use a different speed for the white spaces and the burns, it sets the feedrate every time it gets to the edge of a burn. So the file is full of hundreds of F3000 commands, like in this text file here.

[attachment file=88675]

So I have removed all but one of the F3000 commands from the shades of grey gcode we have been using. Can someone please test this file? Otherwise I will test it when I get home.

If I am correct, you won't see the pauses anymore. If that is the case, then we have two options to fix it: #1, tweak the firmware to not clear the buffer if the new feedrate is the same as the old feedrate, or #2, ask Victor to make that another tweak to the Image to Gcode converter. Honestly, I could see both options being worthwhile.
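In case anyone wants to make a similar test file themselves, the edit is roughly this (a Python sketch; the filenames are placeholders and this isn't necessarily how I edited mine):

import re

with open("shades_of_grey.gcode") as f:
    lines = f.readlines()

# Keep the first F3000 word so the feedrate is set once, strip the rest.
# Assumes F3000 is the only feedrate used, as in this file.
seen_feedrate = False
out = []
for line in lines:
    if "F3000" in line:
        if seen_feedrate:
            line = re.sub(r"\s*F3000(\.0+)?", "", line)
        seen_feedrate = True
    out.append(line)

with open("shades_no_f3000.gcode", "w") as f:
    f.writelines(out)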

 

 

ShadesChange.jpg

Shades-No-F3000.zip (35.9 KB)

Why would it need to do that? Is there something in the code that indicates that? AFAIK, even if it has a command with F3000 and then F2000, it will only decelerate from 3000 to 2000, not all the way to zero.

Also, it’s not changing speeds, it’s set to F3000 through the whole file, right?

My theory is just a theory at the moment. I need to run the modified gcode to see if I am correct. But my theory makes sense given all the symptoms we have seen.

  • Having several moves in the same direction with occasional M106 SX changes sprinkled in the middle doesn't cause the pausing we see in the Shades of Grey. We have proven this with your test and with my laser calibration gcode file.
  • Shades of Grey keeps pausing at the burn breaks for reasons we don't yet understand.
With that said, I think the F3000 is causing the pausing. The only way that would make sense to me is if the moves all stop and restart with the new feedrate. It is the only explanation I can come up with that fits.

“Why would it need to do that?” you ask. I don't know; it seems like an unintended consequence that nobody has noticed in the 3D printing world.

“Is there something in the code that indicates that?” I don't know, but first I want to see if my theory is correct: is the pause caused by the F3000? If so, then we can dig into the code. I don't want to assume that's the cause and then dig into the wrong part of the code.

You missed that I tested the buffer_test.gcode, with a bunch of 0.1mm moves, and it was pausing a lot. The 0.1mm movements in the gcode are what (I think) are causing the pauses.

My mental model for this is that the planner has a fixed-size buffer, with something like this in it at once:

(starts at x=0)
G1 X10.0 ; long move, it’s at top speed…
G1 X10.1
G1 X10.2
G1 X10.3
(end of buffer)

As it’s executing that move from X=0 to X=10, it’s looking ahead, and trying to decide if it needs to stop or slow down when it gets close to X=10.0. Since it only sees 0.3mm more of the path, it has no choice but to slow down. The next command (that it doesn’t know about) could be:

  • G1 X20.0 (which would mean it could keep going) or
  • G1 X-10.0 (in which case, it had better be slowing down or something could break).

If the buffer was big enough to include the next command, or there were fewer 0.1mm commands, then it could see far enough ahead to know it doesn’t have to stop between blocks.
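In rough code terms, the decision looks like this (a simplified sketch, not Marlin's actual planner code):

import math

# The fastest speed the current move can end at is limited by how much
# path is still visible in the buffer, because the machine has to be able
# to decelerate to (roughly) the jerk speed within that distance.
def allowed_exit_speed(buffered_mm, accel_mm_s2, jerk_mm_s=0.0):
    return math.sqrt(jerk_mm_s**2 + 2.0 * accel_mm_s2 * buffered_mm)

print(allowed_exit_speed(0.3, 400))   # ~15.5 mm/s: with only 3 x 0.1mm moves visible, it crawls
print(allowed_exit_speed(10.0, 400))  # ~89 mm/s: with a long move visible, 50 mm/s is no limit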

I’m happy for you to test this theory, I just wasn’t sure if there was more to it.

1 Like

Well, that makes sense. I did read your earlier post, but I clearly didn't understand what you meant at the time. It's a good theory. If that is the case, then when I run my modified Shades of Grey I should still see the pauses.

Yeah, and your power calibration tool looks like it’s in steps of 0.5mm, and they are all the same size, so that might be why it works.

1 Like

Yup. Confirmed. The F3000 had nothing to do with it. The machine still pauses in the same spots with them removed.

1 Like

Been out most of the day. Just tested a file with all but the first F3000 removed. Same pauses, as Aaryn noted.

2 Likes

This thread has been awesome. You gentlemen are legends. I have learned an incredible amount even though there’s so much of it that’s over my head.

For whatever reason, I decided to run our “shades_of_grey” JPG through Image2Gcode… the other one, the MPCNC-customized one from a much earlier thread. Interestingly, it DOES NOT have the darkened edges we've been looking at, though it seems to have other, less interesting problems (probably user error)…

[attachment file=88804]

Here’s the gcode file it created… this file scans top-down vs. bottom-up

[attachment file=88803]

– David

 

shades_of_gray.gcode.zip (509 KB)

3 Likes

David, can you share the settings you used for ImageToGcode or Image2Gcode or whichever you used to make the Garfield gcode? I scanned through the thread and couldn’t find them, but I may be overlooking them.

Here’s the ImageToGcode screenshot showing the parameters for the Garfield gcode we’ve been running… this is running under Wine, which explains the slight misplacement of field labeling…

[attachment file=89177]

– David

1 Like

I would say the difference at 0.2mm resolution is probably unnoticeable, and it's less buffer work.

1 Like

I’ve started playing with LightBurn. Very nice software. But my first attempt to use it with a new image on a new material didn’t go so well. Too many changes at once. I wanted to take a step back and set a baseline with the same settings in the same software that made the Garfield gcode but a new image and then transition that to LightBurn.

Not unnoticeable. I printed Garfield at three different resolutions using Viktor's ImageToGcode… our original 0.1mm, 0.2mm, and 0.3mm. I left all other parameters the same… travel and burn rates were both 50 mm/s. File sizes went down accordingly, print times got quicker, and putting fewer burns down on the material lightened it considerably. The 0.2mm resolution did indeed seem to work well for my laser, spot size, and the cereal box cardboard I was printing on…

[attachment file=89246]

[attachment file=89247]

[attachment file=89248]

– David

3 Likes

I saw that on my tests as well. I believe it is not as dark because it is not moving as slowly, since the buffer is not as starved. So I think we kept trying to overcompensate the speed for the buffer issue. Using the right-sized resolution and the right speed (30-35?) might get us better results. Or even trying to limit the power output of the laser and find the speed that works without starving the buffer. I might get a chance to play with it some more, but, exciting news…I am moving to a spot with a larger work area. I have to pack, and I kinda need to get all of my prep done so when I do move my shop I won't miss shipping for more than a day.

2 Likes