Madgrizzle's Robot

Transforms are hard. Once they are right they make a lot of sense, but they aren’t obvious until then. And I have always been burned when I try to just hammer through by guess and check. I always have to write the transforms down completely and work them out.

My concern is that their ROS SDK is really designed for their robots and that I might not be able to get it to work on my robot. The nice thing about the mapper is that it publishes the map and odom (so I don’t need to worry about motor encoders). The problem is that I don’t know whether I can load a saved map. I’m not a C++ developer, but looking through the SDK code suggests that it can only save a map to memory in some proprietary format and then reload it.

edit:
There’s a separate package called robostudio that I can use to load and save maps, so it might be possible to save a map in robostudio at the same time I save the map with map_server. Then, if I have to do a cold start, I just load the map in robostudio (for the mapper) and load the map from map_server for the global and local planner. At least that’s the hope. Life really would be much easier if I could use the ROS SDK for everything and it published cmd_vel.
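
For reference, the map_server half of that plan would look roughly like the sketch below (the file paths are placeholders, and slamware_map is my guess at the mapper’s topic name): map_saver snapshots whatever map is currently being published, and map_server serves the saved file back up after a cold start.

<!-- while the mapper is running: snapshot the published map to disk -->
<node pkg="map_server" type="map_saver" name="map_saver" args="-f /home/robot/maps/house">
  <remap from="map" to="slamware_map"/>  <!-- map_saver listens on "map" by default -->
</node>

<!-- after a cold start: serve the saved map to the global and local planner -->
<node pkg="map_server" type="map_server" name="map_server" args="/home/robot/maps/house.yaml"/>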

So I’m thinking that if I load a map in robostudio, that map will be published by the m1m1 server node… therefore I don’t need to use map_server at all. Ideally I would be able to load a map from map_server into the m1m1, but I’ll take loading it from robostudio if that’s all I can do.

Furthermore, it seems that if I wanted to, I might be able to use the global planner of the slamware ROS SDK since it publishes global_plan_path (nav_msgs/Path). Therefore, I think all I would need to do is implement move_base with a local planner that works off that published path. The nice thing about the map from robostudio is that you can readily draw virtual walls that restrict the global planner’s computed path… at least I think that’s what they do.

You can and should learn rosbag to do things like record a message and play it back. For something like a map, it is a great tool. The timestamps are recorded with the original times, but you can probably get by without fixing that, and there are also some easy ways to fix it.
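
One of the easy ways, for example (a sketch; the bag path is a placeholder), is to have everything run on simulated time and let rosbag play publish the clock from the recorded timestamps:

<launch>
  <!-- make nodes take their time from the bag instead of the wall clock -->
  <param name="use_sim_time" value="true"/>
  <node pkg="rosbag" type="play" name="rosbag_play" args="--clock /path/to/recorded.bag"/>
</launch>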

Move base should be able to handle an incoming Path message. You may need to remap some topic names to get them connected. rqt_graph is your friend here. Remember that “from” is what you are renaming and “to” is what it will be named. It has nothing to do with the direction of the messaging.
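
As a concrete sketch (the package and node names are placeholders for whatever your mapper launch actually uses): if the mapper publishes its map as slamware_map and move_base wants plain map, attaching this remap to the mapper node renames the topic it publishes.

<node pkg="slamware_ros_sdk" type="slamware_ros_sdk_server_node" name="slamware_ros_sdk_server_node" output="screen">
  <!-- "from" is the name the node uses internally; "to" is what it becomes on the graph -->
  <remap from="slamware_map" to="map"/>
</node>

The same rename could instead be attached to move_base, in which case it runs the other direction (from map to slamware_map), since move_base is the node that uses the name map internally.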

1 Like

I had to do that last night… I had it backwards at first because I thought from/to was with respect to the node and not the topic. So I was trying to remap from ‘map’ to ‘slamware_map’ rather than the other way around.

The joys of learning ROS. At first, everyone hates remaps because they seem so hard to get right, or to scope properly, but once you learn them, you find out how much you can do with them.

The m1m1 server node publishes /odom/pose and its launch file for the mapper has this line:

<node name="map2odom" pkg="tf" type="static_transform_publisher" args="0 0 0 0 0 0 1 /slamware_map /odom 100"/>

With my limited understanding, I think this sets up a static transform between odom and the map. I was trying to figure out why I needed to run AMCL, and a simple Google search regarding the difference between /odom/pose and /amcl/pose said:

Well, /odom/pose is the relative pose, /amcl/pose is the absolute pose.

But if there is a static transform between the map and the odom, would /odom/pose not be an absolute pose? If so, do I need to run AMCL at all?

The two typical coordinate frames are map and odom. The map frame always tries its hardest to be absolutely right. So if you are trying to find the fridge, it will always have the most up-to-date estimate of where the fridge is. Or if this is a car, the absolute frame would be like looking at google maps. If your GPS is popping around, it will happily move around, updating its estimate to give you the closest estimate to the “truth”.

Odom is a self-consistent, but not absolutely correct, coordinate frame. It usually starts at 0,0,0 where the robot boots up, and is moved by incremental measurements, like wheel encoders. It is used for local path planning and sometimes mid-level algorithms like cruise control. Absolute measurements can really mess up odom. If you used GPS, for example, and you had a 1 m pop to the left, the robot would immediately try to go hard right, even if it was just traveling down a straight road.

For that to work, you need to let the absolute and relative coordinates vary from each other. If the gps estimate pops, then the difference between the absolute and the relative should pop too.

I am guessing a bit here. But since your lidar/SLAM solution doesn’t use wheel encoders, it really doesn’t have a relative estimate. That is why they have a static transform. They aren’t able to compute different frames, because they only have absolute measurements.

Hopefully this gives you some more understanding of what amcl is trying to say.

If you are still running the m1m1 nodes, then I don’t think you want or need amcl. I am assuming it is doing a similar thing to amcl. The slamware frame will be used as both the absolute and relative coordinate frames. Hopefully, that will be fine, since your house is a relatively controlled environment, and the robot isn’t going to knock over any priceless antiques or people.

If you see the location of the robot on the map jump around, then you would consider trying again to generate a twist from wheel encoders along with an imu message from an imu, and run those through a robot_localization node to compute a relative frame, like odom. Then you would use that for the low level planners, but keep the global planners on the absolute frame.
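
If it ever comes to that, a minimal robot_localization setup is something like the sketch below (the wheel_odom and imu/data topic names are assumptions, and the 15-element config vectors just pick which fields of each message get fused):

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
  <rosparam>
    frequency: 30
    two_d_mode: true
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom          # publish odom->base_link, i.e. the relative frame
    odom0: /wheel_odom
    # fuse vx, vy and yaw rate from the wheel odometry
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]
    imu0: /imu/data
    # fuse yaw, yaw rate and x acceleration from the imu
    imu0_config: [false, false, false,
                  false, false, true,
                  false, false, false,
                  false, false, true,
                  true,  false, false]
  </rosparam>
</node>

Setting world_frame to odom keeps this estimate purely relative; the absolute correction stays with the slamware map frame and the global planners.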

That’s very helpful and I think I understand. I’ve succeeded in running it without amcl so I know it works, but to your point, I’ll keep an eye out for issues. I think for the time being I’m going to work on just running move_base with global and local planning and maybe one day tackle using the m1m1’s global planner. I might reconnect the Kinect and use it as a point cloud source for the local planner to use for obstacle avoidance.

1 Like

I think I need to rearrange things a little and lower the lidar so it can see the couch cushions and the ottoman in the family room. Doing so will require me to put it on my second (mid) level, and my support pipes (1.5-inch pvc) will be blocking the laser at a few spots, but as I understand it, this shouldn’t really cause a problem. This second level is where I currently have all the dc-dc converters and terminal strips, but they are all low profile, and if I clean up the installation and secure the wires down to the base plate, they shouldn’t interfere with the lidar. As for my x86 PC issue: I’ve been using an ASRock N3150 (celeron) passively-cooled PC out in the shed to run my CNC machines, but since I’ve started using an RPI to run cncjs, it just runs a browser. So my plan is to repurpose that PC for the robot and use an old laptop or another RPI or something out in the shed… I just need an ATX psu that will run off 12v battery input to make it work. I think I can run everything off that one PC, even though I might stick with the jetson nano to do the computer vision part. https://www.asrock.com/mb/Intel/N3150-ITX/.

Did you repurpose the Maslow motors for your robot? NICE! I still have all of mine in a cool dry corner of my shop. Haven’t thought about what to do with them. Good to see you’re making use of them in other ways!

I repurposed one set of motors… I still have another set.

Battle Bots!

2 Likes

HAHAHAHA I will attach my ridgid router to mine :rofl: :joy:

I’ve made a lot of progress over the last couple of weeks. First, I now have a nice large “lab” in the house that allows me to see the robot while working on my computer. Previously, my office was basically a small closet without enough room for me and the robot… so the robot was in the living room and I’d have to get up and walk through 3 rooms to see if it was doing what I wanted it to. Fortunately, the robot makes PWM noise when it moves, and it annoyed my wife enough that she agreed to rearrange three rooms in the house so I could have all my stuff in the same place.

Second, I replaced the RPI with a quad-core celeron mini-itx so I could run the slamware ros sdk on the robot. I successfully got the launch and parameter files figured out so that move_base would use the map provided by the mapper rather than a static map. It does somewhat successfully move to a 2D goal, but it’s not very consistent. The issues are:

  1. The control loop does not complete at the requested 5 Hz. It seems to run closer to 2-3 Hz, but if I change the control loop frequency, it doesn’t work well. Even with it spamming the error message, it still manages to reach the goal.
  2. I get sporadic error messages about being unable to find the pose and map (iirc).
  3. Sometimes things time out enough that the robot does its spin recovery thing.

I’m not sure what’s causing these issues, but I wonder if it’s a CPU power issue. I would have thought a quad-core celeron would be able to handle things, but maybe not… In order to get move_base to use the map from the mapper, I had to set static_map to false. Therefore, I assume the global costmap is doing extra processing and maybe that’s the issue. So tonight I’m going to try to run move_base on my development machine (ubuntu VM running on a Core i5) to see if it performs better. If it does, then I need to look at parameters to tune to see if I can get it to run on the celeron. The celeron is so low-power that I’d like to keep it, since it’s running on the battery bank. https://www.asrock.com/mb/Intel/N3150-ITX/.
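
For reference, the two settings I keep mentioning live in the move_base parameters; a stripped-down sketch (not my actual files) looks like:

<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <rosparam>
    controller_frequency: 5.0   # the loop that keeps complaining it missed its desired rate
  </rosparam>
  <rosparam ns="global_costmap">
    static_map: false           # so the global costmap doesn't wait on a latched map from map_server
  </rosparam>
</node>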

2 Likes
  1. The control loop is too slow? Did you write it yourself? Is it blocking on some input? Control loops usually run the fastest; in a hierarchy, the bottom layer needs to run the fastest, and 100-200 Hz is typical, even on crummy hardware.

  2. Have you looked at rqt_graph, rostopic info, and rosnode info? You can look for missing connections/remaps that way.

  3. If you’re having performance issues, you can use top or (more user-friendly) htop. Then you can see what process is taking too much CPU or memory.

That computer looks pretty good. Maybe it’s a matter of using less processing on less time critical pieces and using more processing on things that are more critical in timing.

No, that message comes from the session running move_base. iirc, changing the controller_frequency value in the move_base parameters file from 5 to 2 significantly reduced the error messages, but the robot wouldn’t really move consistently after doing so.

I’ve watched everything with rostopic hz and did occasionally see hiccups with the map, but they didn’t seem to correlate with issues reported by move_base. I’ll check the others… I’m using all very basic static transforms (no urdf). I positioned the laser scanner to be directly above the base footprint / base link.
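
Those transforms are just more static_transform_publisher lines like the one in the mapper’s launch file; for the laser sitting directly above the base it’s something like this (the 0.2 in the height slot is a made-up placeholder, and the frame names have to match whatever the scan header and costmaps expect):

<node name="base2laser" pkg="tf" type="static_transform_publisher" args="0 0 0.2 0 0 0 1 /base_link /laser 100"/>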

Also, not previously mentioned: when I ran Rviz, after a period of time I’d start getting laserscan error messages (unknown reason for transform error). I ended up modifying the mapper’s SDK code to change time_increment to 0 versus a computed value, based on some post I read. I’m not sure it’s related. Since it happens after a period of time, I wonder if there’s a clock issue, but I can’t change the time on the mapper from what I can tell. I installed chrony on the celeron and I think it’s working.

I did use top but didn’t see any issues. I’ll try htop. Initially, I was running nomachine to remotely access the celeron machine, but turned it off and went to ssh sessions to see if it helped… it didn’t.

1 Like

Weirdness: I had a heck of a time getting the computer to reboot into Ubuntu today… it was as if it was loading as fast as a commodore 64 floppy disk. It never worked until I booted in recovery mode.

Anyway, I connected the mapper directly to the computer using Ethernet and I got none of the errors. Everything seemed to work well. Maybe it was just trying to send data over WiFi or the computer was having issues and just needed to be rebooted. Nevertheless, it’s working so there’s no performance issue with the Celeron.

1 Like

Made huge progress last night. I got a kinect attached and working to scan for obstructions and got it working with the local and global planner. But then I noticed that the global planner wasn’t creating a costmap from the map published by the mapper. This goes back to my big concern about trying to use the map from the mapper rather than a static map. However, I found one post where someone was trying to do something similar (use a map from a published map topic as the static layer). I used the same method and it amazingly worked. So now I have costmaps using the map and the kinect, and the global planner actually produces a path that’s not just a straight line to the goal.
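
Roughly, the trick is pointing the global costmap’s static layer at the published map topic instead of at map_server; the parameters end up looking something like this (the /slamware_map topic name is a placeholder for whatever the mapper actually publishes):

<rosparam ns="move_base/global_costmap">
  plugins:
    - {name: static_layer,    type: "costmap_2d::StaticLayer"}
    - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}
    - {name: inflation_layer, type: "costmap_2d::InflationLayer"}
  static_layer:
    map_topic: /slamware_map   # the map the mapper publishes
    first_map_only: false      # accept new maps as the mapper republishes them
</rosparam>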

However, the robot still likes to snag corners and then complain about not being able to find a path. I think I will end up spending a fair amount of time trying to optimize the kinect and the inflation layer parameters to get it to work well enough. I probably should also build a basic urdf of the robot to visualize it in RViz.

edit:
Discovered a typo in my common parameter file where instead of “robot_radius:” I had “robot radius:”. This would explain it snagging on corners.
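
For anyone chasing the same thing, the corrected key and the inflation radius that goes with it sit in the common costmap parameters (loaded into both the global and local costmap namespaces; shown here for just the global one, with placeholder values, not recommendations):

<rosparam ns="move_base/global_costmap">
  robot_radius: 0.25        # note the underscore; a key with a space never gets read by the costmap
  inflation_layer:
    inflation_radius: 0.55
</rosparam>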

Had a little failure the night before last where the front caster got stuck in a gap in the floor and the back motors just kept spinning and ended up breaking the bolt axles free from the coupler. So I just went ahead and epoxied the axle and coupler together to be done with it.

Normally the computer runs at around 40% CPU utilization, and I decided to see if I could reduce that by filtering the pointcloud data from the kinect (less data to process should result in lower CPU utilization). To my surprise, the voxel filter doing the reduction consumes 100% CPU and makes it unusable. It does “work” (i.e., reduces the pointcloud) but does not help the situation at all.
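
For reference, the kind of voxel-grid downsampling I mean is the stock pcl_ros nodelet, wired up roughly like this (the kinect topic name is a placeholder; the filtered cloud comes out on /voxel_grid/output):

<node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager"/>
<node pkg="nodelet" type="nodelet" name="voxel_grid" args="load pcl/VoxelGrid pcl_manager">
  <remap from="~input" to="/camera/depth_registered/points"/>
  <rosparam>
    leaf_size: 0.05            # downsample to 5 cm voxels
    filter_field_name: z
    filter_limit_min: 0.0
    filter_limit_max: 3.0      # ignore returns beyond 3 m
  </rosparam>
</node>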

I then worked on saving and reloading the map in slamtec’s robostudio and it seemed to go okay, but I still need to spend more time with it. I think I have to put the robot in localization mode after loading the map so the robot finds its pose and produces correct odom. After loading the map and manually setting the pose, I tried to get the robot to navigate the same way as before, but this time it cut the corner of a doorway a little close and ended up in the lethal zone, and at that point move_base never could get the robot out of it. Perhaps because every move was invalid since it was in a lethal zone? I don’t know if the two events are correlated (loading a map and getting stuck in the doorway) or coincidental.

1 Like