High level movement commands

Hello guys

I’m trying to create a node that just lets the Go1 EDU move straight forward. However, I can’t find any packages or high-level control nodes that provide this movement. Is there a way to activate such nodes from my own nodes under certain conditions? The goal is to make the Go1 walk around randomly while avoiding surrounding objects, but I have no clue how to use the qre repo you provided, or whether it already contains such scripts so I don’t have to code it at the low level.

Hi @Kenneth-vd-H,
I hope you have successfully built the ROS driver and its dependencies. Everything for the robot is documented in our docs; you should follow the quickstart guide.
Make sure the robot is up and running and that you are connected to it (preferably via LAN). The robot is normally on the 192.168.123.* network (the IPs of the different boards can be found here).
Once you can ping the robot successfully, follow the guide here for moving it in high-level mode (no joint planning required), just via the arrow keys on your machine.
I hope this will solve your issue.

Hi Tahir.

I built the environment, and the keyboard control (teleop_twist_keyboard) works as expected. However, I now want the robot to move autonomously with obstacle avoidance, without using the keyboard, but I can’t figure out how to even make it move in a straight line by itself. The Go1 EDU doesn’t have GPS as far as I know, and I don’t have a lidar or any other external modules.

Hi Kenneth,

You might be able to do visual SLAM with the integrated cameras, but I would highly recommend adding a LiDAR.

I’ve got some notes here that may be useful: GitHub - MAVProxyUser/YushuTechUnitreeGo1: 宇树科技 Yushu Technology (Unitree) go1 development notes

Likewise, feel free to stop by and ask us on Slack; if folks are around, we can sometimes help you muddle through it in real time.

We’ve also rewritten the Unitree SDK as raw Python, so you don’t need their weird .so-based SDK environment, and scripting high-level movement becomes trivial: GitHub - Bin4ry/free-dog-sdk

Here is an example use of the SDK:
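To sketch the kind of high-level scripting this enables (illustrative only: `send_cmd` and `read_obstacle_distance` below are hypothetical placeholders for the actual SDK send call and sensor reading, and all numbers are arbitrary), a random walk with a primitive obstacle guard reduces to picking one velocity command per control tick:

```python
import random

def random_walk_step(rng, obstacle_distance_m, stop_distance_m=0.5,
                     forward_speed=0.2, max_yaw_rate=0.5):
    """Pick one (forward velocity m/s, yaw rate rad/s) command for a
    random walk with a primitive stop-and-turn obstacle guard."""
    if obstacle_distance_m < stop_distance_m:
        # Too close to something: stop forward motion and turn in place.
        return 0.0, max_yaw_rate
    # Clear ahead: walk forward with a small random heading perturbation.
    return forward_speed, rng.uniform(-max_yaw_rate, max_yaw_rate) * 0.2

def run(send_cmd, read_obstacle_distance, n_steps=100):
    """Control loop; the two callables are stand-ins for the real
    high-level command transport and the real sensor source."""
    rng = random.Random()
    for _ in range(n_steps):
        vx, wz = random_walk_step(rng, read_obstacle_distance())
        send_cmd(vx, wz)
```

In a real node the loop would run at a fixed rate (e.g. 10 Hz), and the distance reading would come from the cameras or a lidar.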

What lidar module do you recommend or use yourself? Also, is there any other hardware necessary for letting the Go1 EDU move autonomously? At first I thought the cameras on the Go1 alone were sufficient for this. We have the Go1 EDU, but unfortunately we don’t have a lidar, and the one that comes with the Go1 is very expensive.

Hi @Kenneth-vd-H,

What lidar module do you recommend or use yourself?

We typically use Ouster for navigation, as it enables recording the 3D environment in point-cloud format. However, I assume you want to stay on a budget, so I would recommend using ORB-SLAM or a similar ROS vSLAM algorithm to navigate without purchasing equipment. Alternatively, you can buy an inexpensive lidar such as an RPLIDAR.

A more unusual option would be the Livox lidar, which produces an intensely dense point cloud in one direction. A safe, commonly employed option for 2D navigation is the Hokuyo lidar. And if you want more of an advantage, go for the ZED 2 series depth cameras; compared to lidars they require more configuration, but in terms of capability they are much more powerful.

Also, is there any other hardware necessary for letting the go1 edu move autonomously?

The Go1 with the provided ROS package can move autonomously, but the caveat is that it is basically blind: it would be moving purely on odometry, without any source of correction such as AMCL, GPS, beacons, or markers.
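To make that caveat concrete, here is a toy dead-reckoning sketch (all numbers illustrative): the pose estimate comes purely from integrating commanded velocities, so any systematic error, such as a few percent of foot slip, accumulates with nothing like AMCL or GPS to pull the estimate back:

```python
import math

def integrate_odometry(cmds, dt=0.1, slip=1.0):
    """Dead-reckon a 2D pose (x, y, heading) from (vx, yaw_rate)
    commands; `slip` scales forward motion to mimic foot slippage."""
    x = y = theta = 0.0
    for vx, wz in cmds:
        x += slip * vx * math.cos(theta) * dt
        y += slip * vx * math.sin(theta) * dt
        theta += wz * dt
    return x, y, theta

# Commanding "straight ahead" at 0.2 m/s for 60 s (600 ticks at 10 Hz):
cmds = [(0.2, 0.0)] * 600
believed = integrate_odometry(cmds)           # odometry believes ~12 m
actual = integrate_odometry(cmds, slip=0.95)  # with 5% slip: ~11.4 m
```

After one minute the estimate is already 0.6 m off, and with turning mixed in, heading errors make the drift grow much faster.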

At first I thought the cameras on the Go1 alone were sufficient for this.

It could be, but there are a lot of challenges compared to buying and integrating an off-the-shelf lidar.

Hope that answered your question.

S. O. Sohail


Object avoidance has already been built in, right? Is there a way to turn on object avoidance via a Python node instead of via the controller? The standard functionality should be available through ROS, I presume.

Hi @Kenneth-vd-H,

The object avoidance on the Go1 basically uses the cameras to stop motion commands when an object comes within a certain distance (very primitive and unreliable). To my knowledge this functionality is not exposed in the code. Furthermore, better and more sophisticated collision avoidance can be built with simple computer vision techniques, which I would recommend; some are even available as prebuilt ROS packages.
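If you want to reproduce that stop-at-distance behaviour in your own node, the gating logic itself is simple. A minimal sketch (topic wiring omitted; the function name and threshold are hypothetical), where `ranges_m` could be a LaserScan's ranges or a row of depth-camera readings:

```python
def gated_forward_command(ranges_m, vx_requested, stop_distance_m=0.6):
    """Zero the forward command when any valid range reading is closer
    than the stop distance; otherwise pass the request through."""
    valid = [r for r in ranges_m if r > 0.0]  # drop invalid zero returns
    if not valid or min(valid) < stop_distance_m:
        # No usable readings, or something too close: stop conservatively.
        return 0.0
    return vx_requested
```

Running this in a node is then just a matter of subscribing to the sensor topic and applying the gate before publishing each velocity command.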