I hope this message finds you well. I’m reaching out for your expertise on a few specific queries related to the qre_go1 repository on the noetic_robotis branch:
Does this controller version facilitate obstacle avoidance? If so, could you elucidate its mechanism?
What are the capabilities concerning localization within this framework?
In Rviz, is there a way to input a goal position for the robot, prompting it to navigate towards that point?
The reason behind these inquiries is my current project involving the development of a Reinforcement Learning (RL) algorithm. For this, I need to gather data from the GO1, particularly from its Lidar and front camera. The objective is to set up a realistic or simulated environment for data collection, which will be stored in a replay buffer. The desired observations include environment status (like completion flags), the robot’s state, its linear and angular velocities, and camera and lidar outputs. I’m contemplating whether to collect this data manually using the actual robot or through a simulation.
Your guidance on these matters would be greatly appreciated.
By default, no, but if you are receiving a /scan message from the LiDAR then it can be activated using the go1_navigation ROS package. Once it receives the scan, it should be able to avoid obstacles successfully.
Localization is standard 2D localization using the Adaptive Monte Carlo Localization (AMCL) algorithm, which works well in most cases. It is included in qre_go1.
Yes, once go1_bringup and go1_navigation are running you can give a 2D Nav Goal in RViz and the robot will navigate to it while avoiding obstacles. (Before this you have to perform SLAM and save the map.)
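A rough outline of that workflow, assuming the /scan topic is already available (the launch file names below are assumptions, so check the actual files shipped in go1_bringup and go1_navigation):
rostopic hz /scan                           # confirm the LiDAR is publishing
roslaunch go1_bringup bringup.launch        # hypothetical name: starts the robot drivers
roslaunch go1_navigation navigation.launch  # hypothetical name: loads the saved map, AMCL and move_base
Then in RViz, set the initial pose with the 2D Pose Estimate tool and send the robot off with the 2D Nav Goal tool.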
I would recommend going through the tutorials in the PDF on the melodic-robotis branch so that you can get a proper overview of ROS.
The reason behind these inquiries is my current project involving the development of a Reinforcement Learning (RL) algorithm. For this, I need to gather data from the GO1, particularly from its Lidar and front camera. The objective is to set up a realistic or simulated environment for data collection, which will be stored in a replay buffer. The desired observations include environment status (like completion flags), the robot’s state, its linear and angular velocities, and camera and lidar outputs. I’m contemplating whether to collect this data manually using the actual robot or through a simulation.
Your guidance on these matters would be greatly appreciated.
I would recommend collecting the data on the real robot, as simulations for quadrupeds generally do not translate well to the real world, unlike differential-drive or omni-wheel robots.
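If you do collect on the real robot, one simple approach is to record the relevant topics with rosbag and fill the replay buffer offline afterwards; the topic names below are assumptions, so replace them with whatever your GO1 actually publishes:
rosbag record -O go1_rl_data /scan /odom /cmd_vel /camera/depth/points
rosbag info go1_rl_data.bag                 # verify what was recorded and at what rates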
Thank you for your reply, Sohail. Can you please guide me through the following questions:
I can access all of these in a high-level mode, right? Or should I connect via LAN in low-level mode?
If I want access to the /scan topic, I should connect to the GO1 via WiFi (because there is only one port on the robot, which can be used either for the LAN connection or for the LiDAR connection). Am I right?
I know the robot can create a map of the environment via SLAM (I saw my colleague create a map on their phone with the Unitree mobile application). Can I do that using your packages?
When I have created the map, where should I save it, and how do I use it?
I can access all of these in a high-level mode, right? Or should I connect via LAN in low-level mode?
I am not sure what you are asking here; however, topics such as odom and joint states should be visible.
If I want access to the /scan topic, I should connect to the GO1 via WiFi (because there is only one port on the robot, which can be used either for the LAN connection or for the LiDAR connection). Am I right?
This is complex; the easiest option would be to get an Ethernet switch to allow access to more ports, otherwise you have to do port forwarding and link the IPs. From what I have understood, your robot already has a ROS master running on either the Pi or one of the Nvidia boards. If you connect via the LAN cable and correctly configure a ROS multi-machine setup on your PC, then you should be able to see the scan topics directly.
If you go the WiFi route, setting up the multi-machine configuration may prove quite challenging and lengthy if you are not familiar with the process.
I know the robot can create a map of the environment via SLAM (I saw my colleague create a map on their phone with the Unitree mobile application). Can I do that using your packages?
Yes, if you receive the scan topic and configure go1_navigation, it should be able to perform SLAM as well as map-based navigation quite well. (We have tested this.)
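As a sketch of the SLAM and map-saving step (map_saver is the standard map_server tool; the SLAM launch file name here is an assumption, so check go1_navigation for the real one):
roslaunch go1_navigation slam.launch             # hypothetical name: runs SLAM from the /scan topic
mkdir -p ~/maps
rosrun map_server map_saver -f ~/maps/lab_map    # writes lab_map.pgm and lab_map.yaml
The saved ~/maps/lab_map.yaml is then the map you point the navigation launch at for AMCL-based navigation.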
When I have created the map, where should I save it, and how do I use it?
I used this Ethernet switch, as you mentioned. The white cable is now connected between my laptop and the robot, and the LiDAR is connected to the robot through this Ethernet switch.
If you connect via the LAN cable and correctly configure a ROS multi-machine setup on your PC, then you should be able to see the scan topics directly.
Could you please guide me step by step on how to configure my PC for a ROS multi-machine setup (for the /scan and front camera topics)?
When I set this configuration, I can see all the topics on my PC, right?
I was able to set up ROS multi-machine via WiFi. I think I couldn't find the scan topic via LAN because of the Ethernet switch shown in the previous image. Do you have any idea how to fix this issue? The data from the LiDAR was not being transmitted.
What you have to do is find out the ROS_MASTER_URI, which can be done by SSH'ing into one of the GO1 computers where the scan topic is being published and running:
echo $ROS_MASTER_URI
echo $ROS_IP
echo $ROS_HOSTNAME
Then, on your own PC, you can add the following exports to your ~/.bashrc file, e.g.:
export ROS_MASTER_URI=http://<The IP you found while echoing>:11311/
export ROS_IP=192.168.123.52 # Your static lan ip
export ROS_HOSTNAME=192.168.123.52
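After adding those exports, something like the following should confirm that the multi-machine setup works (standard rostopic tools; /scan assumes the LiDAR driver publishes under that name):
source ~/.bashrc
rostopic list                               # should list the topics from the robot's ROS master
rostopic hz /scan                           # should report a steady rate if the LiDAR data reaches your PC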
There is an issue in the image. There are two links here: /base_link and /base. The robot is spawned at the /base link, which is wrong; the robot should spawn at /base_link. Can you guide me on how I can fix that?
There is no issue. The convention is that the main frame of the robot is usually called base_link; however, Unitree has named their link base. To maintain compatibility with both their workflow and the ROS convention (as many native ROS packages depend on the robot having a base_link), we have added a fixed joint in the go1_description URDF file. Feel free to modify it according to your requirements.
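If you want to verify the relation between the two frames on your robot, the standard tf tools are enough (frame names as used in go1_description):
rosrun tf tf_echo base_link base            # should print a constant, fixed transform
rosrun rqt_tf_tree rqt_tf_tree              # visualizes the whole TF tree, including base_link -> base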
I have another question: why don't I see the camera output in RViz? The topic is publishing point-cloud data, but RViz cannot show it.
Can you echo the topic to check whether the data actually appears in the terminal? If not, it means the data is not being received from the point-cloud driver.
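For example (the point-cloud topic name below is an assumption; substitute whatever rostopic list shows for the front camera):
rostopic info /camera/depth/points          # check the message type and the publishing node
rostopic hz /camera/depth/points            # check whether messages are actually arriving
rostopic echo -n 1 --noarr /camera/depth/points   # print one message with the data arrays summarized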