[Unitree Go1] ROS Multimachine Camera

Dear Sohail,

On your website, GO1 OM Quick Start — GO1 Tutorials 1.0.0 documentation, you mention that we have to set up the multimachine on the robot's boards (NVIDIA & Pi) and on our PC. Is this related to subscribing to the ROS master, which is already running on the Pi, or to something else?

As soon as the robot is turned on, all the topics are already visible on the Pi board. What I did was set up the multimachine between my PC and the Pi. Is that enough? Or do I have to check whether the multimachine is already set up between all the boards? If so, how can I check whether it has already been set up?

Kind regards,
Amir

Dear @Amir_Mahdi,

As previously mentioned in another post, someone has already set up a ROS environment on your GO1 with their own custom ROS packages and drivers. Running our driver alongside theirs will most probably result in a conflict.

I would advise you to contact the previous GO1 owner to verify what has been modified and what is running, and to get guidance on the use of their packages.
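
In the meantime, a quick way to see what they already have running (and whether your PC is correctly pointed at the master on the Pi) is a small check like the sketch below. It assumes ROS 1 (Melodic/Noetic) on your PC and that the multimachine environment variables are already exported in the shell you run it from; the addresses in the comments are examples, not confirmed values.

```python
#!/usr/bin/env python
# Minimal multimachine sanity check, run on the PC.
import os
import rosnode   # Python module behind the rosnode command-line tool
import rospy

# These should already be exported for multimachine use, e.g.
# ROS_MASTER_URI=http://<pi-ip>:11311 and ROS_IP=<pc-ip> (example values).
print("ROS_MASTER_URI:", os.environ.get("ROS_MASTER_URI"))
print("ROS_IP:", os.environ.get("ROS_IP"), "ROS_HOSTNAME:", os.environ.get("ROS_HOSTNAME"))

# If the master on the Pi is reachable, these list whatever the robot
# (or a previously installed driver) is already running and publishing.
print("Nodes:", rosnode.get_node_names())
print("Topics:", [name for name, _type in rospy.get_published_topics()])
```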

Dear Sohail,

Thank you for your reply. I have contacted the vendor we bought the robot from to ask for help and am waiting for their response. In the meantime, I have been checking all three NVIDIA boards (IPs: 192.168.123.13/14/15), and I found the following:

On each NVIDIA board there is a folder called Unitree. Within it there are two folders, autostart and sdk. The sdk folder contains three folders:

  1. FaceLightSDK_Nano
  2. UltraSoundSDK_Nano
  3. UnitreeCameraSdk

These folders correspond to the Unitree GitHub repositories; nothing extra is installed. I believe each NVIDIA board handles one camera, right?

I was considering the following solutions in case I don't get an answer from the vendor we bought the robot from:

  1. I checked one of the posts, [Unitree Go1] Rviz model Issue / ROS Camera Activation - #13 by Kenneth-vd-H. Do you think it will work if I clone your repository on each board and set up the ROS multimachine (following the tutorial as you have described)? Can you confirm the following procedure: (1) install the ROS packages from the MyBotShop repository (which branch, Melodic or Noetic?) on each NVIDIA board; (2) set up the multimachine as you have described on your GitHub; (3) disable the camera driver startup with this command: ./Unitree/autostart/camerarosnode/cameraRosNode/kill.sh

  2. Or should I follow the tutorial that Unitree provides in their repository? GitHub - unitreerobotics/UnitreecameraSDK: Unitree GO1 camera SDK

  3. I write my own script to convert the point cloud data into a 2D image (sketched below).
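
For option 3, something along these lines is what I had in mind. It is only a rough sketch: it assumes the point cloud is organized (height × width), and the topic names are placeholders rather than the robot's actual topics.

```python
#!/usr/bin/env python
# Rough sketch of option 3: reshape an (assumed organized) PointCloud2 into a
# 2D depth image. Topic names below are placeholders, not the Go1's real ones.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs import point_cloud2
from sensor_msgs.msg import Image, PointCloud2

bridge = CvBridge()

def on_cloud(cloud):
    # For an organized cloud (height > 1) every point maps to one pixel, so
    # the z values can be reshaped directly into a height x width depth image.
    pts = point_cloud2.read_points(cloud, field_names=("z",), skip_nans=False)
    depth = np.array([p[0] for p in pts], dtype=np.float32)
    depth = depth.reshape(cloud.height, cloud.width)
    pub.publish(bridge.cv2_to_imgmsg(depth, encoding="32FC1"))

rospy.init_node("cloud_to_depth")
pub = rospy.Publisher("/camera/depth_from_cloud", Image, queue_size=1)   # placeholder name
rospy.Subscriber("/camera/points", PointCloud2, on_cloud, queue_size=1)  # placeholder name
rospy.spin()
```

For an unorganized cloud (height of 1) I would instead have to project the points with the camera intrinsics.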

Thank you for your time and for helping me.

Best regards,
Amir

  1. Those SDKs primarily handle communication with the Unitree GO1 app. You can clone the Noetic branch and it should work. However, in your case either the vendor or someone else has probably already disabled the Unitree SDK and is running their own driver. If you run our driver concurrently with theirs, it may cause interference, as both drivers would be using the same resource.
  2. I would recommend first investigating where the point cloud is being published from and starting from there (see the sketch after this list). The issue is the same as above.
  3. This would be the fastest solution in your case.
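
Regarding point 2, you can ask the ROS master which node is publishing the cloud and on which board that node lives. Below is a minimal sketch; the topic-name filter is only a guess at how the topic might be named.

```python
#!/usr/bin/env python
# Ask the ROS master which nodes publish point-cloud-like topics and where
# those nodes live (the URI host shows which board they run on).
import rosgraph

master = rosgraph.Master("/probe")
publishers, _subscribers, _services = master.getSystemState()

for topic, nodes in publishers:
    if "point" in topic.lower() or "cloud" in topic.lower():  # crude name filter
        for node in nodes:
            print(topic, "<-", node, "@", master.lookupNode(node))
```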

> Those SDKs primarily handle communication with the Unitree GO1 app. You can clone the Noetic branch and it should work. However, in your case either the vendor or someone else has probably already disabled the Unitree SDK and is running their own driver. If you run our driver concurrently with theirs, it may cause interference, as both drivers would be using the same resource.

If the UnitreeCameraSdk has been disabled (by the vendor or someone else), it should not show any images on the mobile app, right? But when I check the mobile app, I see the cameras are active!

> This would be the fastest solution in your case.

Do you know of any source showing how I can do this? (Any hints, links, etc. are truly appreciated :slight_smile: )

> If the UnitreeCameraSdk has been disabled (by the vendor or someone else), it should not show any images on the mobile app, right? But when I check the mobile app, I see the cameras are active!

Yes, if they killed it.

> Do you know of any source showing how I can do this? (Any hints, links, etc. are truly appreciated :slight_smile: )

I am not aware of any resources for this, and now, after thinking about it, if the depth image does not contain the color information it might not be that useful. In that case I would suggest going onto the NVIDIA boards, manually stopping their camera nodes, and running our nodes for the camera stream.
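
Stopping their nodes could look roughly like the sketch below. It only covers the stopping side, the node-name filter is a guess, and the autostart scripts on the Nanos may respawn the nodes, so the corresponding autostart entry (e.g. via the kill.sh mentioned earlier) has to be disabled as well; our nodes would then be started with roslaunch as per the tutorial.

```python
#!/usr/bin/env python
# Find camera-related nodes via the ROS master and ask them to shut down.
# The "camera" name filter is only a guess at how the nodes are named.
import rosnode

camera_nodes = [n for n in rosnode.get_node_names() if "camera" in n.lower()]
print("Stopping:", camera_nodes)

killed, failed = rosnode.kill_nodes(camera_nodes)
print("Killed:", killed, "Failed:", failed)
```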

Alternatively, you can also obtain the camera stream from the onboard IP. If you are able to see the camera stream on the mobile app, then, while on the same network, you should be able to get the stream in the browser by typing in 192.168.12.1/videostream or something of the like; I do not remember exactly what. But if you type 192.168.12.1 into the browser while connected via WiFi, it should show the GO1 state page, from which you can check the other IP addresses.

From there you can capture the stream and use it for processing.
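
For the capturing part, OpenCV's VideoCapture on the stream URL is usually enough. Below is a minimal sketch; the URL is a placeholder, so use whichever address actually shows the stream in your browser.

```python
#!/usr/bin/env python
# Grab frames from the browser-visible camera stream and process them.
import cv2

STREAM_URL = "http://192.168.12.1/..."  # placeholder, not a confirmed URL

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... your processing here, e.g. cv2.imshow("go1 camera", frame) ...
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```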