
Scalable Autoware.Auto through k8s

Author: SERVANDO GERMAN SERRANO

In the last post we showed how it is possible to split the 3D Perception demo into separate modules using k8s. Since we were using just the PCU to showcase the split modules under k8s management, the benefits of k8s over plain docker or ade-cli, beyond the simplicity of spawning and managing the deployments, were not easily discernible. In this blog post we therefore show how we can easily distribute and manage our Autoware.Auto modules within a multi-board k8s cluster by combining AutoCore’s PCU and the Qualcomm® Robotics (RB3) Dragonboard-845c Development Platform.

A distributed hardware approach can be beneficial in terms of redundancy, and it can reduce the overall cost of the system by avoiding higher-end components. However, going down the distributed hardware route means we need a way to easily manage the different software components: how to deploy them, and whether specific modules need to be allocated to a particular piece of hardware.

As an initial proof of concept we will assume that the LIDAR for the 3D Perception demo is available only to the RB3, which therefore needs to host the LIDAR drivers to make the pointcloud available in ROS2 format for the rest of the system. This situation might arise when we want to do some preprocessing on the raw sensor data before it is fed into the system, but without overloading the other board, in our case the PCU, which takes care of the robot state publisher, point cloud filter transform, ray ground classifier and euclidean cluster nodes.

It is worth mentioning that the added ease of the k8s setup is achieved by using the Eclipse Cyclone DDS middleware for ROS2: thanks to ROS2's masterless discovery we do not need to manually configure the IP addresses of all the individual containers within each k8s pod.
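
For reference, ROS2 selects its middleware through the RMW_IMPLEMENTATION environment variable, so each container in the deployments can be pointed at Cyclone DDS with a fragment along the following lines. This is a minimal sketch only; the container and image names are placeholders, not the actual demo images.

```yaml
# Fragment of a Deployment spec (illustrative names only)
spec:
  template:
    spec:
      containers:
        - name: perception                      # placeholder container name
          image: example/autoware-auto:latest   # placeholder image
          env:
            - name: RMW_IMPLEMENTATION
              value: rmw_cyclonedds_cpp         # select Eclipse Cyclone DDS as the ROS2 middleware
```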

As we did when running the 3D Perception demo through k8s on the PCU, we will use 3 Kubernetes deployments:

  • One to replay the Velodyne pcap data.
  • One responsible for the sensing module, which in this case kicks off the front and rear Velodyne drivers.
  • The last one to run all the 3D Perception stack nodes, namely point cloud filter transform, ray ground classifier and euclidean cluster.

Of these deployments, the first two run on the RB3 and the third on the PCU. To achieve this we have modified the deployment yaml using the nodeName keyword to pin each deployment to a particular worker node, as shown below (where the node named linaro-alip is the RB3 and localhost is the PCU).
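
As a hedged sketch of what that modification looks like (the deployment names, labels and images below are placeholders; only the nodeName values, linaro-alip for the RB3 and localhost for the PCU, come from the setup described above):

```yaml
# Illustrative sketch, not the actual demo manifests.
# Sensing deployment pinned to the RB3 (worker node "linaro-alip").
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensing
  template:
    metadata:
      labels:
        app: sensing
    spec:
      nodeName: linaro-alip                     # run this pod on the RB3
      containers:
        - name: velodyne-drivers
          image: example/autoware-auto:latest   # placeholder image
---
# Perception deployment pinned to the PCU (worker node "localhost").
apiVersion: apps/v1
kind: Deployment
metadata:
  name: perception
spec:
  replicas: 1
  selector:
    matchLabels:
      app: perception
  template:
    metadata:
      labels:
        app: perception
    spec:
      nodeName: localhost                       # run this pod on the PCU
      containers:
        - name: perception-stack
          image: example/autoware-auto:latest   # placeholder image
```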

After introducing these changes it is just a matter of kicking off the different deployments and checking that each one runs on the desired node.
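
For example, `kubectl apply -f` on each deployment manifest kicks them off, and `kubectl get pods -o wide` lists every pod together with the worker node it was scheduled on, making the placement easy to confirm.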

And, as we did when using just the PCU, we can use the laptop to visualize the output in Rviz2 and verify that everything is working nicely.

This shows that both ways of deploying the modules (docker/ade and k8s) can live side by side, as they target different use cases: development on the one hand, and something closer to production on the other, where we run “stable” software directly as a service.

The full step-by-step guide is available here.