ROS2 and Real-Time Performance: The Key to Driving Embodied Intelligence towards Commercialization

With the rapid development of artificial intelligence, we have witnessed the widespread application and tremendous success of generative AI such as ChatGPT. For AI systems that act in the physical world, however, such as autonomous driving, autonomous machines, and embodied robots, commercialization still faces many challenges despite continuous technological breakthroughs. Chief among them is how to achieve efficient and seamless interaction between these intelligent systems and the physical world.

The Robot Operating System (ROS), as a fundamental software framework in the field of robotics and autonomous machines, has always provided a solid foundation for this interaction. ROS not only integrates various hardware and software resources but also enables robots to better understand and respond to the physical world by providing a flexible and scalable platform.

However, in the practical application of ROS, timing is a core element that cannot be ignored. Especially in safety-critical areas such as autonomous driving and industrial automation, the precision and timeliness of a system's timing behavior directly determine its stability and safety. ROS2 has made significant efforts to ensure real-time performance:

  • ROS2 significantly reduces data transmission delays by improving the DDS (Data Distribution Service) communication mechanism. This improvement not only ensures the real-time delivery of information but also allows robots to quickly receive and process data from sensors and execute corresponding control instructions. This is crucial for robot systems that require rapid response.
  • ROS2 optimizes task scheduling and resource management, enabling multiple robot tasks to be processed in parallel and significantly improving the overall system’s response speed and efficiency. This parallel processing capability is especially suitable for robot systems that need to handle multiple tasks concurrently (see the sketch after this list).
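To make these two points concrete, here is a minimal sketch using the standard rclcpp API: it subscribes to two sensor streams with the sensor-data QoS profile (which maps to best-effort DDS delivery for low latency) and services both callbacks in parallel on a multi-threaded executor. The topic names and message type are placeholders.

```cpp
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("sensor_listener");

  // Sensor-data QoS: best-effort delivery with a shallow history,
  // trading guaranteed delivery for lower latency on high-rate streams.
  rclcpp::SensorDataQoS qos;

  // A reentrant callback group allows the callbacks below to run
  // concurrently on the executor's thread pool.
  auto group = node->create_callback_group(rclcpp::CallbackGroupType::Reentrant);
  rclcpp::SubscriptionOptions opts;
  opts.callback_group = group;

  auto cam_sub = node->create_subscription<sensor_msgs::msg::Image>(
      "/camera/image", qos,  // placeholder topic
      [](const sensor_msgs::msg::Image::SharedPtr msg) {
        (void)msg;  // process the camera frame here
      },
      opts);
  auto depth_sub = node->create_subscription<sensor_msgs::msg::Image>(
      "/depth/image", qos,   // placeholder topic
      [](const sensor_msgs::msg::Image::SharedPtr msg) {
        (void)msg;  // process the depth frame here
      },
      opts);

  // The multi-threaded executor services both subscriptions in parallel.
  rclcpp::executors::MultiThreadedExecutor executor;
  executor.add_node(node);
  executor.spin();
  rclcpp::shutdown();
  return 0;
}
```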

Despite the significant progress made by ROS2 in real-time performance, challenges remain. One of them is the time synchronization of multi-sensor data. Because different sensors differ in sampling periods and transmission delays, it is often difficult to achieve time synchronization in the multimodal perception process, leading to temporal deviations in data fusion. Such problems can seriously degrade the accuracy of the robot’s perception of the external environment and may even lead to decision errors and safety accidents.

To solve this problem, the message filter mechanism of ROS2 is particularly important. It aims to ensure that data from different sensors can be precisely synchronized in time, thereby improving the perception accuracy and overall performance of the robot. However, how to guarantee the temporal consistency of multi-sensor data fusion in ROS systems remains an active research frontier that requires further exploration and practice.
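In ROS2 this mechanism is provided by the message_filters package. The sketch below (topic names are placeholders) wires a camera subscriber and a LiDAR subscriber into an ApproximateTime synchronizer, so that the fusion callback only fires with a pair of messages whose timestamps are approximately aligned.

```cpp
#include <functional>
#include <memory>

#include <message_filters/subscriber.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/synchronizer.h>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

using Image = sensor_msgs::msg::Image;
using PointCloud2 = sensor_msgs::msg::PointCloud2;
using Policy = message_filters::sync_policies::ApproximateTime<Image, PointCloud2>;

class FusionNode : public rclcpp::Node {
public:
  FusionNode() : Node("fusion_node") {
    image_sub_.subscribe(this, "/camera/image");  // placeholder topic
    cloud_sub_.subscribe(this, "/lidar/points");  // placeholder topic
    // Queue size 10; the policy outputs message pairs with close stamps.
    sync_ = std::make_shared<message_filters::Synchronizer<Policy>>(
        Policy(10), image_sub_, cloud_sub_);
    sync_->registerCallback(std::bind(&FusionNode::fuse, this,
                                      std::placeholders::_1,
                                      std::placeholders::_2));
  }

private:
  void fuse(const Image::ConstSharedPtr &img,
            const PointCloud2::ConstSharedPtr &cloud) {
    // The two messages arrive here time-aligned and ready to fuse.
    RCLCPP_INFO(get_logger(), "stamps: %d.%09u / %d.%09u",
                img->header.stamp.sec, img->header.stamp.nanosec,
                cloud->header.stamp.sec, cloud->header.stamp.nanosec);
  }

  message_filters::Subscriber<Image> image_sub_;
  message_filters::Subscriber<PointCloud2> cloud_sub_;
  std::shared_ptr<message_filters::Synchronizer<Policy>> sync_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<FusionNode>());
  rclcpp::shutdown();
  return 0;
}
```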

Main challenges

A significant challenge for the real-time and reliability assurance framework of ROS is temporal consistency in multi-sensor data fusion. Because different sensors have varying sampling periods, and data preprocessing and transmission introduce random delays, it is difficult to achieve temporal synchronization when aggregating and integrating data from multiple sensors. Although the traditional message synchronization mechanism in ROS (ApproximateTime) employs a complex runtime prediction mechanism implemented in nearly a thousand lines of C++ code, it still struggles to synchronize multi-sensor data messages effectively. Temporal deviations in multi-sensor fused data directly degrade an autonomous machine's perception of the external environment, interfering with its ability to make high-quality intelligent decisions and potentially triggering serious safety accidents. Keeping the timestamp differences between multi-sensor data within acceptable thresholds, so as to guarantee the temporal consistency of message synchronization, has become an urgent challenge in ROS systems.
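Stated compactly (the timestamp notation t(·) and threshold Δ are introduced here for exposition, not taken from the ROS API), the requirement is that every synchronized output set satisfy:

```latex
% An output set (m_1, \dots, m_n), one message per sensor channel,
% is temporally consistent iff its largest pairwise timestamp gap
% stays within the user-specified threshold \Delta:
\max_{1 \le i < j \le n} \bigl| t(m_i) - t(m_j) \bigr| \;\le\; \Delta
```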

Algorithm design

Our research revealed the limitations of the traditional message synchronization mechanism, ApproximateTime, in ROS systems. Across multiple rounds of synchronization decisions, ApproximateTime focuses excessively on optimizing timestamp alignment within each individual round, but fails to consider the negative impact of this strategy on subsequent rounds. This short-sightedness can cause the overall temporal synchronization quality to deteriorate over time.

This paper proposes a novel message synchronization mechanism for ROS systems called Synchronize Earliest Arrival Messages, abbreviated as SEAM. Compared to the traditional ApproximateTime mechanism, SEAM is concise and elegant, requiring only about a hundred lines of C++ code to implement. At its core, SEAM breaks away from the excessive pursuit of optimal timestamp alignment within a single round; instead, it prioritizes synchronizing the earliest arriving messages, provided that their timestamp differences remain within an acceptable threshold. This approach offers a fresh perspective on message synchronization in ROS systems. The name SEAM is derived from “seam,” suggesting that each round of timestamp alignment should find a “suitable seam”: one that may appear suboptimal in the short term but leaves more room for optimization in subsequent rounds.

Figure 1: Example of running the ApproximateTime algorithm

The traditional ApproximateTime algorithm and our proposed SEAM algorithm share the same trigger condition: both are activated when all message queues are non-empty. However, they differ significantly in how they handle messages. The ApproximateTime algorithm predicts future messages based on the message with the currently largest timestamp and includes these predicted messages in the candidate pool when selecting the output message set with the smallest timestamp differences. If the selected output set contains predicted messages, the algorithm waits for them to arrive before combining them with the already-arrived messages to form the output.

Figure 1 illustrates an example of the ApproximateTime algorithm’s operation on a message synchronizer with two message channels. The timeline represents the progression of time, and differently colored areas distinguish between already-arrived messages and predicted future messages. We assume that the maximum acceptable threshold for message synchronization is 5, and synchronized messages cannot be reused. At the point when the ApproximateTime algorithm is triggered, it selects between already arrived messages and predicted messages from the other channel, aiming to construct an output message set with the smallest timestamp differences. This selection mechanism is followed in each round of the algorithm’s operation. However, it is worth noting that the excessive pursuit of minimizing local timestamp differences may result in a selected message set with timestamp differences exceeding the predefined threshold, as shown in the figure. This example demonstrates that while the ApproximateTime algorithm aims to minimize the timestamp differences in the current output message set, it may sacrifice the real-time performance and correctness of global message synchronization.

Figure 2: Example of running the SEAM algorithm

In contrast, the SEAM algorithm does not pursue minimal local timestamp differences; it focuses on the overall real-time performance of the system. It does not rely on predicting unarrived messages but instead works only with already-arrived messages to form an output message set quickly, thereby enhancing the system’s real-time response capability. When the SEAM algorithm is triggered, it searches the message queues for messages that satisfy the timestamp difference constraint relative to the currently arrived message with the largest timestamp. If eligible messages exist in all queues, the algorithm selects from each queue the earliest-arrived message that meets the timestamp difference requirement for output. In this way, SEAM outputs sets of messages with the earliest possible arrival times and timestamp differences within the preset threshold, optimizing the overall real-time performance of the system.
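The following is a simplified sketch of this selection rule, reconstructed from the description above rather than taken from the paper's code; the `Msg` type, the `seam_try_output` function, and the nanosecond threshold `delta` are illustrative names, and per-channel messages are assumed to arrive in timestamp order.

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <optional>
#include <vector>

// Each channel buffers its pending messages in arrival order; `stamp`
// is the message timestamp in nanoseconds.
struct Msg { int64_t stamp; /* payload omitted */ };

// One synchronization round: returns an output set (one message per
// channel) whose pairwise stamp differences are at most `delta`, or
// nothing if no such set exists among the arrived messages.
std::optional<std::vector<Msg>>
seam_try_output(std::vector<std::deque<Msg>> &queues, int64_t delta) {
  // Trigger condition: every queue must be non-empty.
  for (const auto &q : queues)
    if (q.empty()) return std::nullopt;

  // Reference point: the largest timestamp among all arrived messages
  // (the back of each queue, given in-order arrival per channel).
  int64_t pivot = queues[0].back().stamp;
  for (const auto &q : queues)
    pivot = std::max(pivot, q.back().stamp);

  // Messages older than pivot - delta can never be synchronized again
  // (future reference points only grow), so drop them.
  for (auto &q : queues)
    while (!q.empty() && pivot - q.front().stamp > delta)
      q.pop_front();

  // If some channel has no eligible message, wait for more arrivals.
  for (const auto &q : queues)
    if (q.empty()) return std::nullopt;

  // Output the earliest-arrived eligible message from each channel;
  // synchronized messages are not reused. Every selected stamp lies in
  // [pivot - delta, pivot], so pairwise differences are within delta.
  std::vector<Msg> out;
  for (auto &q : queues) {
    out.push_back(q.front());
    q.pop_front();
  }
  return out;
}
```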

Figure 2 depicts an example of the SEAM algorithm’s operation on a message synchronizer with two message channels. To facilitate an effective comparison, the input message sequence is kept consistent with Figure 1. At the point when the SEAM algorithm is triggered, it examines the timestamps of already arrived messages in other queues based on the timestamp of the latest arrived message to determine if they meet the output requirements. This mechanism allows the SEAM algorithm to form valid outputs without waiting for predicted messages. Experimental results show that, under the same input message sequence and runtime conditions, the SEAM algorithm is able to construct more output message sets compared to the traditional ApproximateTime algorithm, and each set of messages satisfies the preset threshold requirements for timestamp differences. This not only improves the overall real-time performance of the system but also enhances the accuracy of message synchronization. Therefore, the SEAM algorithm demonstrates its superiority and effectiveness in the field of real-time message synchronization.

Figure 3: Theoretical Proof for the Optimality of the SEAM Algorithm

To establish the validity and superiority of the SEAM mechanism, we conducted an in-depth theoretical study and proved its optimality. The key conclusion is that, within any given time period, SEAM always completes the largest possible number of effective message synchronization rounds satisfying the timestamp difference threshold.
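Using the threshold notation Δ from above, the result can be paraphrased as follows (an informal restatement, not the paper's formal theorem):

```latex
% Over any interval [0, T] and any input message sequence, let
% N_SEAM(T) be the number of valid output sets (pairwise stamp gaps
% at most \Delta) produced by SEAM, and N_A(T) the number produced
% by any alternative synchronization policy A. Then:
N_{\mathrm{SEAM}}(T) \;\ge\; N_A(T)
```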

Evaluations

Experimental validation shows that the SEAM mechanism significantly improves the performance of ROS systems compared to the traditional ApproximateTime mechanism. Specifically, SEAM achieves up to a 70% higher message synchronization success rate while reducing the computation time of a single synchronization round by 90%. This research addresses real-time and reliability challenges in ROS systems and offers a new perspective on the evolution of ROS message synchronization mechanisms. Moreover, SEAM paves the way for deploying ROS systems in complex application environments such as autonomous driving and industrial automation. We anticipate that SEAM will provide efficient and accurate time synchronization for a broader range of intelligent real-time embedded systems.

Figure 4: Experimental Results on Fusion Success Rate and Computation Time under Different Parameter Configurations

Authors
Jinghao Sun is an associate professor in the School of Computer Science and Technology at Dalian University of Technology.
Nan Guan is an associate professor in the Department of Computer Science at City University of Hong Kong.

Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors, and do not represent those of ACM SIGBED or its parent organization, ACM.