Can Machines Collectively Think?

The idea of creating machines able to reason like humans is not new. Alan Turing posed the question "Can machines think?" in his seminal paper "Computing Machinery and Intelligence", which gave rise to the imitation game, alongside his other work on computability and the premises of artificial intelligence. Today, with the increased prevalence of i) autonomy and ii) connectivity of compute systems, the question of whether machines can think, and further think collectively, is more relevant than ever if we are to sustain the development of future compute systems.

Future Compute Systems

Future compute systems will be autonomous and connected. The IEEE International Roadmap for Devices and Systems (IRDS) has classified autonomous systems as one of the main application sector drivers of society and technology development in the next decade, predicting that autonomous machines will completely revolutionize our daily life and our economy [1]. Autonomous systems are already becoming integral parts of many application domains, such as mobility, production systems, healthcare, smart homes, and smart energy systems. IRDS further suggests that the impact of autonomous machines on our society is likely to be much deeper and broader than that of any other information technology revolution we have experienced so far. The design of future autonomous systems should therefore be at the heart of both research and technology development. Technology supporting the connectivity of compute systems, on the other hand, is emerging rapidly and is likewise driving the development of future distributed intelligent systems. With the development of the next generation of wireless communication, e.g., 6G, more future autonomous systems, for instance in automated driving, will depend on cooperation with smart infrastructures enabled by a compute continuum ranging from device through edge to cloud. Infrastructure-based autonomous driving relies on cooperation between intelligent roads and intelligent vehicles sharing data, currently speed and location in the form of collective awareness messages. This approach is deemed not only safer but also more economical than traditional on-vehicle-only autonomous driving.
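To make the idea of infrastructure-to-vehicle data sharing concrete, here is a minimal sketch of such an exchange, assuming a hypothetical message that carries only the speed and location mentioned above. The CollectiveAwarenessMessage and shared_picture names and the field layout are illustrative assumptions, not a standardized message format.

```python
from dataclasses import dataclass

@dataclass
class CollectiveAwarenessMessage:
    """Hypothetical, minimal awareness message: a sender plus its speed and location."""
    sender_id: str       # e.g., a vehicle or a roadside unit
    speed_mps: float     # current speed in meters per second
    latitude: float
    longitude: float
    timestamp_s: float   # time the measurement was taken

def shared_picture(messages):
    """Aggregate received messages into a simple map of who is where and how fast."""
    return {m.sender_id: (m.latitude, m.longitude, m.speed_mps) for m in messages}

# A vehicle and a roadside unit exchanging their current state.
msgs = [
    CollectiveAwarenessMessage("vehicle-42", 13.9, 52.2741, 10.5283, 100.0),
    CollectiveAwarenessMessage("roadside-unit-7", 0.0, 52.2745, 10.5290, 100.1),
]
print(shared_picture(msgs))
```

In an actual deployment such messages would carry far richer content and be defined by standards; the point here is only the shape of the data being shared across the device-edge-cloud continuum.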

Machines that can Self-Decide

Autonomous systems can be defined as self-governed and self-adaptive systems that interact independently with the environment and solve complex tasks without human intervention. We are not referring here to systems that perform well-defined (repetitive) tasks like automata, but rather to systems that can operate in a dynamic and open environment, that is, perform tasks in an environment with uncertainties, due either to the inherent complexity of the environment or to the unpredictable manner in which it evolves. In such environments, systems must decide autonomously, e.g., about trajectory planning for maneuvering in self-driving cars, or about whether it is safe for a human to cross an intersection without colliding with other systems or pedestrians, in the case of a robot dog guiding a visually impaired person [2]. What we expect when delegating decision making to autonomous machines is good decision making. There are several quantitative measures, such as reliability and efficiency, for qualifying what good decision making actually means. In safety-critical fields like the above-mentioned examples of robotics for social care or automated driving, high levels of trustworthiness and good decision making need to be guaranteed by design. According to the Cambridge dictionary, reasoning is the process of thinking about something in order to make a decision. Both learning and reasoning are essential parts of intelligence. While a lot of effort in the research community is steered towards the development of machine-learning-based intelligence, very little effort is dedicated to computable forms of advanced reasoning to support intelligence. The design of autonomous systems goes far beyond the integration of individual learning-enabled components and raises challenges such as designing and verifying system behavior that evolves during operation. When deploying autonomous systems for safety-critical tasks, it is currently not possible to guarantee a 100% safe inference outcome (e.g., in perception systems for automated driving). By enabling collective sharing of information and collaboration between autonomous systems, we have today a real opportunity to increase reliability, for instance by building trustworthy models of the operational environment and collaboratively improving decision making.
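As a back-of-the-envelope illustration of why collective sharing can increase reliability, the sketch below computes the miss probability of a 2-out-of-3 majority vote over three perception systems, under the strong (and here purely illustrative) assumption that their errors are independent and identically distributed.

```python
# Minimal reliability sketch: majority voting over three independent detectors.
# Assumes independent, identically distributed errors, which real sensor stacks
# rarely satisfy; the numbers are purely illustrative.
p_miss_single = 0.01  # assumed miss probability of one perception system

# A 2-out-of-3 vote fails if at least two of the three detectors miss.
p_miss_vote = 3 * p_miss_single**2 * (1 - p_miss_single) + p_miss_single**3

print(f"single detector miss probability: {p_miss_single:.4f}")
print(f"2-out-of-3 vote miss probability: {p_miss_vote:.6f}")  # ~0.000298
```

Correlated failures (e.g., all on-board cameras blinded by the same glare) erode this gain quickly, which is one reason heterogeneous viewpoints, such as an infrastructure sensor complementing on-board sensors, are attractive.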

Collaborative Autonomy: “Can we be Safer Together?”

Reasoning in groups is what humans regularly do to collect more knowledge and improve their decision-making process, taking into account their own opinion and other (possibly heterogeneous) opinions. Collective intelligence in the social sciences refers to the exploitation of distributed intelligence for better decision making in groups. It is defined as a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of possibly different skills [3]. However, in the presence of conflicting information, aggregation becomes a major issue for trustworthy decision making based on collaborative data sharing. In the past decades, social choice theory and judgement aggregation have been extensively studied in philosophy, welfare economics, AI, and multi-agent systems, in order to provide a principled definition of the aggregation of individual attitudes into a social or collective attitude. One of the most fundamental results in judgement aggregation theory is an impossibility theorem [4], which states that no aggregation procedure can guarantee both rationality of the outcome and fairness of the aggregation at the same time. The proof of this result builds on majority voting and parallels Arrow's impossibility theorem, or Arrow's paradox, from social choice theory; it is for these contributions that Kenneth Arrow was awarded the Nobel Memorial Prize in Economic Sciences. Another fundamental result comes from distributed computing and is due to Leslie Lamport and his co-authors Robert Shostak and Marshall Pease, in their formulation of the famous Byzantine Generals Problem [5]: a set of distributed computer systems communicating through messages must cope with the (malicious or not) failure of one or more of its components, which send conflicting information to the other systems. In this case, it becomes exceedingly difficult to distinguish faulty statements from non-faulty ones based on collectively gathered information. In the companion paper, Reaching Agreement in the Presence of Faults [6], the same authors show that agreement on correct information (i.e., the ground truth) is possible exactly when fewer than one third of the systems are faulty (e.g., four computer systems of which at most one is faulty); with three systems of which one may be faulty, or more generally whenever an unknown subset comprising a third or more of the processors may send incorrect information, the problem becomes unsolvable. Interestingly enough, existing results often consider (implicitly or explicitly) homogeneous systems, as in classical distributed computing. Moreover, consensus and majority aggregation rules are dominant in classical collective reasoning theory and multi-agent autonomous systems. Both homogeneity of systems and majority as an aggregation rule are central assumptions of the seminal results around the impossibility theorem and the Byzantine Generals Problem. Currently, the heterogeneous nature of collaborative systems, i.e., the differences that can affect the quality of collected information when aggregated, is not sufficiently exploited.
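To make these two classical tensions concrete, the sketch below (a minimal illustration, not taken from [4] or [5]) reproduces the well-known discursive dilemma behind judgement aggregation: three agents each hold logically consistent judgements on two premises p, q and the conclusion p ∧ q, yet proposition-wise majority voting yields an inconsistent collective judgement. It also includes a one-line check of the classical Byzantine resilience bound n ≥ 3f + 1; the function names are illustrative.

```python
# Discursive dilemma: proposition-wise majority can produce an inconsistent
# collective judgement even though every individual judgement is consistent.
agents = [
    {"p": True,  "q": True,  "p_and_q": True},   # agent 1
    {"p": True,  "q": False, "p_and_q": False},  # agent 2
    {"p": False, "q": True,  "p_and_q": False},  # agent 3
]

def majority(judgements, proposition):
    """Proposition-wise majority vote over all agents."""
    votes = sum(j[proposition] for j in judgements)
    return votes > len(judgements) / 2

collective = {prop: majority(agents, prop) for prop in ("p", "q", "p_and_q")}
consistent = collective["p_and_q"] == (collective["p"] and collective["q"])

print(collective)                            # {'p': True, 'q': True, 'p_and_q': False}
print("logically consistent:", consistent)   # False -> the dilemma

# Classical Byzantine fault-tolerance bound: agreement among n systems is
# guaranteed only if fewer than one third are faulty, i.e., n >= 3f + 1.
def can_reach_agreement(n, f):
    return n >= 3 * f + 1

print(can_reach_agreement(3, 1))  # False: three systems, one faulty -> unsolvable
print(can_reach_agreement(4, 1))  # True: four systems tolerate one faulty
```

The same premise-versus-conclusion conflict is exactly what collaborating autonomous systems face when they aggregate logically connected beliefs, which is why the choice of aggregation rule cannot be an afterthought.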

A New Form of Distributed Systems?

Collaborative autonomous systems can be viewed as distributed systems in the classical computing sense, where components are autonomous agents that communicate and coordinate their actions based on the exchange of data. In the past, with the early emergence of parallel and distributed computing, several models of computation were introduced to deal with concurrency, synchronization, and deterministic execution sequences. Examples of popular models are communicating finite state machines, Kahn process networks, and synchronous data flow, which provide formal functional semantics of concurrency and synchronization. In this setting, managing the availability of shared data, in the sense of the ability to respond to reads and writes accessing shared variables, becomes a major focus. Further work, such as the Logical Execution Time (LET) model and system-level LET, focuses on timely synchronization where the exchange of data is known and predictable in a distributed system. When dealing with distributed autonomous systems, these models remain valid and have to be used as in any distributed cyber-physical system. Unlike classical distributed computing, however, where data represent a functionally computed flow from outputs to inputs, the data required for reasoning in collaborative autonomous systems can represent "beliefs", together with arguments that serve as "evidence" supporting these beliefs, communicated to collaboratively contribute to a safe decision. A "belief" can take the form of a predicate, such as "a pedestrian has been detected by the perception system of a given vehicle"; "evidence" for that belief can correspond to sensor quality or to forms of explainability of the vehicle's learning-enabled perception system. Given the dynamicity of the operational context, some data might be relevant for good decision making in a given context but not in others. Identifying proper interaction interfaces, i.e., determining which data needs to be exchanged and is relevant for reasoning to improve decision making, is of primary importance. Moreover, defining new aggregation rules that consider the semantics of data and support the heterogeneity of systems, as well as deriving conditions under which aggregation is possible and coherent, is highly needed in order to deliver on the promise of collaborative autonomy.
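As a rough sketch of what such a semantics- and heterogeneity-aware aggregation interface could look like, the example below attaches evidence (here simply a sensor-quality weight) to each belief and aggregates by evidence-weighted voting instead of plain majority. The Belief structure, the weighting scheme, and the decision threshold are illustrative assumptions, not a proposal made in this article.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A belief (a predicate) plus the evidence backing it."""
    source: str             # which system reported it (vehicle, roadside unit, ...)
    predicate: str          # e.g., "pedestrian_at_crossing"
    holds: bool             # the system's verdict on the predicate
    evidence_weight: float  # assumed proxy for sensor quality / explainability, in [0, 1]

def aggregate(beliefs, predicate, threshold=0.5):
    """Evidence-weighted vote on one predicate; returns the collective verdict."""
    relevant = [b for b in beliefs if b.predicate == predicate]
    total = sum(b.evidence_weight for b in relevant)
    in_favor = sum(b.evidence_weight for b in relevant if b.holds)
    return total > 0 and in_favor / total > threshold

beliefs = [
    Belief("vehicle-42",      "pedestrian_at_crossing", False, 0.4),  # occluded camera
    Belief("vehicle-17",      "pedestrian_at_crossing", True,  0.7),
    Belief("roadside-unit-7", "pedestrian_at_crossing", True,  0.9),  # elevated viewpoint
]
print(aggregate(beliefs, "pedestrian_at_crossing"))  # True: weighted evidence outweighs the dissent
```

With plain one-system-one-vote majority the outcome happens to be the same here, but with several low-quality dissenting sources it would flip; weighting by evidence is one simple way in which heterogeneity can be made to matter in the aggregation rule.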

Author:
Selma Saidi is full professor of Computer Engineering at the Technical University of Braunschweig in Germany. Her research focus is on the design, implementation and validation of innovative intelligent computing systems where connectivity, real-time and safety requirements play an important role. Her application areas include avionics, automotive and more recently human-assistive robotics. She is one of the initiators of the DATE Special Initiative on Autonomous Systems Design.

Sources:

[1] S. Liu and J.-L. Gaudiot, "Autonomous Machines White Paper," IEEE International Roadmap for Devices and Systems (IRDS), 2022.

[2] A. Malhotra and S. Saidi, "Summary Paper: Use Case on Building Collaborative Safe Autonomous Systems – A Robotdog for Guiding Visually Impaired People," arXiv:2403.01286 [cs.RO], 2024.

[3] P. Levy and R. Bononno, Collective Intelligence: Mankind's Emerging World in Cyberspace, Perseus Books, USA, 1997.

[4] C. List and P. Pettit, "Aggregating Sets of Judgments: An Impossibility Result," Economics & Philosophy, vol. 18, pp. 89–110, 2002.

[5] L. Lamport, R. E. Shostak, and M. C. Pease, "The Byzantine Generals Problem," ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382–401, 1982.

[6] M. C. Pease, R. E. Shostak, and L. Lamport, "Reaching Agreement in the Presence of Faults," Journal of the ACM, vol. 27, no. 2, pp. 228–234, 1980.

Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.