The Coevolution of Humans and Machines
I am an engineer. I design things that never before existed. For me, these “things” are mostly software, although I have in the past also designed some hardware. For most of my 40 years doing this, I harbored a creationist illusion that these “things” were my own personal progeny, the pure result of my deliberate decisions, my own creative output. I have since realized that this is a bit like thinking that the bag of groceries that I just brought back from the supermarket is my own personal accomplishment. It ignores centuries of development in the technology of the car that got me there and back, agriculture that delivered the incredible variety of fresh food to the store, the people risking virus exposure to make goods available, and many other parts of the socio-cultural backdrop against which my meager accomplishment pales.
In my new book, The Coevolution (MIT Press, 2020), I coin the term “digital creationism” for the idea that technology is the result of top-down intelligent design. This principle assumes that every technology is the outcome of a deliberate process, where every aspect of a design is the result of an intentional, human decision. I now know, 40 years later, that this is not how it happens. Software engineers are more like agents of mutation in a Darwinian evolutionary process. The outcome of their efforts is shaped more by the computers, networks, software tools, libraries, and programming languages than by their deliberate decisions. And the success and further development of their product is determined as much or more by the cultural milieu into which they launch their “creation” than by their design decisions.
The French philosopher known as Alain (whose real name was Émile-Auguste Chartier) wrote about fishing boats in Brittany:
Every boat is copied from another boat. … Let’s reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. … One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others.
Boat designers are agents of mutation, and sometimes their mutations result in a “badly made boat.” From this perspective, perhaps Facebook has been fashioned more by teenagers than by software engineers.
Today, the fear and hype around AI taking over the world and social media taking down democracy have fueled a clamor for more regulation. But if I am right about coevolution, we may be going about the project of regulating technology all wrong. Why have privacy laws, with all their good intentions, done so little to protect our privacy, instead merely overwhelming us with small-print legalese?
Under the principle of digital creationism, bad outcomes are the result of unethical actions by individuals, for example by blindly following the profit motive with no concern for societal effects. Under the principle of coevolution, bad outcomes are the result of the procreative prowess of the technology itself. Technologies that succeed are those that more effectively propagate. The individuals we credit with (or blame for) creating those technologies certainly play a role, but so do the users of the technologies and their whole cultural context. From this perspective, Facebook users bear some of the blame, along with Mark Zuckerberg, for distorted elections. They even bear some of the blame for the design of the Facebook software that enables distorted elections. If they had been willing to pay for social networking, for example, an entirely different software design might have emerged.
Under digital creationism, the purpose of regulation is to constrain the individuals who develop and market technology. In contrast, under coevolution, constraints can be about the use of technology, not just its design. The purpose of regulation becomes to nudge the process of both technology and cultural evolution through incentives and penalties. Nudging is probably the best we can hope for. Evolutionary processes do not yield easily to control.
Perhaps privacy laws have been ineffective because they are based on digital creationism as a principle. These laws assume that changing the behavior of corporations will be sufficient to achieve privacy goals (whatever those are). A coevolutionary perspective understands that users of technology will choose to give up privacy even if they are explicitly told that their information will be abused. We are repeatedly told exactly that in the fine print of all those privacy policies we don’t read. And, nevertheless, our kids get sucked into a media milieu where their identity gets defined by a distinctly non-private online persona.
I believe that, as a society, we can do better than we are currently doing. The risk of an Orwellian state (or perhaps worse, a corporate Big Brother) is very real. It has happened already in China. We will not do better, however, until we abandon digital creationism as a principle. Outlawing specific technology developments will not be effective. For example, we may try to outlaw autonomous decision making in weapons systems and banking. But as we see from election distortions and Pokémon GO, machines are very effective at influencing human decision making, so putting a human in the loop does not necessarily solve the problem. How can a human who is effectively controlled by a machine somehow mitigate the evil of autonomous weapons?
When I talk about educating the public, many people immediately gravitate to a perceived silver bullet: teaching ethics to engineers. But I have to ask, if we assume that all technologists behave ethically (whatever that means), can we conclude that bad outcomes will not occur? This strikes me as naïve. Coevolutionary processes are much too complex.
A few people are promoting the term “digital humanism” for a more human-centric approach to technology. This point of view makes it imperative for intellectuals of all disciplines to step up and take seriously humanity’s dance with technology. That our limited efforts to rein in the detrimental effects of digital technology have been mostly ineffective underscores our weak understanding of the problem. We need humanists with a deeper understanding of technology, technologists with a deeper understanding of the humanities, and policy makers drawn from both camps. We are quite far from that goal today.
Author bio: Edward A. Lee has been working on embedded software systems for 40 years, and after detours through Yale, MIT, and Bell Labs, landed at Berkeley, where he is now Professor of the Graduate School in EECS. His research is focused on cyber-physical systems, where he strives to make composable, secure, and verifiable timing-sensitive systems. Recently, he has branched out and published two books on the philosophy of technology.
Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors, and do not represent those of ACM SIGBED or its parent organization, ACM.