Computers Know How to Compromise Better than Humans
Could we learn how to compromise better from machines? Two Brigham Young University (BYU) computer science professors, Jacob Crandall and Michael Goodrich, seem to think so, thanks to a new algorithm they developed with colleagues at MIT and other international universities.
The algorithm was developed to teach machines not only to compete and win games, but also to cooperate and compromise, according to a study published in Nature Communications.
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” says Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
Researchers programmed machines with the algorithm, called S#, and then had the machines play a variety of repeated two-player games to see how well they could cooperate across different pairings: machine-machine, human-machine, and human-human. In most instances, machines running S# outperformed humans at finding compromises that benefited both parties.
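The published S# algorithm is considerably more sophisticated than anything shown here, but the experimental setup of repeated two-player games played across different pairings can be illustrated with a minimal sketch. Everything in it (the prisoner's dilemma payoffs, the tit_for_tat and noisy_human stand-in strategies, and the function names) is an illustrative assumption rather than the study's actual code:

```python
# Minimal sketch (not the published S# algorithm): pairing simple agents
# in a repeated two-player game and comparing how the pairings fare.
# The payoff matrix and both strategies are illustrative assumptions.
import random

# Prisoner's dilemma payoffs: (row player, column player)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, opponent defects
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not history else history[-1][1]

def noisy_human(history):
    """Stand-in for a human player who occasionally defects ('lies')."""
    return "D" if random.random() < 0.25 else tit_for_tat(history)

def play_match(strategy_a, strategy_b, rounds=50):
    """Run a repeated game; return each player's total payoff."""
    hist_a, hist_b = [], []  # each history: list of (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

# The three pairings tested in the study, with toy stand-in strategies.
for name, (s1, s2) in {
    "machine-machine": (tit_for_tat, tit_for_tat),
    "human-machine":   (noisy_human, tit_for_tat),
    "human-human":     (noisy_human, noisy_human),
}.items():
    print(name, play_match(s1, s2))
```

Run as-is, the two consistently honest machines earn the most, which mirrors Crandall's observation below that two honest, loyal humans would have done as well as two machines.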
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” says Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
The algorithm-programmed machines were also given a small vocabulary of “cheap talk” phrases. In matches against human participants, if the human cooperated, the machine would reply with “Sweet. We are getting rich!” or “I accept your last proposal.” When participants lied, betrayed the machines, or backed out of a deal, the machines would respond with trash talk such as “Curse you,” “You will pay for that!” or “In your face!”
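The article does not detail how the phrases were triggered, but the mechanism as described, canned phrases keyed to the opponent's last action, amounts to something like the following sketch. The phrases are the ones quoted above; the event names and the random selection are assumptions, not the published design:

```python
# Sketch of the "cheap talk" mechanism: canned phrases keyed to what the
# opponent just did. Event names and selection logic are illustrative.
import random

CHEAP_TALK = {
    "cooperated": ["Sweet. We are getting rich!", "I accept your last proposal."],
    "betrayed":   ["Curse you.", "You will pay for that!", "In your face!"],
}

def respond(event):
    """Return a canned phrase for the given game event, or None."""
    phrases = CHEAP_TALK.get(event)
    return random.choice(phrases) if phrases else None

print(respond("cooperated"))  # e.g. "Sweet. We are getting rich!"
print(respond("betrayed"))    # e.g. "You will pay for that!"
```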
Interestingly, cheap talk doubled the amount of cooperation regardless of the game or pairing, and human participants often could not tell whether they were playing against a human or a machine.
Crandall thinks that these research findings could have long-term implications for human relationships.
“In society, relationships break down all the time,” he says. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”