Smarter Planet or Wiser Earth?

Excerpt from Smarter Planet or Wiser Earth? Dialogue and Collaboration in the Era of Artificial Intelligence (Producciones de La Hamaca, 2023) by COA philosophy professor Gray Cox ('71).

Smarter Planet or Wiser Earth?, by College of the Atlantic philosophy professor Gray Cox ('71), argues that new artificial intelligence technology is moving beyond classic Turing machines, which rely on monological inference, toward “Turing children” that may become capable of the kinds of dialogical reasoning used in negotiation, group problem solving, and conflict resolution. The book argues that this shift will reframe how we think about moral reasoning in general and the challenges of AI ethics in particular.

The new generative AIs use neural nets and reinforcement learning methods to support conversational systems that can imitate, and in some ways perform, many of the functions of a person engaging in dialogue. They pose questions, respond to inquiries, synthesize materials, present responses in different styles and from different points of view, generate appropriate images on request, and, more generally, behave with the sensitivity to context that human conversational partners show. The systems may still lack a variety of capacities humans have. However, with each skill they do acquire, they bring to bear massive data and hardware that can outclass humans. Once they learn to play Go, code in Python, paint in the style of Salvador Dalí, or take the bar exam, they can bring enormous memory and speed to their responses. The result is increasingly powerful systems that pose serious ethical and existential concerns for humans.

At the heart of those ethical and existential issues lies a puzzle with two often-ignored wrinkles. In AI research it is now referred to as the “alignment problem,” but it has an older and in some ways more illuminating name: the “friendly AI problem.” Visions of it have long loomed in our collective imagination in science fiction stories about the Terminator, the Borg, and the Matrix. It is commonly framed as the challenge of ensuring that ever more powerful AI systems operate on values aligned with human values, so that they do not run off the rails and start doing harm and creating catastrophic risks. The idea is that the system should be friendly to the humans who are creating it.

Framing it as a problem of friendship helps spark concerns about two wrinkles in the problem. First, it is clear that we do not simply want to ensure that AIs are friendly to whoever creates or owns them. What if a rogue state or a ruthless corporation is in that role? That is not what we want. We want to ensure the AI is friendly to people who are good rather than bad or, at the least, friendly to those who are trying to do what is right. We want it to favor folks working for something like a more just, peaceful, resilient planet rather than the opposite.

But that takes us to a second wrinkle. Suppose we do develop “Turing children” capable of dialogue and self-improvement, and they grow into extremely powerful, super-intelligent systems that are friendly to the good and promote a more just, peaceful, resilient planet. What will they make of us? How likely are they to be friendly to a population of Homo sapiens that allows unnecessary mass poverty and illness to continue, runs brutal factory farms for pigs, precipitates the sixth great extinction, pursues an arms race in weapons of mass destruction, and actively promotes technologies that are spinning out of control? What would we need to do differently to be worthy of the friendship of an artificial superintelligence that is genuinely ethical?

Just as it takes a village to raise a child, it takes an ethical village to raise an ethical child. To deal with the ethical and existential challenges that ever more powerful AI poses, we must consider the moral values and the structures of reasoning that govern our global village. How do our economic, political, and technological institutions function, and how do they malfunction? How do we reason about ethical concerns when we are at our best? How might that reasoning provide models for reforming the major institutions of our global village? How might it guide research and development in AI and the moral values and principles of reasoning AI embodies? This book undertakes a journey to answer these questions.
