YDS conference probes AI through theological, ethical lenses

By Timothy Cahill ’16 M.A.R.

In the 30 months since ChatGPT was released to the public, the incursion of artificial intelligence into our lives has expanded as rapidly as any technology in recent history. Scarcely a day goes by, scarcely a topic is proposed, scarcely a problem arises, in which the power of AI cannot be inserted.

The extravagant attention and corresponding claims are ever-mounting. Over a 30-day period earlier this year, a cursory search of The New York Times turned up more than 75 stories featuring AI and the high-tech universe that controls it, and the actual number is probably higher. In February, a headline in The Wall Street Journal declared, “AI Could Usher in a New Renaissance.” Last month, the Nobel Prize-winning AI entrepreneur Demis Hassabis, co-founder of Google’s machine-learning project DeepMind, told 60 Minutes that artificial intelligence could eventually eradicate all human disease and usher in an age of “radical abundance.” In the next decade, Hassabis predicted, advances in artificial general intelligence—machines with human-level cognition—are “going to change pretty much everything about the way we do things.”

In the midst of these impending wonders, concern has grown among a widening circle of pundits, philosophers, theologians, and technologists that in ceding our civilization—and possibly our existence—to smart machines, we may know not what we do.

Questions about the meaning of AI and our future with (or under) it animated “AI and the Ends of Humanity: Thinking Theologically After ChatGPT,” a conference sponsored by Yale Divinity School and the Yale Digital Ethics Center in April. The conference reflected on how, in the face of technology’s obvious promise, but also given its potential for deception, chaos, even enslavement, society can ensure a future that protects the dignity, flourishing, and sovereignty of humanity.

The two-day event drew a capacity crowd of 150 participants, who gathered in the light-filled Old Refectory to hear keynotes and papers from academics and others representing the fields of computer science, philosophy, theology, and the emerging discipline of digital ethics. The titles of the presentations alone indicate the breadth of speculation the conference pondered. “Will God Speak to Intelligent Robots?” asked Marius Dorobantu of Vrije Universiteit Amsterdam. Brian Cutter, a philosopher from Notre Dame, spoke of “AI Consciousness, Personhood, and Ensoulment,” and David Zvi Kalman, of the Shalom Hartman Institute of North America, queried, “Who’s Afraid of AI Personhood?” Luciano Floridi, Founding Director of Yale’s Digital Ethics Center, penetrated to the heart of the gathering’s divinity school setting when he speculated on “A Digital Deity: AI as the Ultimate Other and the Emergence of Techno-Eschatology.”

Three YDS faculty members organized the conference: Jennifer Herdt, Senior Associate Dean for Faculty Affairs and Gilbert L. Stark Professor of Christian Ethics; Kathryn Tanner, Frederick Marquand Professor of Systematic Theology; and John Pittard, Associate Professor of Philosophy of Religion. Herdt, acknowledged by her co-organizers as the driving force behind the gathering, observed that Genesis depicts humanity as the first form of “artificial intelligence,” having been created in God’s image from dust and divine breath.

The concept of a constructed helpmate for humanity goes back to ancient Jewish writings; a famous example is the Golem of Prague. The golem, like Adam, is created from mud, and through misadventure it goes on a destructive rampage. “Today, our golems are no longer made of clay,” Herdt told the conference. The risk that AI, like the ancient monster, will “escape our control and wreak devastation” must be heeded, she observed, especially when advanced technology is driven by financial profit, not “loving relationships, environmental stewardship, equality.”

For centuries, the capacity for reason was the thick black line humanity drew around itself to mark our species as unique among creation. Nonhuman movie and TV characters like Data, HAL, and R2-D2 long ago undercut that distinction, and advanced AI is growing cognitively more and more “human-like” by the hour.

If reason, by definition, is the power of the mind to think, understand, and form judgments through logic, then AI is already vastly more rational than we are. We rely more and more on artificial intelligence to do the heavy lifting in everything from drafting emails to mapping the structure of complex proteins. If we are to make a case for human exceptionalism, we must look for our essence elsewhere.

For William Schweiker, that essence lies in our sense of conscience.

Schweiker, Professor of Theological Ethics at the University of Chicago Divinity School, was the first of the conference’s three keynote speakers. In many ways, his paper, “Conscience and the Ends of Humanity: Christian Humanism and Artificial Intelligence,” set the tone for much of the conference that followed.

The rapid advances in AI, Schweiker began, have brought greater urgency to reflections on what distinctiveness, if any, human beings possess. The emerging “Age of AI” should be understood as more than a time when humanity is developing a new class of powerful tools and capacities. The moment should be viewed, he said, as a “means to get at our humanity, its meaning and worth.”

Human beings possess the ability to “take a stance towards themselves, others, and their communities” on “standards of good and bad, good and evil, right and wrong, just and unjust, virtuous and vicious, love and hate,” he said. This innate moral compass plays upon us as conscience, defined by the Oxford English Dictionary as the “internal acknowledgement or recognition of the moral quality of one’s motives and actions; the sense of right and wrong as regards things for which one is responsible.”

“We are moral creatures insofar as we can and do obey or disobey conscience,” Schweiker told the conference. This “moral screen of conscience” is “an ontological rather than psychological or sociological fact”—a truth that, ironically, is demonstrated by our proclivity to act against our better judgment.

“The oddity of human life,” he observed, “is that we can be against ourselves, hypocrites in the depths of our being and in our relation to others.” This freedom to choose the wrong action is intrinsic to our natures. The “good life,” an inner well-being of integrity, authenticity, and truth, is hard won by “confronting conscientiously our hypocrisy through active dedication to the projects of responsibility with and for others.”

The eternal wrestle of conscience—to at once be guided by our responsibility to do right, and to constantly fall short of satisfying its innate wholeness—is uniquely human. Artificial intelligence can be programmed to be responsible, Schweiker acknowledged, but not “in a condition marked … by the freedom for hypocrisy. That odd capability is, at least for now, lacking in AI in its rule-following existence.”

Our exceptionalism, the essence of our goodness, lies “not in reason but in human dignity and freedom, autonomy,” the professor said.

Schweiker called on those listening to practice “a theological humanism dedicated to respecting and enhancing the integrity of life against what demeans and destroys” it.

The work of ethics and morality today is to communicate the power and freedom of conscience in a world “in which human beings must live with other forms of intelligence, such as AI, without fear of the loss of our humanity, but also without the desire to somehow by technology escape the human condition.”

“In an age in which forces seek to warp and distort conscience, to deny the goodness of being alive, that revels in falsehoods,” Schweiker concluded, “the task of theology is to challenge these idols.”

Schweiker’s keynote inspired nearly an hour of questions and lively discussion, much of it centered on the various potential threats technology presents to human well-being, from the quest for a “transhumanism” that seeks to defeat death by fusing human consciousness to machine intelligence, to the potential for an alliance of government, technology, and business to control human liberty.

Uneasiness around the sinister potential of artificial intelligence was the conference’s most recurring motif. As YDS Dean Greg Sterling noted in his welcoming remarks, we are compelled daily into a condition of ambivalence toward technology, at once enjoying the convenience it affords, while knowing that its ubiquity creates a level of intrusion that serves masters beyond our knowing.

“We cannot depend on the corporations that are developing AI to ask the questions you are [asking] here,” he told the assembly. “And they must be asked.”

Many of the presentations raised concerns about the capacity of artificial intelligence to devalue life as we know it, by influencing elections, eliminating jobs, stripping life of dignity and meaning, and so on. One speaker observed that trusting chatbots to do one’s intellectual work is like “arriving without ever taking the journey, much less without enjoying the journey.” Another noted that surrendering craft and mechanical skills to advanced automation threatens the sense of virtue and “moral formation” that perfecting a skill can instill.

At one point on the second day, a questioner drove to the heart of the anxiety that seemed to hang over all the discussions. “I think there’s a general sense of recognition,” this man said, “that nothing we discuss here will influence what technology does.”

Indeed, whenever a presentation focused on the threat of AI running amok, the underlying fear was always that the billionaire “tech bros” who are developing it, and the infrastructure of scientists, coders, investors, and lawmakers who support them, are already beyond our control.

M. Wolff, Associate Professor of Religion at Augustana College, warned that “AI is embedded within a capitalist framework that lacks responsibility and accountability,” where the only “freedom” is “the freedom to become part of the market.” The current economic model, Wolff said, allows for exploitation of workers, negative environmental impacts, vast inequities of access, and significant issues of bias in both training and programming.

Ted Peters, Professor Emeritus at Union Theological Seminary, followed with a report on the existential risks posed by technological superintelligence and those developing it.

Peters invoked Oxford philosopher Nick Bostrom’s fear that machine intelligence, and the “intelligent automation” it promises as AI is married to robots, will be “the last invention Homo sapiens will ever have to make.” Almost every new technological advancement, Peters noted, “leads to a moral divide.”

Many who promote AI insist it is simply a powerful new tool in the human toolbox. But Peters underscored a recurring idea of the conference: that technology, more than a tool, is a potent way of imposing the values of its creators on the world. And humans, he noted, “pass their penchant for sin onto their creations.”

He spoke of a tech “monarchy” that foresees a future in which individual freedom is radically curtailed to serve the purposes of the machines and their masters. He named Curtis Yarvin, founder of an anti-democratic movement known as the Dark Enlightenment; Peter Thiel, who funds Yarvin and seeks to establish Bitcoin as an international currency; Elon Musk, a disciple of Yarvin; and Vice President J.D. Vance ’13 J.D., who was mentored by Thiel.

Peters’ warnings seemed to echo a concern that Google’s Hassabis expressed to 60 Minutes, of so-called “bad actors” who would repurpose artificial intelligence “for harmful ends.” Hassabis regards AI systems as “amazing tools” to enhance human life, but allows that, in the face of their potential dangers, “we need new, great philosophers to come about, hopefully in the next five, 10 years, to understand the implications of this.”

Certainly, many such thinkers were in the room during those two days at YDS. Repeatedly, they spoke of the need for ethicists to defend society by articulating guardrails for AI. But the philosophers cannot work in a vacuum. As Schweiker commented to me during a break between sessions, it is up to us, as citizens and voters, to become overseers of our own future.

“The public has to articulate and insist on a baseline ethic for AI,” he said, “and not leave it up to the technologists.”

Timothy Cahill ’16 M.A.R. is a writer specializing in religion and the arts.
