
Time for a timeout on ‘horrors’ of A(lien) I(ntelligence)

World faces a real risk to human existence

STEVE PETERSEN Steve Petersen is a professor of philosophy at Niagara University.

I have spent most of my life as a techno-optimist. I believe the world has gotten better on average over the centuries — I would rather be a random person in 2023 than a random person at any other time in history. This progress is largely thanks to advances in science.

I started working in the philosophy and ethics of artificial intelligence in large part because I was enthralled by its potential as the most transformative technology of my lifetime.

Google’s DeepMind says its mission is to “solve intelligence” and from there use the enhanced intelligence to solve all our other problems — global poverty, climate change, cancer, you name it. I find this vision compelling, and I still believe AI has that potential.

But I have reluctantly come to believe that the path there is much narrower and much more dangerous than I once hoped.

This is why I signed the Future of Life Institute’s open letter calling for a moratorium on further development of the most powerful modern AI (notably “large language models” like OpenAI’s GPT).

First, there are dangers of AI already present but quickly amplifying in both power and prevalence: misinformation, algorithmic bias, surveillance and intellectual stultification, to name a few. I think these worries are already sufficient to call for more reflection before we proceed.

My colleague Phil Woodward has pointed out that though OpenAI has promised to proceed cautiously, it has released to the public a perfect cheating machine that has already done to higher education what publishing an easy recipe for an undetectable athletic enhancement would do to professional sports.

No doubt ChatGPT and its successors will have many positive effects on education, too — but the disruption in the meantime is undeniable and not obviously for the best.

OpenAI is arguably one of the best-intentioned AI outfits out there, but intentions have a funny way of being warped when profit motives are also on the line. That one of OpenAI’s most notable founders (Elon Musk) also signed FLI’s letter is telling.

Danger’s here and now

These near-term concerns are not the whole story, though.

Like many, I am convinced the long-term “existential risk” is very real.

About a decade ago I read the preliminary papers behind Nick Bostrom’s book “Superintelligence,” which argues that 1) AI is likely to self-amplify until it reaches a level of scientific and technological sophistication far beyond what humans have, 2) once it reaches that state, humans will have essentially no say in what happens next, and 3) it is very hard to make sure such an AI will have our true interests at heart.

Since reading that work I have gone through some of the classic stages of grief. First, I was in denial; I wrote an academic response to Bostrom in which I thought I could show his arguments were misguided.

The more directly I engaged his book, though, the more I realized he had already considered my objections and refuted them. In the years since, it’s been a mix of bargaining, depression and acceptance of the fact that advances in my much-beloved AI are a serious risk to human existence.

These arguments are not always portrayed well in the media. Like many subtle issues, arguments for the position don’t fit well into a sound bite or 280 characters, but apparent takedowns (of oversimplified, strawman versions) do.

Popular science fiction, especially, can mislead our imagination in two opposing directions. First, it can give us the impression artificial general intelligence (AGI) is in the far future or pure fantasy. But we have started to see hints of AGI now.

No sign of ‘humanity’

And even if real AGI is still far off — say 50 years or more — the hurdles are at least as hard as those of climate change, and the stakes at least as high.

Second, because science fiction is ultimately about human concerns (and its AI must be portrayed by human actors), we are used to the idea that AI will be like us in most ways. But, as Yuval Harari, Tristan Harris and Aza Raskin recently put it, we are in the process of summoning a truly alien intelligence.

AI will not share our biological history and so will not have our contingent wants.

In this space, I can only urge the curious and concerned to engage the nuances. Myself, I now devote most of my research time to the “alignment problem”: roughly, the problem of trying to make sure the goals of a superintelligent system are sufficiently aligned with ours to enable human flourishing.

This is a truly interdisciplinary field: It needs computer scientists, ethicists, psychologists, formal epistemologists, governance experts, neuroscientists, mathematicians, public-relations experts, engineers, economists . . . and it needs many more of each.

If you find yourself wanting to know more, you might start with Brian Christian’s excellent overview, “The Alignment Problem.” For those who want to help but aren’t sure how, a good place to begin is the website 80,000 Hours.

As a philosopher I am often haunted by a phrase from “Superintelligence” that AI alignment is “philosophy with a deadline.”

Lately, as we’ve all noticed, that deadline has shortened dramatically.
