The future's new frontier: How AI might shape important parts of our lives

Nadia Thalmann, right, a professor at Nanyang Technological University in Singapore, talks on March 1 to a humanoid robot that she and her team created. With her brown hair, soft skin and expressive face, the humanoid is a new breed of robot that could one day, scientists hope, be used as a personal assistant or care provider for the elderly. (CNS/Edgar Su, Reuters)

Artificial intelligence can mimic our voices, dress the pope in Balenciaga, and control parts of our brains. Will it help, or doom us?

Editor's note: This article originally appeared in Angelus, the news magazine of the Archdiocese of Los Angeles. It is reprinted with permission.

Artificial intelligence (AI) has dominated news headlines in recent months, particularly revelations and claims about how the technology works, what it’s doing now, and what it could do in the future.

AI promises to transform areas of daily life such as work, health care, and education. Because AI has the capacity to change the way human beings interact with the world and with one another, leaders are asking for boundaries, in the form of regulation, to ensure its ethical use and to prevent catastrophe.

In order to better understand AI’s promises and perils, Angelus spoke to two Catholics who have tracked the technology’s rise from different vantage points: Justin Welter, senior vice president of Gaming at software publishing group Bidstack, who has worked in marketing and advertising in Silicon Valley for almost two decades, including at Facebook and Google; and Joseph Vukov, a philosophy professor at Loyola University Chicago whose research focuses on philosophy of mind and neuroethics.

Elise Ureneck: Let’s start by establishing a common definition of artificial intelligence. I understand it to be a computing process that uses algorithms to recognize patterns in large datasets and present them back to the user in an ordered, coherent way. Am I missing anything?

Justin Welter: It’s statistics at a massive scale. You have all kinds of information that’s coming in, and these large language models say, “Statistically we believe that the next thing should be this” … whether that’s a word, equation, or code. They determine what the next logical step would be.

Joseph Vukov: I would add that what counts as the “next step” varies from model to model. They are trained to do a specific task, so the next successful step is going to be relative to what you’re trying to get it to do.

Something like ChatGPT (OpenAI’s generative tool) is trained to produce human-like language, so what counts as success is whether or not the output looks like something that a human would write. It’s not trained to do other sorts of things, like track the truth or create meaningful speech.

Joseph Vukov. (OSV News/courtesy Joseph Vukov)

Ureneck: Where is AI already deployed?

Welter: Google’s use of predictive text in email, in which Gmail offers suggested text to complete sentences based on algorithms, is one example.

When it comes to the advertising space, it’s still to be determined where AI will land. I’m struggling to concretely understand the difference between artificial intelligence and machine learning. We use a lot of machine learning algorithms in online advertising to determine what types of ads are pertinent to show someone. We also use it for tracking, to verify whether the person we targeted, rather than someone else, actually downloaded an app.

Wherever there’s an opportunity to make money, there’s going to be more development.

Vukov: AI is also operating behind the scenes, which raises important ethical questions. In health care systems, it can do things like read charts. It turns out that certain machine learning algorithms are better than radiologists at picking out abnormalities in some types of scans.

There’s talk about using AI to help triage patients to determine which tests to order or what a diagnosis might look like. Given things like the biases that can factor into algorithms, these are things that we need to have our eyes on. AI could affect the way that health care is administered.

Ureneck: There’s a lot of talk about the promises and perils of AI. The New York Times, for example, has written about technology that could help “read the minds” of people who are unable to speak by mapping how parts of their brain light up — this is obviously promising. On the other hand, nearly every developer giving interviews has said something to the effect of, “If you’re not scared about what AI might do, something’s wrong.” What’s exciting to you about AI? What’s scary about it?

Vukov: I think there’s potential for AI to provide capacities to people who don’t otherwise have them. The tool in that Times story you mentioned, for example, was able to pick up on the semantics of words. It’s an exciting application for somebody who is unable to speak.

There are also possible applications for prosthetics. Elon Musk has been touting one of his many projects, Neuralink, which is developing a computer chip that goes into your brain. An algorithm could help you communicate with a prosthetic and move about better than current prosthetics allow.

My biggest worry is the erosion of trust that’s so essential to community and democratic life. You see this already in education, with the question: How do we know if an essay was written by a student or by ChatGPT? There’s an undermining of trust in the professor-student relationship. You can see how this will affect other relationships. We’re going to start to ask, “Am I interacting with a person or a chatbot?” “How can I tell if what’s written is by an AI or by a person?”

Lastly, there’s this really big worry about how models are getting trained. Are those who are training the models providing — intentionally or unintentionally — misleading information? I might assume a tool was trained on information that’s pertinent to the topic I’m exploring, when it actually wasn’t. Could that lead to a covert way of swaying people’s opinions?

A fake image of Pope Francis in Balenciaga generated by Midjourney AI. (Pablo Xavier/Midjourney)

Welter: My biggest fear is around the question, “What is truth?” The thing that scares me the most is images made with tools like Midjourney, which let you create any photo you can imagine. It was used to create that famous picture of the pope in the Balenciaga jacket.

What’s going to be interesting is telling what is real from what is not, and that includes voices, too.

Our iPhones have a new accessibility feature where you record your voice for 15 minutes and then Siri begins speaking in your voice. I’m worried about having my voice recorded anywhere, including my voicemail. If my voice can be recorded and manipulated or recreated, it could be used for things like accessing bank accounts. My voice is my fingerprint and security code.

There’s going to be regulation. Google, Facebook … they’re all going to want regulation, because it’ll allow them to move forward without having to address the public’s fears. I think there will be requirements for watermarks — every picture or thing created will have to have some type of watermark that says, “Made by AI.”

And there are a lot of trust issues with the media as it is. You can’t imagine how that will be amplified if anybody can go on the internet, create their own truth, put it on Twitter and claim, “This is what is happening in the Middle East or in Ukraine right now.”

Ureneck: How do you see AI affecting education?

Welter: I think it’s going to be interesting for careers like engineering. It will likely lower the bar to entry for engineering or coding. Will it make sense to go to coding school for a year when you can learn how to code pretty simply with ChatGPT or Bard [Google’s generative AI tool]?

I also think AI will widen the gap between public education and private education. Private schools can afford a maximum ratio of one teacher to 30 students. They can do oral exams. But in some public schools the ratio is one teacher to 50 or 60 students. As AI use spreads to essays and standardized tests, students will have opportunities to leverage it to do better.

Vukov: I agree with Justin. I’ve started making sure that students understand exactly what a large language model is and how machine learning works, and, in light of that, what its limitations are. ChatGPT can do a really good job constructing boilerplate, B-grade essays. It can’t do a good job of including robust citations or bringing someone’s personal experience into the essay. I think once students understand this, it clicks that AI can’t do the same thing they would do in an essay that they actually want to write.

It’s a sea-change moment in education. It’s forcing me to think more about what we’re teaching students, just like the internet made us think about the value of memorizing gobs of information. We can ask if students need to be synthesizing information into four- to five-page essays.

AI is not particularly good at evaluating meaning or ethics or religious frameworks for viewing the world or bringing personal experiences to bear on big ideas. I’m wondering, how can we prepare students in such a way that they’re bringing what’s distinctively human about themselves into their education and work?

Ureneck: The Surgeon General recently released a report detailing the “loneliness epidemic” in our nation. More than 75% of adults reported experiencing loneliness. We already know that teens’ mental health is abysmal. Some have proposed that AI robots might offer some relief. How do you see AI affecting our human relationships and well-being?

Vukov: I’d start with the Turing test, named after Alan Turing, who’s considered the father of computer science. Basically, the test says that if you can’t tell if the thing you’re interacting with is a computer or a human, that thing is conscious or sentient. That concept has led to functionalism — one of the leading ways of understanding who we are as psychological beings today.

Functionalists believe that what humans are is the functions we can perform. So if you can build a program that replicates those functions, that’s as good as being a human being. This is why we tend to anthropomorphize AI. We give them names like Siri and Alexa. That’s the trickle-down of the Turing test. We’ve reduced what a human being is to our psychological functions.

I watched the congressional hearings with Sam Altman (CEO of OpenAI) on regulation, and one thing they kept coming back to is that we need to know if the thing we’re interacting with is AI. The worry here is that we can’t tell the difference anymore. And that worry is predicated on the deeper assumption that AI is not a person.

Of course the Catholic tradition has tons to say about this.

Humans have souls; humans are embodied; humans are made in the image and likeness of God. We’re more than our functions. And that’s why we think, “Even if it can act like a human being and paint like a human being and talk like a human being, it’s not the real deal.”

Welter: Will AI replace human relationships or solve loneliness? I don’t see it. I think it’s important to emphasize the word “artificial” in artificial intelligence. One of the things that we as Christians understand is self-sacrifice, dying to ourselves within love and all that that entails. I just don’t see that happening with an algorithm. To really have an interaction with or to love someone requires at least one of those people to know what it means to sacrifice oneself.

Justin Welter. (Submitted photo)

Vukov: Could chatting with a chatbot in the evening help address some issues of loneliness? I wouldn’t be shocked if there was some study that came out that showed that a chatbot is particularly good at this kind of therapeutic intervention. But ultimately it’s a Band-Aid on deeper social problems, not a real fix. Human beings crave relationships and interactions with other human beings. And a superficial fix is something that ultimately leads to deeper wounds.

Ureneck: Some believe the greatest threat from AI is human extinction, based on the premise that an AI could one day build its own body. Eliezer Yudkowsky, considered a founding figure in the field of AI alignment, has written, “Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.” How do you respond?

Welter: I think it’s more likely that it’ll be humanity that destroys ourselves as opposed to an AI robot that ends the world. There are opportunities for nefarious activity. But to make the jump that this is what’s going to be our ultimate demise is a bit far-fetched. That kind of prediction makes a lot of assumptions, including that it becomes sentient. I just don’t see that happening. Original sin is probably more powerful than AI, and that’s ultimately where the battle will be.

Vukov: I’m on the same page as Justin. Human beings are fallen, and our fallenness is part and parcel of the things that we create, including AI. I’m a big sci-fi fan, but my take is that if AI leads to vast changes on the social scale, it’ll be T.S. Eliot’s version of the way the world ends — not with a bang, but with a whimper.

My big worry is not the giant robot trampling over cities and making us into its slaves, but more things like the spread of distrust and misinformation, the erosion of democracy, the failure to understand what a human being is in the way that the Church teaches.

I think there is the potential for big negative social consequences. And in that way it’s maybe more nefarious because the solution is not to build up a big army that can defeat the Terminator. It’s regulation and careful articulation of what our values are.

There’s an opportunity for Catholics and people of goodwill to step in and add their two cents about what makes a human being a human being, about why it is important to build up real communities of humans, why it is important to trust each other, what a good, healthy, functioning democracy looks like. It’s an opportunity to think about the bigger questions.

Elise Italiano Ureneck is a contributor to Angelus writing from Rhode Island.


