AI not intelligent like its maker

Paper by UC’s Anthony Chemero explains how AI thinking differs from human thinking

The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind. 

The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that the understanding of AI is muddled by linguistics: that while AI is indeed intelligent in some sense, it cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”

Anthony Chemero often publishes on the interface between humans and technology. Photo/Joseph Fuqua/UC Marketing + Brand.

By our everyday use of the word, AI is definitely intelligent, and intelligent computers have existed for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour. To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who posted it.

“LLMs generate impressive text, but often make things up whole cloth,” he says. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean. LLMs differ from human cognition because they are not embodied.”

The people who made LLMs call it “hallucinating” when the models make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just construct sentences by repeatedly adding the most statistically likely next word; they don’t know or care whether what they say is true.
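
The next-word loop Chemero describes can be seen in a few lines of code. Below is a minimal sketch of greedy next-token decoding using the Hugging Face transformers library and the small public gpt2 checkpoint; the model choice and the argmax (greedy) decoding rule are illustrative assumptions, since production chatbots typically sample from the probability distribution rather than always take the single most likely word.

```python
# Minimal sketch of the "most statistically likely next word" loop.
# Assumptions: Hugging Face `transformers` is installed and the small
# public "gpt2" checkpoint stands in for a production LLM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Start from a prompt; the model has no notion of whether its
# continuation will be true, only of what is statistically likely.
input_ids = tokenizer.encode("The University of Cincinnati is", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):  # append 20 tokens, one at a time
        logits = model(input_ids).logits   # scores for every vocabulary token
        next_id = logits[0, -1].argmax()   # greedy: the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in this loop checks the claim being produced; truth never enters the objective, which is the point Chemero is making.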

And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.” 

The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are because humans are embodied: living beings who are always surrounded by other people and by material and cultural environments.

“This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.  

The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding, “Things matter to us. We are committed to our survival. We care about the world we live in.”

Feature photo at top of Chemero by Andrew Higley/UC Marketing + Brand.
