UC student explores AI art ethics
Alumna's research combines her love of art, information technology
ChatGPT and other artificial intelligence programs can’t own copyrights. A selfie taken by a monkey helped settle that issue.
But when a person uses generative AI to produce an image, the ethics and legalities get murkier.
Ro Basty, a second-year doctoral candidate in the University of Cincinnati’s School of Information Technology, is trying to bring some clarity to the subject for her dissertation. She’s studying the applications and implications of technology in art, a subject in which tastes and sensibilities can be contentious even without the added complications of AI.
“Art is very subjective,” Basty said. “The domain of ethics is very gray, very vague.”
Basty is a bit of an artist herself. As a hobby, she works with watercolors, colored pencils and markers to create portraits and nature scenery. She’s also created animations, which combines art with her other passion of information technology.
She's been interested in computers since childhood, when she watched her uncle work as a computer science professional. After earning a Bachelor of Science in information technology from Miami University and a master's in health informatics from UC, the art and IT enthusiast began working toward a doctorate from UC with studies focused on generative AI.
With the rapid advancements, Basty saw a need for a greater understanding of the tools, complications, opportunities, legalities and ethical issues of AI-generated art.
“I am an advocate of using technology if it’s used ethically and effectively,” said Basty, who works as a research assistant at the Impact Accelerator in UC's College of Education, Criminal Justice, and Human Services, helping to write AI guidelines and policies for the college. “Every technology comes with its own flaws and benefits. Uncertainty instills a lot of fear, a lot of apprehension. Promoting literacy will help embrace change.”
New tools
Computers have been used to create art since the 1950s, and computer-generated art started to gain prominence in the 1960s. With personal computers becoming more common and the creation of programs such as Adobe Photoshop, the 1990s and early 2000s saw the tools to digitally create art become more accessible.
“I saw there was this huge proliferation of digital arts coming out,” Basty said. “Even at that time, there were debates about which was more valuable, traditional art or digital art.”
While there are valid concerns, Basty said, she doesn’t think that should diminish the appreciation for digitally created art.
“Art is always valuable, no matter the platform, the genre, the medium,” she said. “Each type of art has its own appreciators. Whatever you create is always accepted, always needed, always indulged in. You just need to find the right audience.”
Computers created new opportunities for artists, helping to make works that are possible only through the combination of human creativity and machine ingenuity.
“It’s important to look at these AIs as a tool, similar to Photoshop, similar to other Adobe products,” Basty said. “Look at them as a tool you can cocreate, collaborate, brainstorm together and help you overcome some barriers. These tools can be very useful if you know how to use them ethically and effectively.”
Inspiration or theft?
A major concern that has arisen with the advent of AI-generated art has been infringement upon other artists’ work.
Some generative AI programs allow users to create a new image in the style of an established artist. That leads to questions of whether the new art steals from the original or is simply inspired by it.
“Anything we see comes from a source of inspiration, whether from nature, media or even another medium,” Basty said. “Some people get inspiration when they’re listening to music or watching a documentary or movie; that inspires them to create something. That goes back to how we define originality, how we define creativity.”
Even when something new is created with AI, ownership can be complicated. Does the person who entered the prompt deserve credit for creating an image or does the person who created the program deserve the credit?
There's been a push to redefine AI-generated works versus AI-assisted works, Basty said. AI-generated works would be any item created with little to no human intervention while AI-assisted works would be items created with more human contribution and intervention. Some people are pushing for AI-assisted art to be copyrightable, Basty said.
It’s already been decided, at least in the United States legal system, that the computer programs themselves don’t have any copyright claims. U.S. courts have ruled that nonhumans are ineligible for copyright protection, such as when a judge ruled that a monkey didn’t have an ownership stake of a selfie it took.
“When it comes to money, things get complicated,” Basty said. “The system of copyright and intellectual property is, in a way, affiliated with the economic system. Animals or the AI itself don’t have the need for any economic value, and that’s why the copyright is only for the human creator.”
Along with the final products, there are questions about the ownership of the text and images used to train AI models. Many of the works used in training are copyrighted.
It’s likely that copyrighted materials will have to be removed from AI training sets or their creators will have to be compensated for their work, Basty said.
“It’s a work in progress, and slowly but surely we’ll get some balance between what was the norm and practice and what can be introduced as a new norm in the future,” she said.
Potential for malice
Beyond ownership, there are also ethical questions about how AI image generators are used.
Images of Pope Francis wearing a puffer jacket went viral last year, fooling many people into believing they were actual photos. While those images may have seemed harmless, they showed how easily people can be tricked by a fake image and heightened concerns that images made with malicious intent could be just as deceptive and far more disruptive.
“The image of the pope with a puff jacket was a very, very benign thing,” Basty said. “There are other things coming along that have started to concern politicians, policymakers and society overall.”
Basty said it's imperative that generative AI users be transparent about how they create their art. Government support and regulations also likely will be needed, she said.
“When it comes to the use of these technologies, it’s the responsibility of the users, the developers and those involved in it to practice transparency and accountability for the works that are being generated,” she said. “If you’re working with the generators, you’re obliged to disclose what kind of tool you’ve used, what kind of algorithm, what kind of platform.”
As developers build programs that can be used to create art, some people inevitably will use the tools in unintended ways. While it's impossible to guarantee 100% safety, Basty said, education and awareness can help.
“What can be done falls on users as well as developers,” Basty said. “It’s the responsibility of developers to make sure they continuously review, approve and set up guidelines and measures to help ensure their tools are safe and secure. On the user side, it goes back to education.”
Featured image at top: AI-generated art created using Bing Image Creator 4. Image/Ro Basty