AI’s human-like traits: Are we blurring the line between man and machine?



Attributing human traits to machines is nothing new, but with the rise of generative artificial intelligence, anthropomorphism is taking on a new dimension. This trend raises crucial philosophical and ethical issues, while redefining our relationship with technology.

If you’ve ever typed “hello” or “thanks” into ChatGPT, then you’re guilty of anthropomorphism—a tongue-twisting word that comes up increasingly in discussions about artificial intelligence (AI). But what exactly does it mean?

Broadly, anthropomorphism is the attribution of human characteristics to non-human entities like animals and objects. This concept has also been applied to humanoid robots and, more recently, AI—especially generative AI. Unlike specialized AI, which is used for specific purposes like facial recognition, generative AI (which includes ChatGPT) can generate text, images and other types of content in response to a prompt.

While the humanization of GenAI systems often goes unnoticed, the trend raises questions about the very definition of humanity and intelligence, as well as our relationship with these new technologies. Is there a risk in treating a machine in the same way as a human being?

Anthropomorphic right from the start

The anthropomorphism of machines is nothing new. Back in 1950, British mathematician and computer scientist Alan Turing came up with a radical way of thinking about machine intelligence. In his Turing Test, a human evaluator is asked to determine whether the entity they’re conversing with via written text is a human or a machine. The idea is to assess how capable machines are of demonstrating human-like intelligence.

This pioneering work by Turing illustrates our tendency to attribute human characteristics to machines, and it set the stage for further explorations of anthropomorphism.

“We typically expect a technology, or a machine, to be precise, specific, faster and far more efficient than us,” says Daniel Huttenlocher, the dean of MIT’s Schwarzman College of Computing and the recipient of an EPFL doctor honoris causa in 2024. “GenAI results feel human because they showcase human-like characteristics. They are imprecise, adaptive and surprising.”

Technology that looks like us

The acronym AI contains the word “intelligence” of course, but also—and more importantly—the word “artificial.” Nevertheless, Johan Rochel, an EPFL lecturer and researcher in the ethics and law of innovation, explains, “AI systems are based on huge datasets and reflect the decisions they were taught to make by their developers, conveying the developers’ own values, beliefs and morals.”

Often, anthropomorphism begins with the design of AI system interfaces. In other words, users anthropomorphize AI machines because the machines were designed from the start to display human characteristics. For instance, Amazon’s Alexa has a warm voice and a human name, and ChatGPT is just as polite and friendly as a person would be.

These characteristics were included to make the machines easy and enjoyable to use. “The best digital systems are designed and built with the user in mind,” says Rochel. “The more intuitive they are to interact with, the more readily they’ll be adopted.”

Yet making machines appear human is not that simple. “It’s a real technical challenge,” says Marcel Salathé, the co-head of EPFL’s AI Center. “A perfectly anthropomorphized AI system would need to have full mastery of the human language, including all its nuances, and be able to recognize emotions and react appropriately, process information in real-time, adapt to individual users, and so on.”

Forming bonds

When you prompt ChatGPT for ideas for an event name and it wishes you good luck with your event, the interaction becomes more engaging and emotional—and gives the impression of a friendly relationship. That kind of anthropomorphism is a strategic ploy used by system developers to get users to form a bond with the machine.

According to a recent study by EPFL associate professor Robert West, when users interact with an AI system about a given issue, and the system has access to the users’ personal information, the system can actually change their opinions. This raises questions about the societal impact of AI, since it can be used not just as enhanced digital technology but also to generate conversations capable of influencing our decisions.

Can we trust these virtual partners?

In the health care industry, a growing number of anthropomorphic systems are being developed—including humanoid robots and moral-support chatbots—to serve and assist patients. Such humanized, personalized virtual systems are designed to establish trust and form bonds.

“Today’s users are increasingly informed and aware of the potential for using digital technology,” says Rochel. “And demand is growing for systems they can trust. That’s also being reflected in the legislation. Although the exact definition of ‘trust’ can vary, one key aspect is that it’s developed through human-to-human relationships.

“In this case, however, we’re dealing with human-to-machine relationships. All the features that give users the impression they’re interacting with a human are intended to build trust, but these features aren’t always explicit.”

For instance, take the way in which ChatGPT delivers its replies, “as if someone were typing the response, just like in a messaging application,” says Rochel. “Users obviously know that ChatGPT isn’t human, but this covert way of simulating a conversation with a human encourages users to form a sort of relationship with the machine—a machine that pretends to be like us.”

Such simulated relationships with a machine can go beyond small talk and cordial replies. In Her, a film released in 2013, the main character, played by Joaquin Phoenix, falls in love with his AI voice assistant, voiced by Scarlett Johansson. The film raises questions about personal relationships and how generative AI can influence our behavior.

“Trust is often built by sharing personal and confidential information—which could be highly damaging if it falls into the wrong hands,” says Salathé. “Users’ privacy is at stake.”

A matter of security and accountability

If we consider AI systems to be our equals, should they be to blame if, for example, an essay we submit contains errors? “That would imply that AI systems can be held accountable just like individuals,” says Rochel. “But we shouldn’t forget that their intelligence is just artificial. Machines can never be held accountable because they’re not fully autonomous. They can’t make decisions apart from those they’ve been programmed for. We need to look for the human behind the machine. But where does the accountability lie—with the developer or the user?”

Seeing our reflection in the AI mirror

General Electric Model D-12 toaster, from the 1910s. Credit: Wikipedia

“I like to use the metaphor of a fork in a toaster,” says Huttenlocher. “If you look at pictures of the first toasters from the 1900s, you soon see how easy it was to electrocute yourself by sticking a fork in one. Standards and safeguards were eventually introduced to make toasters safe and prevent such misuse.

“We need to do the same thing with AI. We need to introduce safeguards against misuse of the technology, establish legal accountability, and adopt standards that are understood by everyone.”

According to Rochel, “Transparency will be paramount. We need to remind people that these are just machines and are capable of making mistakes. This will help lower the risk of misuse.”

Lawmakers are still debating how to establish transparency and explainability in AI systems. The European Union’s AI Act is explicit on one point: generative AI systems must be designed and presented in a way that makes it clear they are not human. Users must be fully aware that they are interacting with a machine.

AI and humanity: Forging a thoughtful and discerning partnership

“We can learn a lot from AI,” says Huttenlocher. “For instance, AlphaGo devised strategies for Go that even the game’s best players had never thought of, bringing a whole new dimension to the game. I see AI as an invention that can enhance—rather than replace—human capabilities.”

Salathé points out that “by anthropomorphizing AI systems, we can speed technology adoption, but this also raises fundamental questions about the role of humans in a world where it’s increasingly difficult to distinguish humans from machines. As AI develops further, we’ll need to make sure that our interactions with machines—as natural as they may seem—don’t cause us to lose sight of what makes us human.”

For Rochel, “AI system developers, stakeholders, lawmakers and users must work together to ensure that the technology remains a tool in the hands of human beings and does not become a force that replaces or manipulates them.”

Provided by
École Polytechnique Fédérale de Lausanne
