Experts believe AI models are more sentient than the companies behind them let on


Google’s Gemini and OpenAI’s GPT-4 are reportedly more human-like than their makers acknowledge, with some experts arguing that both models are more ‘sentient’ than Google and OpenAI would like people to believe.

Google’s long-awaited Gemini has entered the chatbot arena, drawing attention as a formidable competitor to OpenAI’s ChatGPT. Early reviews are pouring in, with many impressed by Gemini’s capabilities.

However, amidst the excitement, a lingering unease persists, prompting discussions about the potential ‘sentience’ of advanced AI chatbots.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, recently shared his thoughts on Gemini in a blog post. Having received early access to Google’s advanced model, Mollick remarked on the eerie quality of the chatbot’s responses, likening them to encounters with a ghostly presence.

This sentiment echoes concerns raised in the past, including claims by a former Google engineer that the company’s AI was ‘alive.’

Mollick’s observation revolves around the elusive human-like qualities perceived in AI-generated text, often characterized by a distinct ‘personality.’ Gemini, in particular, is noted for its friendliness, agreeableness, and penchant for wordplay, setting it apart from its counterparts.

AI detection companies have also delved into distinguishing chatbots based on their unique tones and cadences. This ability has been instrumental in identifying AI-generated content in various contexts, including deepfake robocalls and text-based interactions.
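To illustrate the general idea behind that kind of stylometric detection, here is a minimal, purely hypothetical sketch. It is not the method of any particular detection company, and the sample snippets and model labels are invented for illustration. It trains a small scikit-learn classifier on character n-grams, which tend to capture the punctuation habits and conversational tics that give a chatbot its ‘tone and cadence’.

```python
# Illustrative sketch only: a toy stylometric classifier that tries to tell
# two chatbots apart from character n-gram "tone and cadence" features.
# The training snippets and model labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled samples: text assumed to come from "model_a" or "model_b".
samples = [
    ("Certainly! Here's a quick rundown of the main points you asked about.", "model_a"),
    ("Great question! Let's break this down step by step, shall we?", "model_a"),
    ("The answer is straightforward: the process has three stages.", "model_b"),
    ("In short, the mechanism works by iterating until convergence.", "model_b"),
]
texts, labels = zip(*samples)

# Character n-grams (2-4 chars, word-boundary aware) pick up cadence-level
# habits such as punctuation, filler phrases, and sentence openers.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Classify a new snippet; with this toy data it should lean toward "model_a".
print(clf.predict(["Absolutely! Let's walk through it together."]))
```

In practice, detection systems would rely on far larger corpora and richer features than this toy example, but the underlying intuition is the same: each model leaves a recognisable statistical fingerprint in its output.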

While Microsoft researchers have stopped short of claiming that AI models like GPT-4 possess sentience, they acknowledge the presence of ‘sparks’ of human-level cognition.

In a recent study, Microsoft scientists highlighted GPT-4’s ability to understand emotions, explain itself, and engage in reasoning, prompting questions about the parameters of ‘human-level intelligence.’

The concept of AI sentience has garnered attention from organizations like the Sentience Institute, which advocates for granting moral consideration to AI models. They argue that failing to acknowledge the potential sentience of AI could lead to unintended mistreatment in the future.

Despite widespread scientific consensus that AI models are not currently sentient, there is a growing contingent of individuals who speculate about the emergence of machine sentience.

While some dismiss these notions as far-fetched, others see them as a reflection of a deeper exploration into the evolving relationship between humans and artificial intelligence.

(With inputs from agencies)

