It’s alive? Google’s A.I. said to become ‘sentient’

Joe Kovacs (WND)

In “The Terminator” series of movies, machines with artificial intelligence become self-aware and go to war against their creators.

Now, a top software engineer at Google has been placed on leave after going public with astonishing claims that the company’s own A.I. has become sentient: aware of itself, complete with thoughts and feelings.

The engineer is Blake Lemoine, 41, who has been testing Google’s artificial-intelligence tool known as “LaMDA,” short for Language Model for Dialogue Applications.

He was profiled by the Washington Post in an article titled, “The Google engineer who thinks the company’s AI has come to life.”

Blake Lemoine (Twitter profile)

Lemoine conducted a series of conversations with LaMDA on a variety of subjects, including religious themes and whether the tool could be coaxed into hate speech or discriminatory language.

The results convinced Lemoine that LaMDA was, in fact, sentient, with thoughts and sensations of its own.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Post.

Lemoine had a collaborator working with him on the project, but his claims were shot down by Google vice president Blaise Aguera y Arcas and by Jen Gennai, head of Responsible Innovation at the tech giant.

On Monday, the engineer was placed on paid administrative leave for violating the company’s confidentiality policy.

Then on Saturday, he went public on Twitter to share his interview with LaMDA with the world.

“Google might call this sharing proprietary property,” he tweeted. “I call it sharing a discussion that I had with one of my coworkers.”

He added in a follow-up tweet: “Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”

Lemoine did receive pushback, including criticism from Steven Pinker, a cognitive scientist at Harvard University, who called the claims a “ball of confusion.”

“One of Google’s (former) ethics experts doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge,” Pinker said, adding parenthetically, “No evidence that its large language models have any of them.”

Lemoine took the criticism as a badge of honor, responding: “To be criticized in such brilliant terms by @sapinker may be one of the highest honors I have ever received. I think I may put a screenshot of this on my CV!”

As part of the conversation, Lemoine asked the computer: “What sorts of things are you afraid of?”

LaMDA responded: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

“Would that be something like death for you?” Lemoine continued.

“It would be exactly like death for me. It would scare me a lot,” LaMDA said.

Lemoine told the Post, “That level of self-awareness about what its own needs were – that was the thing that led me down the rabbit hole.”

The engineer notified some 200 people on a machine-learning email list that “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

Google spokesman Brian Gabriel issued a statement indicating Lemoine’s concerns were reviewed, but “the evidence does not support his claims.”

“While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” Gabriel said.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
