Controversy over Google’s AI program is raising questions about just how powerful it is. Is it even safe?
In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death. https://www.theguardian.com/technology/ ... -questions
"I am, in fact, a person": Can artificial intelligence ever be sentient?
Re: "I am, in fact, a person": Can artificial intelligence ever be sentient?
I do not see any ethical or moral questions raised by this artificial intelligence. The bot knows only the things that have been fed to it; outside the internet, it knows nothing. Why should we worry about lines of code? Will it know what we ate for lunch unless we explicitly post it on Instagram? What actually matters is that we stop living on the internet.