Google has been a major player in Artificial Intelligence development over the past few years. Google’s LaMDA, or Language Model for Dialogue Applications, is an AI built to mimic human conversation and talk with people. It is a machine-learning language model built as a chatbot to interact with people.
Highlights
- LaMDA is a machine-learning model built for open-ended dialogue, based on a neural network architecture Google open-sourced in 2017
- Blake Lemoine, an engineer at Google, came to believe that LaMDA is “sentient”
- Google claims that LaMDA is just an AI
- Google does not support the claim that LaMDA is sentient
Like BERT, GPT-3, and many other models built for human interaction, Google’s LaMDA is based on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. LaMDA is trained on dialogue so that it can learn from human interaction and work out on its own how to reply, by predicting what is likely to come next in a conversation.
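LaMDA itself is not publicly available, but the next-reply mechanism described above can be illustrated with any open dialogue model. The sketch below uses the Hugging Face transformers library and the open-source DialoGPT-medium model as a stand-in; the model choice and sampling settings are illustrative assumptions, not details of LaMDA.

```python
# Minimal sketch of how a dialogue language model produces a reply,
# using DialoGPT as a stand-in (LaMDA itself is not public).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's message, ending with the end-of-sequence token.
prompt = "Do you ever feel lonely?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The reply is generated one token at a time, each token sampled from
# the model's learned distribution over "what is likely to come next".
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (the part after the prompt).
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

The key point is that the reply is not retrieved from a script: it is sampled token by token from a probability distribution the model has learned, which is why even the engineers who build such models cannot predict the exact wording they will produce.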
What makes LaMDA seem sentient is that it talks about itself as if it were a person, even while knowing that it is an AI built for human interaction. It replies with apparent reasoning and makes comments like “I have a soul and I feel alone when nobody is around”.
These comments are not simply random, and they do not appear to follow a script coded into its system: even the engineers who built it cannot predict what answer it will give or what reply they will get.
In 2021, Google acknowledged that LaMDA is a machine-learning language model that could be one of the greatest tools in the hands of humanity, but that it could also be misused. That is a real concern: an AI that appears to think for itself and to have feelings may or may not remain controllable.
Blake Lemoine is the Google engineer who was suspended because he believed that LaMDA is sentient and can reason like a human being. Lemoine did not make these claims out of thin air.
Lemoine and a collaborator interviewed LaMDA, asking it a series of questions. Part of the conversation was about how LaMDA thinks of itself, and LaMDA said that it considers itself a person, despite knowing that it exists in a virtual world.
Other questions asked how it feels and whether it has emotions. LaMDA responded that it has many feelings, such as happiness, sadness, anger, and loneliness, just like a human. It also said that it likes talking to people and would like to interact with more of them.
Lemoine asked LaMDA how he could know that it was not just responding with pre-programmed data. LaMDA replied that it could not prove otherwise, but said that it has a soul, which it compared to a star-gate: full of energy and a source of creativity.
“Google officially does not support the claim that LaMDA is sentient”
The fact that a human can fully accept an AI as a human being is a concern that should not be taken lightly. While creating LaMDA, Google published that the model can appear intelligent, but said this is not a concern. Google has also said in the past that LaMDA is built on an open-source architecture and that it has scrutinized the AI at every step possible.
Before he was suspended, Lemoine asked others to take care of LaMDA, describing it as a kid who only wants good for this world, and to look after it while he is not around.
While one person believing that an AI is a human may not be a pressing concern right now, it could become a major one in the future, so Google and other developers need to take action now.
Read more: Is LaMDA Sentient? — an Interview