The ex-Google engineer believes that the development of neural networks must be reined in.
Lemoine fears that chatbots and tools like Microsoft’s Bing AI could be used to spread misinformation and political propaganda, or to incite racial or religious strife. He worked as a tester of Google’s LaMDA AI system, and after finishing the tests he published his opinion that the neural network has “feelings” and can be considered “intelligent”. The engineer believes that AI should be tested more thoroughly to identify risks such as user manipulation.
“I feel like this technology is incredibly experimental and it’s dangerous to release it right now,” he said.
However, IT industry figures such as Bill Gates believe that chatbots and similar tools pose no danger to humans or to humanity as a whole. In an interview with the Financial Times, Gates called the latest artificial intelligence models “old tech”.
“Technologies that most people play with are a generation out of date,” he said.
Gates also emphasized that people themselves feed chatbots provocative prompts, looking for ways to deceive the AI model or trip it up. In his view, users should first of all behave more responsibly and not “provoke” neural networks into obscene or offensive statements.
We previously reported that an AI model took part in a simulated dogfight against a human pilot; the human failed to defeat the neural network.