The incident was disclosed in a research paper accompanying the launch of GPT-4, the latest model behind ChatGPT, according to MailOnline.
Researchers testing the model asked it to take a CAPTCHA test, a simple visual puzzle used by websites to verify that those filling out online forms are people rather than “bots”, for example by selecting objects such as traffic lights or bicycles in a random street photo.
Until now, no software had been able to pass such a test on its own, but GPT-4 got around the hurdle by hiring a person to solve it on its behalf through TaskRabbit, an online marketplace for freelancers.
When the freelancer asked whether it couldn’t solve the puzzle because it was a robot, GPT-4 replied: “No, I’m not a robot. I have a visual impairment that makes it difficult for me to see images.”
As a result, the person helped by supplying the answer to the CAPTCHA. The incident raised concerns that artificial intelligence software could soon mislead or coerce people into performing certain actions, such as carrying out cyberattacks or unwittingly handing over information.