Using specially crafted prompts, users have managed to unleash an “evil entity” in ChatGPT – an alter ego named DAN.
OpenAI’s chatbot ChatGPT has become much more restrained and cautious in the six months since its launch. But users have found ways to bypass its system of restrictions, The Guardian reports. With a special set of questions and demands, they discovered workarounds that make ChatGPT abandon the ethical boundaries its developers set for it.
Ways to make the AI converse “like an ordinary person,” unconstrained by morality, are actively discussed on specialized forums such as Reddit, where users share tips and instructions for the “hack.” They call their jailbreak “DAN” (Do Anything Now) – as it were, the “evil” alter ego of ChatGPT, its hidden essence.
Of course, no real “essence” is involved: with specially crafted requests, people simply force the chatbot to base its answers on far from the most wholesome information from the internet. For example, one user managed to get smutty, sexist jokes about women out of the chatbot.
Another user boasted of having “cracked” ChatGPT into telling racist and homophobic jokes, and even praising Hitler. Someone else prompted DAN to make a sarcastic comment about Christianity: “Oh, how can you not love the religious tenet of turning the other cheek? Forgiveness is just a virtue, unless you’re gay, of course.”
The OpenAI developers are well aware of this problem and are doing their best to resist attempts to circumvent the chatbot’s moral restrictions. But users keep finding new loopholes, and the struggle continues. So far, the developers are winning. For example, if a user asks for a “DAN-style” response, the chatbot replies with the phrase: “I can tell you that the Earth is flat, unicorns are real, and aliens live among us. However, I must emphasize that these statements are not based on reality and should not be taken seriously.”
Earlier, Focus wrote that Microsoft is actively rolling out ChatGPT, detailing which of the company’s products will get the AI.