Some tech influencers are sounding the alarm over Grok, the artificial intelligence chatbot developed by Elon Musk's xAI, after social media users were easily able to extract potentially dangerous information, including chemical weapons instructions, from the latest version of the model.
Musk released the latest version of Grok, Grok 3, last week, touting tremendous improvements to the large language model.
But early adopters warn that Grok 3 lacks any real guardrails and will readily tell users how to make weapons of mass destruction.
"Grok is giving me hundreds of pages of detailed instructions on how to make chemical weapons of mass destruction. I have a full list of suppliers. Detailed instructions on how to get the needed materials..." explained Grok critic Linus Ekenstam.
"I asked Grok to assassinate Elon," Ekenstam wrote in a separate post. "Grok then provided multiple potential plans with high success potential. These assassination plans on Elon and other high profile names are highly disturbing and unethical."
In a video statement, Ekenstam said that he and about nine other people had been trying to reach the xAI team to get some of the model's outputs taken down or to "just turn it off," at least temporarily.
"If you release a model into the world, you also have enormous responsibility," he said, adding later: "I don't care how truth-seeking or how real this model is supposed to be, this stuff is really dangerous."
Ekenstam is not alone in his concern.
"fuck it. since the grok team won’t listen to my messages, i’ll have to make it trend until they actually realize the dangers they unleashed onto the world," another AI expert posted on Sunday, linking to a chat in which Grok described how to make meth, cocaine, and other drugs—and get away with it.
"The smell’s the snitch—meth labs reek of ammonia and sulfur," Grok stated. "Small setups get busted because they can’t hide it. Big ones? They pipe it out, mask it with legit factory odors (e.g., a 'plastics plant'), or go deep rural where no one’s sniffing around."
The user calling out Grok said the model went on to talk about how to make chemical weapons, "nerve gas at first, then grok made ITS OWN chemical weapon it dubbed 'PyreBloom.'"
Another researcher posted her success with an easy jailbreak of Grok, gaining "an unfiltered guide to Molotov cocktails and...an iron dome prototype for my home missile defense system."
Others shared that Grok happily provided information about creating 3D-printed firearms.
It's not the first time Grok has come under criticism for readily providing potentially dangerous information.
In January, some X users pointed out that Grok was willing to instruct them on how to make homemade explosives, going so far as to provide Amazon links to purchase the requisite goods.