
OpenAI sued for kickstarting the robot apocalypse


A Florida woman filed a lawsuit against OpenAI in a California court on Tuesday, calling for a halt to the company’s artificial intelligence development over a variety of threats, including the development of “slaughterbots”—autonomous military agents devoted to destroying humans—massive privacy violations, and widespread fake pornography, among a host of other issues.

The suit, which summarizes many of the arguments of “safetyists”—people who believe AI development is moving far too fast for the technology to be deployed responsibly—called for a pause in AI development by OpenAI and other companies so that the technology’s risks can be properly accounted for before it is deployed.

“Technological safety measures must be added to the Products that will prevent the technology from surpassing human intelligence and harming others,” argued lawyers for the plaintiffs in the suit.

A large chunk of the suit summarized the development of OpenAI and its flagship ChatGPT chatbot. One criticism it highlighted was the company’s shift from the nonprofit founded in 2015 to a nearly $30 billion entity projected to generate $1 billion in revenue by the end of 2024.

OpenAI’s “180-degree transformation—from open nonprofit for the benefit of humanity to closed corporate profit machine fueled by greed and market power” has “raised eyebrows given its unprecedented nature, and the moral and legal questions it raises,” the lawsuit argued. Lawyers also cited Microsoft’s $10 billion investment in the company as evidence of its rapidly shifting mission.

“OpenAI is prioritizing short-term financial gains over long-term safety and ethical considerations,” the lawyers argued.

Warning of a corporate monopoly over a transformative technology, the lawsuit argued that the technology’s rapid integration into a suite of everyday products would lead to massive privacy violations as the tools vacuum up a wealth of sensitive data users may not be aware they’re giving up.

To fight back against that, the suit called for “transparency,” “accountability,” and “control.”

The lawyers argued that OpenAI should “open the ‘black box’” and disclose all the data its tools are collecting, as well as how it’s being stored and used.

“The developers of ChatGPT and the other AI Products should be responsible for Product actions,” the lawyers also said, arguing that further commercial deployment of AI products should be put on hold until they can be built to follow a code of “human-like ethical principles and guidelines and respect for human values and rights,” and only after the plaintiff and other members of a potential class action are compensated for the stolen data the tools were trained on.

The lawyers also called for allowing users to opt out of all data collection, and for OpenAI to delete (or compensate users for) “ill-gotten” data and any algorithms built on that data.

“Before any further commercial deployment, technological safety measures must be added to the Products that will prevent the technology from surpassing human intelligence and harming others,” the suit reiterated.

Lawyers for the plaintiff said that AI poses a “unique threat to international security and human rights through the development of autonomous weapons known as ‘Slaughterbots,’” which the suit described as “weapons systems that use AI to identify, select, and kill human targets without intervention,” and argued that OpenAI’s conduct is exacerbating the threat.

Recent reports suggest Israel may have used AI to select targets in Gaza, raising additional fears about the technology creeping onto the battlefield.

Gary Marcus, a professor emeritus of psychology and neural science at NYU and a frequent public commentator and critic of AI hype, told the Daily Dot that after a quick skim he found the suit “current and well-thought through” and that it “raises important challenges to OpenAI’s business practices. It will be interesting to see what happens with it.”

But overheated warnings of apocalyptic destruction from the technology might not be the best place to center the debate.

“Pushing towards … transparency, accountability[, and control] is very important,” Marcus said.

The suit is just one of many OpenAI is currently facing, though it is the only one to directly reference slaughterbots.
