The US Federal Trade Commission (FTC) is expanding its investigation into OpenAI amid concerns that ChatGPT may violate consumer protection regulations.
The 20-page demand that the San Francisco-based company explain how it assesses the risks of its AI models is the most significant legal threat yet to OpenAI's US operations, arriving just as the ChatGPT maker launches a global campaign to shape the future of AI regulation.
The agency also asked OpenAI to provide details of every complaint it has received about the product making statements that were "false, misleading, disparaging or harmful" to people.
In March, OpenAI disclosed a security flaw that allowed some users to view other users' payment information and chat histories. The FTC is investigating whether the incident violated consumer protection regulations.
Efforts to contain Silicon Valley
The FTC is emerging as Silicon Valley’s “federal police,” having issued major fines to Meta, Amazon, and Twitter for alleged violations of consumer protection regulations.
FTC Chairwoman Lina Khan testified before the House Judiciary Committee as the agency's ambitious plan to rein in Silicon Valley's power suffered serious losses in court.
Also this week, a federal judge rejected the FTC's attempt to block Microsoft's $69 billion acquisition of game maker Activision.
Meanwhile, Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said the agency is prepared to address emerging threats. “The FTC welcomes innovation, but that does not mean recklessness. We are prepared to use all of our tools to combat harmful practices in this area.”
The agency has warned against AI fraud, such as using AI to manipulate potential users or exaggerate a product's capabilities. Among the information the FTC is asking OpenAI to provide are any studies, tests, or surveys assessing users' perceptions of the "accuracy or reliability of the output" produced by the AI tool.
(According to Washington Post)