June 2024

It’s time for the FTC to act on ChatGPT


Several current and former OpenAI employees recently published an open letter warning about the lack of oversight for the artificial intelligence industry’s rapid growth. “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” they wrote.

The letter continued, “AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.” The writers warned that the lack of effective government oversight left few means of accountability.

The letter from OpenAI employees echoed concerns raised by Helen Toner, a former OpenAI board member. Toner stated in an interview that OpenAI CEO Sam Altman was fired by the former board of directors because he provided inaccurate information about safety mechanisms, did not clear major product releases with the board and kept related investments confidential.

We anticipated these concerns.

More than a year ago, our group, the Center for AI and Digital Policy, filed a formal complaint with the Federal Trade Commission about OpenAI. We alleged that OpenAI had violated U.S. consumer protection law by releasing a consumer product without sufficient safeguards. And we urged the FTC to act to ensure independent oversight of OpenAI and other AI companies.

Much of the evidence for our complaint was gathered from OpenAI itself. The company’s GPT-4 “system card,” a technical specification of the product’s design, listed many risks that could follow the deployment of ChatGPT. Bias, cybersecurity vulnerabilities, election interference and threats to consumer privacy and public safety were all acknowledged by the company before the product was released.

We described real risks to the general public, such as ChatGPT giving bad investment advice or telling children how to lie to their parents. We also cautioned about the dangers of “accelerated deployment,” a risk OpenAI itself acknowledged in the GPT-4 system card.

We also cited the famous “pause letter” in which experts in AI urged companies to delay releasing new AI models until adequate safeguards were established. 

In supplements that the Center for AI and Digital Policy sent to the FTC, we gathered additional evidence and provided more reasons for the consumer agency to act. We also appeared twice before the commission at open hearings and urged the commissioners to open an investigation.

In July 2023, the New York Times reported that the agency had launched the investigation we requested. The detailed document request to OpenAI also indicated that the FTC identified copyright concerns that we had missed and that OpenAI had failed to disclose in the system card.

We thought we were making progress.

But since July, there has been no word about the investigation of the world’s most impactful publicly available AI product. The FTC’s delay is all the more troubling because OpenAI, like many big tech firms, is cutting its safety and security teams even as competition intensifies. Remarkably, experts inside and outside the company warn that the problems are far greater than the public realizes.

The race to replace people in the workplace, in schools and in elder care and nursing is on. Writers, actors and musicians are already seeing their work displaced by programs built on foundation models controlled by a few large companies. Given the collection of vast amounts of personal data, the increasing opacity of automated systems and the risks to public safety, it is clear the FTC needs to act.

Outside the U.S., public agencies are not waiting. The Italian Data Protection Authority launched an investigation within months of the release of ChatGPT and produced a preliminary judgment against the company just a few months later. Similar investigations are underway in Canada, Korea, Brazil, Germany, Spain and Japan.

A recent report from European privacy officials identified three privacy vulnerabilities in ChatGPT: web scraping that gathers personal data to build models; data collection through user prompts; and outputs about people that are false, misleading or that misappropriate the commercial value of someone else’s name, image or likeness. The privacy officials have stated plainly that the company’s claim that it cannot comply with the law can never be an excuse.

The FTC has the ability and the track record to confront Big Tech. More than a decade ago, the agency investigated Google and Facebook and obtained sweeping consent orders that gave it continuing legal authority over the companies and helped improve their business practices. More government oversight means more accountability, precisely what the dissident OpenAI employees are now urging.

The FTC has held many workshops on artificial intelligence, issued business guidance on AI products and stated in strong terms that it would hold companies accountable. But that is not enough. The longer the FTC delays, the more difficult these problems will become.

OpenAI has acknowledged the problems. Employees have warned that there are more dangers ahead. It is time for the FTC to complete the investigation of OpenAI and establish guardrails for the AI industry.

Merve Hickok, Christabel Randolph and Marc Rotenberg are, respectively, president, law fellow and executive director of the Center for AI and Digital Policy, an independent research organization based in Washington, D.C.