WHITE HOUSE SAYS AMAZON, GOOGLE, META, MICROSOFT AGREE TO AI SAFEGUARDS -- WSJ
21 July 2023, 12:00, By Sabrina Siddiqui and Deepa Seetharaman
WASHINGTON -- The Biden administration says it has reached a deal with big tech companies to put more guardrails around artificial intelligence, including the development of a watermarking system to help users identify AI-generated content, as part of its efforts to rein in misinformation and other risks of the rapidly growing technology.

The White House said seven major AI companies -- Amazon.com, Anthropic, Google, Inflection, Meta Platforms, Microsoft and OpenAI -- are making voluntary commitments that also include testing their AI systems' security and capabilities before their public release, investing in research on the technology's risks to society, and facilitating external audits of vulnerabilities in their systems.

Most of the companies declined to comment or didn't immediately respond to a request for comment. Leaders from the companies will meet with Biden at the White House on Friday.

There aren't enforcement mechanisms for the commitments outlined on Friday, and they largely reflect the safety practices already implemented or promised by the AI companies involved.

The announcement comes as President Biden and his administration have placed an increased emphasis on both the benefits and pitfalls of AI, with a broader goal of developing safeguards around the technology through both regulation and congressional action. Biden convened a meeting with AI experts and researchers in San Francisco last month and hosted the CEOs of Google, Microsoft, Anthropic and OpenAI at the White House in May.

"We're going to hold them accountable for their execution," White House chief of staff Jeff Zients said in an interview. "Companies can and will need to do more than they're doing now and so will the federal government."

Amazon said it was committed to collaborating with the White House and others on AI.
"Amazon supports these voluntary commitments to foster the safe, responsible, and effective development of AI technology," it said in a statement.

Before OpenAI launched its GPT-4 model in March, the company spent roughly six months working with external experts who tried to provoke it into producing harmful or racist content. Most companies developing large language models also rely heavily on humans who teach these models to be engaging and helpful and to avoid generating toxic responses, through a process called reinforcement learning from human feedback.

OpenAI also introduced a bug bounty program in April to reward security researchers who spot gaps in the company's systems. Some of the companies, including Anthropic, describe their safety-testing methods in academic papers and on their websites, and offer ways for users to flag problematic responses.

Many of the companies are also exploring ways to tag images made by AI, a step that could help avoid the kind of uproar caused in late May, when a fake photo of an explosion at the Pentagon went viral online and caused a momentary dip in the stock market. OpenAI's image-generating system Dall-E produces images with a rainbow watermark at the bottom. Google said this spring that it would embed data inside images created by its AI models to indicate they are synthetic.

But not every image-generation model follows this practice, and it remains simple for people to remove indications that an image is AI-generated.
OpenAI's content policy allows users to remove the watermark, and sites such as Reddit host instructions explaining how to eliminate those details.

The guidelines outlined Friday don't require companies to disclose information about their training data, which experts say is crucial to combating bias, preventing copyright abuse and understanding models' capabilities.

A White House official said that under the agreement, the companies will develop and implement a watermarking system for both visual and audio content. The watermark would be embedded into the platform so that any content created by a user would either identify which AI system created it or indicate that it was AI-generated, that person said.

White House officials have said their hope is to establish rules around artificial-intelligence tools sooner rather than later, citing lessons learned from Washington's inability to crack down on the proliferation of misinformation and harmful content on social media. The new commitments, they noted, aren't a substitute for federal action or legislation, and the White House is actively developing an executive order "to govern the use of AI."

Zients said the commitments put in motion external checks and balances that would ensure technology companies aren't simply holding themselves accountable.

"It's not just the companies and them doing a good job and making sure that their products are safe and that they're pressure tested," he said. "But also, they can't grade their own homework here."
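For readers curious what "embedding a watermark" can mean in practice, the toy sketch below hides a short provenance tag in the least-significant bits of pixel data. This is an illustration of one generic technique only, not how any of the companies named here implement watermarking; the function names and the "AI" tag are invented for this example. It also shows why such marks are fragile: overwriting the low bits erases the tag entirely.

```python
# Toy least-significant-bit (LSB) watermark: hide a short provenance
# tag in the lowest bit of each "pixel" byte. Illustrative only --
# real provenance systems use far more robust schemes.

def embed_tag(pixels: bytes, tag: bytes) -> bytes:
    # Unpack the tag into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    # Read the low bit of each byte and reassemble them into bytes.
    bits = [pixels[i] & 1 for i in range(tag_len * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

image = bytes(range(256))        # stand-in for raw pixel data
tagged = embed_tag(image, b"AI")
print(extract_tag(tagged, 2))    # b'AI'
```

As the article notes, marks like this are trivial to strip, which is why the commitments describe watermarking built into the platform itself rather than left to individual image files.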
Write to Sabrina Siddiqui at sabrina.siddiqui@wsj.com and Deepa Seetharaman at deepa.seetharaman@wsj.com

(END) Dow Jones Newswires

July 21, 2023 05:00 ET (09:00 GMT)

Copyright (c) 2023 Dow Jones & Company, Inc.