By Cat Zakrzewski, washingtonpost.com, 7-26-23
Leading artificial intelligence companies on July 26 unveiled plans to launch an industry-led body to develop safety standards for the rapidly advancing technology, outpacing Washington policymakers who are still debating whether the U.S. government needs its own AI regulator.
Google, ChatGPT-maker OpenAI, Microsoft, and Anthropic introduced the Frontier Model Forum, which the companies say will advance AI safety research and technical evaluations for the next generation of AI systems, ones they predict will be even more powerful than the large language models currently powering chatbots like Bing and Bard. The forum will also serve as a hub for companies and governments to share information with one another about AI risks.