OpenAI and Anthropic Sign AI Safety Agreements


AI firms OpenAI and Anthropic signed agreements with the US government to allow authorities to research, test, and evaluate new AI models before public release. Each company’s Memorandum of Understanding (MoU) establishes the framework for the US AI Safety Institute to receive early access to major new models and continue to study them after release.

The MoUs also cover collaborative research to evaluate the capabilities and potential risks of AI models, along with methods to mitigate those risks. AI companies face growing regulatory scrutiny over the ethical use of large language models and how they are trained.

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the US AI Safety Institute. The institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models in collaboration with its partners at the UK AI Safety Institute.