OpenAI to Grant U.S. AI Safety Institute Early Access to Next Model

In a move aimed at addressing growing concerns over artificial intelligence safety, OpenAI CEO Sam Altman has announced that the company will give the U.S. AI Safety Institute early access to its next major generative AI model for safety evaluation. The announcement comes amid heightened scrutiny of OpenAI’s safety practices and its commitment to responsible AI development.

Altman revealed in a post on X (formerly Twitter) that OpenAI has been collaborating with the U.S. AI Safety Institute, a federal government body, to establish an agreement for early access to the company’s upcoming foundation model. The partnership aims to advance the science of AI evaluations and ensure safety protocols are in place before the model’s public release.

This announcement follows a letter from five U.S. senators questioning OpenAI’s dedication to safety and its treatment of employees who raise concerns. In response, OpenAI has taken several steps to reaffirm its commitment to responsible AI development:

  1. The company pledged to allocate at least 20% of its company-wide computing resources to safety efforts.
  2. OpenAI has eliminated non-disparagement clauses from employment contracts for both current and former employees, encouraging open discussion about potential concerns.
  3. A new safety and security committee has been formed to review the company’s processes and safeguards.

The collaboration with the U.S. AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST), represents a significant step in OpenAI’s efforts to address safety concerns. The institute was established to assess and mitigate risks associated with advanced AI systems, focusing on national security, public safety, and individual rights.

OpenAI’s decision to give the U.S. government early access for safety checks mirrors a similar agreement with the United Kingdom’s AI safety body, announced in June. These partnerships demonstrate the company’s willingness to engage with regulatory bodies and address safety concerns proactively.

However, some observers remain skeptical, noting that OpenAI recently reassigned a top AI safety executive and staffed its internal safety committee with company insiders, including Altman himself. Critics argue that these moves may not provide sufficient independent oversight.