OpenAI is planning to introduce a verified ID requirement for organizations to access certain future AI models through its API, according to a support page published on the company’s website last week. The new verification process, called Verified Organization, aims to ensure safe usage of its advanced AI models.
The verification process requires a government-issued ID from one of the countries supported by OpenAI’s API. Each ID can verify only one organization every 90 days, and not all organizations will be eligible.
“At OpenAI, we take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” reads the page. “Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
The move is likely intended to enhance security around OpenAI’s products as they become more sophisticated. The company has published reports on its efforts to detect and mitigate malicious use of its models, including by groups allegedly based in North Korea. It may also be aimed at preventing IP theft, following an investigation into a potential data exfiltration incident involving a group linked with China’s DeepSeek AI lab.
OpenAI has taken previous measures to restrict access to its services, including blocking access in China last summer. The Verified Organization status is expected to be ready for the company’s next model release.