OpenAI boss Sam Altman said his artificial intelligence company could “cease operating” in the European Union if it’s unable to comply with the bloc’s AI laws.

Speaking to reporters after a talk at University College London on Wednesday, Altman said he has “many concerns” about the EU AI Act, which has already been drafted and is slated to go into effect next year, according to Time magazine.

“We will try to comply, but if we can’t comply we will cease operating,” he said.

The 38-year-old tech whiz also reportedly cited “technical limits to what’s possible” in abiding by the EU’s rules. “Either we’ll be able to solve those requirements or not,” Altman said.

During an on-stage interview earlier in the day, Altman said the law was “not inherently flawed,” but “the subtle details really matter,” the outlet reported.

The EU AI Act was originally designed to regulate high-risk uses of AI — for example, when it’s used to sort through job applicants or in medical equipment.

The AI Act is the first legislation of its kind addressing artificial intelligence, and is on brand for the EU, which has sought to police AI for the better part of two years, since releasing the first draft of the act in April 2021.

Altman said OpenAI could halt ChatGPT’s operations in Europe over the AI Act’s expanded legislation, which imposes strict transparency and risk management regulations that he has “many concerns” about complying with.

The widespread adoption of OpenAI’s ChatGPT and GPT-4 language models has pushed European lawmakers to expand the act’s regulations to oversee more general uses of AI.

For chatbots, the act requires disclosure that users are interacting with an AI rather than a human. No such rule exists in the US.

And the proposed expanded legislation would ban nearly all use of facial recognition technology in surveillance, and would subject generative AI to strict regulation, including quality standards and risk management systems that also cover the transfer of data to third parties.

Altman said that his preferred regulations are “something between the traditional European approach and the traditional US approach,” according to Time.

But the US does not appear to be moving in a similar direction anytime soon. Just last week, Altman called on Congress to implement regulations addressing the risks posed by AI during an appearance on Capitol Hill.

“If this technology goes wrong, it can go quite wrong and we want to be vocal about that,” Altman said at a hearing of the Senate subcommittee on privacy, technology and the law. “We want to work with the government to prevent that from happening.”

Meanwhile, ChatGPT launched in Apple’s App Store last week for iPhones and iPads.


The app is already ranked No. 1 among the store’s free “Productivity” apps, ahead of Gmail, Microsoft Outlook and Google Drive.

And it’s ranked No. 3 in the App Store’s overall chart of most-downloaded apps, ahead of TikTok, Instagram and WhatsApp.

While there’s no federal law in the US regulating such AI tools, the likes of Apple, JPMorgan Chase and Verizon have barred employees from using the language model.