Update, May 26, 2023: Added statements from EU representatives.
Some MEPs disagree with Altman’s statement that the EU AI Act could be withdrawn or fundamentally changed. Kim van Sparrentak, who worked on the law, says they won’t be “blackmailed by American companies.”
“If OpenAI can’t comply with basic data governance, transparency, safety and security requirements, then their systems aren’t fit for the European market,” van Sparrentak told Reuters.
German MEP Sergey Lagodinsky says Altman may try to push his agenda in individual countries, but will not influence Brussels’ regulatory plans, which are “in full swing.” Individual changes are possible, Lagodinsky said, but the general direction will not change.
Altman reportedly canceled his planned visit to Brussels. OpenAI did not comment on the Reuters report.
Original article from May 25, 2023:
OpenAI CEO Sam Altman says Europe’s potential AI policy is too restrictive. Withdrawing from Europe is one option.
OpenAI CEO Sam Altman is touring Europe, and he is talking politics along the way. Speaking at an event in London, he told Reuters that the current draft of the EU’s AI Act risks “over-regulation.” Pulling out of Europe is an option, he said.
But Altman says OpenAI will first try to meet Europe’s requirements. He has also heard that the current draft of the EU AI Act could be pulled back. The EU AI Act is currently being negotiated between the European Parliament, the EU Council, and the European Commission.
Altman suggests that he would prefer a different definition for “general purpose AI system.” The European Parliament uses the term as a synonym for “foundation model,” a large AI model that is pre-trained on a lot of data and can be optimized for specific tasks with a little more data.
“There’s so much they could do like changing the definition of general-purpose AI systems,” Altman said. “There’s a lot of things that could be done.”
Lawmakers in the US are also discussing AI regulation, and Sam Altman personally called for it before the US Senate, with the prospect of helping to shape it. Altman has proposed the creation of a competent authority that would license AI systems above a certain capability level and could revoke licenses if, for example, safety standards are not met.
Copyright is generative AI’s Achilles’ heel, and other issues remain unresolved
One aspect of the EU AI Act is the disclosure of the training material used for AI training, partly in view of possible copyright infringements. OpenAI has not yet disclosed the training material used for GPT-4, citing competitive reasons.
Based on the training datasets of previous models, it is likely that the training material used for GPT-4 also contains copyrighted material that the company did not have explicit permission to use for AI training.
The legality of using this material to train commercial AI systems will ultimately be determined by the courts. Similar discussions and early litigation are underway for image AI systems. But these cases could drag on for years.
In addition to safety and copyright, privacy is another regulatory issue surrounding AI. OpenAI has already run afoul of European data protection authorities, particularly in Italy, because ChatGPT had no age restriction, personal data entered in chats could be used for AI training, and personal data is included in the datasets used for pre-training AI models.
ChatGPT was briefly blocked in Italy, but was unblocked after OpenAI made concessions. However, it remains under review by data protection authorities.