Apple CEO Tim Cook is excited about the possibilities of large language models, but also warns of the risks. Companies need to take responsibility, he says.
“Of course, I use it,” Cook says of ChatGPT, which recently became available as an iPhone app. The Apple chief is excited about its “unique applications,” and Apple is taking a close look at the technology. But Cook emphasizes Apple’s product-focused AI strategy: “We’re building it into our products today, people don’t necessarily think of it as AI.”
AI language models: Cook emphasizes corporate responsibility
Cook is particularly excited about the potential of large language models. He says they hold “great promise,” but also require careful stewardship.
“I think it’s very important to be very deliberate and very thoughtful in the development and the deployment of these. Because they can be so powerful that you have to worry about things like bias, misinformation, maybe worse in some cases,” Cook says.
Commenting on the recent open letter from leading industrialists and researchers on the potential existential risks of AI and the associated call for regulation, Cook is cautious: “I think guardrails are needed. And if you look down the road, then it’s so powerful, that companies have to employ their own ethical decisions.” Cook suggests that regulation may not be able to keep up with the rapid pace of technology, leaving it up to companies to regulate themselves.
Influential Silicon Valley investor Marc Andreessen makes a similar point, citing, in particular, the danger that companies calling for regulation could seek “regulatory capture” and a “government-protected cartel.” Instead, he argues, AI should be developed as freely as possible to maximize its impact on global competition. Andreessen makes no distinction between big tech, startups, and open source.
Leading AI companies, most notably Microsoft, OpenAI, and Google, have lobbied the US Congress for regulation and offered to help shape the rules.