OpenAI and other AI providers need to improve their privacy and enterprise offerings, according to a survey commissioned by BlackBerry.
The online survey, commissioned by BlackBerry and conducted by OnePoll, polled 2,000 IT executives in June and July at companies in the U.S., Canada, Germany, France, the Netherlands, the U.K., Australia, and Japan.
The results show that 75 percent of companies are considering or have already implemented a ban on ChatGPT and generative AI, and 61 percent see this ban as a long-term or permanent solution.
The executives responsible for these decisions (CIO/CTO/CSO/IT: 72%, CEO: 48%, legal/compliance: 40%, CFO/finance: 36%, HR: 32%) cite data protection and privacy risks as the primary reason (67%); 57% also see reputational risks.
Companies slam on the brakes while employees explore a new world of efficiency
The reflexive blocking of generative AI tools is likely related to a sense of loss of control in many organizations. This is at least partly the fault of OpenAI, which has long used data entered in ChatGPT for AI training without providing comprehensive and transparent information.
Only recently has OpenAI offered an opt-out option, and it is effective only if employees are aware of it and use it responsibly. Even then, the data still passes through OpenAI's models, where the exact processing remains opaque.
Even Microsoft warns that entering data into ChatGPT, the product of its business partner OpenAI, is insecure and could lead to the loss of intellectual property for companies. At the same time, it markets Azure ChatGPT, a privacy-compliant ChatGPT variant with Azure connectivity.
Presumably, many companies feel forced to put on the brakes now because employees find AI tools so useful that they are already being used across the board without guidelines. Banning them now, however, will lead to frustration.
Companies blocking generative AI is likely just a temporary phenomenon
If the efficiency gains of generative AI are indeed measurable in many work processes, from marketing to human resources, this defensive stance may no longer be sustainable.
IT executives surveyed in the BlackBerry study also see the potential for generative AI to increase efficiency (55 percent), innovation (52 percent), and creativity (51 percent). A whopping 81 percent want to use AI to improve cybersecurity.
So the conversation is likely to shift soon from whether to implement generative AI to how to do so in a privacy-compliant way.
Microsoft is strategically well-positioned here, given the presence of Windows, Office, and Azure in many enterprises, combined with its exclusive OpenAI licenses. At the same time, it is a formidable competitor to OpenAI, which is planning its own offerings for business customers.
Smaller AI providers like Midjourney and open-source models are likely to reach enterprises in the future via privacy-compliant cloud services from Microsoft, AWS, and Google. This is already the case, for example, with the open-source image model Stable Diffusion and Meta's Llama 2. Google and others are also planning their own image and video models.
Generative AI is also migrating as a feature into existing, already-deployed software such as Microsoft Word, Google Docs, or Photoshop. For this reason alone, a general ban on generative AI in companies is not the answer.