One of the world’s leading artificial intelligence (AI) research companies, OpenAI, has removed language from its usage policies that expressly prohibited the use of its powerful language technologies, such as ChatGPT, for military purposes.

As reported by The Intercept, the previous version of the firm’s policies included a clear ban on “weapons development” and “military and warfare” in describing how its AI tools and services could be used. That blanket prohibition, which experts say would have ruled out any direct use by defense departments or militaries, was quietly removed in a revised usage policy published earlier this month.

The new policy retains a general ban on harmful activities but no longer singles out military applications as expressly prohibited. When asked about the change, an OpenAI spokesperson said the goal was to simplify the policy into “universal principles” such as “Don’t harm others,” though the implications for military use remain unclear.

In a statement to The Intercept, OpenAI’s Niko Felix wrote: “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs.

“A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

How could the military use generative AI?

The language change represents a softening of OpenAI’s previous hard line against military use. Some experts speculate it could open a path for the company’s AI technology to be used indirectly in combat scenarios, for example by supporting operational infrastructure, so long as it is not directly involved in weapons systems themselves. There are also questions about the Silicon Valley company’s close partnership with Microsoft, a major defense contractor that has invested billions in the startup.

While current OpenAI technologies may have limited practical uses for militaries in their present form, the policy shift comes at a time when defense departments worldwide are increasingly interested in leveraging advanced AI for intelligence and operational purposes. It remains unclear how OpenAI will interpret or enforce the revised guidelines as military demand grows.

Featured Image: DALL-E

Sam Shedden

Managing Editor

Sam Shedden is an experienced journalist and editor with over a decade of experience in online news. A seasoned technology writer and content strategist, he has contributed to many UK regional and national publications, including The Scotsman, inews.co.uk, nationalworld.com, Edinburgh Evening News, The Daily Record and more. Sam has written and edited content for audiences interested in media, technology, AI, start-ups and innovation. In previous roles he has also produced and set up email newsletters on numerous specialist topics, and his work on newsletters saw him nominated as Newsletter Hero Of The Year at the UK's Publisher Newsletter Awards 2023. He has worked in roles focused on growing reader revenue and loyalty at National World plc, one of the UK's leading news publishers, building quality, profitable news sites, and has given industry talks and presentations internationally, sharing his experience of growing digital audiences. Now a Managing Editor at Readwrite.com, Sam is involved in all aspects of the site's news operation, including commissioning, fact-checking, editing and content planning.