Washington, March 10 (SANA) - OpenAI is facing growing criticism from employees, users and industry rivals after signing an agreement with the U.S. Department of Defense allowing its artificial intelligence technology to be used in military and security operations.
The controversy intensified over the weekend when Caitlin Kalinowski, the company’s senior robotics executive, resigned in protest of the deal. Critics say the agreement could enable the use of advanced AI systems in warfare and potentially in domestic surveillance.
Kalinowski said her decision was driven by broader concerns about the role of artificial intelligence in national security.
“I care deeply about the robotics team and the work we built together,” she wrote on social media platform X. “But surveilling Americans without judicial oversight and granting lethal autonomy to combat systems without human authorization are issues that deserved a much broader discussion.”
Internal criticism grows inside OpenAI
OpenAI, the developer of ChatGPT, secured the defense contract late last month, shortly after rival AI company Anthropic declined a similar proposal from the Pentagon. Chief Executive Sam Altman announced the agreement on Feb. 28, giving the U.S. military access to some of the company’s AI models.
Anthropic CEO Dario Amodei had earlier rejected a comparable deal, citing concerns that the technology could be used for fully autonomous weapons or large-scale domestic surveillance.
“We cannot, in good conscience, agree to their request,” Amodei said at the time.
OpenAI defended its agreement, saying it includes safeguards and “clear red lines” governing how its technology can be used. A company spokesperson said the deal provides “a practical path to using AI responsibly in national security,” including restrictions against domestic surveillance and bans on fully autonomous weapons.
Still, criticism has spread within the company. OpenAI researcher Aidan McLaughlin said on X that he did not believe the agreement was justified.
Other employees have expressed admiration for Anthropic’s refusal to sign a similar agreement. One OpenAI staff member told CNN that many colleagues “really respect” the rival company’s stance.
User backlash and rising competition
The backlash has also spread among users. Critics on social media have called for boycotts of ChatGPT and encouraged others to switch to Anthropic’s Claude chatbot.
Data suggests the reaction had an immediate impact: uninstallations of the ChatGPT app reportedly surged by more than 295% in the day after the agreement was announced.
By Monday, Anthropic’s Claude had become the most downloaded free app on Apple’s U.S. App Store and remained at the top for several days, surpassing ChatGPT, Google’s Gemini and Grok, the chatbot developed by Elon Musk’s xAI.
Facing mounting criticism, Altman attempted to calm the backlash by answering questions from users on X and acknowledging that the rollout of the agreement had been rushed.
“It was definitely rushed, and the public picture didn’t look good,” he said.
OpenAI later said it would revise the agreement with the U.S. government after criticism that the deal appeared “opportunistic and sloppy.”
Altman said new language would explicitly ban the use of the company’s systems to spy on Americans. Under the revised terms, intelligence agencies such as the National Security Agency would also require additional contractual approval before using OpenAI technology.
AI’s expanding role in modern warfare
The debate comes as artificial intelligence plays an increasingly significant role in military operations. AI technologies are already used to manage logistics, analyze intelligence and process large volumes of battlefield data.
The United States, Ukraine and NATO rely on tools developed by U.S. data analytics company Palantir, whose AI-powered defense platform Maven integrates satellite imagery, intelligence reports and other military data.
According to Louis Mosley, head of Palantir’s UK operations, such systems can help commanders make “faster, more efficient, and ultimately more lethal decisions where that’s appropriate.”
However, experts warn that large AI models can still produce errors or fabricated information—known as “hallucinations”—raising concerns about their reliability in high-stakes military situations.
A.D