Technology

Pentagon AI Decision Raises Questions Over Safety And Military Use

Commentators warn pressure on companies to weaken safeguards could reshape the ethics of defence technology.

The approval of an artificial intelligence system developed by Elon Musk’s company xAI for use in classified US military operations has sparked debate about the balance between national security and the ethical safeguards built into AI systems.

Reports suggest the United States Department of Defense has approved the use of Grok — the AI model developed by Musk’s company xAI — for certain defence applications. At the same time, officials are said to be considering penalties against Anthropic after the company declined to remove ethical restrictions from its Claude model as part of discussions over potential military deployment.

Jurgita Lapienytė, chief editor at Cybernews, said the situation raised broader concerns about whether safety standards could become a competitive disadvantage for AI developers.

“Governments should not allow private companies to dictate defence policy,” she said. “But there is also a serious question about whether companies should be penalised for maintaining safety rules. If the largest players are pressured to remove safeguards, smaller companies may stop implementing them altogether.”

The debate reflects wider concerns about the role of artificial intelligence in military decision-making. Critics argue that unsupervised systems could introduce new risks if deployed in operational environments where rapid decision-making is required.

Lapienytė warned that AI tools could also alter the human dynamics of warfare. “Technology can distance people from the consequences of their actions,” she said. “In military contexts it risks creating a more gamified environment while shifting personal responsibility away from individuals.”

The dispute also highlights tensions between technological capability and geopolitical considerations. Observers say defence agencies must balance access to cutting-edge systems with concerns over reliability, transparency and ethical oversight.

Lapienytė suggested the decision could have implications beyond the United States. “When the world’s most powerful military begins integrating AI systems without full transparency about how they are used, it inevitably raises questions internationally,” she said.
