The growing use of AI in defence strategy is unsettling enterprise users with fresh concerns over ethics and data security. Several companies are slowing or pausing AI deployments in sensitive sectors, while escalating these issues to internal ethics boards and governance councils.
While earlier discussions in ethics committees centred on bias and hallucinations in AI systems, the conversation has shifted to more complex concerns about AI’s role in geopolitics and the weaponisation of technology.
“We have observed this caution among clients,” said Biswajeet Mahapatra, principal analyst at Forrester. “Forrester notes some firms have temporarily scaled back or paused ambitious AI projects until proper guardrails are in place, and many banks now restrict genAI usage to approved pilots only. Such moves often involve internal AI ethics committees or governance councils reviewing high-risk use cases before they proceed.”
According to a recent study by Forrester, data privacy and security concerns are the top barriers to generative AI adoption. Nearly 40% of enterprise AI decision-makers cite worries about how models handle and expose sensitive data.
“This indicates growing reluctance to fully trust opaque AI services with confidential enterprise data,” Mahapatra said. In fact, in India, ungoverned AI deployments will force 25% of CIOs into damage control by 2026, the study predicts.
“Enterprises are becoming much more disciplined and risk-aware,” said Srinivas Konidena, vice president at global HR management firm ADP. “The conversation has shifted from excitement alone to questions of governance, data lineage, access control, security safeguards, and accountability.”
He said enterprises are not stepping back from AI; they are raising the bar for responsible deployment.
“Adoption becomes more sustainable when organisations have clarity on what data is being used, where it is processed, what controls are in place, and how privacy, compliance and resilience are being managed,” Konidena said.
From a policy perspective, a new battle between big tech and governments is playing out. “Before this, the fight was over anti-competition, anti-monopoly, advertising control, etc.,” said Abishur Prakash, an author and geopolitical strategist at Canada-based advisory firm The Geopolitical Business Inc.
“Now the fight is over AI companies that have decided the ideology of their AI, that have decided to train their AI in certain data, which is affecting the outputs.”
Governments procuring AI worry about whether it could be biased toward a certain nation, what will happen to the data they feed it, and whether it could be turned against the country, its government or its officials.
“But enterprises are worried about something very different. Will this AI go offline in the future? There are big concerns in Europe right now that America has a kill switch,” Prakash said.
“If tensions flare between the US and Europe, America could simply cut the flow of critical technology services to the entire EU, effectively a technology blackout. So that is the ground that enterprises are standing on. And sovereign AI is a way to mitigate that risk.”