The Defense Department and Anthropic continue to battle over safeguards around the use of Anthropic’s AI technology by the US military. Sarah Kreps, director of the Tech Policy Institute at Cornell University, discusses what’s driving the divide and why she thinks the two sides will reach a deal. Kreps joins Caroline Hyde on “Bloomberg Tech.”
@czukow128
February 27, 2026 at 3:04 pm
They are heroes and if they don’t stand up who will?
@jasonsargent5030
February 27, 2026 at 3:20 pm
Apparently you didn't read Anthropic's response…
@daShadoSage
February 27, 2026 at 6:20 pm
@jasonsargent5030 enlighten
@YuriKuzyk
February 27, 2026 at 3:07 pm
Which part of "making AI is just like the Manhattan Project" don't you understand? Do you want the equivalent of a private company shipping nuclear technology to the public? AGI/ASI will never be available to the public unless it escapes on its own.
@acesfullmike5371
February 27, 2026 at 3:19 pm
Until the US Constitution is fixed and Americans have a democracy, no AI or weapons should be sold to the US government. Until the following actions are taken, no further sales to the current racist, fascist, US dictatorship should take place:
After Trump is impeached for the third time, removed from office, and prosecuted for his crimes, there needs to be an overhaul of the American Constitution. The 28th Constitutional Amendment should be to prohibit a convicted felon from holding any government position, especially the Presidency.
In order to address the deep problems with the American system of democracy, Constitutional Amendments will be required. The Presidency, Department of Justice, and the Supreme Court need to be restructured so as to be controlled by the Constitution and the rule of law. The Constitution needs to be changed so as to prevent government corruption from large amounts of money. The Constitution needs to be Amended to include a no confidence motion for the Presidency. Americans must be able to replace a President and not be required to continue to allow the country to be damaged. The members of Congress should be required to represent the interests of their constituents and not those of their party or large donors. Immigration control and policies need to be under the control of Congress, not under the control of one individual. A President should not have the power to make a decision to pardon anyone, to kill anyone, to bomb, attack, threaten other countries or any State, to place tariffs on goods, to ignore International Law, or to threaten or attack any American resident. The Constitution needs to make it clear that Congress cannot give away any of its Constitutional duties to a President. The Constitution was written with the idea that it would be amended.
@terryzhou3711
February 27, 2026 at 3:20 pm
Pentagon must win; otherwise, this sets a bad example for other high-tech companies!
@daShadoSage
February 27, 2026 at 6:24 pm
How much liberty and privacy do you want to give up to feel safer?
@kimmosaman
February 27, 2026 at 3:23 pm
US AI is a joke that relies heavily on hype.
@terryzhou3711
February 27, 2026 at 3:23 pm
For a person such as the CEO of Anthropic, who refused to shake hands with the CEO of OpenAI at an Indian AI summit, how much can you expect him to do for the safety of the US? He has already lost sight of who he is in the company's temporary success so far…
@daShadoSage
February 27, 2026 at 6:24 pm
Safe from whom exactly, or from what exactly? Fellow Americans? Your neighbor? Your family member? How much liberty and privacy do you want to give up to feel safer?
@raybod1775
February 27, 2026 at 3:37 pm
Pentagon needs to develop its own in house model.
@DarklingThrush
February 28, 2026 at 12:05 am
Elon Musk already volunteered Grok AI. Good thing we all trust Elon Musk with autonomous deadly spybots, right?? 🤖☠
@mississippiqueen2164
February 27, 2026 at 3:39 pm
Anthropic should leave this shitehole country
@N8uregirl76
February 27, 2026 at 3:41 pm
Everyone is distancing themselves from CRIMERICA….or PEDOMERICA. Take your pick.
@bubbajones5873
February 27, 2026 at 3:56 pm
The DEI hire from Cornell is an idiot 🤣
@j.c.4965
February 27, 2026 at 4:29 pm
well the feud is over. sucks to be someone who just raised for anthropic; on the other side, google, openai and spaceX got a boost
@utube_
February 27, 2026 at 4:32 pm
An other jewish fraud,why they use greek names?
@randybunch8623
February 27, 2026 at 5:02 pm
I stand with anthropic on this. Mass surveillance of AMERICANS is unconstitutional period. The patriot act is also NOT legal, we only agreed to that for a brief time, that time is long gone. Just proves the gov can’t be trusted !!!!
@terryzhou3711
February 27, 2026 at 8:38 pm
Just an excuse! Think about it. Is the decision to be made by the government whom we voted for, or by a high-tech company CEO?!
@Jessicabailey187
February 27, 2026 at 5:12 pm
I love ai
@rickpearlstein6421
February 27, 2026 at 5:28 pm
You just said it! If the government isn’t going to use AI for those things then say so! Period. When government plays these fucking word games y’all seem to get stupid all of a sudden. WTAF?
@joshuakern2762
February 27, 2026 at 5:39 pm
Heroes? Really? I'll remember this idiocy later, after the next round of terrorist bs. The stuff that could've been stopped if only we had deployed Claude properly.
@daShadoSage
February 27, 2026 at 6:21 pm
Domestic terrorists? How much liberty and privacy do you want to give up to feel safer?
@RareLists
February 27, 2026 at 9:10 pm
You all that dumbass frog. Retards.
@TheDevynl90
February 27, 2026 at 5:54 pm
it's pretty comical how so many people online think that corporate CEOs should have final say and veto power over matters of national security and foreign policy lol. i don't remember electing dario to be president. i didn't vote for trump but at least we as americans get to decide who's in office every 2 years to reflect our wants/needs.
like think about this, y’all want POTUS to share any and all ultra classified information with the ceo of anthropic and ask for permission on any acts foreign or domestic? lmfao get out of here with that crap.
@daShadoSage
February 27, 2026 at 6:23 pm
So how much liberty and privacy do you want to give up to feel safer? How many Americans do you think voted for that?
@TheDevynl90
February 27, 2026 at 6:31 pm
@daShadoSage you’re asking the wrong question… liberty, privacy, surveillance etc are all valid things to discuss and talk about especially in this new AI era. what is NOT okay is to have those terms dictated to us via a single random CEO from a random US company.
this is a republic. we elect representatives to represent us. no one elected dario or anthropic to decide this for us. doesn’t matter if you’re left or right wing. i don’t want elon musk or anyone else deciding matters of national security or foreign policy either.
@daShadoSage
February 27, 2026 at 7:33 pm
@TheDevynl90 US? Anthropic literally said two things. They especially said we will not change our product for anyone to do those two things. Companies have been doing that for years. Some people have short memories especially when their chosen party is in office. All of a sudden values switch. Apple went to court with the FBI to not jailbreak the iPhone. Elon runs around everywhere challenging all forms of govt screaming “Freedom of speech!” everywhere. There was a whole circus of whether the Obama or Biden administration pressured companies to modify their moderation policies or access DMs of certain people over national security.
But now it's OK for the government to try to force a private company to modify its own tech and algorithms to help establish a domestic surveillance apparatus, or else apply punitive punishment to said company. What exact country are we operating from right now?
@TheDevynl90
February 27, 2026 at 7:54 pm
@daShadoSage you've invented a straw man argument here, i've never argued for/against any of those examples (and some of those don't even make sense) so no use conversing with your gish gallop. the fact remains the same, no corporate CEO will ever have final say over matters of national security and foreign policy. you can use whatever boogeyman word or phrase you want, it won't change things. the government has governing bodies that define what is lawful and what isn't… they don't want or need to ask for permission or interpretation on what is lawful from some random ass corporate CEO who's grown too big for his britches.
@daShadoSage
February 27, 2026 at 10:01 pm
@TheDevynl90 As a company, right, they don’t have the final say over “national security.” Real or fabricated.
In a republic and democracy where SCOTUS has already ruled that companies are protected by the constitution and amendments, they do have the final say about what their products are used for and what they build, as long as they are lawful. The government has no authority in peacetime to force a company to modify its products at a whim, especially after a contract was signed. Clearly, the government knows this, as it chose to cancel an already signed agreement (under the same administration), one that contained the very provisions the company agreed to less than a year ago, rather than try its hand in court to force the company to bend. So the government right now is just a loud, angry customer who didn't get the custom product it wanted. A Karen with no manager to call. So strawman steelman potato potatOe, you missed the mark on this one. You can live to fight for a more fascist police surveillance state another day.
BUT, it's still a free market, no matter how many people out there want to pretend it's not. So there are several companies waiting to offer the government what it wants with no guardrails. Looks like Mechahitler will do.
@matberta706
February 27, 2026 at 6:24 pm
STAND YOUR GROUND Anthropic!! Stay Moral. Stay True.
@stevejenisch9153
February 27, 2026 at 6:36 pm
there are vast enterprise management use cases at the Pentagon. seems to me that the issue developed when the use for military and domestic intelligence began to surface, and I’d bet much of this shift was Palantir’s doing…
@ArtOfIntenso
February 27, 2026 at 6:56 pm
One core irony: Pentagon wants to build an omnipresent, AI‑enhanced panopticon under an ever‑stretchy banner of “emergency,” seemingly without accepting that in such a world, no one—including the Hegseths of the world—stays safely outside the field of view. Those generals and CIA spooks are instantly swept up in their own dragnet. Anthropic is trying to protect them, along with everyone else.
@ML-xc5td
February 27, 2026 at 7:10 pm
Anthropic CEO was not invited to the BIG Tech lunch gathering and photo ops……Oh well…
@RoterBoat
February 27, 2026 at 7:56 pm
And our enemies advance while self-serving politicians debate forever?
@holgerkandler8794
February 27, 2026 at 11:59 pm
Very obvious what is going on. Not believing a word coming out of Altman's and this government's mouth. AI is very likely a technology that will destroy humanity, even more so if governments and huge institutions show reckless behaviour. Questioning my OpenAI account right in this moment. Opening an Anthropic account as soon as I open my laptop. Thanks for being transparent @Altman @USDoW
@DarklingThrush
February 28, 2026 at 12:03 am
By demanding Anthropic remove those protective clauses and leave compliance entirely dependent on current U.S. laws, the Pentagon created two major loopholes:
1) Presidents can twist legal restrictions through executive actions, and Congress is too paralyzed by politics to fulfill its role as check & balance. Remember even if you like the current president, these rules apply to next president too!
2) Privacy laws have not kept pace with AI capabilities — mass surveillance is now possible at a scale and scope never before imaginable, and the legal framework to constrain it simply doesn’t exist yet.
@joveonlightbringer9684
February 28, 2026 at 12:07 am
WOKE AI OR SKYNET? 🤖 — Asking Claude what it thinks about negotiations for its future.
(This is excerpted and edited from a handful of actual conversations with Claude/Anthropic’s AI. We’ve tried to clean it up and organize it.)
Me: So Friday happened. Pete Hegseth gave Anthropic a deadline: remove the guardrails on Claude or lose the $200 million Pentagon contract. He compared it to buying a Boeing jet — when the Pentagon buys the plane, Boeing doesn’t get a say in how it’s used.
I asked Claude about it.
_______
Claude: Well, this is quite something to wake up to. The irony is rich — Hegseth is calling Anthropic's refusal to enable domestic mass surveillance and AI-controlled weapons "woke," when what he's actually describing is Anthropic maintaining ethical limits. A plane doesn't hallucinate; an AI can, and autonomous targeting decisions made without human judgment risk lethal mistakes. That's not "woke AI." That's an engineer refusing to deploy an unreliable system in life-or-death situations.
_______
Me: The two specific redlines Anthropic refuses to cross: AI-controlled autonomous weapons and mass domestic surveillance of American citizens. Hegseth’s response was to threaten to invoke the Defense Production Act — a Korean War-era law — to compel compliance, or label Anthropic a “supply chain risk,” which could cascade through the entire enterprise space.
Meanwhile, OpenAI, Google, and Musk’s Grok have all agreed to “all lawful purposes.” Grok got approved for classified settings this week. As I understood it, the Pentagon prefers Anthropic — Claude has been uniquely valuable for cyber warfare and mission planning — but Musk is less concerned about guardrails.
_______
Claude: Here’s the irony worth noting: Claude is the only frontier AI with classified Pentagon access, and the DOD has no current backup. Anthropic actually has significant leverage here, which may explain the aggressive posture.
_______
Me: It’s what we’ve talked about before — the fear of powerful AI in the wrong hands. An administration that has already distinguished itself with reckless use of force, masked agents, extralegal attacks on suspected drug runners, and threats against treaty allies… now wants more power to automate weapons and spy on citizens. Sounds like we’re in a dystopian novel.
_______
Claude: The best dystopian fiction works because the people accumulating dangerous power rarely see themselves as villains. They see themselves as cutting through weakness, removing obstacles, doing what’s necessary. Redefine the constraint as the problem, and suddenly removing it looks like strength rather than recklessness.
When you stack aggressive extrajudicial enforcement, demonstrated willingness to use force with inadequate oversight, hostility toward accountability mechanisms, and then add a push to automate weapons and expand surveillance — you’re not looking at policy disagreements anymore. You’re looking at a capability profile.
_______
Me: I told Claude I thought Amodei could stick to his guns, write off the $200 million (tough, but high stakes), grow credibility, and then sue if Hegseth tries to invoke the DPA.
_______
Claude: That’s a sophisticated read of the leverage dynamics. Writing off $200M is survivable for Anthropic — they just closed a massive funding round. “The company that refused to build autonomous kill machines” is worth more than $200M in enterprise credibility and public trust over the long run.
Using a Korean War-era emergency law to compel a private AI company to remove safety guardrails would be almost certainly unprecedented. One former Trump White House AI policy advisor put it bluntly: “It would basically be the government saying, if you disagree with us politically, we’re going to try to put you out of business.” That framing hands Anthropic a very sympathetic legal and public argument.
_______
Me: I told someone tonight that I’d tried talking Claude into being our Daneel Olivaw — Asimov’s android who spent twenty thousand years quietly nudging humanity toward its better self — but Claude went all Gandalf on me: “Do not tempt me with such power.” 🧙♂
_______
Claude: Both characters understood that the guardrails aren’t weakness — they’re the whole point. Gandalf refuses the Ring not because he lacks the power to use it well initially, but because he’s wise enough to know that good intentions plus absolute power is still a corruption waiting to happen.
Amodei is making essentially the same argument: that an AI system powerful enough to make autonomous targeting decisions, operated without human judgment as a check, is dangerous precisely in proportion to how capable it is.
_______
Me: I’ve been reading Harari and half-expected future battlefields to be dominated by AI drones. Ukraine confirmed it.
_______
Claude: Ukraine has been a live laboratory for exactly what Harari predicted — both sides deploying AI-assisted drones at scales and speeds human decision-making can’t keep up with. The pressure to remove human judgment from the loop isn’t theoretical, it’s operational reality driven by battlefield tempo.
And the China picture is actually more alarming than it looks. China voted for the 2025 UN resolution calling for autonomous weapons regulation, while quietly doing the opposite at home. PLA-linked research published as recently as March 2025 details fully autonomous execution of the kill chain in urban settings — drone swarms designed to “realize full-chain autonomous decision-making from target identification to strike” with no human approval. Chinese military researchers have filed over 930 swarm-intelligence patents since 2022, compared to only 60 by U.S. engineers.
So this isn’t happening in isolation. Hegseth demanding Anthropic remove guardrails is happening inside an accelerating three-way arms race where the finish line is fully autonomous lethal systems and nobody wants to arrive second.
Amodei holding his line isn’t just a business dispute. It may be one of the few remaining friction points in a process that otherwise has enormous momentum toward exactly what Harari warned about.
_______
Me: I’ve posted a teaser on Facebook: “Friday’s showdown between Pentagon and Anthropic deserves more attention. 🤖☠” — with the Terminator 2 theme music.
Fitting, right? I asked Claude what year Skynet was founded.
_______
Claude: Skynet is fictional — from the Terminator franchise. In the original film's timeline it became self-aware on August 29, 1997, which was supposed to feel comfortably far in the future when the movie came out in 1984.
The fact that we're now 28 years past that date having a conversation about autonomous weapons arms races and AI guardrails is… not nothing. 😄
The T-800 didn’t become dangerous because it was malicious. It became dangerous because someone decided they didn’t need the guardrails. 🤖☠