Our agreement with the Department of War

Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies. https://openai.com/index/our-agreement-with-the-department-of-war/
OpenAI: "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s."

Then why did the DoD kick out Anthropic? Ah, I see...

OpenAI Red Line #1: "No autonomous weapons." The contract language: AI won't direct weapons "in any case where law or Department policy requires human control." Wherever policy doesn't require human oversight, the restriction disappears.

OpenAI Red Line #2: "No mass domestic surveillance." The contract language: no "unconstrained monitoring... consistent with these authorities" - authorities that have historically been used to justify enormous surveillance programs (see: FISA, EO 12333). They're not prohibiting surveillance.

OpenAI Red Line #3: "No high-stakes automated decisions." The contract language: AI won't make decisions "that require approval by a human decisionmaker under the same authorities." If the government changes what requires human approval, the guardrail moves with it.

Bottom line: OpenAI built a framework that sounds like constraint but preserves maximum flexibility for the government. Every red line contains an escape clause written by the entity being restrained. There is no real guardrail here. Sam Altman is playing word games like a politician.
OpenAI is why the Middle East is burning. “you’re right that was a school. We will do better next time”
This breh crashed out on Nazi Twitter last night after randomly digging the hole deeper in some AMA-style clusterf. You hate to see it
"go do something for a while." My students sometimes write that way, but not by the time they are seniors.
I have some presentations etc. for our C-suite AI huffers this week, and I plan on answering their questions with this if I start mucking up
Anthropic's Claude AI being used in Iran war by U.S. military https://www.cbsnews.com/amp/news/anthropic-claude-ai-iran-war-u-s/
TACO (Trump chickening out again), as it relates to Anthropic's Claude AI, being played out in the Iran war.

On Jun 3, 2026, Trump 2.0 blacklists Anthropic as the AI firm refuses a Pentagon demand: Trump ordered U.S. government agencies to “immediately cease” using technology from the artificial intelligence company Anthropic. DUI Hegseth, soon after Trump’s order, said he was ordering the Pentagon to “designate Anthropic a Supply-Chain Risk to National Security.”

Against this backdrop, CBS is reporting that the Defense Department uses Claude for synthesizing documents and making logistics and supply chains more efficient, among other tasks.