Anthropic, the company behind Claude, just sued the Pentagon.
They refused to let their AI be used for autonomous weapons or mass surveillance. The government's response? Blacklist them as a supply chain risk. A label normally reserved for foreign adversaries.
In a show of solidarity, 30 staff from OpenAI and Google DeepMind, both direct competitors, signed a brief supporting Anthropic's lawsuit. And OpenAI's head of robotics resigned over the weekend, citing concerns about OpenAI's own Pentagon deal.
So one company said no and got punished. Another said yes and moved straight in.
Why this matters if you run a small business
If you're using Claude, ChatGPT, or any other AI tool, the trustworthiness of that tool is now a live legal discussion. Not theoretical. Not future tense. Right now.
Think about what you're putting into these platforms. Customer data, internal processes, financial details, your thinking. All of that is feeding their models. And most of us, myself included until recently, just use these tools without really asking who we're handing that information to.
The gap between using a tool and understanding who's behind it is where the risk sits. It's worth knowing which companies will draw a line when pushed, and which ones won't. Because that tells you something about how they're treating your data too.
The boring infrastructure story that isn't boring
Oracle and OpenAI have scrapped plans to expand a flagship AI data centre in Texas, part of the Stargate project announced at the White House. Negotiations broke down; requirements changed. Meta are now in talks to lease the site, with projected capital expenditure of $135 billion in 2026 alone.
You might wonder what a data centre in Texas has to do with your business. Here's the thing. The tools you use every day (ChatGPT, automation software, AI assistants) all run on this infrastructure. When the biggest players in the world are shifting billions mid-build, it tells you how fast things are moving and how uncertain the landscape still is.
Which tools will get cheaper? Which will get more expensive? Which ones will even be around in two years?
This is a good reason to document your processes properly. If you build your entire operation around one tool and that tool changes dramatically, your team still needs to know what good looks like. Structure beats dependency every time.
CEOs are rethinking how many people they need
A new KPMG survey of 100 large company CEOs found that nearly 80% are now allocating at least 5% of their total capital budgets to AI. The key metric reshaping workforce decisions is what they call the "labour cost margin", basically the ratio of people to technology per unit of work delivered.
Now, this is big-company thinking. But the principle trickles down. If you're running a team of 5 to 20 people, the question isn't "should I replace people with AI?" It's "where is AI doing the heavy lifting so my people can do the work that actually matters?"
That's the shift. Not fewer humans. Better use of humans. And a much leaner cost base while you're at it.
Will