
Over the past week, a significant and fast-moving controversy has unfolded in the AI industry – one that has caused us, along with many others, to take a hard look at which AI companies we choose to support and partner with.
We’ve decided to switch. From today, MIDAS is moving away from OpenAI and over to Claude, the AI assistant developed by Anthropic.
Here’s why.
What Happened
If you haven’t been following the news, here’s a quick summary. The US Department of Defense had been using Anthropic’s Claude AI on its classified networks – the first AI to be deployed in that context. As the contract came up for renegotiation, the Pentagon demanded that Anthropic remove two key restrictions from the agreement: a prohibition on using Claude to power fully autonomous weapons systems, and a prohibition on using it for mass domestic surveillance of American citizens.
Anthropic refused. Their CEO, Dario Amodei, was unequivocal: current AI models are simply not reliable enough to be trusted with lethal autonomous targeting decisions, and mass surveillance of citizens is incompatible with democratic values. The company had tried for months to reach a workable agreement, but the Pentagon’s position was that it required the ability to use AI for “all lawful purposes” without restriction – a standard that, as legal experts have noted, is far broader than it sounds.
The fallout was swift. President Trump ordered every federal agency to immediately cease using Anthropic’s technology. Defense Secretary Pete Hegseth announced Anthropic would be designated a national security supply chain risk. Hours later, OpenAI – having apparently been in talks throughout – announced it had struck a deal with the Pentagon to fill the gap.
Many of OpenAI’s own employees were furious. A number of them, along with other prominent tech figures, signed an open letter opposing the government’s retaliation against Anthropic. Sam Altman himself had written in an internal memo just days earlier that OpenAI shared Anthropic’s “red lines” – before going on to sign a deal under which, critics said, those red lines were far less robustly protected.
The contrast in the two companies’ behaviour was stark and, for us, clarifying.
Why This Matters to Us
We’re a small UK software company. We make room booking software. We’re not in the defence industry, and the intricacies of US military contracting aren’t something we’d normally have strong opinions about.
But this issue isn’t really about US military contracts. It’s about what kind of company you’re doing business with – what they stand for, and what they’re willing to sacrifice to stay in favour with those in power.
Anthropic chose to walk away from a government contract worth up to $200 million rather than remove safeguards it believed were ethically essential. OpenAI, in the same week its CEO had privately affirmed those same values to his own staff, made the opposite choice.
We think that matters. The companies whose technology we choose to use reflect, in a small way, on us. And we’d rather reflect the values of a company that held its ground.
What We’ve Changed
We used OpenAI’s models primarily in two places within our products and workflows:
Miriam, our virtual assistant. When no live support agents are available, visitors to our website can get help from Miriam, our AI-powered chatbot. Miriam was previously powered by OpenAI. She’s now powered by Claude.
Code optimisation. As we’ve written about previously, we’ve been using AI assistance to help our human developers optimise MIDAS source code – identifying opportunities to make our software run faster and more efficiently. We’ve switched that workflow from ChatGPT to Claude as well.
In both cases, the transition has been smooth. Claude is a highly capable model, and we’re pleased with the results so far.
We’re Not Alone
Claude surged to number one on the Apple App Store in the days following the controversy. Thousands of individuals and organisations made the same decision we did – not because Anthropic asked them to, but because the situation made the choice feel important.
It’s worth noting that this isn’t simply about picking a “winner” in a commercial rivalry. We genuinely hope the situation with Anthropic and the US government is resolved fairly and without further retaliation. Anthropic has, by all accounts, been one of the most thoughtful voices in AI development when it comes to safety and responsible deployment. The idea of that voice being sidelined – or punished for speaking up – should concern anyone who cares about where this technology is heading.
A Note on AI in MIDAS
As we’ve said before, AI is not taking over the development of MIDAS. Our software is, and will remain, built and maintained by our human team. But AI tools have become a genuinely useful part of how we work – and it matters to us that we use them thoughtfully, including being selective about whose tools we use and why.
If you have any questions about these changes, or about how AI is used in our products, please don’t hesitate to get in touch.