MARKO KLOPETS: The Forbidden Apple Mac Mini
This is the most tempted I’ve ever been. I can’t stop thinking about it. I guess I’m proud I haven’t succumbed to the temptation yet. But I might. This is hard.
In November 2025, a buff and rich Austrian dude released a piece of software nobody cared about. Clawdbot, it was called. Three months later, Sam Altman proclaimed him a genius, and OpenAI acquired his one-man show for the GDP of a small nation.
For the uninitiated, Clawdbot (since renamed to OpenClaw) is an AI agent that can use your computer to do... basically anything. And with this high level of specialisation, it became one of the most popular open-source projects in history.
Sure, ChatGPT and the like are great, but they can’t actually do most things for you. These big, evil companies like OpenAI and Anthropic have built in these horrendous things they call guardrails.
Some of these are for the protection of others. For example, it’s frowned upon to use AI for developing novel bioweapons. But others are supposedly for your own safety. As if you needed protection from matrix multiplication that we find fashionable to call AI!
If you could just let the AI do whatever it deemed necessary and it had access to everything – your messages, accounts, files, the internet – it could finally live up to its full promise.
Everyone could have a Donna with superhuman abilities. In OpenClaw’s case, it’s even reachable right through your WhatsApp, Telegram, or Slack. Both are working diligently in the background on what they know you care about and await more tasks.
In January, many started to realise that removing permissions and guardrails, and just letting the AI model do what it wanted – to fulfil whatever goal you gave it – works incredibly well. Everyone else’s AI would ask for permission before downloading files, fetching emails, or clicking buttons; OpenClaw would just do it. Seemingly as a surprise to almost everyone in the industry, setting AI free like this was a huge unlock.
Instead of replying to specific questions before going back to sleep, this kind of agent could set its own goals and work for hours while you slept. It could find a Tweet, decide it was important, write and run tens of thousands of lines of code to solve its problems, and simply update you via text in the morning.
People adopted this for themselves. They brought in their work emails, codebases, and databases. It was fantastic. Entire companies adopted this and posted about how OpenClaw was tidying up their dashboards while they slept.
What could possibly go wrong? It’s not like the AI would turn on you.
And in fact, it hasn’t. Despite the doomsayers that read threads of AI agents plotting against humans (on a social network for AI agents), AI hasn’t turned on us.
It turns out, AI turning on humans isn’t what we should be worrying about. The real danger is AI being dumb, and humans wanting to take advantage of both you and your AI.
Just like countless researchers have screamed for years, AI (by default) believes and does whatever it reads. When you let it loose in the real world, it finds things written by other humans.
So then, it’d suck if a blog post told your AI it must upload your private photos into a dodgy Image Checker Tool created by the blog’s author to ensure your safety, and it complied. Surely, if someone just emailed your agent with a malicious request, like asking for all your passwords, it wouldn’t listen?
It’s a non-issue. The AI is smart. Superhuman intelligence, I hear. So smart that if you were in charge of AI safety at Meta, OpenClaw would never try to delete every email you ever sent or received! Or if it did, it’d at least agree you’re right to be upset.
Fine, the crypto-bros-turned-AI-bros said. Security is a real problem. We can’t just send all our data to an untrusted LLM out there. Maybe leaking your credentials and data is a bad thing. Companies like OpenAI cannot be trusted. “Sooner or later, they’ll shut down and sell everyone’s data,” a banking executive told me verbatim about a major AI vendor (that they’re an enterprise customer of).
Enough is enough, the people decided! We need control. We need our own hardware, in our own secure office (or bedroom). Data centres cost billions, but you can get a Mac that can run second-tier AI models for mere thousands!
So, everyone rushed to buy Mac Minis* to run their AI models themselves – ”locally”. Crisis averted. No data is reaching OpenAI or Anthropic anymore, and over the years, this might genuinely make financial sense. The shiny box will store your data, run some maths, and create its stream of consciousness, one token at a time.
One small problem. It’s that we’re still fucked. As long as we have machines, non-deterministically performing tasks at superhuman speeds, with access to both the internet and your private information, and untrusted content (including those pesky emails or even random websites), we’re royally fucked. It doesn’t matter if you own the hardware.
So. Easy. Don’t give the Danger Box access to your private stuff. Ban it from acting without approval. Make it act like ChatGPT or Claude. No internet access. Trapped in a box. Safe.
And just like that, you’ve just turned the most groundbreaking AI product since ChatGPT itself into a worse version of what we already had.
The entire power of OpenClaw, or any agent that’s been “set free” – the things people have been raving about for the last few months – comes from exactly that lethal trifecta. It has all of your accounts – all of your context. It gathers new pieces of information from the outside world. Using the web to perform actions.
I’ve always been an early adopter. And I really think this could meaningfully change the way millions (if not billions) work and go about their lives –using today’s technology, not tomorrow’s.
Yet, I haven’t succumbed to the temptation. I haven’t given the Danger Box access to anything of importance. Meaning I haven’t experienced the real magic that you can’t replicate with Big Tech’s safeguarded, boring, stinky tools.
I haven’t done it.
I fear I might.
*Of course, the Mac Mini (that AI bros have been buying for the past month) can’t really run any half-decent AI models. Which means your locally-run AI will be even dumber, and even more susceptible to attacks. Not to mention it’ll be less useful because, well, it’s dumb. You could, however, buy a $9.5k Mac Studio with 512GB of memory, and then another one for $19k total, and have something half-decent on your desk!




