Hey there,
At least 10 people have reached out asking what I think about this article that went viral on X, and I think it deserves the entire Friday newsletter's energy.
Matt Shumer, the CEO of a company called OthersideAI, wrote a long post on X called "Something Big Is Happening."
It went mega viral - over 80 million views. Your cousin who doesn't know what a prompt is probably shared it on Facebook.
When I first read it, my reaction was kind of complicated.
Because on one hand, everything he wrote is true. I felt a huge sense of relief because he managed to put into words something I'd been struggling to articulate for the past few weeks.
On the other hand, I felt a very awake kind of concern.
So here's the short version of what he said, if you haven't read it yet (please do!):
AI went from not being able to do basic math in 2022, to passing the bar exam in 2023, to writing working software in 2024, to, right now, completing in minutes tasks that would take a human expert hours, weeks, or months. And that capability is doubling every few months.
He compares what's happening right now to February 2020 - that phase where COVID was spreading but most of us hadn't caught on yet. And I get why he chose that comparison. Everyone remembers the whiplash of those few weeks when everything went upside down.
But I think he's actually underselling it.
After COVID, things went back to normal-ish. Offices reopened and schools went back in person. We started traveling again. It was brutal, but there was a destination we could return to.
With what has happened in AI in the past few weeks, there is no going back to normal.
OpenAI just released their newest model - GPT-5.3 Codex. In the technical documentation, they casually mentioned that this model was used to help build itself. The AI debugged its own training, managed its own deployment, diagnosed its own test results.
Meanwhile, with the latest Opus 4.6 model release, Anthropic's CEO said that AI is now writing "much of the code" for Claude models, and that we might be only 1-2 years away from AI autonomously building the next generation of itself.
Read that again.
The AI is good enough at coding to help create the next, smarter version of itself.
Which will be even better at coding. Which will build an even smarter next version.
It means the pace of improvement isn't just fast and accelerating - it's compounding! (If capability doubles every four months, that's roughly 8x in a year.) For anyone building a business right now, that changes the math on pretty much everything.
And every knowledge worker in legal, finance, writing, engineering, or medicine who doesn't think their job will be affected? It already is… they just don't know it yet.
So the question for you isn't "should I start paying attention to AI?" We are way past that.
The question is: what are you actually doing with what you know?
There's a real gap right now between people who understand that AI is changing everything and people who are creating something with that understanding.
Knowing isn't the advantage. Doing is the advantage.
Most people try AI once or twice, get a mediocre response, and think "yeah, this isn't for me" or "it's not that good yet." They asked ChatGPT to write an email and it sounded robotic. They asked Claude to help with a project and the first answer wasn't what they wanted. So they closed the tab and went back to doing things the old way.
But that's like test-driving a car in first gear and deciding cars are slow.
Compare this to someone who didn't get a perfect output on the first try either. They gave it more context. They pushed back on the response. They said "no, that's not what I meant, try it like this instead." They try to build things. Most of the time it doesn't work, but they try again.
The iteration, the back-and-forth, teaching the AI how you think so it can help you build things - that's the actual skill.
And most people give up right before they'd start to see it click.
It's just about sitting with the tool long enough to figure out how to make it work for your brain, your business, your weird specific way of doing things.
And I get it, trust me, it feels like the ground keeps shifting. Every week there's a new model, a new tool, a new thing to learn. It's exhausting.
It can feel like you need to understand everything before you start.
You don't.
I started building custom AI workflows when the tools were way worse than they are now. It wasn't pretty. Half the stuff I made six months ago I've already rebuilt twice.
And that messy experimentation is exactly why I can now look at a new tool and know within 20 minutes if it's useful or just hype.
It’s why I know when it’s time to stop using ChatGPT and rebuild my ecosystem in Claude.
It’s why I don’t pay for expensive AI tool subscriptions for specific tasks - I build them myself.
That doesn't come from reading about AI.
It comes from using it badly, over and over, until you stop using it badly.
Matt said something I really liked: if a model even kind of works at something today, you can be pretty sure it'll do it well in six months.
And I know this to be true.
So if that article shook you to your core this week, good.
Use that energy. But don't just bookmark it and move on.
Pick one thing you've been putting off.
One project, one idea, one workflow. And spend this weekend actually doing something about it with AI.
Give yourself permission to make something ugly and imperfect. And let me know how it goes!
Keep it up, and don’t give up :)
Until next week,
Your AI Solopreneur Bestie,
Elena
If you want to learn how to use AI tools for your business in a way that feels authentic, without losing your voice or compromising your values - come join us in the AI Solopreneur Club.

