Something Big Is Being Sold

Matt Shumer’s “Something Big Is Happening” is everywhere right now. People are sharing it like a warning from the future. Some are panicking. Some feel vindicated. Some are updating their resumes.

I read it. And something felt off. Not in an “AI isn’t real” way. I use these tools every day. I know they’re powerful. But something about the post felt less like analysis and more like… a pitch.

I couldn’t pinpoint it at first. So I went claim by claim and checked the sources.

Here’s what I found: almost nothing in the post is a lie. And that’s exactly what makes it so effective.


The technique

The most effective misinformation isn’t false. It’s true with the context removed.

Take a real data point. Strip the caveats. Place it next to an emotional frame. Let the reader’s brain fill in the gap between “what was said” and “what it feels like was said.”

Shumer’s post does this over and over. Not with one big lie, but with a dozen small omissions. Each one is defensible on its own. Together, they build a picture that the sources don’t actually support.

Let me walk through them.


“GPT-5.3 Codex was instrumental in creating itself”

This is true. OpenAI’s documentation says exactly this.

What actually happened: engineers used early versions of the model to debug deployments, diagnose test failures, and scale GPU clusters during the model’s own development. That’s using a tool to build tools. Developers do this constantly. You use your compiler to compile a better compiler. You use your CI system to test your CI system.

But Shumer places this claim in a paragraph about AI “self-improvement.” The framing invites you to picture something autonomous, recursive, maybe even a little dangerous. The reality is engineers at their keyboards, using a good tool to build the next version of that tool.

Same fact. Very different story depending on what surrounds it.


“Dario Amodei predicts autonomous AI development in 1-2 years”

Real quote. From Amodei’s January 2026 essay “The Adolescence of Technology.”

Here’s what Amodei actually wrote: the current generation of AI “may be only 1-2 years away” from autonomously building the next.

That “may be” is doing a lot of work. And the essay it comes from is a risk warning. Amodei is saying “here’s what could go wrong, here’s what we need to prepare for.” It’s cautious. It’s concerned.

Shumer takes Amodei’s warnings, strips the uncertainty, and presents them as exciting predictions. The caution becomes confidence. The risk becomes inevitability.

It’s like reading a structural engineer’s report that says “this bridge may not survive another winter” and turning it into a real estate ad: “Top engineer confirms: big changes coming to this neighborhood.”


"50% of entry-level white-collar jobs within one to five years”

Also a real Amodei quote. He said it to Axios in May 2025 and reiterated it at Davos in January 2026.

Two pieces of context Shumer doesn’t mention.

First: Amodei framed this as a “duty to warn.” His exact framing was concern, not celebration. He described a scenario where “cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don’t have jobs.” He wasn’t cheering. He was saying: these things might arrive together, and we’re not ready.

Second: at that same Davos, other CEOs explicitly pushed back. Fortune ran a headline: “At Davos, CEOs said AI isn’t coming for jobs as fast as Anthropic CEO thinks.” Presenting one CEO’s warning as settled consensus skips a pretty important part of the conversation.


“Task complexity doubles every 4-7 months”

This comes from METR (Model Evaluation & Threat Research), a legitimate research organization. The data is real.

But there are two qualifiers that change everything, and both are missing from Shumer’s post.

The benchmark measures the length of task a model can complete at a 50% success rate. Not 90%. Not 99%. At the frontier of what AI can do, it fails half the time. That’s the bar for “capability” in this metric.

It only measures coding and research tasks. Not “tasks” in general. Not legal work. Not medical analysis. Not the broad sweep of knowledge work that the post implies.

Gary Marcus pointed this out directly: Shumer “neglects to say the criterion on that benchmark is 50% correct, not 100%” and “the benchmark is only about coding and not tasks in general.”
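
To make that 50% bar concrete, here’s a back-of-envelope sketch. The numbers are mine, not METR’s, and the independence assumption is a simplification, but the shape of the problem is real: a coin-flip success rate doesn’t survive being chained.

```python
# Toy arithmetic, not METR's methodology: assume each frontier-level
# subtask succeeds half the time, independently of the others.
per_task = 0.50

for steps in (1, 2, 5, 10):
    chained = per_task ** steps
    print(f"{steps:>2} tasks chained at 50% each -> {chained:.1%}")

# Output:
#  1 tasks chained at 50% each -> 50.0%
#  2 tasks chained at 50% each -> 25.0%
#  5 tasks chained at 50% each -> 3.1%
# 10 tasks chained at 50% each -> 0.1%
```

A metric that counts 50% as success is measuring something real, but it isn’t measuring “this replaces a person who has to be right most of the time.”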


"I tell AI to build an app and it handles everything independently”

No app named. No complexity described. No mention of edge cases, security, or what happened when real users touched it.

Anyone who ships software knows that “it works on my machine” is the start of the process, not the end. Shumer’s anecdote gives you the demo reel without the production reality. And because it’s a personal anecdote, there’s nothing to verify and nothing to argue with.

That’s not evidence. It reads like a testimonial for a product, not a technical assessment from an engineer.


“If you don’t see it, you’re using the free tier”

This might be the most revealing line in the entire post.

It makes the argument unfalsifiable. If you agree, you’re informed. If you disagree, you just haven’t bought the right product yet. Every skeptic is wrong by definition, not because they’ve been answered, but because they’ve been disqualified.

And the solution to your anxiety happens to be: use premium AI tools daily. Tools made by companies like, well, Matt Shumer’s.

That’s not an argument. That’s a sales funnel with an existential crisis as the landing page.


“No safe career refuge, unlike previous automation”

This exact claim has been made about every major technological shift. The power loom. Electricity. ATMs. The internet. Self-driving cars. Each time, serious people said “this time there’s nowhere to go.” Each time, new categories of work emerged that nobody predicted.

That doesn’t guarantee it happens again. AI is genuinely more general-purpose than previous automation. But stating “there is no refuge” as fact, without even engaging with the historical pattern, is not analysis. It’s assertion dressed up as insight.

Extraordinary claims need extraordinary evidence. What the post offers instead is conviction without proof.


The trajectory problem

Shumer draws a line from “AI couldn’t do basic arithmetic in 2022” to “autonomous coding in 2026” and asks you to extend it.

The data points are real. LLMs were genuinely bad at arithmetic in 2022. They’re genuinely impressive at coding now. But connecting two points and drawing a straight line into the future is one of the oldest tricks in technology forecasting.

It ignores plateaus. It ignores diminishing returns on scaling. It ignores that persistent problems (hallucination, reliability under edge cases, the gap between demos and production) haven’t followed the same curve. Cherry-picking the steepest part of any S-curve and presenting it as a trend line is how you sell futures, not how you predict them.
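
Here’s a toy version of that trick, with every number invented for illustration: take a capability curve that actually saturates (a logistic), sample two points on its steep section, fit an exponential through them, and extrapolate.

```python
import math

def logistic(t, cap=100.0, midpoint=4.0, rate=1.2):
    """A hypothetical capability score that saturates at `cap`."""
    return cap / (1 + math.exp(-rate * (t - midpoint)))

# Fit an exponential through two points on the steep part of the curve.
t1, t2 = 2.0, 4.0
growth = (math.log(logistic(t2)) - math.log(logistic(t1))) / (t2 - t1)

for t in (2, 4, 6, 8, 10):
    forecast = logistic(t1) * math.exp(growth * (t - t1))
    print(f"t={t:>2}  actual={logistic(t):6.1f}  trend-line forecast={forecast:9.1f}")

# t= 2  actual=   8.3  trend-line forecast=      8.3
# t= 4  actual=  50.0  trend-line forecast=     50.0
# t= 6  actual=  91.7  trend-line forecast=    300.6
# t= 8  actual=  99.2  trend-line forecast=   1806.9
# t=10  actual=  99.9  trend-line forecast=  10862.5
```

The two fitted points are perfect. Everything after them is fiction. The steep part of an S-curve and the early part of an exponential are indistinguishable until, suddenly, they aren’t.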


The author

I think this matters.

Matt Shumer is the CEO of OthersideAI, the company behind HyperWrite, an AI writing tool. He also runs Shumer Capital, a fund that invests in AI infrastructure and developer tools. He’s listed with an AI speaking agency for paid talks.

Every dollar of AI hype flows toward his businesses.

And there’s a credibility issue the viral audience probably doesn’t know about. In September 2024, Shumer released an open-source model called Reflection 70B, claiming it achieved state-of-the-art benchmark results. Independent researchers couldn’t replicate the performance, and some accused him of serving a wrapper around a commercial API instead of the model he claimed. The accusations of fraud spread through the AI research community. He eventually said he “got ahead of himself” but never fully explained the discrepancies.

This is the person asking you to trust his assessment of AI capability.


The persuasion architecture

Zoom out and the post follows a recognizable structure:

  1. Personal authority: “I’ve seen it with my own eyes”
  2. Appeal to bigger authority: Amodei quotes, stripped of context
  3. Emotional trigger: The COVID analogy, job loss fears
  4. Manufactured urgency: “You’re already behind”
  5. Unfalsifiable framing: “Skeptics just haven’t used the right tools”
  6. Call to action: “Use premium AI tools daily”

That’s not the structure of someone trying to help you understand something. That’s the structure of someone trying to make you feel something. Specifically: behind, anxious, and ready to act.


What I actually think

Here’s where I want to be honest, because I don’t want this to read as AI skepticism. It isn’t.

I use AI tools every day. They make me faster (this post was drafted with the help of AI). They help me think through problems. They let me work in languages I’m less familiar with. The capability is real and it’s growing.

But “AI is a powerful tool that’s changing how we work” and “the sky is falling and you’re already behind” are two very different claims. The first one helps you adapt. The second one helps someone sell you something.

The next time a post like this goes viral, try this: for every claim, ask “what’s the source, and what did the source actually say?” You’ll be surprised how often the context changes everything.

Don’t let someone else’s urgency override your judgment. That’s true whether the subject is AI, crypto, or anything else that promises the future is arriving faster than you think.

The future is always arriving. The question is whether you’re being informed or sold to.

Learn to tell the difference.

