
You Get What You Surface: How GPT-5 Exposed the Quiet Power of Defaults

By Asaf Shamly | October 20, 2025

In August, OpenAI quietly reset expectations for what everyday AI can do.

With little warning, the company launched GPT-5 – its most advanced reasoning model yet – and rolled it out not just to Pro users, but to the general public. It was the first time a frontier model of this caliber was made accessible to hundreds of millions of people by default. 

No complex settings. No paywalls. No fine print.

Technically, it was a model release. But practically, it was something more: an interface moment. The moment OpenAI made high-level reasoning the default (not the exception) for hundreds of millions of people.

But what happened next wasn’t what OpenAI expected.

The router that dynamically assigns model strength based on task complexity broke down. Users didn’t understand what they were seeing. And what should have been a leap forward quickly backfired into frustration. Reddit lit up. “Bring back GPT-4o” trended. OpenAI did what any platform does in a moment of panic: it brought back a legacy model to put people at ease.
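
To make the idea of a router concrete, here is a minimal sketch of what complexity-based routing might look like. It is purely illustrative: the heuristic, thresholds, and model names are my own assumptions, and OpenAI has not published how its router actually works.

```python
# Illustrative sketch only: a toy complexity-based model router.
# The scoring heuristic, thresholds, and model names are assumptions
# for illustration and do not reflect OpenAI's actual routing logic.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy for task complexity: longer prompts and 'reasoning' keywords score higher."""
    keywords = ("prove", "step by step", "analyze", "compare", "plan")
    score = min(len(prompt) / 500, 1.0)                           # length signal, capped at 1.0
    score += 0.2 * sum(kw in prompt.lower() for kw in keywords)   # keyword signal
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Pick a model tier from the complexity estimate."""
    score = estimate_complexity(prompt)
    if score > 0.6:
        return "heavy-reasoning-model"   # slow, expensive, strongest reasoning
    if score > 0.3:
        return "standard-model"          # balanced default
    return "fast-light-model"            # quick answers for simple asks

print(route("What's the capital of France?"))   # -> fast-light-model
print(route("Compare these two contracts step by step and plan a negotiation strategy."))  # -> heavy-reasoning-model
```

The failure mode is easy to see even in this toy version: if the routing layer misjudges complexity, or stops working, users silently get a weaker model than they expected – an invisible downgrade they can feel but can’t explain.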

For anyone working in AI, product, or UX – there’s a lesson here.

Features and capabilities are one thing. Making sense to people is another.

As someone who lives at the intersection of intelligent systems and real-time user engagement, I’ve been thinking a lot about what this moment reveals about us.

People Form Relationships With Interfaces

What shook OpenAI most wasn’t technical feedback – it was emotional backlash.

Users were genuinely upset. They missed the “feel” of GPT-4o. They didn’t trust GPT-5, even though it was technically better.

That’s not something you’d expect from a switch in backend logic. But it’s exactly what happens when systems stop being tools and start feeling like collaborators.

This isn’t unique to AI. It’s a broader behavioral truth.

When people form expectations about how a system responds, those expectations are sticky. We get attached to feedback patterns. Familiarity leads to trust. Change that, even for improvement, and you risk rejection.

In media, we’ve been seeing this for years.

Change the layout of a homepage? Expect a drop in engagement.

Tweak a targeting model? Watch conversions dip before stabilizing.

Redesign an ad experience? Even the smallest UX shifts can drag down performance.

None of this is irrational.

We anchor our behavior to expectations. And when those expectations aren’t met, performance drops with them.

Accessibility Doesn’t Equal Understanding

One of the most fascinating stats from the GPT-5 rollout came directly from Sam Altman:

Less than 1% of free users had ever engaged with OpenAI’s most advanced models.

Not because they weren’t interested.

But because the interface never told them they could.

Think about that.

Hundreds of millions of people (700M weekly active users) sitting on top of some of the most sophisticated reasoning technology available to the public – and only a tiny fraction ever took advantage of it. Not out of rejection, but out of invisibility.

I think this is especially relevant in the context of data, visibility, and signal quality.

In media, we often talk about “making smarter decisions.” But how many of those decisions are based on information that’s actually surfaced? How often are we missing the context that would have changed our approach – not because it didn’t exist, but because we didn’t know where to look?

That’s not a tooling problem.

It’s a framing problem.

If insight doesn’t show up at the moment of action, it might as well not exist.

The Default Shapes the Outcome

What made GPT-5 different wasn’t its architecture. It was that OpenAI made it the default.

And when defaults change, everything else follows.

In advertising, we don’t pay much attention to defaults. But they’re everywhere:

The default metrics we optimize toward (CTR, viewability, CPM)

The default formats we buy (standard display, midroll, sponsored content)

The default assumptions we make (more impressions = more impact)

But defaults aren’t neutral. They’re historical. They reflect what was easy to measure, not necessarily what mattered most.

And just like GPT-5 exposed hidden complexity, media teams are starting to realize the limits of their own assumptions.

That’s a good thing. Because it opens up room for better defaults – ones that prioritize context, quality, and attention over raw delivery.

Final Thoughts

The launch is a reminder that how something is delivered shapes its reception – whether it’s a model, a metric, or a message.

We need to ask harder questions about where our own attention goes and what we’re optimizing by default – is it still serving the outcomes we care about?

Because in the end, what we measure is what we optimize, and maybe we’re not measuring the right things.

Hint: I think we’re not.

 
