The "Nimble and Aggressive" Lie
Why newsrooms are A/B testing their way to irrelevance—and why AI strategy requires a different kind of failure.

Throughout my career, I’ve leaned hard into the “nimble and aggressive” tag to describe the organizations I’ve led or admired.
If you look at newsroom job descriptions over the last decade, those two words are practically a requirement for any outfit trying to prove it has a future. They are the shorthand we use to tell the world (and ourselves) that we aren’t like the legacy giants of the past. We wore that label like a badge of honor. To us, being a “nimble and aggressive” organization signified that we were the experimental ones—the ones moving fast, breaking things, and out-maneuvering the slow-moving incumbents.
But if I’m being honest with myself (and now with you), that organizational identity was always much easier to claim in an interview or a pitch deck than it was to actually demonstrate in the daily grind.
For years, many of us operated under a comforting, if expensive, delusion: that working with a cutting-edge vendor made our organization cutting-edge by association. We’d sign a six-figure SaaS contract for the latest “industry-changing” tool—thanks a lot, “AI-powered” whatever—explain to our board we were “being innovative,” and pat ourselves on the back for building the first journalism company on Mars.
Meanwhile, we sat comfortably in our chairs as our “innovative” strategies amounted to little more than... subscriptions.
We weren’t experimenting. We were optimizing. We were buying innovation, not being it. That lie was comfortable, right up until AI turned it into a death sentence. You cannot purchase “AI Strategy” from Zendesk. You have to actually build the organizational strength to develop one.
Today’s Optimization Trap
Optimizing and experimenting are two very different activities, with very different goals, outputs, and investments. If you’re succeeding at one and failing at the other without realizing it, I guarantee you the one you’re succeeding at is optimizing.
There are a million levers to pull to get “2% better” today. And yet, as we stare down the barrel of the greatest disruption in our industry’s history, “2% better” has left us with some extremely fragile business models that don’t have the roster flexibility to actually do anything about it.
Optimization is maintenance. It asks, “How do we do what we do today, just better?”
An “incremental improvement” is the comfort food of the corporate set because it can be easily measured:
How do we get our churn down by 0.5%? Time to tweak that paywall logic!
How do we improve newsletter open rates? Let’s A/B test whether “Breaking” or “Update” makes more people click (a quick sketch of that test follows this list).
How do we save editors ten minutes of drudge work? Sure, let’s use AI to summarize these meeting minutes.
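To be fair to the Engine crowd, that subject-line test is real, measurable work; it just isn’t experimentation. Here’s a minimal sketch of what it amounts to in Python, using nothing but the standard library. The variant names echo the example above, and the send and open counts are invented purely for illustration:

```python
# A rough sketch of the "Breaking" vs. "Update" subject-line test.
# The counts below are made up for illustration; plug in your own numbers.
from math import sqrt, erf

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: is variant B's open rate really different from A's?"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (normal approximation).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z(opens_a=1840, sends_a=20000,   # "Breaking"
                                  opens_b=1985, sends_b=20000)   # "Update"
print(f'"Breaking": {p_a:.1%}  "Update": {p_b:.1%}  z={z:.2f}  p={p:.3f}')
```

Twenty minutes of arithmetic, a tidy p-value, and a fraction-of-a-point bump in open rate. That’s the “2% better” lever, and it’s worth pulling. It just won’t tell you whether the newsletter itself should exist.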
Optimization is important! In fact, I’d argue most newsrooms today are criminally behind the curve at just optimizing their workflows. But it isn’t experimentation, nor should we be conflating the two.
The problem is, if you’re optimizing a product that fundamentally isn’t working for how people get news in 2026, all you’re really doing is riding that buggy off a cliff with style.
The Strategic Blackout
There was a time when newsrooms loved sharing their failures and wins with each other. We built entire careers out of it. Then the pressure to optimize ate away at the budget and the mental bandwidth required to actually experiment.
I was skimming through some OpenNews data that Ben Werdmuller and I have been discussing over on LinkedIn, and one number keeps haunting me.
There has been an 80% reduction in shared code repositories in our industry over the last ten years.
Code repositories and dev blogs are, essentially, journalism organizations showing the world the “how” of what they do. They are the public-facing R&D departments we’ve all decided to stop funding, all the while congratulating ourselves on how innovative “buying” a new subscription makes us feel.
What happens when you stop publishing the “how” of what you do? Every single newsroom has to reinvent that wheel completely on its own. We have optimized ourselves into silos just as the entire industry was collapsing out from underneath us.
You Can’t A/B Test Your Way into an AI Strategy
If all you do with AI is optimize—create more SEO garbage or schedule social posts—well then you’re just going to build a more scalable version of a broken business model.
So how do you actually start experimenting with AI? It starts by asking questions that don’t have comfortable, conservative answers:
Optimization is using AI to generate a clickbait headline.
Experimentation is building your own AI bot to interview your 20-year archive to see whether it can surface any lost local connections or interesting patterns, and then refusing to judge whether the result sucks for at least six months.
Ask yourself: If an LLM can deliver my reader the perfect personalized news digest, why does my tired 800-word piece need to exist? If you aren’t making space for your team to fail trying to answer that question, you’re not experimenting. You’re doing maintenance while everyone else figures it out.
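And to make the archive example above concrete: the first version of that “interview your archive” bot doesn’t need a vendor contract or even an LLM. Here’s a minimal sketch, assuming nothing more than a local folder of exported story text files and scikit-learn; the folder path and the sample question are placeholders I made up, not a real system. It’s just a retrieval pass that surfaces which old stories cluster around a question you put to it:

```python
# A weekend-prototype sketch of "interviewing" a story archive.
# Assumes a local folder of plain-text story exports; the path and the
# sample question are placeholders.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive_dir = Path("archive/")                      # hypothetical CMS export
stories = sorted(archive_dir.glob("*.txt"))
texts = [s.read_text(encoding="utf-8") for s in stories]

# Turn every story (and the question) into TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
doc_vectors = vectorizer.fit_transform(texts)

question = "Which local businesses kept reappearing in zoning fights?"
question_vector = vectorizer.transform([question])

# Rank the archive by similarity to the question and surface the top hits.
scores = cosine_similarity(question_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1][:10]:
    print(f"{scores[idx]:.3f}  {stories[idx].name}")
```

Swap the TF-IDF step for embeddings, or put an LLM on top to summarize the hits, if you like. The point is that this prototype’s only deliverable is what you learn from it, which is exactly the kind of output the KPI spreadsheet can’t see.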
The Engine vs. The Explorer
For organizations to actually get through this period of transition, we need to be honest about which mode we’re investing in and which we’re just paying lip service to. I like to think of it as balancing the needs of the Engine vs. the Explorer.
1. The Engine (What you optimize)
Goal: Extract as much value as possible out of your current business model.
Metrics: Conversion, CPMs, retention, MAUs, etc.
AI Use Case: Automating notebook transcription, SEO tagging, and newsletter distribution.
2. The Explorer (What you experiment with)
Goal: Identify the next product category or business model before the current one collapses.
Metrics: Lessons learned, prototypes created, and assumptions invalidated.
AI Use Case: Building AI agents that change the nature of your relationship with the reader or experimenting with nonlinear story formats that your CMS can’t support.
If your staff meetings are all about how to hit next month’s KPIs, you don’t have an Explorer. You have a very well-oiled Engine. True resilience means insulating your Explorer from the Engine. It means giving a team room to spend six months on a project that could totally flop, without punishing them if it does.
The Reality Check
We have to stop using “nimble” as a buzzword to describe our vendor list and start using it as a metric for our willingness to fail.
How’s your organization actually doing at leaving room for failure? Is every instinct to create immediately crushed under a monthly KPI report? Are you still hoarding “company secrets” that every other organization would benefit from, too scared to share them?
I want to know where the "Explorer" lives in your organization. Is there a project you’re protecting from the KPI spreadsheet right now? Or are you struggling to find the room to even start? Let’s talk about how we start sharing the "how" again.
P.S. — Did this analysis provide you with a breakthrough strategy? If so, please consider making a one-time tip to support the deep research and analysis that goes into every Backstory & Strategy post.
Additionally, if you found this post helpful, please restack it and share it with your audience. This spreads the word and keeps me writing the types of content you enjoy.