The AI Transparency Problem: What Business Insider's Layoffs Reveal About Media's Blind Spot
When 70% of your newsroom uses AI daily, shouldn't readers know about it?

Earlier today, New York Times reporter Benjamin Mullin shared on LinkedIn that Business Insider was cutting 21% of its staff "as it looks to reduce its exposure to 'traffic-sensitive businesses' and focus on 'fully embracing AI.'"
One detail from CEO Barbara Peng's memo caught my attention: "Over 70% of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we're building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better."
Seventy percent. That's not experimentation — that's integration.
It made me wonder: what exactly are all these BI employees using AI for? So I went digging on their website for answers.
What I Found (And Didn't Find)
I started by reading random articles. Many had standard affiliate link disclaimers at the top, but I saw no mention of AI—nothing identifying whether AI was used for writing, research, or anything else.
I checked their Legal & Privacy policies. Under "Prohibited Uses" in their Terms of Service, I found this interesting restriction: users can't use BI's content "to develop any software program, model, algorithm, or generative AI tool" or for "training or using the Sites' content in connection with the development or operation of a machine learning or artificial intelligence (AI) system."
They don't want others using their content to train AI. But there was still nothing about how BI itself uses AI.
I found one clue: a publicly shared 2023 memo about AI from then-Editor Nicholas Carlson. He wrote:
"Artificial intelligence promises to be a tool at least as powerful as all of those. To do our very best for you, our newsroom is going to have to learn how to use it. But we are going to have to learn how to use it carefully... That's why I can promise you that no matter what tools we use to make our work, you can trust us to be accountable and responsible for each story's accuracy, fairness, originality, and quality."
Notice what's missing? Any mention of transparency, or of how they'd tell readers when and how AI contributed to their content.
The Transparency Gap
Look, it's unfair to single out Business Insider. They're not the only publisher experimenting with AI; I doubt they're the only ones lacking transparency. But as the saying goes, two wrongs don't make a right.
Media has a huge opportunity to be a trusted source on AI. But to earn that trust, we need to be honest, ethical brokers, and transparency is key to that.
How Some Are Getting It Right
When I was at Lehigh Valley Public Media, we worked with United Robots to generate real estate content using AI. The crucial difference? Each article included this disclaimer:
"This article was generated by the LehighValleyNews.com Bot, artificial intelligence software that analyzes information from prominent real estate data providers and applies it to templates created by our newsroom. We are experimenting with this and other new ways of providing more useful content to our readers. You can report errors or bugs to news@lehighvalleynews.com."
The New York Times published "Principles for Using Generative A.I. in The Times's Newsroom" in 2024, which states: "We should tell readers how our work was created and, if we make substantial use of generative A.I., explain how we mitigate risks, such as bias or inaccuracy, with human oversight."
The Texas Tribune's Code of Ethics includes an AI Policy with this commitment: "When AI tools play a role in developing key findings in a story, such as through a data analysis, we will clearly disclose how the tools were used."
These are exactly the types of commitments every media organization should make.
Building Your Own Policy
If your organization doesn't have a formal AI policy yet, don't feel bad; we didn't have one at LVPM either. Nieman Lab published helpful guidance, "Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that," which examines the AI policies of 21 newsrooms in the U.S. and Europe.
However, the authors acknowledge a key problem: "Mentions of transparency are often interconnected with the requirement that content should be labeled in a way that is understandable for audiences. However, it is often far from clear from the guidelines how these mentions of transparency will take shape in practice."
In other words, many newsrooms talk about transparency, but few spell out what that actually means on a daily basis.
The Bottom Line
As AI becomes more integrated into how we produce journalism — from research to writing to editing — readers deserve to know when and how it's being used. Not because AI is inherently bad but because transparency builds trust.
And in an era when trust in media is already fragile, we can't afford to keep our AI use in the shadows.
So let me ask: Does your organization have an AI policy? How did you approach it in theory and in practice? And most importantly — are you being transparent with your audience about it?

