Helpful Tool or Attention Vampire? – The AI English Teacher

Posted on November 26, 2025 by admin
Does AI risk creating a super-powered attention vampire?

OpenAI has launched a television advertising campaign. I spotted it on Channel 4’s streaming platform a week or so ago, but apparently it is on live terrestrial TV too. The advert is deliberately wholesome: a brother and sister planning a trip together, with helpful captions showing ChatGPT guiding their journey and orchestrating their bonding experience. (The irony of the advert being shot on 35mm film rather than generated with Sora was not lost on me.) It’s carefully crafted to position AI as just another natural part of everyday life: a helpful butler that does things for you. Why should we have to struggle at all? Let ChatGPT do it all for you so you can focus on the good stuff.

But scratch at that wholesome veneer just a little and we can see OpenAI is no longer positioning itself as a technology provider. It’s following the social media company playbook, and we’ve seen how that story ends.

Two Advertising Campaigns, Two Philosophies

Interestingly, Anthropic has also launched its first paid advertising campaign for Claude, running on streaming services like Netflix and Hulu and during live sports events, with print placements in The New York Times and The Wall Street Journal, and even pop-up events like the “Claude cafe” in New York’s West Village.

But the approach couldn’t be more different.

Anthropic’s campaign, themed “Keep thinking,” positions Claude as an AI thinking partner. The 90-second hero film reframes problems as opportunities, with the empowering message that there’s “never been a better time” to tackle challenges. Their target audience? “Problem solvers” – coders, researchers, creators – people who want to enhance their thinking, not replace it.

OpenAI’s campaign, by contrast, suggests replacing effort and thought with a simple electro-butler. The messaging is about convenience, about having AI do things for you, about making life easier by outsourcing cognitive work.

This isn’t a trivial distinction. It represents two fundamentally different visions for what AI should be:

  • A tool that amplifies human capability (Anthropic’s vision)
  • A service that substitutes for human effort (OpenAI’s vision)

One encourages continued thinking. The other encourages cognitive offloading.

The Platform Play 

The advertising campaign is just the visible tip of OpenAI’s strategic transformation. At its DevDay event in October 2025, OpenAI launched something far more significant: an app store directly within ChatGPT.

Users can now access apps from partners like Spotify, Expedia, and Zillow within natural language conversations, without downloading separate applications. OpenAI is explicitly positioning ChatGPT as an “operating system” – a direct challenge to Apple and Google’s established app store models.

This is the classic platform playbook: build a large user base, then monetise that captive audience. Sound eerily familiar? It’s precisely how social media platforms evolved from tangentially useful distractions into attention-extraction machines and a plague on our social fabric.

The Revenue Model Nobody’s Quite Saying Out Loud

Here’s where things get concerning. While OpenAI hasn’t explicitly announced plans to incorporate advertising into ChatGPT itself, all signs point unmistakably in that direction:

The financial pressure: OpenAI is shifting from a non-profit to a for-profit structure whilst facing significant operational costs. They need new revenue streams.

The infrastructure building: The company has been hiring advertising experts from Google and Meta—you don’t build an advertising team unless you plan to use it.

The CEO’s shifting stance: Sam Altman, who previously expressed distaste for ads, has softened his position, stating that ads could be a feature “if done right.” Apparently he has decided that he quite likes Instagram ads, and felt the need to say so publicly.

The competitive pressure: Google’s AI Mode and Microsoft Copilot are already experimenting with integrated advertising. OpenAI can’t ignore this competitive reality indefinitely.

The documentary evidence: Internal documents project that OpenAI expects to generate revenue from “free user monetisation” (which means advertising) starting in 2026.

The pieces are all there. We’re watching the classic Silicon Valley pivot from “growth at all costs” to “monetise the captive audience.” The app store creates the platform, the massive free user base provides the audience, and advertising becomes the inevitable revenue model.

We’ve watched this trajectory before. Social media platforms started as tools for connection and information sharing. Then came the advertising revenue model. Then came the engagement algorithms. Then came the addiction, the misinformation, the mental health crisis, and the erosion of information literacy.

Now we’re being asked to welcome this same model into AI—except this time, we’re dealing with something far more powerful than a social network. We’re dealing with a sycophantic, worldview-reinforcing piece of software (as I’ve previously described it) that will be optimised for engagement, deployed in a society where information literacy, data literacy, and AI literacy are catastrophically low. What could possibly go wrong?

Sora: The Smoking Gun

If you needed evidence of where OpenAI’s priorities lie, look no further than Sora, their text-to-video generator. This technology appears capable of producing little beyond deeply damaging, unpleasant, and frankly sinister content. It’s optimised for viral moments, not genuine utility – the fact that its largest audience was TikTok should tell you all you need to know. It’s built for engagement metrics, not human flourishing.

Sora reveals the truth about OpenAI’s trajectory: they’re not building tools to solve problems or enhance human capability. They’re building tools to capture attention and generate shareability. The technology may be impressive, but the applications are concerning at best, harmful at worst.

This is a company that’s lost its way, caught up in its own hype, overvaluing its technology to extreme levels whilst underestimating the damage of deploying engagement-optimised AI into an unprepared society. The huge financial overvaluation of the company makes this clear, if nothing else does.

Beyond Advertising: Institutional Partnerships vs. Consumer Capture

The difference in philosophy extends beyond advertising campaigns to broader business strategy. While both companies are advertising, Anthropic is simultaneously building serious institutional partnerships: major collaborations with Deloitte, rolling out Claude to hundreds of thousands of employees; working with Northumbria University to help students develop AI skills and understanding. They have even partnered with the UK’s AI Security Institute (AISI).

The contrast is clear: Anthropic is positioning AI and its product as a powerful tool requiring careful deployment, working with institutions that can implement it thoughtfully. OpenAI is pursuing the platform play—maximum consumer penetration, app store lock-in, and advertising revenue—the same approach that gave us addictive social media.

To be as even-handed as possible, there are OpenAI projects that see it partner with governments, but these almost all feel like commercial enterprises.

Why Education Should Be Terrified

For those of us working in education, OpenAI’s trajectory should be deeply concerning. We’re already struggling with the damage social media has inflicted on our students – the attention fragmentation, the mental health crisis, the erosion of critical thinking, the vulnerability to misinformation.

Now imagine adding a force multiplier to all those issues – AI systems being positioned as an operating system, with an app store creating lock-in, advertising revenue creating pressure for engagement optimisation, explicitly marketed as a replacement for effort and thought. Now imagine this being deployed into classrooms and homes where information literacy is already dangerously low.

AI can become a creative partner or turn us into passive consumers

I’m decidedly pro-AI in education. I’ve written extensively about the genuine utility these tools can provide. I see people using AI as reflective thought partners, as thinking tools, as aids for creation and learning. These applications are valuable.

But there’s a crucial difference between thoughtfully implemented AI as an educational tool and platform-optimised AI marketed as a substitute for thinking. The former requires preparation, training, and critical thinking skills. The latter bypasses all of that by positioning itself as the easier alternative to cognitive work.

When Anthropic says “keep thinking,” they’re reinforcing that AI should enhance human capability. When OpenAI’s advertising suggests replacing planning and effort with helpful AI guidance, whilst simultaneously building the infrastructure for an advertising-supported platform model, they’re actively discouraging the very cognitive work that education aims to develop.

The Regulation Gap

We need protecting from this trajectory. We need regulation that steps in and says, “This technology isn’t yet safe to be deployed like this, thrown out into the mainstream, used for everything in every corner of life.” We need strong safeguards around age limits. We need our own images protected as copyright or IP – an area the UK government is notoriously weak on.

Currently, we’re being asked to react rather than prepare. We’re expected to deal with AI being “dropped on us” rather than having the breathing space to develop the literacy, the safeguards, and the pedagogical approaches needed to use it responsibly.

Those of us in education who are moving in this direction are making plans, developing strategies, building understanding. But we need time. We need breathing space. We need the chance to educate before we’re forced to simply cope with ubiquitous deployment optimised for engagement and advertising revenue. Hell, the safeguarding implications alone are enough to give a team of people in a school sufficient work to keep them going for years to come!

OpenAI’s push into mainstream consciousness via television advertising – with messaging that explicitly positions AI as a replacement for cognitive effort – combined with their platform strategy and clear trajectory toward advertising-supported revenue, takes away that breathing space. It creates pressure to adopt and integrate before we’re ready, before our students have developed the critical thinking skills necessary to use these tools wisely. It also muddies the waters of what these tools should and shouldn’t be used for. Already I’m encountering the question, “If AI can do this, why should I?”

What We Can Do

We can’t ignore what’s happening. These tools are extant and evolving rapidly. The technologies we’re seeing now are the Model T Fords of AI; within months and years, we’ll see rapid iteration, different tuning, eventually entirely new architectures that work fundamentally differently.

But we don’t have to buy into OpenAI’s trajectory. We have choices:

Educate relentlessly. Build AI literacy, information literacy, and critical thinking skills before these tools become ubiquitous platform monopolies. This is our only real defence. Teach students that AI should enhance thinking, not replace it—and help them understand the business models that shape these tools.

Educate effectively. We need to redesign the curriculum to reflect the changing context. Hanging onto the old ways of product-focused, assessment-driven systems won’t cut it in a world where AI is ubiquitous.

Legislate thoughtfully. Push for regulation that protects vulnerable populations (including children) from engagement-optimised, advertising-supported AI before it replicates social media’s damage.

Support the right approaches. When companies like Anthropic position AI as a thinking partner rather than a thought replacement, that’s worth supporting. When companies build institutional partnerships alongside consumer products, that demonstrates responsibility worth recognising.

Seek alternatives. Support open-source models, publicly funded initiatives like the Swiss Model, or companies that prioritise enhancing human capability over capturing attention and advertising revenue. Hell, even learn how to create our own tools – it won’t be long before what currently lies in the hands of the few in Silicon Valley will be widely available on a consumer laptop.

The path OpenAI has chosen—from technology provider to platform operator, from tool to operating system, from thinking aid to thinking substitute, from subscription model to inevitable advertising revenue—isn’t inevitable for the entire AI industry. It’s a choice, and it’s the wrong one.

We’ve seen this playbook before with social media. We know how it ends. The question is whether we’ll let it happen again with something far more powerful, or whether we’ll demand better before it’s too late.

The good news? Some companies are demonstrating there’s another way. Let’s hope that Anthropic’s “keep thinking” isn’t just advertising copy, but a fundamentally different philosophy about what AI should be. I guess only time will tell if they can cleave to this path or if they too will fall to the allure of the market.

Whatever the case, we need to pay attention to that difference. Our students’ cognitive development may depend on it.


The Big Questions: How do we prepare students for an AI-saturated world? What literacy skills do we prioritise? How do we navigate the pressure to adopt tools that may not have our best interests at heart?

