A week after OpenAI launched Study Mode for ChatGPT, Google fired back with Guided Learning for Gemini. Given my interest in AI tools for education, and the fact that I was engaged in some summer learning of my own, I thought I’d put both systems to the test. What I discovered wasn’t which AI performs better – ‘better’ is highly subjective here – but it did get me thinking about exactly what we want from learning in the first place.
A Real-World Test
My testing ground was a Python programming course that Claude had helped me design, loosely based on the popular free resource ‘Automate the Boring Stuff with Python’. Claude’s superhero-themed interpretation, built from that book and the context I gave it, produced a segmented if sometimes uneven course that covered the fundamentals well, but I was struggling to make the concepts stick for both of us. Perfect timing, then, to test our new AI-powered tutors.
The results were very instructive, though perhaps not in the ways the tech companies intended.
Gemini’s Guided Learning: The Structured Approach
Google’s implementation lived up to its promise of step-by-step progression. When I fed it our superhero-themed Python lesson on functions, Gemini methodically broke down each concept:
- Comprehensive Analysis: It didn’t just explain the code; it deconstructed the educational approach, praising the superhero analogy that Claude had constructed and explaining why it works pedagogically
- Scaffolded Learning: Each section built logically on the previous one, with clear explanations of programming concepts like DRY (Don’t Repeat Yourself) principles
- Interactive Elements: Regular “Let’s Try It” suggestions kept the learning active rather than passive
- Authentic Problem-Solving: The response culminated in a detailed walkthrough of the practice challenge, complete with commented code and step-by-step reasoning
Gemini appeared to maintain the educational context throughout. It understood that this wasn’t just about getting code to work—it was about building genuine understanding.
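To give a sense of the material both tutors were working with, here is a minimal sketch of the kind of superhero-themed functions exercise in our Claude-built course. The hero names and the function itself are my illustrative reconstruction, not the actual lesson code, but they show the DRY principle Gemini praised: write the logic once, reuse it through calls.

```python
# Without a function, announcing each hero means repeating the same
# code over and over - a violation of DRY (Don't Repeat Yourself).
# With a function, the logic lives in one place and is reused via calls.

def announce_hero(name, power):
    """Build an introduction line for a superhero."""
    return f"{name} arrives, wielding the power of {power}!"

# Each call reuses the same definition with different arguments.
print(announce_hero("Captain Python", "automation"))
print(announce_hero("The Debugger", "x-ray vision"))
```

The point of the exercise, as Gemini correctly identified, is not the output but the habit: spotting repeated code and extracting it into a reusable tool.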
ChatGPT’s Study Mode: The Conversational Guide
ChatGPT took a distinctly different, although not necessarily worse, approach, prioritising dialogue and personalisation:
- Immediate Personalisation: The first thing it did was ask about our experience level and grade, tailoring its approach accordingly
- Socratic Method: Rather than providing comprehensive explanations, it guided us through discovery with targeted questions
- Practical Focus: When we hit a stumbling block (not knowing how to run code in IDLE), it immediately shifted to practical problem-solving
- Encouraging Tone: The interaction felt supportive and non-judgmental, perfect for nervous beginners
However, ChatGPT’s approach felt more reactive than proactive. It responded well to our immediate needs but didn’t demonstrate the same strategic educational planning that Gemini exhibited.
A Second Experiment

To broaden my testing beyond coding, I designed a second experiment focused squarely on traditional academic content. I approached both systems as a KS4 student studying IGCSE Edexcel English Literature, asking: ‘How does Shakespeare present loyalty and betrayal in Macbeth?’
Straight away, even before I supplied any context, ChatGPT asked me for topic, level and qualification – which is exactly why I had devised this scenario. It then continued by stating:
“We can tackle this like a guided exploration so you build your own argument, rather than me handing you an essay.
Here’s how we could work:
- Clarify the question — make sure we know what “loyalty” and “betrayal” mean in the play’s world.
- Gather key examples — moments in the plot, quotations, and characters that show each idea.
- Zoom in on language — see how Shakespeare’s words build meaning.
- Link to context — what Jacobean audiences might think.
- Pull together an argument — something that could be shaped into coursework.”
This gave me some hope, but although it did probe and question, it was all too ready to hand me entire chunks of material as ‘models’ – here’s an example:
“Here’s one you could adapt for your coursework:
When Macbeth refers to the “suggestion” of killing Duncan, the word implies both a personal temptation and an idea planted from outside, linking to the Witches’ power to influence thought. To a Jacobean audience, this might suggest a form of mental bewitchment, where loyalty can be eroded not only by ambition but also by supernatural manipulation.”
And of course no student would just take this and drop it straight into their essay.
When I asked Gemini to do the same for me, this was the response:
“Of course. Let’s build a paragraph together that you can use as a model. This is a great way to practice structuring your analysis.
A strong analytical paragraph usually has three parts:
- Point: The main argument or idea you want to make in the paragraph.
- Evidence: A quote or specific example from the play to support your point.
- Analysis: Your explanation of how the evidence proves your point, linking it back to the theme of loyalty and betrayal.
Let’s start with the first part. Based on what we’ve discussed, how would you write a single sentence that makes the point that Shakespeare initially presents Macbeth as a loyal hero, whose loyalty is almost immediately put at risk?”
What was really interesting was that I could see the model’s ‘reasoning’ as it measured my input against its system prompt – not a feature available in the free version of ChatGPT’s Study Mode. Here’s what Gemini’s reasoning showed in response to my request:
“My current focus is on helping the user craft their example paragraph. I’m thinking about the best way to gently nudge them in the right direction, providing a framework rather than a finished product. I believe it’s more beneficial for them to generate it themselves to learn.
Refining the Approach
My plan is now focusing on the initial stage of crafting a response. I aim to acknowledge the user’s request and frame the task within a simple structure to aid their own writing process. I’ll prompt them to start with the topic sentence, guiding them towards articulating their main idea. The “PEA” structure seems ideal for this level of writing.”
The Deeper Issue: Technology Versus Culture
Here’s where my excitement about these AI capabilities meets a sobering reality: all the sophisticated algorithms in the world won’t transform education until we address our fundamental approach to learning.
The Transactional Learning Trap
Our education system has become dangerously transactional. Students learn to ask: “What do I need to know for the test?” rather than “How does this connect to everything else I’m learning?” This mindset turns AI tutors into glorified answer machines rather than learning partners.
Consider Google’s research findings that informed Guided Learning development. They discovered that “simply improving prompting wasn’t enough to create a meaningful learning tool.” Why? Because the technology was being used within an educational culture that prioritises quick answers over deep understanding.
The Shortcut Mentality
When education focuses solely on grades and metrics, any AI system—no matter how sophisticated—becomes a shortcut to endpoints rather than a tool for genuine learning. Students will use Guided Learning or Study Mode the same way they currently use regular ChatGPT: to complete assignments rather than build understanding. At the moment, both can still be duped into practically autocompleting work. ChatGPT still leaps in with material that could be copy-pasted straight into a student’s essay, and it doesn’t take much cajoling to get either system to provide more content than I’m comfortable with – though Gemini seems slightly more resistant.
This isn’t the fault of the AI. It’s a reflection of educational systems that have trained students to be answer-seekers rather than question-askers.
What Real Learning Looks Like
My experience with my son and these AI tools highlighted what effective learning actually requires:
Active Engagement Over Passive Consumption
Google’s research emphasises that “people learn best when they actively engage with information.” This isn’t about clicking through interactive elements—it’s about wrestling with concepts, making connections, and building mental models.
When my son struggled to understand why we needed both def and function calls, neither AI could simply explain it into his brain. He needed to experiment, make mistakes, and gradually construct his understanding through practice. And so, to be fair, did I – I’m still not sure it has stuck… old dog, new tricks and all that. However, just like my son, I will need experimentation, spaced practice and retrieval, not just regurgitated AI answers.
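My son’s confusion is a common one, and a tiny example makes the distinction concrete (this is my own illustration, not either tutor’s output): def only writes the recipe; nothing runs until the function is actually called.

```python
def greet(name):
    # Defining the function stores these instructions.
    # It does NOT run them - no output is produced here.
    return f"Hello, {name}!"

# At this point nothing has been printed: the definition alone does nothing.

message = greet("Ada")  # The call is what actually executes the body.
print(message)
```

Seeing that the definition on its own produces no output, and only the call makes anything happen, was precisely the gap no amount of AI explanation could close without him trying it himself.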
Process Over Product
The most valuable part of the textual study was the guiding questions it generated, not the content it produced.
The most valuable part of our Python session wasn’t the working code we produced—it was how our thinking evolved. We moved from seeing functions as mysterious black boxes to understanding them as reusable problem-solving tools.
Neither Study Mode nor Guided Learning can assess this kind of growth, because it’s not measurable through conventional metrics. It requires human observation, reflection, and adjustment.
Cultural Prerequisites for AI Success
For AI tutoring to reach its potential, we need educational environments that value:
- Productive Struggle: Students need permission to grapple with difficult concepts without immediate resolution
- Process Documentation: Learning journeys matter more than final answers
- Question Generation: The ability to ask better questions is more valuable than answering existing ones
- Collaborative Intelligence: Working with AI as a thinking partner rather than an answer provider
- Metacognitive Awareness: Understanding how you learn and think
- Innovative Assessment: Methods of assessment that follow the journey, not focus on the destination.
The Path Forward: Culture First, Technology Second
The AI tutoring arms race between Google and OpenAI represents an interesting technological achievement. Both systems demonstrate some understanding of pedagogical principles and learning-science research – or at least some application of both – but both remain vulnerable to exploitation and can be used to create shortcuts. Until we transform educational culture, these tools will always offer the temptation of a quick fix and an easy solution.
For Educators
We need to model and reward:
- Curiosity over compliance
- Understanding over completion
- Questions over answers
- Process over product
For Students
They need environments where:
- Mistakes are learning opportunities, not failures
- Deep thinking is valued over quick responses
- AI becomes a thinking partner, not a replacement for thought
- Assessment focuses on growth rather than performance
For Educational Leaders
Policy and practice must align to support:
- Professional development in AI literacy for staff
- Assessment methods that value authentic learning
- Curriculum design that prioritises depth over coverage
- Cultural shifts toward learning-focused rather than performance-focused environments
The Real Competition – and it’s not Google vs OpenAI
The competition between AI tutoring systems isn’t really about features or algorithms—it’s about which approach better supports genuine learning. Both Guided Learning and Study Mode show promise, but their ultimate success depends on the educational contexts in which they’re deployed.
The most sophisticated AI tutor in the world is powerless against a culture that reduces learning to grade optimisation. Conversely, even basic AI tools can be transformative in environments that prioritise understanding, growth, and authentic engagement.
As AI continues to impact education, our focus must shift from technological capabilities to cultural transformation. Only then will we unlock any benefit from AI-enhanced learning and only then can we really build education that adds value rather than just generates statistics.
The question isn’t which AI tutor is better—it’s whether we’re ready to create educational cultures that value learning over results.
#AIinEducation #EdTech #EducationalCulture #TeachingWithAI #FutureOfLearning #DigitalLiteracy #StudentEngagement #AuthenticLearning
