The 2025–26 school year marks a decisive turning point: digital assistants, exercise generators, and intelligent platforms are being integrated into classrooms at scale. More than 75% of Spanish teachers expect to use AI this year, according to a Kahoot! survey. But the technological revolution in schools doesn’t come alone; it also brings a crucial debate: Can AI support and personalize learning without replacing the essential emotional connection and critical thinking that define human education?
Article published by Cuadernos de Pedagogía

In many schools across Spain, the start of the academic year doesn’t just bring new books and fresh whiteboards; it also introduces digital assistants, exercise generators, and platforms with artificial intelligence (AI) features. This shift is no coincidence; it reflects a growing trend: the widespread integration of AI tools in education.
A recent survey by Kahoot!, a game-based interactive learning and assessment platform, conducted with more than 1,100 Spanish teachers, shows that over 75% of educators expect to use AI tools during the 2025–26 school year. Among the most frequently cited uses are preparing educational content (37%), virtual learning (18%), and classroom gamification (17%).
But that figure doesn’t exist in a vacuum: global reports show how AI is expanding throughout education. According to Microsoft’s 2025 AI in Education Report, the use of generative AI has grown rapidly: more than 86% of educational organizations already use tools of this kind, and the percentage of teachers who say they use them increased by 21 points compared to the previous year.
Ainhoa Marcos, Global VP of Education & Public Sector at ODILO, clearly addresses the ethical challenges AI poses within the education ecosystem. For her, the core premise is that “AI is always a support tool, not an end in itself,” a concept that must be understood across the entire education community.
Marcos emphasizes that AI should never replace a teacher’s instruction or the development of essential student skills: “It’s also not a substitute for creativity or critical thinking.” That’s why she insists that teaching students to use technology responsibly is just as crucial as “teaching math or language arts.”
AI adoption also raises unavoidable privacy challenges, given that it can collect sensitive information about students’ learning pace, preferences, and habits. Marcos stresses the need to “ensure that our platform and AI provider comply with current regulations” and manage data responsibly—something vital to building trust.
She also points to the need for technology that is “ethical in itself,” meaning it does not perpetuate inequalities or bias. For that reason, continuous human review and ongoing validation of how these technologies are used are essential to ensure fairness and educational quality.
To address these challenges, Marcos argues that there is an urgent need for clear regulations governing student privacy and data protection. That means defining precisely what data can be collected, how it will be stored, and who can access it—areas where “it’s important to be clear with teachers and parents.”
In addition, standards must be established to ensure the quality and fairness of AI-generated content. Marcos recommends auditing platforms and trusting experienced providers who work closely with the education community and public administrations, ensuring materials are “accurate, inclusive, and educational.”
In day-to-day practice, the ODILO expert adds that schools should establish protocols for teacher accountability, defining how AI is used and how teachers should “validate, supervise, and guide” its application. This protects both students and educators and ensures the technology truly serves its support function. Above all, she concludes, teacher training—and, in particular, student training—in the ethical and responsible use of AI is critically important.
On how to ensure AI supports rather than replaces human interaction, Marcos says the first step is “recognizing that AI cannot replace the teacher.” While technology excels at personalization and content recommendations, education still requires an emotional, human dimension of guidance that AI cannot replicate. Teachers, she stresses, will continue to be “the guide, the reference point, and the critical evaluator.”
Another key factor is intentionally integrating AI with oversight. It’s essential to establish a clear framework where teachers moderate, validate, and contextualize learning. This prevents AI from becoming a shortcut for instant answers and preserves the value of human interaction.
It is also critical to train students so that AI becomes an “educational superpower.” They must learn how to cite sources properly, evaluate reliability, and understand the technology’s limitations. Only then, Marcos emphasizes, will AI add value to learning rather than replace thinking or creativity.
Instructional Uses
In schools already experimenting with AI, uses are varied:
- Creating materials and resources: teachers use AI to design presentations, worksheets, simulations, or interactive resources tailored to students’ levels.
- Automated assessment and instant feedback: AI-assisted systems grade quizzes, offer hints, and provide immediate feedback.
- Personalized adaptation: AI adjusts task difficulty based on each student’s previous responses (see the illustrative sketch after this list).
- Support for struggling students: automatic summaries, translation, assisted reading, and rephrasing prompts.
- An “intellectual assistant” in the classroom: a “copilot” that suggests ideas, generates questions, or helps students and teachers plan.
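To make the personalized-adaptation bullet above concrete, here is a minimal, hypothetical sketch in Python of the simplest rule such a system might apply: raising or lowering difficulty based on a rolling window of a student’s recent answers. The class name, thresholds, and window size are illustrative assumptions, not details from the article or from any specific platform.

```python
from collections import deque

class AdaptiveDifficulty:
    """Rolling-window difficulty adjuster (hypothetical, for illustration only)."""

    def __init__(self, levels: int = 5, window: int = 5):
        self.levels = levels                 # difficulty runs from 1 (easiest) to `levels`
        self.level = 1
        self.recent = deque(maxlen=window)   # correctness of the last `window` answers

    def record(self, correct: bool) -> int:
        """Log one answer and return the difficulty to use for the next task."""
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy >= 0.8 and self.level < self.levels:
                self.level += 1              # student is comfortable: step up
                self.recent.clear()
            elif accuracy <= 0.4 and self.level > 1:
                self.level -= 1              # student is struggling: ease off
                self.recent.clear()
        return self.level

# Example: feed in a short run of answers and read off the next difficulty.
engine = AdaptiveDifficulty()
for answer in [True, False, True, True, True, True, True, True]:
    next_level = engine.record(answer)
print("Next task difficulty:", next_level)
```

Real adaptive systems replace this threshold rule with statistical models of mastery (for example, knowledge tracing), but the underlying feedback loop (observe, estimate, adjust) is the same.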
Microsoft’s AI in Education Report: Insights to Support Teaching and Learning (2025) highlights that AI is no longer just a passive assistant; it is beginning to act as a “thinking partner,” collaborating in instructional decision-making.
However, not all teachers use it the same way. A study in Spain by Educación 3.0 found that 64% of teachers already used AI to prepare lessons, although a smaller share considers its impact on real learning to be strongly positive.
For psychologist Yolanda Romero, co-founder and technical director of ICEPS, technology can be transformative, but she offers a clear warning: AI “can transform how we learn, but it must not replace what is essential: the emotional connection and human perspective that support every child’s development.” In her words, “learning isn’t automated; it’s accompanied.”
Romero stresses that, however powerful technology is, it must not erode four key aspects of child and adolescent development: emotional self-regulation, the irreplaceable role of the emotional mentor, critical thinking, and independence from immediacy.
In a world where access to knowledge is instant, she says the true challenge has changed: it’s no longer just about knowing, but about sustaining attention, patience, and motivation when things are difficult. “Children and adolescents need to learn to tolerate frustration, trust their abilities, and sustain effort without expecting immediate results,” she notes.
On the role technology cannot play, Romero is unequivocal: “Emotional mentor, without a doubt.” AI can design engaging activities, but no technology can replace “the look that contains, understands, and motivates.” Learning, she argues, is inherently human and grows from that connection. She underscores that “children learn better when they feel seen, heard, and supported.”
Risks and Challenges
- The digital divide and inequality: One of the most explicit threats to equitable AI implementation is unequal access to devices and connectivity. In regions with limited infrastructure, technological progress can become a new disadvantage.
The Cotec Foundation, in a study launched in 2025 titled “AI and Education” and based on responses from more than 7,000 people (education community and general population), highlights the urgency of policies to address these inequalities.
Meanwhile, the study “Educating in the Age of AI (2025),” produced by Empantallados and GAD3, reports that 70% of teachers set limits on AI use in the classroom, while only 40% of parents do the same at home. In addition, six out of ten teenagers admit to ignoring those restrictions.
On how AI may affect inequality, Àngels Soriano Sánchez, coordinator of UNIR’s University Expert Program in AI in Education, argues that “like previous technologies, it offers the possibility of improving our students’ digital skills,” as long as teachers guide them in a deliberate, pedagogically purposeful way.
She notes that students already have access to apps and generative AI on their phones, shifting the focus toward ethical education: “the challenge is educating them in the ethical use of AI.” That way, she says, students can build awareness “from a critical perspective, unbiased, equitable, and responsible toward others and the planet we live on.”
Regarding the most effective instructional models, Soriano Sánchez insists that classroom AI implementation “must have a pedagogical purpose,” aimed at making AI “a driver of critical thinking.” That requires a model that pushes students to reflect on their interactions with AI, seek evidence, question outputs, and cross-check sources; in short, a commitment to argumentation and verification. To achieve this, she concludes, it is essential to have teachers who accompany and guide this learning process.
On the policies and investments needed for equitable and effective AI implementation nationally or regionally, she is categorical: first, “simply don’t turn education into a political weapon.” She calls for consensus and education laws that last 6–8 years, followed by a comprehensive training plan for teachers, education stakeholders, and citizens, with specific actions for each group. She also outlines regulatory needs: legislation, practical implementation guides for classroom use, and the creation of ethics committees to address disagreements or issues arising from misuse.
- Academic integrity and impersonation: Teachers widely fear students will use AI to “cheat.” In a Virtual Educa survey (March 2025), 87% of teachers said they believe students may be using AI to pass off AI-generated work as their own. Only 3% think current tools adequately respect teachers’ intellectual property.
An international study titled Student Perspectives on the Benefits and Risks of AI in Education (2025), involving 262 U.S. college students, identified major concerns, including academic integrity, the reliability of AI outputs, the loss of critical thinking, and risks of dependence.
A systematic review titled Practical and Ethical Challenges of Large Language Models in Education (2023) highlights recurring risks: lack of transparency, algorithmic bias, privacy issues, and low reproducibility in AI-based educational solutions.
Here, Romero also warns about the collateral effects of integrating technology without human control, pointing to a double risk affecting young people’s ability to think independently and manage emotions. One of the biggest dangers she sees is “the loss of critical thinking and emotional dependence on immediacy.” When everything can be solved with a click, young people may stop questioning, doubting, or trusting their own judgment. Long-term, she fears this can trigger anxiety, low tolerance for mistakes, and a sense of inadequacy when facing “perfect” systems.
- Lack of teacher training: A near-universal barrier to effective adoption is that many teachers lack adequate training. In the U.S., Microsoft reports that, despite a surge in AI use, nearly one-third of teachers are not confident in using AI responsibly or effectively.
In Spain, the Empantallados/GAD3 study shows that teachers are seeking structured AI training programs—not just informal tutorials.
Leading Experiences: Where AI Is Already Taking Hold
While the AI debate often focuses on ethics and regulation, AI is already transforming concrete initiatives in Spanish schools, demonstrating practical value. Although AI remains in its early stages at many institutions, some programs have paved the way.
- From bullying prevention to “Assessment 4.0”: One promising use is in the social sphere. Several schools in Madrid, Asturias, and the Basque Country have implemented an IBM AI system to prevent bullying. Rather than viewing AI as a distraction, these schools use it as a quiet ally to “improve school climate,” a discreet yet powerful tool against bullying.
In academics, innovation centers on personalization. The Assessment 4.0 project, with ten pilot schools and support from the IIIA-CSIC, is developing AI technology to make assessment more “efficient” and “equitable,” helping learning processes better adapt to individual needs.
- AI as a driver of computational thinking: Introducing AI doesn’t always require screens. Juan Pablo II School in Parla (Madrid) has adopted an approach that forces students to understand technology from the ground up. As coordinators explain, students learn to “create their own AI” and develop computational thinking without requiring mobile devices in the classroom, redefining digital education.
Meanwhile, the Community of Madrid, through partnerships with major tech companies, is driving curricular adaptation. AI is used to tailor lessons to each student’s level, moving away from one-size-fits-all instruction. Even creative and historical projects benefit: in “Letters from the Trenches” at IEI Giner de los Ríos, students used AI-based chatbots to simulate conversations with World War I soldiers, showing how generative AI can foster empathy and critical thinking in History.
- Personalization that boosts performance: A clear example with tangible results comes from the private sector. King’s College Soto de Viñuelas (Madrid) has implemented Inspired AI to personalize content and classroom support. The school reports a “one-point” improvement in performance attributable to the tool and plans to expand its use to additional grade levels.
- Pilot programs: AI adoption isn’t limited to large institutions or secondary education. Fundación Hiberus in Zaragoza launched the after-school program Menudos Techies, introducing 5th and 6th graders to AI projects linked to art, video games, robotics, and accessibility. After success in pilot schools in Zaragoza, it is now expanding to more than 20 schools nationwide. These initiatives act as “preparatory” cases, helping the next generation become familiar with AI early.
In addition, several schools explored the WatsomApp application (developed by IBM in 2019), which used AI to improve classroom climate through conflict detection and facilitation, an early exploration of AI’s social potential.
- Institutional adoption: The broadest commitment comes from public administrations. In September 2024, the Community of Madrid renewed its collaboration with Google to enable public schools to use AI tools that support curricular adaptation. The goal is clear: exercises and lessons that automatically adjust to each student’s level, providing a fully individualized learning experience, an indicator that Madrid’s public system is planning large-scale institutional AI integration.
- Trends in higher education and globally: While compulsory education provides concrete examples, AI implementation in Spanish universities remains in its early stages. The EDUTEC report on AI and Education notes that most higher-ed experiences come from individual initiatives or pilot projects, using AI primarily to generate multiple-choice questions, support tutoring, or assist teaching.
This caution contrasts with international trends. Outside Spain, California State University in the U.S. has deployed an education-focused version of ChatGPT (ChatGPT Edu) for students and professors to provide personalized tutoring and reduce administrative workload (Reuters, February 2025). Likewise, major corporate providers are launching pilots: Microsoft recently announced AI-assisted feedback features in Microsoft 365 Copilot, designed to streamline grading and feedback in the classroom.
These pioneering cases, national and global, do more than prove technical feasibility. They also help capture key lessons about acceptance, obstacles, and best practices ahead of widespread rollout. Ultimately, they confirm that, beyond theoretical debates, AI is already shaping a new educational reality in Spain and worldwide.
Where Are We Headed?
The 2025–26 academic year marks a turning point: AI is no longer a distant promise, but a tangible reality. The decisive question now is how to manage that reality so it benefits all schools and students fairly.
For AI to fulfill its educational promise, at least three conditions are needed:
- Structured teacher training: occasional tutorials aren’t enough; educators need solid competencies in AI literacy, ethics, and AI-informed instructional design.
- Digital equity policies: ensuring all schools and students have access to devices, connectivity, and technical support.
- Ethical regulation and transparency: clear standards to protect privacy, prevent bias, preserve academic integrity, and require systems to explain their decisions.
If these conditions are met, AI can enable more personalized teaching, immediate feedback, more inclusive classrooms, and critical digital literacy for the society of the future. But without them, the risk is that AI amplifies inequality and erodes fundamental educational values.
On using AI to personalize learning, Romero sees it as viable “only if it’s done with an inclusive and human-centered lens.” Technology can help adapt content, but without emotional context, she warns, it could “reinforce invisible labels.” The fundamental purpose of tech-driven personalization must be “to better understand the student, not to classify them.” Any AI tool, she insists, must be supportive: “Personalization should complement the observation of the teacher and the psychologist, not replace it.”
Finally, she lists tasks no algorithm will ever be able to perform: those that require empathy, ethical judgment, and human connection. Crucial decisions, such as when a child needs a break, how to manage conflict, or how to respond to mistakes, remain beyond the reach of programming. Romero emphasizes that “discipline and autonomy are learned through relationships, not through code.”