The expert believes that technology allows us to focus on higher-value activities.
Interview published by El Nacional
Artificial intelligence is no longer a science fiction concept; it has become the driving force behind a silent revolution in how we learn and work. Do we really understand its potential beyond being a mere branch of computer science? In an interview with ON IA, Iván López, vice president of ODILO, discusses the transformative role of AI in corporate education and addresses common fears about it. The conversation covers ethical dilemmas, including responsibility for algorithmic errors, efforts to address bias in historical data, and the delicate balance between innovation and the protection of fundamental rights.
What does artificial intelligence mean to you?
To put it simply, it is a branch of computer science. It creates systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, perception, and, above all, language comprehension and decision-making. To date, we have not been able to automate all these processes.
AI is a simulation of human cognitive processes by machines.
In very technical terms, it is a simulation of human cognitive processes by machines. Ultimately, these machines use algorithms and mathematical models that learn from data and recognize patterns. This allows them to adapt to different situations and to return new context and information based on the input they receive.
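To make "learning from data and recognizing patterns" concrete, here is a deliberately simple sketch: it fits a straight line to a handful of observations and uses the learned pattern to predict a new value. The dataset and names are invented for illustration and have nothing to do with ODILO's systems.

```python
# Toy illustration of "learning from data": estimate the pattern in a few
# (input, output) observations via least squares, then apply it to new input.
# All values here are made up for the example.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # (input, observed output)

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# "Learning" = estimating parameters that summarize the pattern.
num = sum((x - mean_x) * (y - mean_y) for x, y in data)
den = sum((x - mean_x) ** 2 for x, _ in data)
slope = num / den
intercept = mean_y - slope * mean_x

# "Adapting" = applying the learned pattern to input it has not seen.
new_input = 5
print(f"prediction for {new_input}: {intercept + slope * new_input:.2f}")
```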
What are the main daily uses of technology at ODILO?
At ODILO, we focus on helping organizations transform talent into competitive advantage. We are an EdTech training company, and we use artificial intelligence to offer each employee or student customized content and training experiences. This is important because, ultimately, it's not about taking courses and collecting certifications, but about how we apply what we actually learn. And people learn in different ways: some in a more linear way, others in a more visual or auditory way.
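The idea of matching content to how each person learns can be sketched in a few lines. The catalog, learner profile, and ranking rule below are hypothetical, chosen only to illustrate modality-aware personalization, not to describe ODILO's actual platform.

```python
# Hypothetical sketch: rank learning content by a learner's preferred modality.
# Catalog entries and the scoring rule are invented for illustration.

catalog = [
    {"title": "Intro to Cybersecurity (e-book)", "modality": "linear"},
    {"title": "Cybersecurity Explained (video)", "modality": "visual"},
    {"title": "Security Basics (podcast)", "modality": "auditory"},
]

def recommend(preferred_modality: str) -> list[dict]:
    """Return the catalog with items matching the learner's modality first."""
    # False (a match) sorts before True (a mismatch), so matches come first.
    return sorted(catalog, key=lambda item: item["modality"] != preferred_modality)

for item in recommend("visual"):
    print(item["title"])
```

A real system would weigh many more signals (progress, role, prior results), but the principle is the same: the same catalog, ordered differently for each learner.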
Where have you identified the biggest challenges in transitioning to and applying artificial intelligence?
When you introduce artificial intelligence, it’s hard to show immediate value. But the reality is that it enables us to do things we were already doing—just much faster and more efficiently. In the learning space, AI helps us design training that is more user-friendly, more engaging, and more appealing.
We can structure learning in a way that is far more user-friendly
A primary concern for users (students, employees, and companies alike) is whether this investment in AI will actually improve adoption. In training, it's hard to demonstrate return on investment. Institutions invest in their learners and employees, but how do you prove that this new way of learning is making an impact at the corporate level?
Do companies adopt AI proactively, or are they pushed by the general climate?
There are indeed two worlds. On one hand, there are companies that proactively seek ways to boost AI adoption; on the other, those that feel overwhelmed by the speed of change. The truth is that in both cases, AI is already here with us, not only in training but across many areas of the workplace.
The approach that produces the best results is to prepare employees to use AI tools while ensuring control over where the information goes. What do I, as a company, need to provide my employees to improve their performance? AI turns us into “super employees,” allowing us to focus on higher-value activities.
What do you think about the fear many people have that AI will take their jobs?
A concept that has become popular is FOBO (Fear Of Becoming Obsolete). It is common among groups who feel that if they don't keep up, they won't be able to keep pace with colleagues who use AI. The responsibility lies in preparing these groups in a conscious, structured way that leads to measurable results for everyone.
To what extent do you think AI can reshape or redefine the concept of learning?
In my view, it already has, and when I say "already," I mean literally yesterday. Learning used to be linear and traditional: you searched for the best content and read an entire e-book on cybersecurity just to get to the specific thing you needed. Now, and this has become fully democratized, people use AI to find that specific information directly.
Everyone already uses AI to search for more targeted, concrete content.
We no longer seek a holistic, overarching view of a subject; what we want is a precise answer to a specific need. This has transformed the content and course landscape. We want content that fits our needs and is more personalized. Managing that volume of information, data, and tailored learning within limited time and cost constraints is only possible with artificial intelligence.
What role should public administrations play in this ecosystem?
I believe it’s crucial for administrations to actively monitor, measure, and regulate, to the right extent, how all this information is being used. In Europe, some countries are more permissive while others are more regulated. The key is to strike a balance and, above all, ensure data confidentiality. Where does the data come from? How is it being used? What privacy protections are in place?
This is why at ODILO, for example, we create private environments for our clients. Their data is always encapsulated. We can use external content, but always prioritize security. This raises ethical dilemmas—for example, whether this information could introduce bias or discrimination, given that algorithms learn from historical data. Could that historical data reflect gender, race, or socioeconomic bias?
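The worry that algorithms inherit bias from historical data can be probed, at least as a first pass, with very simple tooling. The sketch below, with invented records and field names, compares positive-outcome rates across groups in a training set; it is one illustrative diagnostic, not a description of ODILO's internal process.

```python
# Hedged sketch: probe historical training data for group-level imbalance
# before a model learns from it. Records and field names are invented.
from collections import defaultdict

records = [
    {"group": "A", "positive_outcome": True},
    {"group": "A", "positive_outcome": True},
    {"group": "A", "positive_outcome": False},
    {"group": "B", "positive_outcome": True},
    {"group": "B", "positive_outcome": False},
    {"group": "B", "positive_outcome": False},
]

totals = defaultdict(int)
positives = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    positives[r["group"]] += r["positive_outcome"]  # True counts as 1

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # here roughly {'A': 0.67, 'B': 0.33}: a gap worth investigating
```

A gap like this does not prove discrimination by itself, but it flags exactly the kind of historical skew a model could silently learn.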
Let’s move into ethics. How do you interpret the debate between ethical AI use and the protection of fundamental user rights?
The obligation of all companies using AI—whether internally or as a service to others—is to ensure that the data and decisions produced by AI are fair and do not cause harm. This is managed by the AI technology providers who comply with regulations, privacy standards, and oversight requirements. Ultimately, the key question is: to what extent does the right to privacy outweigh corporate or state interests?
The duty of every company using AI is to guarantee that its decisions are fair and do not cause harm.
It's essential to define responsibilities clearly. If an AI system makes an error, say a wrong medical diagnosis, who is responsible? The programmer, the provider, or the company? Everyone must assume their share of responsibility. At ODILO, we ensure that the information we provide comes from reliable sources, which is crucial in education. What matters most in training is knowing what you are sharing.
Which sectors does ODILO work with most, and where do you see the most significant business opportunities in the future?
We work across practically all sectors. We have two primary lines of business. One is education—universities and schools. We operate in several countries and collaborate with institutions, governments, and public administrations. Our other big area is corporate training, especially in telecommunications, banking, and insurance, which are heavy consumers of information. It’s no longer about 300-hour certifications—the demand is now for microlearning that can be applied in practice.