AI in e-learning: what’s actually working?

By Rares Bratucu

Last updated on May 6, 2026

Most L&D teams have used AI in some form by now. The question has moved. It’s no longer about whether to adopt it. It’s about whether it’s actually delivering something better, or just delivering something faster.

In our latest webinar, Nelson Sivalingam, CEO at HowNow and author of Learning at Speed, and Talha Faridy, AI Innovation Lead at Easygenerator, joined moderator Ashling Moran for an honest look at where AI in L&D is actually landing. What has worked, what hasn’t, and what L&D professionals should be asking of the tools they invest in.

🎥 Watch the session: Missed it live? Watch the full recording below.


Efficiency is not the same as effectiveness

When you ask how L&D teams are using AI today, the answer is mostly the same: to do what they were already doing, but faster. Content creation, curriculum planning, analyzing evaluation data. The efficiency gains are real.

But Nelson drew a line between efficiency and effectiveness that came up throughout the whole conversation.

“There’s a big difference between doing a thing fast and doing the right thing. And right now what we’re doing is doing a thing fast, but we’re not necessarily evaluating whether it’s the right thing.”

The risk is not just wasted time. It’s that AI makes it easier to scale the wrong things. If your content wasn’t landing before, producing more of it faster does not solve the problem. As Nelson put it:

“What we might actually see ourselves doing is scaling the very problem we are hoping to solve. One of the challenges is there being a ton of content. And when you’ve got a ton of content, discovery is one of the biggest challenges. By being able to create a lot of content that’s not great quality very fast, what you’re actually doing is scaling the problem you already had much, much more quickly.”

There’s also a consequence that’s harder to recover from. When learners engage with poor-quality AI-generated content and don’t find it useful, they form an opinion. Winning them back after that is difficult.

Talha added that unrealistic expectations are part of what drives this. Many authors assume they can hand a task to AI with minimal context and get something production-ready back.

“A lot of authors have wrongful expectations of what AI can do. They feel that they can just delegate without a lot of context and a lot of groundwork, and then expect an output that is completely ready for production. Whereas oftentimes that’s not the case.”

What a real measure of value looks like

If speed isn’t the measure, what is? Nelson’s answer goes back to what L&D is actually for.

“L&D’s outcome isn’t to create content. It’s a means to an end. Our job in this profession is to help people do their jobs better and help them grow. It’s as simple as that.”

That means the question isn’t how many courses were shipped or how many hours of learning were completed. It’s whether the skills that matter to the business actually got built.

Nelson framed this through the lens of what he called the talent crunch: organizations struggling to hire specialized skills that are in high demand and short supply. The only sustainable response is to upskill people internally. That makes skills-building the output that matters, and it’s the output that most L&D measurement still doesn’t track directly.

“The question is not about how many hours of learning someone did or what content was completed. It really comes back down to: did we build the skills we needed in order to achieve those objectives? That’s really the measure of value.”

Context is what AI is missing

One of the clearest practical points from the session was about what actually makes AI output useful. Generic AI tools are reasonably good at generic information. What they’re not good at, by default, is company context.

Talha was direct about this:

“Large language models are really good at generic information. But one thing they are still not great at by default is company context. Your company context, your business goals, the outcomes you’re looking for. That is the kind of context you want to bring into the AI. That prep work needs to be done prior to working with AI on any tool.”

This applies to course creation too. The ease of building something with AI also makes it easier to build the wrong thing.

“It’s easier to build things now, but it’s also easier to build the wrong things. That concept applies to course creation. You’re building something to drive meaningful outcomes. It’s easier to build training, for sure, but it’s also easier to build the wrong training for the wrong people in the wrong formats.”

The prep work that prevents this is defining learning objectives upfront, being specific about the behavior change you want learners to have, and giving the AI a clear picture of what good output looks like before you start. In Easygenerator, this is built into the Course Guidelines feature, which lets authors give EasyAI explicit instructions about goals, content preferences, and source material before it generates anything.

Author-first AI vs AI-first content generation

This brought the conversation to a distinction that ran through the whole session: the difference between AI that starts from the author’s intent and AI that starts from content generation.

Talha described how Easygenerator approaches this, using Bloom’s taxonomy to anchor learning objectives before any content gets built.

“We ask the subject-matter expert or the instructional designer what exactly they want their learners to do after they finish the course. What is that behavior change in terms of application that they want to drive? Once that is defined upfront, along with the knowledge the learner needs, that steers the AI in the right direction and overall uplifts the quality of the content.”

Nelson added a useful frame for what this unlocks. In the old model, an L&D team would go to a subject-matter expert (SME), collect expertise, take it away, apply instructional design thinking, and come back weeks later with a course. AI-assisted authoring changes that timeline.

“As an SME, I can go to Easygenerator and work with the AI to say: I’m dumping my expertise here, but you’ve got the expertise of the pedagogy, and the best way to frame and structure this. It would have taken you weeks to get to something that was pedagogically sound. But now we can get from ‘here’s expertise’ to ‘pedagogically sound learning resource’ very quickly.”

That said, neither speaker suggested this removes the need for human judgment. The power users of AI are the ones who iterate, who treat the first output as a starting point, and who bring their own context and expertise into the process at every step.

L&D’s new job is to engineer the context, not the content

If SMEs can now create content directly with AI support, what does that mean for L&D’s role? Both Nelson and Talha pointed in the same direction.

Nelson’s framing was memorable: in this world, L&D teams are effectively managers of AI. And a manager’s job is to set the bar and create the conditions for others to hit it.

“It’s not necessarily L&D’s job to engineer the content. It’s L&D’s job to engineer the context so everyone else can create more useful resources with very little friction.”

That means defining what good looks like, setting course guidelines that any SME can follow, choosing the right pedagogical frameworks, and building governance structures that keep quality consistent across the organization without burdening every SME with instructional design decisions.

Talha made the same point from the product side:

“The L&D team can set that framework and steer things in the right direction. If you don’t, it’s just another situation where SMEs, who are already stretched, now also have to learn how to create quality learning. AI can be super helpful here, where L&D teams define what good looks like, and then SMEs can use that to create content without it adding burden on them.”

The recognition piece also matters. Ashling mentioned something she sees work in practice: putting people’s names on courses, calling out strong examples, and giving public credit to SMEs who do it well. It sets the bar visibly and gives others something to aim for.

Whether AI is built in or bolted on matters more than you might think

A question from the audience during the session touched on something that came up throughout: how do you tell whether AI is genuinely built into a tool or just added on top of an existing product?

Nelson’s take was that the answer usually shows up in how the AI performs across the full workflow.

“When you’ve got AI native, you’re essentially applying intelligence at every step in that process. So you get the efficiency gains, but also the gain in effectiveness by applying that intelligence at every step. As an add-on, it’s really difficult. You’re trying to polish the last mile with AI, which is what makes it very difficult.”

Content creation, as Nelson pointed out, involves multiple steps that benefit from intelligence at each one: curriculum design, structuring individual pieces of content, building in scaffolding, applying pedagogical frameworks. A bolt-on solution can help at one or two of these points. A platform where AI is native can support all of them.

The practical test he suggested is simple: how effective was the AI at helping you do the specific task you were trying to do? Not which AI sounds most impressive, but which one actually helped you solve the problem in front of you.

Talha addressed the pricing dimension of this directly. Some vendors have responded to the cost of AI by passing costs on through higher prices. Easygenerator’s position is that AI should be included in the base price for everyone.

“We strongly believe that the future for most authors and L&D people is AI-native. Our position is that everyone should be empowered with that technology. Hence, for our core platform AI, we don’t charge extra for it. It is included in the base price. That is the core reason for it, to empower everyone and work with them towards that future of being AI-native.”

Where this goes next

The final chapter of the conversation looked at the next 12 months. Talha pointed to agentic AI as the direction of travel: AI that moves through a workflow iteratively, using different tools and capabilities to accomplish a task, rather than producing a single output in response to a single prompt. More effective, more context-aware, better at getting to something useful without requiring the user to do multiple rounds of manual correction.

Nelson’s outlook was bigger in scope. He described a gap in L&D that has never been closed: the connection between learning and performance. Most organizations can’t tell you, when performance drops, which skills are missing or why. And even when they can identify a skills gap, they often don’t know which learning will address it.

“With autonomous agents that can go through that loop all by themselves, you can now look at work signals, infer what the performance gaps are, figure out what skills are missing, connect you with the right relevant learning resource, then monitor the work data to see whether it actually changed the performance. All in one autonomous loop.”

He called this the vision of a self-improving company. L&D sets the guardrails, and the system handles the loop.

The bottom line

AI in L&D is delivering real efficiency gains. The organizations seeing the most value are the ones treating context as a first step, keeping humans in the process rather than removing them, and measuring outcomes rather than output.

The tools matter too. AI that’s built into the authoring workflow from the start behaves differently from AI added on top of a legacy product. Understanding that difference is part of evaluating whether a platform will actually help you or just help you produce more content faster.

👏 Huge thanks to Nelson Sivalingam for a genuinely honest and practical conversation.


About the author

Rares is a Content Specialist at Easygenerator. He spends his time researching and writing about the latest L&D trends and the e-learning sector. In his spare time, Rares loves plane spotting, so you’ll often find him at the nearest airport.

Frequently asked questions

How do you stop AI from generating generic e-learning that doesn't reflect your company's context?

The most effective step is to give AI your company context before you start building, not after. That means defining your learning objectives, the specific behavior change you want learners to have, and any relevant source material upfront. In Easygenerator, the Course Guidelines feature lets you set these instructions once so EasyAI refers back to them throughout the entire content creation process.

How can subject-matter experts create good e-learning without instructional design experience?

The key is a tool that brings instructional design thinking into the authoring process itself, so the SME doesn't have to supply it. Easygenerator uses Bloom's taxonomy to help authors define learning objectives before any content gets built, which steers the AI toward output that is pedagogically sound rather than just informative. The SME contributes the expertise; the platform handles the structure.

How do you maintain learning quality when subject-matter experts are creating content across the organization?

L&D teams get the best results when they focus on setting the standard rather than producing every course themselves. That means defining what good looks like, creating course guidelines that any SME can follow, and using a platform that keeps those guardrails in place across every piece of content. Easygenerator's governance features let L&D teams set those standards centrally while SMEs create content independently.

How do you tell if AI is genuinely built into an authoring tool or just added on top?

The clearest signal is whether AI supports the full authoring workflow or only one part of it. A bolt-on solution typically helps at the content generation step and little else, whereas a platform built around AI applies intelligence across curriculum design, learning objectives, content structure, and assessment. If a vendor charges separately for AI as a tier or add-on, that is usually a sign the feature was added rather than built in from the start.

What should L&D teams ask vendors before investing in an AI-powered authoring tool?

Ask whether AI is included in the base price or charged as a separate tier, and ask how it is integrated into the authoring workflow specifically. A vendor should be able to show you where AI supports the process beyond content generation, such as learning objective creation, quality review, and instructional design guidance. If the answers are vague, the AI is likely a feature added on top of a product that was not built with it in mind.
