“Wait, that’s how it works?”
That’s the reaction we’d get every time we explained how a classic, intent-based chatbot works. For something powered by AI, it’s remarkable how rudimentary these systems can be. The AI is used only to recognize what someone is saying – nothing more.
Generative AI may be hyped, but when it comes to Conversational AI, it has genuinely changed the game. For the first time, AI is used not just to recognize a question, but to actually answer it as well.
With this change come a lot of questions – and a lot to unpack. Let’s dive in.
If a user asks, "Are you open now?", the AI analyzes the sentence, recognizes the intent ("business-hours-inquiry") with a certain confidence score (e.g., 98%), and triggers a prebuilt response, like "Our business hours are from 9 AM to 5 PM, Monday through Friday." This method has been the backbone of chatbots for years, providing reliable and predictable (though not always perfect) outputs.
Intent-based AI relies on Natural Language Processing (NLP), which can be thought of as “AI for Language”. It’s used to understand the “intent” behind a user’s question. Just like image recognition requires a lot of pictures of bicycles before it can recognize one, intent-based AI requires many variations of the same question, so the NLP model can start to confidently recognize it.
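To make the shape of this concrete, here is a toy sketch of intent recognition. The phrasing variations, intent names, and the word-overlap "confidence" score are all simplified illustrations – real NLP engines use trained language models – but the output shape is the same: an intent plus a confidence score.

```python
# Toy intent recognizer: each intent is trained with several phrasing
# variations, and recognition returns (intent, confidence).
TRAINING_PHRASES = {
    "business-hours-inquiry": [
        "are you open now",
        "what are your opening hours",
        "when do you close today",
    ],
    "data-balance-inquiry": [
        "how much data do i have left",
        "what is my remaining data balance",
    ],
}

def recognize_intent(message: str) -> tuple[str, float]:
    """Return the best-matching intent and a rough confidence score."""
    words = set(message.lower().strip("?!. ").split())
    best_intent, best_score = "fallback", 0.0
    for intent, phrases in TRAINING_PHRASES.items():
        for phrase in phrases:
            phrase_words = set(phrase.split())
            # Word-overlap (Jaccard) score as a stand-in for a real
            # NLP model's confidence.
            score = len(words & phrase_words) / len(words | phrase_words)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent, best_score

intent, confidence = recognize_intent("Are you open now?")
```

Note how adding more phrasing variations per intent directly improves recognition – which is exactly the "many variations of the same question" training effort described above.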
Once the intent is identified with a high enough confidence score, the system triggers a flow – this is where the process becomes entirely non-AI. That means any kind of “traditional IT” process can be executed here. For example, if a user asks about their remaining data balance, an intent-based system can trigger an API call to retrieve this information and provide an accurate, real-time response. This is something gen AI can’t do. The flow is entirely made up of scripted responses that were built in advance. This ensures the AI agent delivers the exact answer we want to the recognized question.
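A sketch of that non-AI part, continuing the data-balance example. The threshold value and `fetch_data_balance` are hypothetical stand-ins – in production the latter would be a real backend API call – but the structure is the point: once the intent is known, everything else is ordinary, scripted IT logic.

```python
# Scripted flows triggered after intent recognition. No AI runs past
# this point: every response path was built in advance.
CONFIDENCE_THRESHOLD = 0.8

def fetch_data_balance(user_id: str) -> str:
    # Hypothetical placeholder for an HTTP call to the telecom backend.
    return "2.4 GB"

def run_flow(intent: str, confidence: float, user_id: str) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return "Sorry, I didn't understand that. Could you rephrase?"
    if intent == "business-hours-inquiry":
        # Purely scripted response.
        return "Our business hours are from 9 AM to 5 PM, Monday through Friday."
    if intent == "data-balance-inquiry":
        # Scripted flow that pulls live, personalized data via an API.
        return f"You have {fetch_data_balance(user_id)} of data left this month."
    return "Sorry, I can't help with that yet."

reply = run_flow("data-balance-inquiry", 0.97, "user-123")
```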
If a question falls outside this predefined scope, the chatbot won’t be able to reply. That means there’s an upfront effort in setting up these flows, and a cost-benefit analysis is needed to define the scope. That said, if a company receives more than 2,000 messages a month, it’s highly likely an intent-based bot can pay for itself within a year.
Now compare Generative AI. If a user asks, "Are you open now?", instead of triggering a scripted response, it will analyze the question, draw on its training data, and generate an answer that fits the conversation better, such as, "Yes, we’re open till 5 PM today! Our opening hours are from 9 AM to 5 PM, Monday through Friday. Is there anything else I can help you with?"
Ever since OpenAI’s GPT-3.5, there’s been a reliable and easy-to-use AI system that can not only recognize questions, but also answer them. Unlike intent-based AI, which relies on predefined scripts, Generative AI uses machine learning models to generate responses in real time. This means the AI doesn’t only identify a user’s intent – it also creates a tailored, custom response based on the message, the context of the conversation, and the data it’s trained on.
(Remember: intent-based AI will give the same answer, no matter how the question is asked.)
This is why Generative AI is best for handling the long tail of questions: questions that, individually, are asked rarely, but together represent a high volume. It can also improve the user experience by understanding very long questions and handling multiple intents in a single message (both things intent-based AI is bad at), and it provides more personalized and engaging responses.
This ability to generate responses in real time is what makes these advantages possible.
At this point you might be thinking: “If Generative AI is so powerful, why not use it exclusively?” The answer lies in the complementary strengths of both approaches.
Generative AI is not without its drawbacks. One of the most significant challenges is the risk of hallucinations—where the AI generates an answer that sounds reasonable but is factually incorrect. This happens because generative models, like GPT, are essentially advanced autocomplete systems. They form a sentence by predicting and stringing together words based on patterns in the data they were trained on. But they don’t truly understand the content. As a result, they might invent information if they don't have the right data at hand.
Beyond hallucinations, there are other downsides: no API integration (yet), costs that are still on the higher side (though falling fast), and privacy concerns. And because you don’t have full control over the generated answer, customization is difficult.
This unpredictability is especially problematic in scenarios where accuracy is critical, such as in customer support for financial services or healthcare. Generative AI models need to be carefully monitored and controlled to prevent the dissemination of incorrect or misleading information.
Intent-based AI excels in scenarios that are frequent, where precision, control and a consistent response are key. The predefined nature of responses also allows for complex interactions, such as performing actions via API calls (personalized data for personalized questions). For example, in a government service portal, if a citizen asks, "What’s the status of my tax refund?", an intent-based system can securely trigger an API call to retrieve and display the exact status of the refund in real time. This is something Generative AI can’t do, because it can’t execute an API call on its own.
Generative AI, on the other hand, is best for the long tail: the many questions that are each asked rarely but together represent a high volume.
The true power of modern conversational agents lies in combining the two approaches. The best long-term strategy for a powerful AI Agent is to use intent-based AI for the most common and critical questions, ensuring responses are accurate and consistent, and to add Generative AI to manage the less common questions and to help the NLP system better understand complex user input. That said, we anticipate bot builders will soon let a simple description of the intent and its exceptions be enough for the AI to figure out the NLP part by itself.
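A minimal sketch of that hybrid strategy, under simplifying assumptions: high-confidence recognized intents get the scripted, controlled answer, and everything else falls through to a generative model. `generate_llm_reply` is a hypothetical placeholder – in practice it would call an LLM provider’s API.

```python
# Hybrid routing: scripted answers for common/critical intents,
# generative fallback for the long tail.
SCRIPTED_ANSWERS = {
    "business-hours-inquiry": "Our business hours are from 9 AM to 5 PM, Monday through Friday.",
}
CONFIDENCE_THRESHOLD = 0.8

def generate_llm_reply(message: str) -> str:
    # Hypothetical placeholder for a real generative-model call.
    return f"(generated answer for: {message})"

def answer(message: str, intent: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD and intent in SCRIPTED_ANSWERS:
        return SCRIPTED_ANSWERS[intent]   # accurate and consistent
    return generate_llm_reply(message)    # long-tail fallback

common = answer("Are you open now?", "business-hours-inquiry", 0.98)
long_tail = answer("Do you allow dogs on the terrace in winter?", "fallback", 0.12)
```

The design choice to gate on both confidence and intent membership means the scripted answers can never be overridden by the generative model, while no question goes unanswered.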
If you want to discuss AI in more detail, then reach out to Alexis.
He's ready to chat in French, English and Greek.