
AI-supported search with RAG
Modern websites aggregate content from a wide variety of sources for a wide range of target groups. With AI-powered onsite search, you can ensure that your users find the content that is relevant to them, summarized in natural language by AI, just as they know it from ChatGPT, Gemini, and others.
With the widespread availability of language models, users have become accustomed to formulating their search queries in natural language. They also expect AI-generated summaries instead of endless lists of hits. But how do you equip your DXP with a language model that reliably searches your own products and services instead of relying solely on the "knowledge" it was trained with?
Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) is a technique that improves large language models (LLMs) by giving them access to your internal data sources. Before generating a response, a RAG-supported language model searches for relevant information in your structured data sources, such as products, FAQs, and instructions, in order to provide an accurate and fact-based answer.
How does RAG work?
- Retrieval: The user asks a question. The system does not search the entire Internet, but rather a defined knowledge base (e.g., company documents, PDFs, databases) for information that semantically matches the question.
- Augmentation: The relevant text passages found are passed to the LLM of your choice together with the original question. The prompt is thus "extended."
- Generation: Based on this new context, the LLM generates a precise, fact-based response and can cite sources.
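The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production setup: the "embedding" here is a simple bag-of-words vector with cosine similarity, whereas a real RAG system would use a sentence-embedding model and a vector database, and the knowledge base, question, and final LLM call are placeholders.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a
    # sentence-embedding model (e.g. via an embeddings API).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical knowledge base standing in for your products, FAQs, and docs.
knowledge_base = [
    "Product X supports single sign-on via SAML and OpenID Connect.",
    "Our office hours are Monday to Friday, 9 am to 5 pm.",
    "Product X stores all customer data in EU data centers.",
]

# 1. Retrieval: rank passages by semantic similarity to the question.
def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    ranked = sorted(knowledge_base,
                    key=lambda p: cosine(q, embed(p)),
                    reverse=True)
    return ranked[:k]

# 2. Augmentation: extend the prompt with the retrieved passages.
def build_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below and cite it.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

question = "Where does Product X store customer data?"
prompt = build_prompt(question, retrieve(question))

# 3. Generation: `prompt` would now be sent to the LLM of your choice
#    via its API client; that call is omitted here.
print(prompt)
```

Because the answer is generated only from the passages placed into the prompt, the model can ground its response in your data and cite the passages as sources.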
Advantages of RAG
- Higher accuracy: Reduces AI "hallucinations" (invented facts).
- Timeliness: Enables access to data generated after the model has been trained.
- Data protection: RAG systems can access company data without using it to train the public model.
- Transparency: Answers can be substantiated with citations.

