The introduction of OpenAI’s ChatGPT in November 2022 marked a significant technological achievement, offering the public a powerful AI tool free of charge. This AI chatbot quickly gained global popularity, becoming the go‑to solution for various tasks, including text summarization, language translation, code snippet generation, and creative writing. With its diverse creative output in formats like letters, emails, and blogs, ChatGPT proved to be a versatile and valuable tool for users.
In March 2023, Google’s Bard entered the scene as a formidable competitor to ChatGPT, offering an experimental free service to users.
How AI Chatbots Work
Both ChatGPT and Bard are powered by pre-trained Large Language Models (LLMs): GPT (Generative Pre-trained Transformer) for ChatGPT and PaLM (Pathways Language Model) for Bard. These LLMs are deep learning algorithms, extensively trained on datasets comprising publicly available information from the web, possibly supplemented by data from private sources and synthetic, computer-generated data. Notably, LLMs are not limited to textual data alone, as they can be pre-trained on code from various programming languages and on mathematical equations, as well as images (and their captions), and audio recordings (and their transcripts) to emulate the multimodality of human cognition.
The underlying architecture of LLMs is the Transformer model, which incorporates a self-attention mechanism that enables it to grasp long-range dependencies within text sequences. By breaking down text into tokens (words and sub-words), and predicting the next token based on the context of preceding ones, LLMs gain an understanding of language patterns, grammar, and semantic relationships, enabling coherent and contextually relevant responses to user input.
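The tokenize-then-predict loop described above can be sketched with a toy model. The vocabulary and probabilities below are invented for illustration; a real LLM computes these probabilities over tens of thousands of sub-word tokens with a Transformer, not a lookup table.

```python
# Toy illustration of next-token prediction: the "model" assigns
# probabilities to candidate next tokens given the preceding context.
# All probabilities here are made up for illustration.
from typing import Dict, List

# Hypothetical conditional probabilities P(next token | context)
TOY_MODEL: Dict[str, Dict[str, float]] = {
    "the cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    "the cat sat": {"on": 0.8, "down": 0.2},
    "the cat sat on": {"the": 0.9, "a": 0.1},
    "the cat sat on the": {"mat": 0.7, "sofa": 0.3},
}

def predict_next(context_tokens: List[str]) -> str:
    """Pick the most probable next token (greedy decoding)."""
    candidates = TOY_MODEL[" ".join(context_tokens)]
    return max(candidates, key=candidates.get)

def generate(prompt: str, steps: int) -> str:
    """Repeatedly append the predicted token to the growing context."""
    tokens = prompt.split()  # crude stand-in for sub-word tokenization
    for _ in range(steps):
        tokens.append(predict_next(tokens))
    return " ".join(tokens)

print(generate("the cat", 4))  # → "the cat sat on the mat"
```

Greedy selection of the single most probable token, as shown here, is also the simplest way to see the factual-accuracy caveat discussed below: the model optimizes for plausible continuations, not for truth.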
Despite their impressive capabilities, LLMs do have certain limitations when it comes to knowledge access:
- Factual Accuracy – When generating text, an LLM predicts the next token with the highest probability given the preceding context. The most probable continuation, however, is not necessarily the most factually accurate one.
- Lack of Post-Training Learning – LLMs do not learn from experiences post-training, and their knowledge is based on the data available up to their training cutoff date.1
- Source Verification – LLMs do not provide the sources of generated content, which makes it challenging to verify the information they present.
To address some of these limitations and enable real-time knowledge access, LLMs can be combined with retriever models (RMs) in a unified architecture known as RALM (Retriever-Augmented Language Model). This integration empowers LLMs with search engine capabilities, leading to instant knowledge retrieval. Prominent examples of such AI systems include Bing Chat, Bard, and Google’s SGE (Search Generative Experience), all of which use the Dense Passage Retriever (DPR) as their retriever model.2
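The retrieve-then-generate pattern behind RALM can be sketched as follows. This is a minimal illustration, not any vendor’s implementation: the word-overlap score stands in for DPR’s dense-embedding similarity, and the three-passage corpus is invented.

```python
# Minimal sketch of the RALM pattern: retrieve relevant passages,
# then condition the LLM's prompt on them. Word-overlap scoring is a
# stand-in for dense retrieval (e.g. DPR); the corpus is made up.
from typing import List

CORPUS = [
    "The 2023 Atlantic hurricane season had 20 named storms.",
    "Reinsurance spreads risk among multiple insurers.",
    "PaLM is the large language model behind Google's Bard.",
]

def score(query: str, passage: str) -> int:
    """Stand-in for embedding similarity: count shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, k: int = 2) -> List[str]:
    """Return the k highest-scoring passages for the query."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the LLM can ground its answer."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What model powers Bard?"))
```

The key design point is that the retriever runs at query time, so the generated answer can draw on documents published after the LLM’s training cutoff.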
Generative Search Engines
Generative search engines, like Bing Chat and Google’s SGE, leverage their LLMs’ generative abilities to interpret user queries, summarize information retrieved by the search engine, and engage users in conversational interactions.3 This interactive process allows the AI system to understand the query’s meaning and the user’s interests, though the working memory is limited to the specific chat instance.
While generative search engines offer numerous advantages, they may struggle with complex instructions. Users can, however, assign the chatbot a persona to supply relevant context. For example, when asked about Oppenheimer, all three systems (Bing Chat, SGE, and Bard) return information about the physicist; prompted to respond as a movie critic, they instead describe the film released in the summer of 2023.4
Generative search has the potential to revolutionize knowledge access for various domains, including business leaders, underwriters, and claims staff in insurance and reinsurance. By offering contextual understanding, detailed and informative answers, factual topic summaries, and content generation capabilities, generative search presents a novel approach to accessing, integrating, and leveraging public knowledge in decision-making processes.