Paper: How faithful are RAG models?

I read the paper How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs’ internal prior today and want to share two important things I learnt from it. I found the paper useful because it helps frame how to think about building RAG systems.

# 1. Impact of answer generation prompt on response

The researchers investigated how different prompting techniques affect how well a large language model (LLM) uses information retrieved by a Retrieval-Augmented Generation (RAG) system. The study compared three prompts: a “strict” prompt that told the model to strictly follow the retrieved information, a “loose” prompt that encouraged the model to use its own judgement based on the context, and a “standard” prompt that served as the baseline.

As mentioned in the paper:

We observe lower and steeper drops in RAG adherence with the loose vs strict prompts, suggesting that prompt wording plays a significant factor in controlling RAG adherence.

This suggests that the way you ask the LLM a question can significantly affect how much it relies on the provided information. The study also compared these prompts across different LLMs and found similar trends. Overall, the research highlights that carefully choosing how you prompt an LLM has a big impact on which information it uses to answer your questions.

This also implies that for problems where you only want the retrieved knowledge to guide answer generation, you can rely on the standard or loose prompt formats. For example, I am building a learning tool for scrum masters and product owners. In this scenario I only want to use the retrieved knowledge for guidance, so the standard or loose prompt formats make sense.
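To make the distinction concrete, here is a minimal sketch of the three prompt styles in Python. These templates are my own paraphrases of the idea, not the paper’s exact wording, and the names are placeholders.

```python
# Illustrative paraphrases of the three prompt styles (not the paper's exact wording).
STANDARD_PROMPT = (
    "Answer the question using the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)

STRICT_PROMPT = (
    "Answer the question using ONLY the context below. "
    "Do not rely on any other knowledge.\n"
    "Context: {context}\n"
    "Question: {question}"
)

LOOSE_PROMPT = (
    "Answer the question. The context below may help, "
    "but use your own judgement if it seems incorrect or irrelevant.\n"
    "Context: {context}\n"
    "Question: {question}"
)

def build_prompt(style: str, context: str, question: str) -> str:
    """Fill the chosen template with the retrieved context and the user question."""
    templates = {
        "standard": STANDARD_PROMPT,
        "strict": STRICT_PROMPT,
        "loose": LOOSE_PROMPT,
    }
    return templates[style].format(context=context, question=question)
```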

# 2. The likelihood of a model adhering to retrieved information in RAG settings changes with the model’s confidence in its response without context

The second interesting point discussed in the paper is the relationship between the model’s confidence in its answer without context and its use of the retrieved information. Imagine you ask a large language model a question, and it is not sure whether the answer it already has is correct. New information, typically called context, is then provided to help it refine its response. The study shows that the model is less likely to consider this context if it was very confident in its initial answer.

As the model’s confidence in its response without context (its prior probability) increases, the likelihood that the model adheres to the retrieved information presented in context (the RAG preference rate) decreases. This inverse correlation indicates that the model is more likely to stick to its initial response when it is already confident in its answer without the context. The relationship holds across datasets from different domains and is influenced by the choice of prompting technique, such as strictly or loosely adhering to the retrieved information. This inverse correlation highlights the tension between the model’s pre-trained knowledge and the information provided in context.

We can use logprobs to calculate the confidence score.
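Here is a minimal sketch of that idea, assuming the OpenAI Chat Completions API with logprobs enabled. Averaging the token log probabilities of the no-context answer is one simple heuristic for a confidence score; the paper’s exact prior-probability computation may differ, and the model name is just an example.

```python
# A sketch of estimating the model's confidence in its no-context answer
# from token log probabilities (one simple heuristic, not the paper's exact method).
import math
from openai import OpenAI

client = OpenAI()

def answer_with_confidence(question: str, model: str = "gpt-4o-mini") -> tuple[str, float]:
    """Ask the question without any retrieved context and return (answer, confidence),
    where confidence is exp(mean token logprob), i.e. the geometric mean token probability."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        logprobs=True,
    )
    choice = response.choices[0]
    token_logprobs = [t.logprob for t in choice.logprobs.content]
    confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
    return choice.message.content, confidence

answer, confidence = answer_with_confidence("In which year was the Eiffel Tower completed?")
print(answer, round(confidence, 3))
```

A low confidence here suggests the model is more likely to adopt the retrieved context; a high confidence suggests it is more likely to stick to its prior.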

LLM Tools #1: HackerNews Discussion Summarizer

I have started building many small, single-purpose tools using LLMs and generative AI that help improve my productivity. I add these little tools to my web UI based AI assistant. One such tool, which I built recently, summarises long HackerNews discussions.

For example, for the thread on GitHub Copilot Workspace: Technical Preview, which has 323 comments, my tool generated the following summary.

The above summary was generated using the prompt below.

You are a key point extractor and summarizer.
You are given HackerNews comments for a story and you have to extract key points from these comments.
You have to group the key points in logical groups. Each group should not have more than 5 points.
You should come up with a group name that you will use in the generated response.

Response should be in following format.

## Story Details
1. Title: [Title of the story](url)
2. Total Comments: 123

## Logical Group
1. Key point 1
2. Key point 2

## Logical Group
1. Key point 3
2. Key point 4
3. Key point 5
4. Key point 6
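
For reference, here is a minimal sketch of how such a summarizer could be wired together, assuming the public Algolia HackerNews API and the OpenAI Chat Completions API. It is not the exact code behind my tool; SYSTEM_PROMPT stands for the prompt shown above and STORY_ID is a placeholder.

```python
# A sketch of a HackerNews discussion summarizer (assumptions: Algolia HN API,
# OpenAI Chat Completions API; STORY_ID and SYSTEM_PROMPT are placeholders).
import requests
from openai import OpenAI

SYSTEM_PROMPT = "..."  # the key point extractor prompt shown above
STORY_ID = 123456      # placeholder HackerNews story id

def fetch_comments(story_id: int) -> tuple[str, list[str]]:
    """Fetch a story and flatten its nested comment tree into plain strings."""
    item = requests.get(f"https://hn.algolia.com/api/v1/items/{story_id}").json()
    comments: list[str] = []

    def walk(node: dict) -> None:
        if node.get("text"):
            comments.append(node["text"])  # note: comment text is HTML-formatted
        for child in node.get("children", []):
            walk(child)

    for child in item.get("children", []):
        walk(child)
    return item.get("title", ""), comments

def summarise(story_id: int, model: str = "gpt-4o-mini") -> str:
    """Send the flattened comments plus the extractor prompt to the LLM."""
    title, comments = fetch_comments(story_id)
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Story: {title}\n\nComments:\n" + "\n---\n".join(comments)},
        ],
    )
    return response.choices[0].message.content

print(summarise(STORY_ID))
```

For very long threads like this one, the flattened comments may need to be truncated or summarised in chunks to stay within the model’s context window.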

Why you should consider building your own AI assistant?

For the past six months, I’ve been leading the development of a custom AI assistant for our organization. It began with the straightforward concept of offering an alternative to publicly available chat assistants like OpenAI’s ChatGPT or Google’s Gemini. However, it has evolved into a comprehensive platform powering multiple bots tailored to specific business units, departments, and individual needs.

The feedback on the AI Assistant has been positive, with people reporting productivity gains. It is also helping to break down knowledge and information silos within the organization.

A common question I receive is why we opted to build our own solution instead of leveraging existing assistants like ChatGPT, Perplexity, Microsoft’s Enterprise Copilot, or the plethora of other options available. After all, isn’t the AI chat assistant landscape already saturated? While there are indeed numerous products vying for a slice of this multi-billion dollar market, I believe we are still in its nascent stages. The optimal workflows and functionalities to integrate into these tools are still under exploration by all players.

In this blog post, I’ll delve into the reasons why I believe organizations should strongly consider building their own AI assistants. It’s important to clarify that I’m not advocating for everyone to embark on building entirely from scratch.

Continue reading “Why you should consider building your own AI assistant?”