What I Learned Building RAG-Based LLM Assistants

Building AI assistants using Retrieval Augmented Generation (RAG) taught me valuable lessons about user expectations and technical challenges. Here’s what I discovered along the way.

1. The Challenge of Multiple Assistants

Users often struggle when dealing with multiple AI assistants. They frequently ask questions to the wrong assistant or expect a single assistant to handle everything. We solved this by creating specific URLs for each assistant and adding clear chat placeholders to show which assistant they’re talking to. We also implemented role-based access control (RBAC) and a central homepage to help users navigate between assistants.

2. The ChatGPT Comparison

Users naturally compare any AI assistant with ChatGPT. They expect similar features like handling thank you messages, follow-up questions, and list-based queries. We enhanced our RAG implementation (RAG++) to better match these expectations.

3. Managing Conversations

Single-conversation interfaces create several challenges. Long conversations slow down page loading and can affect answer accuracy. Users rarely organize their chats effectively. We addressed this by:

  • Implementing automatic context management
  • Setting conversation history limits (a trimming sketch follows the list)
  • Creating automatic chat organization features
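
For illustration, here is a minimal sketch of history limiting: keep the system prompt plus only the most recent messages that fit within a token budget before each LLM call. The count_tokens helper and the budget value are assumptions for the sketch, not our production code.

def count_tokens(text: str) -> int:
    # Crude approximation (~4 characters per token); swap in a real tokenizer
    # such as tiktoken if you need accurate counts.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], max_tokens: int = 3000) -> list[dict]:
    # Keep the system message, then add messages from newest to oldest
    # until the token budget is exhausted.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], 0
    for message in reversed(rest):
        cost = count_tokens(message["content"])
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return system + list(reversed(kept))  # restore chronological order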

4. Real-Time Information Access

Users want current information and often fear missing out. They still turn to search engines for real-time updates. To address this, we integrated search APIs and added an explicit search mode similar to ChatGPT’s browsing feature.
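
The routing for an explicit search mode can stay simple: when the user toggles it on, call a search API first and pass the results to the model as context. The sketch below is illustrative; web_search is a placeholder for whichever search provider you integrate, and the prompt wiring is not our exact implementation.

from openai import OpenAI

client = OpenAI()

def web_search(query: str, top_k: int = 5) -> list[dict]:
    # Placeholder: call your search provider here and return a list of
    # {"title": ..., "snippet": ..., "url": ...} dicts.
    raise NotImplementedError("plug in your search API")

def answer(query: str, search_mode: bool = False) -> str:
    context = ""
    if search_mode:
        results = web_search(query)
        context = "\n\n".join(r["snippet"] for r in results)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Use the provided web results when they are relevant."},
            {"role": "user", "content": f"Web results:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content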

5. Setting Clear Boundaries

Users often don’t understand what RAG-based assistants can and cannot do. This leads to questions outside the assistant’s capabilities and mismatched expectations. Clear communication about limitations helps manage these expectations.

6. Handling “I Don’t Know” Answers

In RAG applications, when the assistant cannot answer, the response typically contains some variant of “I don’t know.” Users gave us feedback that they dislike when assistants say “I don’t know.” We solved this by showing them something useful instead. For example, when a user asked for a case study on a unified Visa platform, we showed the following answer:

I couldn't find a specific case study on a unified Visa platform in the provided context. However, for related insights on payment systems and financial services integration, you might find the following case studies relevant:

- Case Study 1
- Case Study 2
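
The pattern behind that answer is simple: when the top retrieval scores fall below a confidence threshold, we still surface the closest matches as suggestions instead of a bare refusal. The sketch below is a simplified illustration; the retriever interface, the 0.5 threshold, and the instruction wording are assumptions, not our exact implementation.

def build_context(query: str, retriever, threshold: float = 0.5) -> str:
    # retriever.search is assumed to return [(score, document), ...] pairs,
    # where each document exposes .title and .text.
    hits = retriever.search(query, top_k=5)
    confident = [doc for score, doc in hits if score >= threshold]
    if confident:
        return "\n\n".join(doc.text for doc in confident)
    # No confident match: tell the model to say so, but suggest related items.
    related = "\n".join(f"- {doc.title}" for _, doc in hits[:2])
    return (
        "No document directly answers the question. Say you could not find a "
        "specific answer in the provided context, then point the user to these "
        f"related case studies:\n{related}"
    )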

7. Improving Question Quality

Many users struggle to ask effective questions. We helped by:

  • Generating follow-up questions
  • Implementing query rewriting (see the sketch after this list)
  • Teaching basic prompt engineering skills
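
As one example, query rewriting can be a single LLM call that turns a terse or ambiguous question into a self-contained search query, using the recent conversation for context. The sketch below assumes the OpenAI Python SDK; the prompt and model choice are illustrative, not our exact production setup.

from openai import OpenAI

client = OpenAI()

def rewrite_query(user_query: str, recent_turns: list[str]) -> str:
    history = "\n".join(recent_turns[-4:])  # last few turns for context
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's question as a single, self-contained search query. "
                    "Resolve pronouns and references using the conversation history. "
                    "Return only the rewritten query."
                ),
            },
            {"role": "user", "content": f"History:\n{history}\n\nQuestion: {user_query}"},
        ],
    )
    return response.choices[0].message.content.strip()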

8. Knowledge Base Management

Real-time document indexing is a common user expectation in RAG applications. For each assistant, we found it helpful to:

  • Display knowledge base statistics
  • Show when the knowledge base indexes were last updated
  • Provide document filtering options for search queries; we extract metadata from documents during indexing (a sketch follows the list)
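
As a rough sketch of the filtering idea: attach metadata to each chunk at indexing time, then let the user constrain retrieval with it. The field names and the vector-store interface below are illustrative placeholders, not a specific product's API.

from datetime import datetime, timezone

def extract_metadata(doc) -> dict:
    # Illustrative fields; in practice these come from the document pipeline.
    return {
        "source": doc.source,        # e.g. "confluence", "sharepoint"
        "doc_type": doc.doc_type,    # e.g. "case_study", "policy"
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }

def index_document(vector_store, embedder, doc) -> None:
    for chunk in doc.chunks():
        vector_store.add(
            embedding=embedder.embed(chunk.text),
            text=chunk.text,
            metadata=extract_metadata(doc),
        )

def filtered_search(vector_store, embedder, query: str, doc_type: str | None = None):
    query_filter = {"doc_type": doc_type} if doc_type else None
    return vector_store.search(embedder.embed(query), top_k=5, filter=query_filter)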

9. Interface Improvements

Small UI features made a big difference. I have not seen these features in any public assistant like ChatGPT or Claude.

  • Showing conversation statistics, such as the number of queries in a conversation and feedback analysis
  • Showing metadata for each message, such as token count, time to first token, tokens/sec, and total time to generate the answer (a rough sketch of capturing these follows the list)
  • Showing query timestamps
  • Replaying an entire conversation
  • Supporting multiple tabs and split windows
  • Regenerating answers with and without history
  • Offering an editor mode alongside the chat mode
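
Capturing the per-message metadata is mostly bookkeeping around a streaming call. The sketch below assumes the OpenAI Python SDK; the token figure is a rough character-based approximation rather than the exact usage reported by the API.

import time
from openai import OpenAI

client = OpenAI()

def stream_with_metrics(messages: list[dict], model: str = "gpt-4o-mini") -> dict:
    start = time.perf_counter()
    first_token_at = None
    pieces = []
    stream = client.chat.completions.create(model=model, messages=messages, stream=True)
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            pieces.append(delta)
    total = time.perf_counter() - start
    answer = "".join(pieces)
    approx_tokens = max(1, len(answer) // 4)  # rough estimate, not exact usage
    return {
        "answer": answer,
        "time_to_first_token_s": (first_token_at - start) if first_token_at else None,
        "total_time_s": total,
        "approx_tokens_per_s": approx_tokens / total if total > 0 else None,
    }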

These lessons continue to shape how we build and improve our RAG-based assistants. Understanding user needs and expectations helps create more effective AI tools.

Case study: Building a LLM based workflow configuration generator for a low code product

I run a small indie consulting firm that specializes in building LLM-based solutions. Recently, I worked on an LLM solution where we had to generate JSON-based workflow configurations for a low code product. Think of it like AWS Step Functions where you write your business workflow in a JSON configuration file. In this post, I will share lessons learned while building this solution.

Below is an example workflow configuration. It is a simplified example. In their case, a step can have close to 50 fields.

{
    "name" : "Name of the workflow",
    "description" : "Description of the workflow",
    "steps" : {
        "step_1" : {
            "async" : false,
            "sql_query_name" : "Name of the query to execute",
            "transform_request" : "Can be a Go template",
            "transform_response" : "Can be a Go template",
            "steps" : {
                "step_1_1" : {

                },
                "step_1_2" : {

                }
            }
        },
        "step_2" : {
            "async" : true,
            "function_to_execute" : "Name of the function to execute",
            "transform_request" : "Can be a Go template",
            "transform_response" : "Can be a Go template",
            "steps" : {
                "step_2_1" : {

                },
                "step_2_2" : {

                }
            }
        }
    }
}

Important points to note in the above JSON configuration:

  • The workflow configuration is recursive. A step can have steps, and those steps can have further steps, and so on (a schema sketch follows these notes).
  • Step names follow a pattern ^[a-z]+(_[a-z]+)*$.
  • Certain JSON attributes require us to generate valid Go templates. These Go templates use some reusable library functions.
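
These constraints map naturally onto a recursive schema that can be used to validate generated configurations. The sketch below is illustrative only: it mirrors the simplified example above (the real schema has close to 50 fields per step) and enforces the step-name pattern on the nested keys.

import re
from pydantic import BaseModel, Field, field_validator

STEP_NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z]+)*$")  # pattern quoted above

class Step(BaseModel):
    async_: bool = Field(default=False, alias="async")
    sql_query_name: str | None = None
    function_to_execute: str | None = None
    transform_request: str | None = None   # Go template
    transform_response: str | None = None  # Go template
    steps: dict[str, "Step"] = Field(default_factory=dict)

    @field_validator("steps")
    @classmethod
    def check_step_names(cls, value: dict) -> dict:
        for name in value:
            if not STEP_NAME_PATTERN.match(name):
                raise ValueError(f"invalid step name: {name}")
        return value

class Workflow(BaseModel):
    name: str
    description: str
    steps: dict[str, Step]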
Continue reading “Case study: Building a LLM based workflow configuration generator for a low code product”

How Good Are LLMs at Generating Functional and Aesthetic UIs? An Experiment

I conducted an LLM training session last week. To teach attendees about structured output, I built an HTML/JS web application. This application allows users to input a webpage and specify fields they want to extract. The web app uses OpenAI’s LLM to extract the relevant information. Before making the OpenAI call, the app first sends a request to Jina to retrieve a markdown version of the webpage. Then, the extracted markdown is passed to OpenAI for further processing. You can access the tool here: Structured Extraction Tool.

Note: The tool will prompt you to enter your OpenAI key, which is stored in your browser’s local storage.

Below, I will demonstrate the app’s workflow using screenshots. The user starts by entering the webpage URL. In this example, I want to extract some information from a case study.

Next, users specify the fields they want to extract. We also support field templates for reusability. For each field, users provide a name, description, and its type.

After specifying the fields, users press the Extract Data button. The app displays the extracted data, along with token usage and cost.
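
Under the hood, the flow is two calls: fetch a Markdown rendering of the page from Jina's reader endpoint, then ask OpenAI for structured output against the user-defined fields. The Python sketch below is an illustrative approximation of what the HTML/JS app does; the CaseStudy model stands in for whatever fields the user defines, and it assumes the OpenAI SDK's structured-output parse helper.

import requests
from openai import OpenAI
from pydantic import BaseModel

class CaseStudy(BaseModel):
    # Hypothetical user-defined fields for illustration.
    title: str
    client_industry: str
    summary: str

def extract(url: str) -> CaseStudy:
    # 1. Retrieve a Markdown version of the page via Jina's reader endpoint.
    markdown = requests.get(f"https://r.jina.ai/{url}", timeout=30).text
    # 2. Ask OpenAI to extract the fields as structured output.
    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Extract the requested fields from the page content."},
            {"role": "user", "content": markdown},
        ],
        response_format=CaseStudy,
    )
    return completion.choices[0].message.parsed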

Continue reading “How Good Are LLMs at Generating Functional and Aesthetic UIs? An Experiment”

Can Claude Single Call and Zero Shot Do What Devin Can’t Do?

I remain skeptical about using LLMs in an autonomous, agentic manner. In my experience, for software development tasks, they are most useful in chat-driven development, where humans guide their behavior and work with them step by step. The Answer.ai team recently published a post about Devin, sharing multiple tasks where their AI agent failed. Devin is a collaborative AI teammate built to help ambitious engineering teams achieve more. According to the Answer.ai blog post, out of the 20 tasks they gave Devin, it failed at 14, succeeded at 3, and 3 were inconclusive. These results are quite disappointing. They have shared the complete task list in the appendix for anyone to try.

Many of the tasks they mentioned seemed achievable using AI assistants like Claude or ChatGPT, so I decided to experiment with Claude to complete one of them. I’ve increasingly been using Claude for coding-related tasks and have a paid subscription. This experiment uses Claude 3.5 Sonnet.

Continue reading “Can Claude Single Call and Zero Shot Do What Devin Can’t Do?”

Only ChatGPT Search Got It Right

I am using an open-source library called Docling to extract text from PDF documents. It was developed by the IBM research team, and the library works surprisingly well for my PDF documents.

from pathlib import Path
from docling.document_converter import DocumentConverter

source = "document.pdf"
converter = DocumentConverter()
result = converter.convert(source)
result.document.save_as_markdown(filename=Path("document.md"))

The code above generated a good-looking Markdown document. It cleanly extracted tables from my PDF. I am still benchmarking it with multiple PDFs, but it has been a good first experience with Docling.

Its README.md mentions that it uses an OCR engine, but it does not specify which one. Before diving into the source code, I decided to see if any GenAI search solutions could find the answer for me.

Continue reading “Only ChatGPT Search Got It Right”

Paper: Using Large Language Models to Promote Health Equity

I enjoyed reading this paper from the NEJM AI journal. This short (3-page) paper takes a positive view of large language models (LLMs) and discusses three potential LLM use cases that can make healthcare more equitable. As the paper mentions, 85% of the articles and papers on the equity-related impacts of LLMs focus on the harm LLMs can cause; only 15% focus on equity opportunities.

NEJM AI is a new journal on medical artificial intelligence and machine learning from the NEJM Group. The New England Journal of Medicine (NEJM), published by NEJM Group, is one of the oldest and most prestigious medical journals in the world.

In this paper, the authors discuss three LLM use cases:

  1. LLMs can improve the detection of bias.
  2. LLMs can create structured datasets relevant to health equity.
  3. LLMs can improve access to health information.

I won’t discuss the third use case because everyone understands the importance of improving access to health information. However, people often underestimate the effort required to build a useful and reliable AI assistant.

Having spent the last two years working with LLMs, I’ve frequently encountered discussions about their inherent biases from training data. This paper presents an innovative counter-perspective: using LLMs to detect human biases in healthcare settings. While traditional methods relied on simple keyword searches, LLMs can analyze clinical notes to identify subtle linguistic patterns, sentiment, and stereotypes that may indicate bias in patient care. For example, research has shown that doctors more frequently describe Black patients as ‘difficult’ compared to white patients. LLMs can help systematically identify these kinds of biased language patterns by treating medical texts as ‘artifacts’ of a biased healthcare ecosystem.

Similarly, if LLMs can be used to promote propaganda, they can also be powerful tools for detecting it. Their deep understanding of language patterns can be turned from a potential liability into an asset for identifying harmful biases.

The second use case, creating structured datasets, is already happening. Features like structured output have made it much easier to extract data from unstructured documents. However, I still see people writing custom scripts and tools to do this. I believe that just as coding experience has improved in chat applications with the introduction of features like Canvas and artifacts, structured text extraction will also become better. The workflow will be more UI-driven rather than relying on Python code execution with a code interpreter.

Can OpenAI o1 model analyze GitLab Postgres Schema?

In July 2022, I wrote a blog on GitLab’s Postgres schema design. I analyzed the schema design and documented some of the interesting patterns and lesser-known best practices. If my memory serves me correctly, I spent close to 20 hours spread over a couple of weeks writing the post. The blog post also managed to reach the front page of Hacker News.

I use large language models (LLMs) daily for my work. I primarily use the gpt-4o, gpt-4o-mini, and Claude 3.5 Sonnet models exposed via ChatGPT or Claude AI assistants. I tried the o1 model once when it was launched, but I couldn’t find the right use cases for it. At that time, I tried it for a code translation task, but it didn’t work well. So, I decided not to invest much effort in it.

OpenAI’s o1 series models are large language models trained with reinforcement learning to perform complex reasoning. These o1 models think before they answer, producing a long internal chain of thought before responding to the user.

Continue reading “Can OpenAI o1 model analyze GitLab Postgres Schema?”

PostgreSQL Enum Types with SQLModel and Alembic

While working on a product that uses FastAPI, SQLModel, Alembic, and PostgreSQL, I encountered a situation where I needed to add an enum column to an existing table. Since it took me some time to figure out the correct approach, I decided to document the process to help others who might face similar challenges.

Let’s start with a basic scenario. Assume you have a data model called Task as shown below:

import uuid
from datetime import datetime, timezone
from typing import Optional
from sqlmodel import SQLModel, Field

class Task(SQLModel, table=True):
    __tablename__ = "tasks"
    id: uuid.UUID = Field(default_factory=uuid.uuid4, primary_key=True)
    title: str | None = Field(default=None)
    description: str | None = Field(default=None)
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))

Using Alembic, you can generate the initial migration script with these commands:

alembic revision --autogenerate -m "Created task table"
alembic upgrade head

Now, let’s say you want to add a status field that should be an enum with two values – OPEN and CLOSED. First, define the enum class:

import enum
class TaskStatus(str, enum.Enum):
    OPEN = "open"
    CLOSED = "closed"

Then, add the status field to the Task class:

class Task(SQLModel, table=True):
    __tablename__ = "tasks"
    id: uuid.UUID = Field(default_factory=uuid.uuid4, primary_key=True)
    title: str | None = Field(default=None)
    description: str | None = Field(default=None)
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    status: Optional[TaskStatus] = None

If you run the Alembic migration commands at this point, it will define the status column as text. However, if you want to create a proper PostgreSQL enum type instead of storing the data as text, you’ll need to follow these additional steps:

  1. Install the alembic-postgresql-enum library:

pip install alembic-postgresql-enum

or if you’re using Poetry:

poetry add alembic-postgresql-enum

  2. Add the library import to your Alembic env.py file:

import alembic_postgresql_enum

  3. Modify the status field declaration in your Task class to explicitly use the enum type:
from sqlmodel import SQLModel, Field, Enum, Column


class Task(SQLModel, table=True):
    __tablename__ = "tasks"
    id: uuid.UUID = Field(default_factory=uuid.uuid4, primary_key=True)
    title: str | None = Field(default=None)
    description: str | None = Field(default=None)
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    status: Optional[TaskStatus] = Field(default=None, sa_column=Column(Enum(TaskStatus)))

Now you can run the Alembic commands to create a new PostgreSQL type for TaskStatus and use it for the column type:

alembic revision --autogenerate -m "Added status column in tasks table"
alembic upgrade head

To verify that the enum type was created correctly, connect to your PostgreSQL instance using psql and run the \dT+ command:

taskdb=# \dT+
                                          List of data types
 Schema |    Name     | Internal name | Size | Elements  |  Owner   | Access privileges | Description
--------+-------------+---------------+------+-----------+----------+-------------------+-------------
 public | taskstatus  | taskstatus    | 4    | OPEN +    | postgres |                   |
        |             |               |      | CLOSED   +|          |                   |

This approach ensures that your enum values are properly constrained at the database level, providing better data integrity than using a simple text field.

The One GitHub Copilot Feature I Use

A couple of days back, I posted that I prefer to use Chat-driven development using ChatGPT or Claude over using IDE-integrated LLM tools like GitHub Copilot. An old friend reached out and asked if there is any part of my development workflow where LLM IDE integration makes me more productive. It turns out there is one place where I still like to use GitHub Copilot with VS Code: writing Git commit messages after I have made changes. For me, a good clean Git history is important. It helps me understand why I made a change. I’m a lazy person, so I often end up writing poor commit messages.

Continue reading “The One GitHub Copilot Feature I Use”

Giving Microsoft Phi-4 LLM model a try

Microsoft has officially released the MIT-licensed Phi-4 model. It is available on Hugging Face: https://huggingface.co/microsoft/phi-4.

Phi 4 is a 14B parameter, state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets.

I wanted to give it a try, so I used Ollama on runpod.io. You can follow the instructions mentioned here: https://docs.runpod.io/tutorials/pods/run-ollama

I used the 4-bit quantized model on Ollama. You can also try the 8-bit and fp16 versions. As I mentioned in my last blog, 4-bit quantization strikes a good balance between performance and efficiency. I also tried the 8-bit quantized model, but both worked the same for me.
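
For reference, once Ollama is running on the pod, prompting the model from Python takes only a few lines. This is a generic sketch using the Ollama Python client and assumes the phi4 model has already been pulled; it is not specific to my runpod setup.

# Generic sketch using the Ollama Python client; assumes `ollama pull phi4`
# has been run and the Ollama server is reachable.
import ollama

response = ollama.chat(
    model="phi4",
    messages=[{"role": "user", "content": "Explain 4-bit quantization in one paragraph."}],
)
print(response["message"]["content"])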

Continue reading “Giving Microsoft Phi-4 LLM model a try”