Do Nothing Script Generator

I learnt about do-nothing scripting this week. Do-nothing scripting is a way to structure manual workflows into interactive scripts that guide the user step by step without automating the process immediately. By encapsulating each step in a function, this method ensures that no steps are skipped, reduces cognitive load, and provides a structured way to transition toward full automation over time.

You can convert your static documents to do-nothing scripts. Instead of reading a document, you interact with a script that prompts you for action, making it easier to maintain consistency and track progress. Eventually, you can replace these functions with real automation, one step at a time.
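
Here is a minimal sketch of what such a script could look like. The runbook below (a made-up "onboard a new user" procedure) and every step in it are hypothetical; the point is the structure: one function per manual step, each waiting for confirmation.

def wait():
    input("Press Enter when done...")

def create_ssh_key(context):
    # Manual step for now; later this can shell out to ssh-keygen directly.
    print(f"Run: ssh-keygen -t ed25519 -f {context['username']}_key")
    wait()

def add_to_inventory(context):
    print(f"Add {context['username']} to the team inventory sheet.")
    wait()

def notify_user(context):
    print(f"Email {context['username']} their onboarding instructions.")
    wait()

if __name__ == "__main__":
    context = {"username": input("Username: ")}
    for step in (create_ssh_key, add_to_inventory, notify_user):
        step(context)
    print("Done.")

To automate a step later, you only replace that one function's body; the rest of the script and the order of steps stay the same.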

Continue reading “Do Nothing Script Generator”

Running Large Language Models at Scale

I was watching a talk by Dylan Patel from SemiAnalysis, where he covered important techniques AI labs use to run inference at scale. I’ve shared my notes below and highly recommend checking out the full presentation.

There are two distinct phases in inference: prefill and decode. Prefill processes the initial prompt and is compute-intensive, while decode generates tokens iteratively and is memory bandwidth-intensive.

Several key techniques have emerged for efficient inference:

  • Continuous batching is essential for cost-effective operations. Rather than processing requests individually, continuous batching allows multiple user requests to be processed together, dramatically reducing costs compared to batch size one. This is particularly important when handling asynchronous user requests arriving at different times.
  • Disaggregated prefill separates the compute-intensive prefill phase from the bandwidth-intensive decode phase. Major providers implement this by using different accelerators for each phase, helping mitigate “noisy neighbor” problems and maintaining consistent performance under varying loads.
  • Context caching is an emerging optimization that avoids recomputing the key-value (KV) cache for frequently used prompts. While this requires significant CPU memory or storage, it can substantially reduce costs for applications that repeatedly reference the same context, such as legal document analysis. Google implemented this technique first; both OpenAI and Anthropic now offer it as well (a rough sketch of the idea follows this list).
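
To make the idea concrete, here is a rough Python sketch of prefix-level context caching. run_prefill and run_decode are hypothetical stand-ins for a real inference engine, not any provider's actual API.

import hashlib

def run_prefill(prefix: str) -> dict:
    # Stand-in for the compute-intensive prefill pass; a real engine would
    # return per-layer key/value tensors here.
    return {"prefix": prefix}

def run_decode(kv: dict, suffix: str) -> str:
    # Stand-in for iterative, bandwidth-bound token generation.
    return f"answer to {suffix!r} using {len(kv['prefix'])} cached context chars"

kv_cache_store = {}  # prompt-prefix hash -> precomputed KV cache

def generate(shared_prefix: str, user_suffix: str) -> str:
    key = hashlib.sha256(shared_prefix.encode()).hexdigest()
    if key not in kv_cache_store:
        kv_cache_store[key] = run_prefill(shared_prefix)   # pay prefill cost once
    return run_decode(kv_cache_store[key], user_suffix)    # later requests reuse it

# A long legal document is prefilled once; later questions reuse its KV cache.
contract = "...full text of a long contract..."
print(generate(contract, "What is the termination clause?"))
print(generate(contract, "Who are the parties?"))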

A critical challenge in large-scale deployments is managing “straggler” GPUs. As ByteDance discovered, even in high-end GPU clusters, individual chips can underperform due to the “Silicon Lottery” – natural variations in chip performance. In their case, a single underperforming GPU reduced cluster performance significantly, as training workloads are synchronous and are limited by the slowest component.

For organizations building inference infrastructure, managing memory bandwidth becomes a critical challenge. Running large models requires loading enormous numbers of parameters for each token generation, making memory bandwidth a key constraint for achieving desired tokens-per-second targets while serving multiple users.
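
As a rough illustration of why this matters, here is the back-of-envelope arithmetic. The model size and bandwidth figures are my assumptions for the example, not numbers from the talk.

# Upper bound on decode throughput at batch size 1: every generated token
# must stream all model weights from memory.
params = 70e9                # assumed 70B-parameter model
bytes_per_param = 2          # FP16 weights
bandwidth = 3.35e12          # ~3.35 TB/s of HBM, roughly an H100-class accelerator

weight_bytes = params * bytes_per_param          # ~140 GB of weights
max_tokens_per_sec = bandwidth / weight_bytes    # ~24 tokens/sec per sequence
print(f"{max_tokens_per_sec:.0f} tokens/sec upper bound at batch size 1")

Batching helps because the same weight read is amortized across many concurrent requests, which is exactly why continuous batching matters so much for cost.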

The infrastructure challenges compound when scaling to larger models, requiring careful consideration of hardware capabilities, batching strategies, and caching mechanisms to maintain performance and cost-effectiveness.

My Notes on Open Deep Researcher

OpenAI recently released a new agentic application called Deep Research. This tool is available exclusively to pro users with a $200 monthly subscription. It utilizes their upcoming o3 reasoning model, which is not yet available via API. According to OpenAI’s blog, their Deep Research agent system achieves a score of 26.6% on the Humanity’s Last Exam evaluation benchmark. However, comparing an agent system directly to language models may not be the most appropriate comparison. A more suitable comparison would have been against similar research tools like Perplexity or Google’s Gemini Deep Research tool.

In addition to the Humanity’s Last Exam benchmark results, OpenAI shared their performance on the GAIA benchmark. GAIA is a public benchmark designed to evaluate AI systems on real-world questions, and the Deep Research agentic system has achieved a new state of the art (SOTA), leading the external leaderboard.

Today, HuggingFace launched an open source initiative to replicate OpenAI’s DeepResearch capabilities. It’s worth noting that while Google released their experimental Deep Research model in Gemini in December 2024, there weren’t any significant replication attempts at that time.

According to the HuggingFace team’s blog, they developed their prototype in under 24 hours and have improved upon the previous state of the art, advancing from Magentic-One’s 46% to 54% on the validation set.

Continue reading “My Notes on Open Deep Researcher”

What I Learned Building RAG-Based LLM Assistants

Building AI assistants using Retrieval Augmented Generation (RAG) taught me valuable lessons about user expectations and technical challenges. Here’s what I discovered along the way.

1. The Challenge of Multiple Assistants

Users often struggle when dealing with multiple AI assistants. They frequently ask questions to the wrong assistant or expect a single assistant to handle everything. We solved this by creating specific URLs for each assistant and adding clear chat placeholders to show which assistant they’re talking to. We also implemented role-based access control (RBAC) and a central homepage to help users navigate between assistants.

2. The ChatGPT Comparison

Users naturally compare any AI assistant with ChatGPT. They expect similar features like handling thank you messages, follow-up questions, and list-based queries. We enhanced our RAG implementation (RAG++) to better match these expectations.

3. Managing Conversations

Single-conversation interfaces create several challenges. Long conversations slow down page loading and can affect answer accuracy. Users rarely organize their chats effectively. We addressed this by:

  • Implementing automatic context management
  • Setting conversation history limits (a rough sketch follows this list)
  • Creating automatic chat organization features
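
Here is a minimal sketch of the history-limiting idea. The token budget and the crude word-count tokenizer are placeholders, not our production values; a real implementation would use the model's tokenizer.

MAX_HISTORY_TOKENS = 2000

def rough_token_count(text: str) -> int:
    return len(text.split())

def trim_history(messages: list[dict]) -> list[dict]:
    # Keep the most recent turns that fit within the token budget.
    kept, used = [], 0
    for message in reversed(messages):           # walk from the newest turn backwards
        cost = rough_token_count(message["content"])
        if used + cost > MAX_HISTORY_TOKENS:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "hello " * 2500},        # old, oversized turn
    {"role": "user", "content": "latest question"},
]
print(len(trim_history(history)))   # -> 1, only the most recent turn fits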

4. Real-Time Information Access

Users want current information and often fear missing out. They still turn to search engines for real-time updates. To address this, we integrated search APIs and added an explicit search mode similar to ChatGPT’s browsing feature.

5. Setting Clear Boundaries

Users often don’t understand what RAG-based assistants can and cannot do. This leads to questions outside the assistant’s capabilities and mismatched expectations. Clear communication about limitations helps manage these expectations.

7. Handling “I Don’t Know” Answers

In RAG applications, if the assistant is unable to answer, you typically show some variant of “I don’t know” in the response. Users gave us feedback that they dislike it when assistants say “I don’t know.” We solved this by showing them something useful instead. For example, when a user asked for a case study on a unified Visa platform, we showed the following answer; a rough sketch of this fallback appears after the example.

I couldn't find a specific case study on a unified Visa platform in the provided context. However, for related insights on payment systems and financial services integration, you might find the following case studies relevant:

- Case Study 1
- Case Study 2
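
A rough sketch of this fallback behavior is below. The relevance threshold and the shape of the retriever results are illustrative, not our production code.

RELEVANCE_THRESHOLD = 0.75

def build_answer(question: str, hits: list[dict], generated_answer: str | None) -> str:
    # hits are retriever results such as {"title": ..., "score": ...}
    if generated_answer and any(h["score"] >= RELEVANCE_THRESHOLD for h in hits):
        return generated_answer
    related = "\n".join(f"- {h['title']}" for h in hits[:3])
    return (
        "I couldn't find a specific answer in the provided context. "
        "However, these related documents may be relevant:\n" + related
    )

# No confident hit for the question, so related case studies are listed instead.
print(build_answer(
    "Give me a case study on a unified Visa platform",
    [{"title": "Case Study 1", "score": 0.41}, {"title": "Case Study 2", "score": 0.38}],
    None,
))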

7. Improving Question Quality

Many users struggle to ask effective questions. We helped by:

  • Generating follow-up questions
  • Implementing query rewriting (a rough sketch follows this list)
  • Teaching basic prompt engineering skills
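
Here is a rough sketch of LLM-based query rewriting. The prompt and the model name are illustrative; this is not our production prompt.

from openai import OpenAI

client = OpenAI()

def rewrite_query(question: str, recent_turns: list[str]) -> str:
    # Turn a terse or ambiguous question into a self-contained search query.
    history = "\n".join(recent_turns)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Rewrite the user's question as a single, "
             "self-contained search query. Resolve pronouns using the conversation."},
            {"role": "user", "content": f"Conversation:\n{history}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content.strip()

# "What about its pricing?" becomes something like
# "What is the pricing of the product discussed above?"
print(rewrite_query("What about its pricing?", ["user: Tell me about the payments product"]))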

8. Knowledge Base Management

Real-time document indexing in RAG applications is a common user expectation. For each assistant, we found it helpful to:

  • Display knowledge base statistics
  • Show when knowledge base indexes were last updated
  • Provide document filtering options when making search queries; we extract metadata from documents during indexing (a small filtering sketch follows this list)
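
Here is a small sketch of metadata-based filtering at query time. The field names and document types are illustrative, not tied to a specific vector store.

from dataclasses import dataclass

@dataclass
class DocChunk:
    text: str
    doc_type: str        # e.g. "case_study", "policy", "faq" - extracted during indexing
    updated: str         # ISO date of the source document

def filter_chunks(chunks: list[DocChunk], doc_type: str | None = None,
                  updated_after: str | None = None) -> list[DocChunk]:
    # Narrow the candidate set before (or alongside) semantic search.
    result = chunks
    if doc_type:
        result = [c for c in result if c.doc_type == doc_type]
    if updated_after:
        result = [c for c in result if c.updated >= updated_after]
    return result

chunks = [
    DocChunk("Visa integration rollout...", "case_study", "2024-08-12"),
    DocChunk("Leave policy...", "policy", "2023-01-05"),
]
print(filter_chunks(chunks, doc_type="case_study"))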

9. Interface Improvements

Small UI features made a big difference. I have not seen these features in public assistants like ChatGPT or Claude.

  • Adding conversation statistics, including the number of queries in a conversation and feedback analysis
  • Showing metadata for each message, such as token count, time to first token, tokens/sec, and total time to generate the answer (a small sketch of how we record these follows this list)
  • Showing query timestamps
  • Replaying an entire conversation
  • Supporting multiple tabs and split windows
  • Regenerating with and without history
  • Providing an editor mode along with the chat mode
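
Here is a small sketch of how the per-message metadata can be recorded by wrapping the streaming call. stream_answer is a hypothetical stand-in for the real streaming LLM call.

import time

def stream_answer(prompt: str):
    # Stand-in that yields tokens; replace with the actual streaming model call.
    for token in ["The", " answer", " is", " ..."]:
        time.sleep(0.05)
        yield token

def answer_with_stats(prompt: str) -> dict:
    start = time.monotonic()
    first_token_at = None
    tokens = []
    for token in stream_answer(prompt):
        if first_token_at is None:
            first_token_at = time.monotonic()
        tokens.append(token)
    total = time.monotonic() - start
    return {
        "answer": "".join(tokens),
        "token_count": len(tokens),
        "time_to_first_token_s": round(first_token_at - start, 3),
        "tokens_per_sec": round(len(tokens) / total, 2),
        "total_time_s": round(total, 3),
    }

print(answer_with_stats("example question"))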

These lessons continue to shape how we build and improve our RAG-based assistants. Understanding user needs and expectations helps create more effective AI tools.

Case study: Building an LLM-based workflow configuration generator for a low-code product

I run a small indie consulting firm that specializes in building LLM-based solutions. Recently, I worked on an LLM solution where we had to generate JSON-based workflow configurations for a low-code product. Think of it like AWS Step Functions, where you write your business workflow in a JSON configuration file. In this post, I will share lessons learned while building this solution.

Below is a simplified example workflow configuration. In their case, a step can have close to 50 fields.

{
    "name" : "Name of the workflow",
    "description" : "Description of the workflow",
    "steps" : {
        "step_1" : {
            "async" : false,
            "sql_query_name" : "Name of the query to execute",
            "transform_request" : "Can be a Go template",
            "transform_response" : "Can be a Go template",
            "steps" : {
                "step_1_1" : {

                },
                "step_1_2" : {

                }
            }
        },
        "step_2" : {
            "async" : true,
            "function_to_execute" : "Name of the query to execute",
            "transform_request" : "Can be a Go template",
            "transform_response" : "Can be a Go template",
            "steps" : {
                "step_2_1" : {

                },
                "step_2_2" : {

                }
            }
        }
    }
}

Important points to note in the above JSON configuration:

  • The workflow configuration is recursive. A step can have steps, and those steps can have further steps, and so on.
  • Step names follow the pattern ^[a-z]+(_[a-z]+)*$ (a small validation sketch follows this list).
  • Certain JSON attributes require us to generate valid Go templates. These Go templates use some reusable library functions.
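
Here is a small sketch of the kind of validation we can run on a generated configuration before accepting it: check the step-name pattern and recurse into nested steps. It illustrates the checks rather than reproducing the client's actual validator, and the step names in the example are made up.

import re

STEP_NAME = re.compile(r"^[a-z]+(_[a-z]+)*$")

def validate_steps(steps: dict, path: str = "") -> list[str]:
    errors = []
    for name, body in steps.items():
        where = f"{path}/{name}"
        if not STEP_NAME.match(name):
            errors.append(f"invalid step name: {where}")
        nested = body.get("steps")
        if nested:                      # the configuration is recursive
            errors.extend(validate_steps(nested, where))
    return errors

workflow = {
    "fetch_customer": {"async": False, "steps": {"load_profile": {}}},
    "Send-Email": {},
}
print(validate_steps(workflow))   # -> ['invalid step name: /Send-Email']
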
Continue reading “Case study: Building an LLM-based workflow configuration generator for a low-code product”

How Good Are LLMs at Generating Functional and Aesthetic UIs? An Experiment

I conducted an LLM training session last week. To teach attendees about structured output, I built an HTML/JS web application. This application allows users to input a webpage and specify fields they want to extract. The web app uses OpenAI’s LLM to extract the relevant information. Before making the OpenAI call, the app first sends a request to Jina to retrieve a markdown version of the webpage. Then, the extracted markdown is passed to OpenAI for further processing. You can access the tool here: Structured Extraction Tool.
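
The tool itself is HTML/JS, but here is a rough Python sketch of the same flow: fetch a markdown version of the page through Jina's reader endpoint, then ask OpenAI for a structured extraction. The field names and model are illustrative.

import requests
from pydantic import BaseModel
from openai import OpenAI

class CaseStudyFields(BaseModel):
    company: str
    industry: str
    outcome: str

def extract(url: str) -> CaseStudyFields:
    markdown = requests.get(f"https://r.jina.ai/{url}").text   # webpage as markdown
    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Extract the requested fields from the page."},
            {"role": "user", "content": markdown},
        ],
        response_format=CaseStudyFields,     # structured output against the schema
    )
    return completion.choices[0].message.parsed

print(extract("https://example.com/case-study"))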

Note: The tool will prompt you to enter your OpenAI key, which is stored in your browser’s local storage.

Below, I will demonstrate the app’s workflow using screenshots. The user starts by entering the webpage URL. In this example, I want to extract some information from a case study.

Next, users specify the fields they want to extract. We also support field templates for reusability. For each field, users provide a name, description, and its type.

After specifying the fields, users press the Extract Data button. The app displays the extracted data, along with token usage and cost.

Continue reading “How Good Are LLMs at Generating Functional and Aesthetic UIs? An Experiment”

Can Claude Single Call and Zero Shot Do What Devin Can’t Do?

I remain skeptical about using LLMs in an autonomous, agentic manner. In my experience, for software development tasks, they are most useful in chat-driven development, where humans guide their behavior and work with them step by step. The Answer.ai team recently published a post about Devin, sharing multiple tasks where their AI agent failed. Devin is marketed as a collaborative AI teammate built to help ambitious engineering teams achieve more. According to the Answer.ai blog post, out of 20 tasks they gave Devin, it failed at 14, succeeded at 3, and 3 were inconclusive. These results are quite disappointing. They have shared the complete task list in the appendix for anyone to try.

Many of the tasks they mentioned seemed achievable using AI assistants like Claude or ChatGPT, so I decided to experiment with Claude to complete one of them. I’ve increasingly been using Claude for coding-related tasks and have a paid subscription. This experiment uses Claude 3.5 Sonnet.

Continue reading “Can Claude Single Call and Zero Shot Do What Devin Can’t Do?”

Only ChatGPT Search Got It Right

I am using an open-source library called Docling to extract text from PDF documents. It was developed by the IBM research team, and the library works surprisingly well for my PDF documents.

from pathlib import Path
from docling.document_converter import DocumentConverter

source = "document.pdf"                      # path (or URL) of the input PDF
converter = DocumentConverter()
result = converter.convert(source)           # parse the PDF: layout, tables, OCR where needed
result.document.save_as_markdown(filename=Path("document.md"))  # write the Markdown output

The code above generated a good-looking Markdown document. It cleanly extracted tables from my PDF. I am still benchmarking it with multiple PDFs, but it has been a good first experience with Docling.

Its README.md mentions that it uses an OCR engine, but it does not specify which one. Before diving into the source code, I decided to see if any GenAI search solutions could find the answer for me.

Continue reading “Only ChatGPT Search Got It Right”

Paper: Using Large Language Models to Promote Health Equity

I enjoyed reading this paper from the NEJM AI journal. This short (3-page) paper takes a positive view of large language models (LLMs) and discusses three potential LLM use cases that can make healthcare more equitable. As the paper mentions, 85% of the articles and papers on equity-related impacts of LLMs focus on the harm they can cause; only 15% focus on equity opportunities.

NEJM AI is a new journal on medical artificial intelligence and machine learning from the NEJM Group. The New England Journal of Medicine (NEJM), published by NEJM Group, is one of the oldest and most prestigious medical journals in the world.

In this paper, the authors discuss three LLM use cases:

  1. LLMs can improve the detection of bias.
  2. LLMs can create structured datasets relevant to health equity.
  3. LLMs can improve access to health information.

I won’t discuss the third use case because everyone understands the importance of improving access to health information. However, people often underestimate the effort required to build a useful and reliable AI assistant.

Having spent the last two years working with LLMs, I’ve frequently encountered discussions about their inherent biases from training data. This paper presents an innovative counter-perspective: using LLMs to detect human biases in healthcare settings. While traditional methods relied on simple keyword searches, LLMs can analyze clinical notes to identify subtle linguistic patterns, sentiment, and stereotypes that may indicate bias in patient care. For example, research has shown that doctors more frequently describe Black patients as ‘difficult’ compared to white patients. LLMs can help systematically identify these kinds of biased language patterns by treating medical texts as ‘artifacts’ of a biased healthcare ecosystem.
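
As a toy illustration of this kind of analysis (the prompt, model name, and example note below are mine, not from the paper), an LLM can be asked to flag potentially stigmatizing phrases in a clinical note:

from openai import OpenAI

client = OpenAI()

def flag_stigmatizing_language(note: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You review clinical notes. List any phrases "
             "that convey negative stereotypes or blame toward the patient, with a short "
             "explanation for each. Reply 'none found' if there are none."},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content

print(flag_stigmatizing_language("Patient is a poor historian and non-compliant with meds."))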

Similarly, if LLMs can be used to promote propaganda, they can also be powerful tools for detecting it. Their deep understanding of language patterns can be turned from a potential liability into an asset for identifying harmful biases.

The second use case, creating structured datasets, is already happening. Features like structured output have made it much easier to extract data from unstructured documents. However, I still see people writing custom scripts and tools to do this. I believe that just as coding experience has improved in chat applications with the introduction of features like Canvas and artifacts, structured text extraction will also become better. The workflow will be more UI-driven rather than relying on Python code execution with a code interpreter.