I recently came across an insightful article by Drew Breunig that introduces a compelling framework for categorizing the use cases of Generative AI (Gen AI) and Large Language Models (LLMs). He classifies these applications into three distinct categories: Gods, Interns, and Cogs. Each bucket represents a different level of automation and complexity, and it’s fascinating to consider how these categories are shaping the AI landscape today.
1. Gods: Autonomous Agents
“Gods” represent fully autonomous agents capable of replacing humans in specific tasks. These agents must be highly reliable, with minimal error rates, because they operate independently without human oversight. We have yet to see such agents in widespread, real-world use. The development of these “Gods” depends on significant advances in capabilities like planning, decision-making, and tool integration.
For instance, think of an AI agent tasked with managing entire supply chains or healthcare systems without human intervention. While this is theoretically possible, today’s systems lack the sophistication and dependability required for such complex tasks. Before we can deploy these agents at scale, we’ll need comprehensive benchmarks to evaluate their real-world performance. Experts predict that it may take at least another decade to reach this level of AI maturity.
2. Interns: AI Assistants with Human Oversight
“Interns” are where we see the most practical and widely adopted AI applications today. These are AI assistants that work alongside humans, operating within workflows where human oversight is essential. GitHub Copilot is a prime example—an AI tool that assists developers by generating code suggestions, boosting productivity while leaving the final decision in the hands of the human user.
Most Retrieval-Augmented Generation (RAG) systems and generative search tools also fall into this category. These AI systems assist in content creation, summarization, and even research, but they rely on human oversight to verify accuracy and relevance. Many organizations are heavily investing in this area because of its proven ability to enhance productivity without the risks of full automation. In fact, I am currently working on several such “intern” systems myself, designed to support tasks ranging from data analysis to content generation.
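To make the “intern” pattern concrete, here is a minimal Python sketch of the retrieval step of a RAG system. Everything in it is illustrative: simple word overlap stands in for a real embedding search, and the assembled prompt would be sent to an LLM whose answer a human then reviews before it is used.

```python
import re

def tokens(text):
    """Lowercase and split text into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=2):
    """Score documents by token overlap with the query; return the top k."""
    q = tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt; a human verifies the model's answer downstream."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The invoice total is due within 30 days.",
    "Our office is closed on public holidays.",
    "Late invoice payments incur a 2% fee.",
]
print(build_prompt("When is the invoice payment due?", docs))
```

In a production system the overlap scoring would be replaced by a vector store and the prompt handed to an LLM, but the shape stays the same: retrieve, ground, generate, then let a human check.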
One real-world example is customer service chatbots that handle routine queries but escalate complex issues to human agents. These AI “interns” improve efficiency, but still need human intervention when tasks become too nuanced.
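A tiny sketch of that escalation logic, with a made-up `KNOWN_INTENTS` lookup standing in for a real intent classifier:

```python
# Hypothetical intent table; a real chatbot would use a trained classifier.
KNOWN_INTENTS = {
    "reset password": "You can reset your password from account settings.",
    "business hours": "We are open 9am to 5pm, Monday through Friday.",
}

def answer_or_escalate(query: str) -> tuple[str, str]:
    """Answer if the query matches a known intent; otherwise hand off to a human."""
    for intent, reply in KNOWN_INTENTS.items():
        if intent in query.lower():
            return ("bot", reply)
    return ("human", f"Escalated to a live agent: {query!r}")
```

The routing decision is the essential part: the bot handles only what it recognizes, and everything nuanced lands in a human queue.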
3. Cogs: Specialized, Single-Task AI Functions
At the other end of the spectrum are the “Cogs”—small, focused AI components that perform specific tasks reliably and efficiently. These are the building blocks of more complex AI systems, often operating behind the scenes. For example, a cog might be an AI model that excels at extracting structured data from unstructured documents, such as pulling key fields from a PDF invoice or summarizing long articles.
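A cog like this can be as simple as a function with a fixed contract: raw text in, structured fields out. Here is a hedged sketch using regular expressions—the field names and patterns are invented for illustration, and a production cog might put a fine-tuned model behind the same interface:

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Pull a few key fields out of raw invoice text with regular expressions."""
    patterns = {
        "invoice_number": r"Invoice\s*#?\s*(\w+)",
        "total": r"Total:?\s*\$?([\d,]+\.\d{2})",
        "due_date": r"Due\s*(?:Date)?:?\s*(\d{4}-\d{2}-\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

sample = "Invoice #INV42\nDue Date: 2024-09-30\nTotal: $1,250.00"
print(extract_invoice_fields(sample))
```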
In RAG systems, cogs play a crucial role in ingestion pipelines, where they enhance and enrich documents by adding metadata, performing text classification, or applying summarization techniques. While they may seem less glamorous compared to the ambitious “God” agents, cogs are indispensable in creating robust, compound AI systems. Think of them as the gears that keep the larger machine running smoothly.
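An ingestion pipeline is then just cogs chained together. A toy sketch, with keyword rules and truncation standing in for real classification and summarization models:

```python
def classify(text: str) -> str:
    """Toy topic-classifier cog: keyword rules stand in for a small model."""
    if "invoice" in text.lower() or "payment" in text.lower():
        return "finance"
    return "general"

def summarize(text: str, max_words: int = 8) -> str:
    """Toy summarizer cog: truncation stands in for an abstractive model."""
    words = text.split()
    return " ".join(words[:max_words]) + ("…" if len(words) > max_words else "")

def enrich(doc: str) -> dict:
    """Run each cog in turn and attach its output as metadata."""
    return {
        "text": doc,
        "word_count": len(doc.split()),
        "topic": classify(doc),
        "summary": summarize(doc),
    }
```

Each cog is independently testable and replaceable, which is exactly what makes compound systems built from them robust.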
One notable example is email spam filters—an AI cog that focuses on the single task of identifying spam emails based on patterns, keywords, and historical data. It’s a narrowly defined job, but one that is critical to daily communication systems.
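Even a crude version of that cog fits in a few lines. In this sketch a made-up list of signal words replaces the learned model, but the single-task shape is the same:

```python
# Invented signal words for illustration; a real filter learns these from data.
SPAM_SIGNALS = {"winner", "prize", "free", "urgent", "click"}

def spam_score(message: str) -> float:
    """Fraction of known spam-signal words present in the message."""
    words = set(message.lower().split())
    return len(words & SPAM_SIGNALS) / len(SPAM_SIGNALS)

def is_spam(message: str, threshold: float = 0.4) -> bool:
    """Flag the message as spam when enough signal words appear."""
    return spam_score(message) >= threshold
```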
This framework of Gods, Interns, and Cogs provides a useful lens through which we can understand the varying levels of AI capabilities and their potential impact. While fully autonomous “God” agents are still a long way off, “Interns” and “Cogs” are already delivering significant value, driving productivity and innovation across industries. The future of AI will likely see continued growth in these categories as we move toward more autonomous and capable systems.