Dawn of the Agents

AutoGPT, BabyAGI, and the arrival of autonomous AI

Vivek Ramaswami
and
Sabrina Wu
Apr 27, 2023

Please share the love with your friends and colleagues who want to stay up to date on the latest in artificial intelligence and generative apps!๐Ÿ™๐Ÿค–



What Caught Our Attention In AI?

๐Ÿง  Deepmind and Google Brain Join Forces

๐Ÿค– Microsoft Readies AI Chip as Machine Learning Costs Surge

โš•๏ธGenerative AI for Healthcare Takes Center Stage at HIMSS

๐Ÿ’ก Databricks Dolly 2.0 Announces One of the First Open Instruction Tuned-LLMs

๐Ÿ‘ถ BabyAGI and AutoGPT take the world by a storm


One of the biggest highlights in the past two weeks was how AutoGPT and BabyAGI have taken the world by storm. Imagine being able to ask your computer to do something like โ€œorder me a pizzaโ€ or โ€œbook me the cheapest flights to Mauiโ€ and then having the program accomplish the task, no additional steps required.

Well, that’s the promise of autonomous AI agents. Autonomous agents can be programmed to iteratively start, run, and complete different tasks, employing “human-like” notions such as prioritizing steps and determining how best to accomplish a task. While the concept has been around for several years, going back to early “AI assistants” (remember the original x.ai?), agents are now seeing a resurgence thanks to the recent explosion of large language models and the technical capabilities of GPT-4.

While weโ€™re still in the early innings of seeing autonomous agents being deployed at scale or in enterprise production, weโ€™re excited by the promise of the technology. One only has to look at the massive growth in star count, essentially GitHubโ€™s version of a โ€œlikeโ€ button, to get a sense of the popularity of these projects.

AutoGPT achieved 100K stars on GitHub in just a few weeksโ€ฆ

How do agents differ from “classic” AI?

In the context of AI and foundation models, agents refer to computer programs or systems designed to perform specific tasks without human intervention. Today, the way most of us interact with AI is by prompting ChatGPT. Let’s go back to the example above of booking a flight from New York City to Maui. Today, a user cannot actually complete the action of booking a flight through ChatGPT. Instead, they would likely enter several prompts into ChatGPT, such as:

  • What are the best times of the year to go to Maui?

  • What airlines have direct flights from New York City to Maui?

  • Where can I book the cheapest direct flight?

After querying ChatGPT for relevant information, the user would then have to go to an external travel site to book the flight themselves. If they have access to the Expedia ChatGPT plugin, they will be presented with the relevant link within their conversation window to book a flight on Expedia.

Agents take this experience one step further by automating the entire workflow.

The promise of autonomous agents is that they only require an initial prompt (i.e. the objective) from the user: โ€œbook me the cheapest flight from New York City to Mauiโ€. From there, the agent will build a series of tasks that it can self-execute to accomplish the objective. For example, armed with the initial prompt above and enough information, an agent may create a task list like this:

  1. Google the best months of the year to visit Maui

  2. Survey the five airlines that have direct flights from NYC to Maui during those months

  3. Select the lowest-cost direct flight across those airlines for a week-long stay

  4. Input travelerโ€™s information and credit card details into the airline website to book a flight

  5. Send flight confirmation details to the userโ€™s email address

Of course, this example assumes the agent has access to the user’s payment details and other relevant information (which we believe will increasingly happen), but the key is that the agent defined a list of tasks and priorities on its own and fulfilled the original objective without needing any further prompting from the user.

How do agents actually work?

I. Introduction of Agents:

The concept of agents has been around for over a decade, typically in the software context. Agents can be classified into different types based on their characteristics, such as whether they are reactive or proactive, whether they have a fixed or dynamic environment, and whether they are single or multi-agent systems (working together to achieve tasks, or operating as a single agent). Importantly, not all agents are autonomous agents, so letโ€™s quickly level set on what an agent is and how different types of software agents work.

Agent Characteristics:

  • Reactive agents: Respond to their environment and take actions based on stimuli.

  • Proactive agents: Take initiative and perform an action based on prediction.

Environments in which Agents Operate:

  • Fixed environment: Static set of rules that do not change.

  • Dynamic environment: Rules are changing and the agent needs to adapt to new situations.

Some Examples of Types of Agents:

  • Conversational Agents: Simulate human conversation and can be used to answer questions, provide information, schedule appointments, set reminders, etc.

  • Recommendation Agents: Designed to provide personalized recommendations based on user data and behavior.

  • Task-oriented Agents: Perform specific tasks like booking a reservation. Knowledge-based agents are also task-oriented and designed to provide answers to questions based on a database of knowledge.

  • Autonomous Agents or “Zero-Shot” Agents: AI systems that can act independently of human control, making decisions and taking actions based on their own internal state and the external environment.

II. Putting Agents to Work:

Letโ€™s take BabyAGI as an example. BabyAGI is the brainchild (brainbaby?) of Yohei Nakajima, a venture capitalist and AI builder based in the Seattle area. Originally spawned out of a desire to create an โ€œAI founderโ€ and with inspiration from the #HustleGPT movement, Yohei built BabyAGI is a simplified AI Agent in Python script that leverages OpenAIโ€™s GPT-4 language model, an open-source embedding database called Chroma (previously leveraging Pinecone vector search), and the LangChain framework to perform a wide range of tasks across different domains.

The BabyAGI script works by running an infinite loop that follows the steps below:

  1. Pulls the first task from the task list.

  2. Sends the task to the execution agent, which uses OpenAI's API to complete the task based on the context.

  3. Enriches the result and stores it in Chroma.

  4. Creates new tasks and reprioritizes the task list based on the objective and the result of the previous task.

BabyAGIโ€™s system works based on three agents that are working together:

  1. Execution Agent - core system that utilizes OpenAIโ€™s APIs to process the tasks. Two key parameters: the objective and the task.

  2. Task Creation Agent - uses OpenAI’s API to create new tasks based on the current objective and previous tasks. Four key parameters: the objective, the result of the previous task, the task description, and the current task list.

  3. Prioritization Agent - uses OpenAIโ€™s API to prioritize the task list. One key parameter: the ID of the current task.
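To make this concrete, below is a much-simplified Python sketch of how the loop and its three agents fit together. This is not the actual BabyAGI source: the llm() helper is a hypothetical stand-in for a call to OpenAI’s GPT-4 API, and a plain Python list stands in for the Chroma vector store.

```python
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to OpenAI's GPT-4 API."""
    raise NotImplementedError

def execution_agent(objective: str, task: str, context: list) -> str:
    # Complete one task, conditioned on the objective and prior results.
    return llm(f"Objective: {objective}\nContext: {context}\nComplete this task: {task}")

def task_creation_agent(objective: str, result: str, task: str, pending: deque) -> list:
    # Propose new tasks based on the last result and what is already queued.
    response = llm(
        f"Objective: {objective}\nLast task: {task}\nResult: {result}\n"
        f"Pending tasks: {list(pending)}\n"
        "Return new tasks, one per line, that move toward the objective."
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def prioritization_agent(objective: str, pending: deque) -> deque:
    # Reorder the queue so the most useful task runs next.
    response = llm(
        f"Objective: {objective}\nReprioritize these tasks, one per line:\n{list(pending)}"
    )
    return deque(line.strip() for line in response.splitlines() if line.strip())

def run(objective: str, first_task: str) -> None:
    tasks = deque([first_task])
    results = []  # stand-in for the Chroma vector store
    while tasks:                                            # BabyAGI loops indefinitely
        task = tasks.popleft()                              # 1. pull the next task
        result = execution_agent(objective, task, results)  # 2. execute it
        results.append(result)                              # 3. store the enriched result
        tasks.extend(task_creation_agent(objective, result, task, tasks))
        tasks = prioritization_agent(objective, tasks)      # 4. create and reprioritize
```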

Overview of the BabyAGI Agent System

III. The Stack:

In the context of BabyAGI, the stack contains a few key components that allow the agents to work. We expect these systems will be relevant across any use of agents.

  • The Model - An LLM sits at the core of the stack, responsible for completing tasks and generating new tasks based on the completed results. BabyAGI calls OpenAI’s GPT-4 API.

  • The Vector Database - The vector search platform provides search and storage capabilities for retrieving task-related data and results. Storing results in the database also lets agents reference them as context in future tasks. Vector DBs include Chroma, Pinecone, Weaviate, and others (a short Chroma example follows this list).

  • The Tooling Framework - LangChain is the framework used to enhance system capabilities around task completion, agent-based decision-making, and contextual awareness of data.
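To illustrate the vector-database piece, here is a minimal sketch of storing and retrieving task results with Chroma’s Python client (API as of early 2023; the collection name and documents are hypothetical):

```python
import chromadb

client = chromadb.Client()
collection = client.create_collection(name="task_results")

# Store a completed task's result so future tasks can retrieve it as context.
collection.add(
    documents=["Direct NYC-to-Maui flights are cheapest in late April."],
    metadatas=[{"task": "research_flight_prices"}],
    ids=["task-1"],
)

# Retrieve the most semantically relevant prior results for the next task.
context = collection.query(
    query_texts=["When should I book a flight to Maui?"],
    n_results=3,
)
```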

Other Agents in the Wild

Today there are a handful of projects that have catalyzed resurgent interest in AI agents, most notably AutoGPT and BabyAGI (referenced above), but also more “niche” projects like the Westworld simulation and CAMEL.

AutoGPT

The first autonomous agent to burst onto the scene was AutoGPT, which was released as an open-source project on March 30th by game developer Toran Bruce Richards. Billed as an โ€œexperimental open-source attempt to make GPT-4 fully autonomousโ€, AutoGPT builds on ChatGPTโ€™s framework, but essentially pairs GPT with a companion robot that instructs GPT on what actions to take. The companion robot receives instructions from the user and uses GPT and several APIs to carry out the necessary steps to achieve the desired goal. In this way, AutoGPT relies on self-prompting to chain together โ€œmultiple LLM thoughtsโ€ to achieve a desired goal.

AutoGPT is not yet an off-the-shelf application that any person can just start using. It requires some technical know-how, and users need to connect to OpenAI’s API and set up a token-based payment arrangement. That hasn’t slowed its popularity among developers, however; AutoGPT surpassed 100K stars on GitHub in just a few weeks and has amassed a large and passionate community across Discord and Twitter.


Westworld Simulation

Researchers from Stanford and Google created an interactive sandbox environment with 25 generative AI agents that can simulate human behavior. This is made possible by an agent architecture that extends an LLM with three important elements:

  1. Memory and Retrieval: A memory stream contains a list of observations for each agent, each stamped with the time of that memory. Memories are retrieved based on recency, importance, and relevance (scored roughly as in the sketch after this list).

  2. Reflection: High-level abstract thoughts to help agents make inferences. Synthesizes memories into higher-level inferences over time, enabling the agent to draw conclusions about itself and others to better guide its behavior.

  3. Planning: Translates conclusions to the current environment and creates action plans. Agents can create actions based on the plan and can react and update the plan according to the other observations in the memory stream.
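Here is a rough sketch of that retrieval scoring. The structure (a weighted sum of recency, importance, and relevance, with an exponential recency decay) follows the paper’s description; the decay factor, field names, and equal weighting here are illustrative assumptions:

```python
import math
import time

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieval_score(memory, query_embedding, now=None):
    """Score one memory as recency + importance + relevance (equal weights assumed)."""
    now = now or time.time()
    hours_since_access = (now - memory["last_accessed"]) / 3600
    recency = 0.995 ** hours_since_access    # exponential decay over elapsed hours
    importance = memory["importance"] / 10   # LLM-assigned 1-10 score, normalized
    relevance = cosine_similarity(query_embedding, memory["embedding"])
    return recency + importance + relevance

# The agent conditions its next action on the top-k memories by this score.
```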

CAMEL (Communicative Agents for โ€œMindโ€ Exploration of Large Scale Language Model)

CAMEL proposes a role-playing framework in which two agents communicate with one another. Solving tasks in the real world typically requires multiple steps, so the framework involves a few key components:

  1. AI user agent: Gives instructions to the AI assistant agent.

  2. AI assistant agent: Follows the AI user’s instructions and responds with solutions to the task.

In this setup there is also a task-specifier agent that brainstorms tasks for the AI user and the AI assistant. This agent also helps write task prompts without the human user having to define them.
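Below is a rough Python sketch of the role-playing loop, assuming a hypothetical chat() helper in place of a real LLM API; the termination token is illustrative of how CAMEL-style sessions signal completion:

```python
def chat(system_prompt: str, history: list) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    raise NotImplementedError

def role_play(task: str, max_rounds: int = 10) -> list:
    user_system = f"You are the AI user. Give one instruction at a time to solve: {task}"
    assistant_system = f"You are the AI assistant. Carry out each instruction for: {task}"
    history = []
    for _ in range(max_rounds):
        instruction = chat(user_system, history)    # AI user issues the next instruction
        history.append(("ai_user", instruction))
        if "TASK_DONE" in instruction:              # user agent signals completion
            break
        solution = chat(assistant_system, history)  # AI assistant responds with a solution
        history.append(("ai_assistant", solution))
    return history
```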

What are the limitations today?

While the promise of autonomous agents is immense, we have yet to see any fully autonomous agents being deployed in major enterprise use cases. There are a few reasons behind this:

  • Specifying the objective and reliably translating the prompt into action: Turning natural language into executable code or actions still presents many variables, and the challenge compounds when agents have to interpret and order multiple complex tasks.

  • Security and authorization: In order for agents to be ready for production use cases, there will need to be strong security and authorization layers. What should the agent have access to, and how do you integrate across different layers?

  • Hallucinations: Models still hallucinate, which means information retrieval and interpretation can remain inaccurate.

  • Cost: Assuming agents pay $0.05 per 1K tokens and each prompt carries ~10K tokens of context, that works out to ~$0.50 per prompt. This can meaningfully add up if an agent is running 8 hours per day, 365 days per year (think of a customer service agent). At 2,920 working hours and 1 prompt per second, this would be ~$5.3M/year to run - much more expensive than a human customer success agent! (The arithmetic is sketched after this list.)

  • Building out ready-to-use agents / APIs: For autonomous agents to take action in the real world, they will need more APIs to interact with. Pre-built agents and APIs will help speed up this process.
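For the cost bullet above, a quick back-of-the-envelope calculation reproduces the ~$5.3M figure (all inputs are the assumptions stated in the bullet, not measured numbers):

```python
# Cost assumptions from the bullet above.
price_per_1k_tokens = 0.05                  # dollars per 1K tokens
tokens_per_prompt = 10_000                  # ~10K tokens of context per prompt
cost_per_prompt = price_per_1k_tokens * tokens_per_prompt / 1_000   # = $0.50

hours_per_year = 8 * 365                    # 2,920 hours: 8 hours/day, year-round
prompts_per_year = hours_per_year * 3_600   # at 1 prompt per second
annual_cost = prompts_per_year * cost_per_prompt

print(f"${annual_cost:,.0f}/year")          # -> $5,256,000/year, i.e. ~$5.3M
```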

What innovation can we expect to see in the years ahead?

  • The Copilot becomes the Pilot โ†’ Weโ€™ve talked about AI pair programming at length, such as how GitHub Copilot is already being used to write billions of lines of code. With AI Agents, the โ€œcopilotโ€ may become the โ€œpilotโ€ itself, with the agent not only generating code but modifying it, recursively debugging, and moving code into production. Hereโ€™s an early example.

  • โ€œPrompt Engineeringโ€ gets replaced โ†’ There has been much discussion of the role that โ€œprompt engineersโ€ will play in an AI-first world, and the six-figure salaries they can command. However, if agents are able to coax the best possible results out of the model without human intervention, the need for this type of role would lessen (or be altered to a โ€œguiderโ€ vs. a โ€œprompterโ€).

  • Integrations with Plugins → OpenAI’s plugin ecosystem allows third-party apps like Expedia and Instacart to unlock the last-mile “action” in a conversation. These actions will only grow in importance as autonomous agents take shape, since agents cannot actually complete most tasks without access to user data, payment information, etc., making integrations between agents and third-party tools essential.

In the AI world, we may remember April 2023 as the month that autonomous agents took their first steps. While there are still plenty of limitations and obstacles to keep in mind, we are excited for the role agents will play in the AI ecosystem. Check out other great pieces on this topic, including from Krishna and Sophia Yang.

Funding News

Below we highlight select private funding announcements across the Intelligent Applications sector. These deals include private Intelligent Application companies that have raised in the last two weeks, are HQ’d in the U.S. or Canada, and have raised a Seed through Series E round.

New Deal Announcements - 04/14/2023 - 04/17/2023:

Special shoutout to Madrona portfolio companies Bobsled, Groundlight, and Lexion on their recent financings! Bobsled is revolutionizing data sharing across platforms and is a critical part of the modern data stack. Groundlight is building a world-class AI computer vision platform, making high-quality CV as simple as integrating an API service like Twilio. Lexion is contract management software that leverages AI to help operations teams get deals done faster.

We hope you enjoyed this edition of Aspiring for Intelligence, and we will see you again in two weeks! This is a quickly evolving category, and we welcome any and all feedback around the viewpoints and theses expressed in this newsletter (as well as what you would like us to cover in future writeups). And it goes without saying but if you are building the next great intelligent application and want to chat, drop us a line!
