5 Takeaways From Intelligent Apps Summit 2023
Key learnings from a gathering of 250 AI builders, investors, researchers, and leaders
A slightly condensed post this week as we are coming back from a series of summits. Please continue to share with colleagues, friends, or anyone you think would enjoy staying current on the latest trends in AI :)
This week, Madrona hosted its second annual Intelligent Applications Summit, where we brought together ~250 founders, builders, investors, and thought leaders across the AI community. So much has changed in the space since our first IA Summit twelve months ago, which was held six weeks before the release of ChatGPT!
Last year, the key question centered around how long it would take users to adopt intelligent applications and embrace AI. This year, that wasn’t even a question; AI is here, and every company is racing to figure out how best to use it.
We were lucky to hear perspectives from a range of practitioners, including builders in AI infrastructure (e.g. LangChain, Unstructured, Pinecone, Neon); large public companies (e.g. Microsoft, Amazon, Salesforce, Goldman Sachs); research institutions (e.g. AI2, McKinsey); and cutting-edge applications (e.g. Typeface, Glean, Insitro), among many others.
For this week’s post, we wanted to highlight five key takeaways from this exciting Summit.
Five Key Takeaways
AI Has Gone Mainstream
To slightly rephrase Dinah Washington, what a difference a year makes!
In the twelve months since our first IA Summit in October 2022:
ChatGPT was released and, within a few months, became the fastest-growing consumer app of all time, helping its creator OpenAI generate $1B+ in annualized revenue.
LangChain went from an idea to one of the fastest-growing open-source projects in history, with 65K+ stars on GitHub and spawning a commercial business with millions in funding.
Venture funding in Generative AI went from $6B in 2022 to $18B in 2023…with ~2.5 months still to go.
Microsoft, Google, Salesforce, Amazon, and even Apple have all jumped into the fray, either through their own products, or aggressively funding and partnering with emerging startups and foundation models.
Needless to say, the tone of this year’s Summit was markedly different from a year ago; nobody was questioning whether consumers would “accept” AI — the question was how companies and users could best capture its immense value.
RAG is Hot
One topic that appeared to be on everyone’s lips was retrieval augmented generation (or “RAG”). RAG is a framework for improving the quality of large language model (LLM) generated text by grounding the model on external sources of knowledge. Rather than training the model on data specific to an enterprise or user, RAG lets the model “look up” that external information at inference time and fold it into its responses. RAG stands in contrast to fine-tuning, which is the process of taking a pre-trained LLM and further training it on a smaller, task-specific dataset to adapt it for a particular use case or to improve its performance.
RAG is certainly having its moment. In a panel on emerging architectures for building generative apps, Harrison Chase of LangChain, Brian Raymond of Unstructured, and Luis Ceze of OctoML discussed the pros and cons of RAG and its rising popularity among AI developers, particularly because it is less expensive than fine-tuning and more generalizable. Another topic of discussion was training an LLM and then using a second LLM to refine it for specific use cases.
However, it is clear that there is no definitive “standard” yet, and companies of all sizes are stitching together several different techniques to optimize model performance.
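The RAG pattern described above can be sketched in a few lines of Python. This is a toy illustration, not a real library: word overlap stands in for the vector-embedding retrieval a production system would use (e.g. with a store like Pinecone), the prompt template is a made-up example, and the assembled prompt would ultimately be sent to an LLM API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: return the k docs sharing the most words with the query.

    Real systems rank by embedding similarity instead of word overlap.
    """
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by prepending retrieved context to the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Enterprise-specific knowledge the base model never saw during training.
knowledge_base = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Pacific, Monday through Friday.",
    "The enterprise plan includes single sign-on and audit logs.",
]

prompt = build_prompt("What is the refund policy?", knowledge_base)
# `prompt` now carries the refund-policy document as grounding context,
# ready to pass to any LLM -- no retraining or fine-tuning required.
```

The key contrast with fine-tuning is visible here: the enterprise data lives outside the model and is injected per request, so updating the knowledge base is as cheap as editing a list.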
Everyone Will Have Their Own App
There are five million applications in app stores, there are probably several hundred PC applications, there’s 350,000, maybe 500,000 websites that matter — how many applications are there going to be on large language models? I think it will be hundreds of millions, and the reason for that is because we’re going to write our own! Everybody’s going to write their own. - Jensen Huang, March 2023
We heard similar sentiments throughout the day. We have gone from a world with hundreds of apps during the PC and Internet days, to a world with millions of apps in the mobile era. It is not a massive stretch to believe that we are now entering a world where apps built on LLMs can be customized to the needs and wants of individual users. Imagine an app that suggests recipes tailored to you, using a model trained on your restaurant history, dietary restrictions, and taste preferences…and then multiply that for every picky foodie out there.
Nikita Shamgunov, CEO of Neon, reiterated this point when he explained that part of Neon’s mission is to help grow the developer population from ~25M today to 50M+ by 2030, a necessary force for a world with “hundreds of millions” of apps!
Jason Warner, CEO of Poolside, also shared a vision where we will be able to easily have millions of code-generated applications.
Open Models are Gaining Steam
Ali Farhadi, founder of Xnor.ai (acquired by Apple in 2020) and now CEO of the Allen Institute for AI (AI2), led a keynote on “The State of Open-Source Models & the Path to Foundation Models at the Edge,” where he passionately made the case for open models. Open-source AI models allow anyone to inspect and modify the underlying code and weights, and they are growing in popularity as startups and giants alike race to compete with closed-model players like OpenAI (confusing names…we know). With powerful open-source models like Meta’s Llama-2 and Mosaic’s MPT-7B, it is becoming easier and easier for developers to get up and running and build new applications.
However, this is not to say that there isn’t a need for closed-source models. The most popular AI tool remains ChatGPT, and we will continue to see a proliferation of closed models from Anthropic, Google, and others, particularly due to their ability to move quickly and spend enormous resources to better train their models. There will continue to be a balancing act between a closed stack driving value capture, and an open stack unlocking broad innovation.
Early-Stage Companies Are as Dynamic as Ever
One of the most interesting insights came from comparing 2023’s list of IA40 winners (the top 40 private companies building AI apps as voted by a group of VCs) with prior lists. 29 of the 40 companies are first-time winners, and only 7 are three-peat winners (Cresta, Runway, Snyk, Abnormal, HuggingFace, Databricks, and dbt). In fact, the highest vote-getter in 2022 (Jasper) didn’t even appear on this year’s list.
This just goes to show how incredibly dynamic the space is. There are new companies being formed at every level of the stack, from the foundation model level to enabler to application, and the speed at which they can gain thousands of users and millions of dollars is unprecedented (Midjourney being a great example).
We are also watching to see how the generative-native vs. generative-enhanced debate continues to play out. Generative-enhanced companies have the distribution, data, and network effects, but can the generative-native companies re-imagine a workflow and win new customers over to the technology? If the generative-enhanced companies win in the short term, the churn among generative-native startups will be even greater, underscoring that many of today’s early-stage companies may no longer be around in a few years.
We fully expect a new crop of winners in the 2024 list, and are excited to watch the incredible developments in the market!
We hope you enjoyed this edition of Aspiring for Intelligence, and we will see you again in two weeks! This is a quickly evolving category, and we welcome any and all feedback around the viewpoints and theses expressed in this newsletter (as well as what you would like us to cover in future writeups). And it goes without saying but if you are building the next great intelligent application and want to chat, drop us a line!