Highlights From the 2024 IA Summit
Key takeaways from this year's gathering of 300+ AI founders, builders, and researchers
Please subscribe, and share the love with your friends and colleagues who want to stay up to date on the latest in artificial intelligence and generative apps!
This past week, Madrona hosted its third annual IA Summit, celebrating the winners of our "Top 40 Intelligent Applications" list and hearing from AI luminaries like Mustafa Suleyman (CEO of Microsoft AI), Arvind Jain (CEO of Glean), Rama Akkiraju (VP of Enterprise AI & Automation at NVIDIA), and Ali Farhadi (CEO of AI2). This was our largest summit yet, bringing together more than 300 founders, builders, leaders, and investors from across the AI landscape.
If the theme of our first summit in 2022 (held six weeks before ChatGPT launched) revolved around the possibilities of AI, and 2023 was about experimentation, the clear theme this year was how to make AI work in production and generate real use cases that lead to ROI.
It was a terrific day, filled with extremely engaging conversations. Coming out of the summit, we wanted to share five of our key takeaways below.
1. There is no "one-size-fits-all" model.
Virtually every speaker agreed that the future of AI will include many models, and that no single model will dominate. Our partner Matt McIlwain has been talking about this concept of "model cocktails" for over a year now. While the common perception in past years was that developers would always use OpenAI's family of closed-source GPT models, there is now a clear consensus that most applications will leverage multiple models, depending on which best fits the application. Builders are choosing among a diverse ecosystem of models, each optimized and deployed for specific use cases.
Asha Sharma, head of product for Microsoft's AI platforms, summarized it best: you "need the right model for the right use case." She noted that most Microsoft customers use multiple models in production, not just one. The gap between open and closed models, as well as between large and small models, is rapidly shrinking (you can see more details on some of the benchmarks and comparisons here). And, rather than using models off the shelf, it has become increasingly common for builders to distill and fine-tune a pre-trained model (we heard the term "mid-training" being used) to align it with specific business objectives.
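To make the "model cocktail" idea concrete, here is a minimal, hypothetical sketch of routing each request to whichever model best fits the use case. The model names and routing table are illustrative assumptions, not any vendor's actual API:

```python
# Illustrative "model cocktail" router: map each task type to the model
# best suited for it, with a general-purpose fallback. All model names
# here are made up for the sketch.
ROUTES = {
    "code": "small-code-model",        # cheap model fine-tuned for code
    "summarize": "mid-size-model",     # balanced cost/quality
    "hard_reasoning": "frontier-model" # largest model for the hardest tasks
}

def route(task_type: str) -> str:
    """Pick a model for the task, falling back to a general default."""
    return ROUTES.get(task_type, "general-default-model")
```

In practice the routing decision might itself be made by a small classifier model, but the principle is the same: match each request to the cheapest model that meets the quality bar.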
Mustafa Suleyman highlighted the significant decrease in the cost of deploying models, envisioning a future where models become a commodity. With token costs following a power-law decline, they are expected to approach zero, mirroring trends in cloud computing.
2. The "Killer Use Cases" Are Starting To Emerge.
If 2023 was the year of AI experimentation, 2024 marks the shift to AI in production. Last year, companies explored models to understand their potential. Now, they are moving beyond research, testing, and prototyping, integrating AI into real, sustainable enterprise workflows.
Rama Akkiraju, VP of Enterprise AI & Automation at NVIDIA, quickly rattled off several ways NVIDIA uses LLMs internally, including chip design, SRE productivity, supply chain forecasting, enterprise search, and bug management. Although NVIDIA is clearly at the tip of the AI spear, we were excited to hear how traditional businesses dealing with many of the same issues can also leverage AI beyond initial functionalities like code and content generation.
We heard from a few of our IA40 winners about some of the killer use cases they are going after. Companies like Writer are helping enterprises build GenAI into business processes, while companies like Read are making humans more productive across the enterprise, from generating meeting, email, and messaging summaries to updating client records.
3. From RAG to Agents: Agents Are AI's Future.
The hot theme from 2023's IA40 was RAG (retrieval-augmented generation). This year, we hardly heard the term at all. Instead, the term on everyone's lips was "agents." In one discussion group, participants noted that the term is bandied about without a crisp, concise definition, but the overarching theme was that the future of AI will revolve around agents: autonomous programs that can perform and complete a task on behalf of the user.
We are starting to see agents pop up across different verticals (e.g., Otto for travel, Dropzone for security, Factory for software engineering) as well as general-purpose agents that can operate across horizontal use cases (e.g., MultiOn). While we are still in the very early innings here, we are excited for a future where agents are more common, interacting with other agents and in tandem with humans. The agent theme was so prevalent that one participant commented that the "A" in IA40 should stand for agents instead of applications!
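The agent pattern described above can be sketched as a plan-act loop. This is a toy illustration under our own assumptions (in a real agent, the planner would be an LLM call and the actions would invoke tools like search or booking APIs):

```python
# Toy agent loop: repeatedly ask a planner for the next action until it
# signals completion or we hit a step budget. `plan_step` stands in for
# an LLM call; here it is a deterministic stub for illustration.
def run_agent(task, plan_step, max_steps=5):
    """Drive the plan->act loop, returning the actions taken."""
    history = []
    for _ in range(max_steps):
        action = plan_step(task, history)  # in practice: an LLM decides
        if action == "DONE":
            break
        history.append(action)             # in practice: execute a tool here
    return history

def stub_planner(task, history):
    # Hypothetical travel-agent plan, hardcoded for the sketch.
    steps = ["search_flights", "compare_prices", "book_ticket"]
    return steps[len(history)] if len(history) < len(steps) else "DONE"
```

Running `run_agent("book a flight", stub_planner)` walks through the three steps in order, which is the essence of what vertical agents like the ones named above automate end to end.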
4. AI Funding Continues Unabated!
It was a nice coincidence that on the day of the IA Summit, OpenAI announced the largest venture funding round of all time: $6.6B at a $157B valuation, instantly making OpenAI worth more than ~90% of the companies in the S&P 500. Poolside, an earlier-stage AI coding startup, also announced a mammoth $500M Series B. Both funding rounds underscored the fact that funding for leading AI startups and companies is not slowing down at all.
Brad Gerstner, Founder and CEO of Altimeter Capital and an investor in OpenAI's recent funding round, posted a chart showing that OpenAI's revenue exceeds what Google's was at the time of its IPO, and that OpenAI's current trajectory is several years ahead of where Meta was on a similar timescale. There is still a long road ahead for OpenAI to generate the same kind of profitability as these giants, but it is now armed with the capital to do so.
Much of the excitement for (and dollars flowing into) AI startups drives home the message we heard throughout the conference: AI is a transformational wave on par with, or likely even bigger than, the Internet revolution. The prize is simply too big to pass up.
5. Memory Is a Critical Component to Driving Personalization.
Earlier this year, we saw ChatGPT introduce new memory features: both short-term, in-session memory and longer-term preference memory. Mustafa Suleyman highlighted how, within the next 18 months, AI systems are expected to develop robust memory capabilities, enabling efficient retrieval across vast and arbitrary documents. But there is a broader perspective to consider: true intelligence is not only about general capabilities but also about focusing on the right information at the right time, effectively managing multiple subsystems, and directing processing power accordingly. When combined with emotional intelligence (EQ) and cognitive intelligence (IQ), memory forms a critical component of a powerful and well-rounded AI system.
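The two kinds of memory mentioned above can be illustrated with a small sketch. The class and method names are our own invention, not ChatGPT's actual implementation: short-term memory is a sliding window of recent turns, while preference memory is a key-value store that persists across sessions.

```python
# Illustrative split between short-term (in-session) and long-term
# (preference) memory. Names are hypothetical, for the sketch only.
from collections import deque

class Memory:
    def __init__(self, window=4):
        self.session = deque(maxlen=window)  # short-term: last N turns only
        self.preferences = {}                # long-term: persists across sessions

    def add_turn(self, text):
        self.session.append(text)            # old turns fall off automatically

    def remember(self, key, value):
        self.preferences[key] = value        # e.g. "seat" -> "aisle"

    def context(self):
        """What would be injected into the next model prompt."""
        return {"recent": list(self.session), "prefs": dict(self.preferences)}
```

The design choice worth noting is the asymmetry: session memory is cheap and bounded, while preferences are small, curated facts worth carrying forward, which is what makes personalized experiences like tailored customer support possible.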
Throughout the summit, the importance of memory and personalization was a recurring theme. Memory is key to enabling agents to execute tasks effectively and will be crucial for creating personalized AI systems. This capability will enhance experiences like customer support, making interactions more relevant and tailored to individual needs. Michelle Horn, SVP & Chief Strategy Officer of Delta, emphasized the value of using AI in customer success scenarios, and this kind of personalization is expected to become increasingly prevalent across all AI apps.
In Conclusion…
We would be remiss not to mention the ongoing importance of safety and security as AI systems evolve. Oren Etzioni highlighted the growing challenge of seeking truth in a world where deepfakes continue to emerge, while Mustafa Suleyman discussed AI safety and drew an interesting analogy between trains and cars. When trains and railways were first introduced, people were unaware of their dangers and unaccustomed to their speed and force. Enthralled by the novelty, many stood on the tracks without understanding the risks, leading to fatal accidents. In contrast, cars arrived with many new safety measures: seatbelts, driver's licenses, speed limits, and so on. The point being: with any new technological advancement, we must learn from the past, recognize potential risks, and establish proper safeguards for the future. Advances in AI will be no different.
We had an incredible time at this year's summit, and we continue to be impressed by how fast innovation is happening. It's always a joy learning from a diverse group of creative minds, and it was wonderful to see familiar faces and meet so many new ones. We're grateful to all the amazing people who joined us, and we're already looking forward to next year's event! Congrats again to all of our winners!