Davids in the Land of (AI) Goliaths
How do startups compete in the midst of a platform shift?
If you follow recent headlines in artificial intelligence, you’d be forgiven for thinking that only the large, already dominant companies are winning. Just look at the flurry of product announcements over the past few weeks.
Amidst all this news, it’s hard to fathom how startups can compete against scaled players with limitless resources and key partnerships with leading foundation models. Should we all just pack up and go home?
Not so fast. Just as the vast majority of value derived from the internet revolution went to “internet-native” startups like Google and Amazon instead of existing behemoths like Microsoft and Oracle, we believe that the AI revolution will follow a similar path.
Anyway, if incumbents were so impenetrable, wouldn’t we be hearing more about IBM Watson?
This is hardly a new topic. Elad Gil wrote a fantastic piece in October about startup vs. incumbent value capture in AI. But a lot has changed in the past six months, and it’s worth reevaluating where value is captured in this new AI paradigm. How do startups win in this world?
The Lay of the Land
Let’s categorize the key players in AI applications into three main groups:
BigTech → These are the massive, dominant, and often publicly traded software and internet companies aggressively investing in AI (e.g. Microsoft, Google, Meta, Salesforce, Adobe).
Generative-Enhanced Scalers → These are high-growth, rapidly scaling players that did not begin life as “AI” companies but are quickly baking in generative AI or LLM features into their core products (e.g. Notion, Coda, Canva).
Generative-Native Startups → A newly emerging category of startups that were birthed specifically as generative AI products and are natively built on top of foundation models and LLMs at their core (e.g. Jasper, Runway, Harvey).
Note that we are not specifically calling out the foundation model companies (OpenAI, Anthropic, Cohere, etc.) as a separate category, as we believe that all generative AI applications will be built using one or several of these models. In the future, FMs will likely play a similar role to today’s cloud hyperscalers (AWS, Azure, GCP): they will be the utility providers of generative AI capabilities that applications build on top of.
Each group of companies above has unique strategies, benefits, and challenges in the race for AI supremacy.
BigTech has been quietly investing in AI for years (recall it was Google researchers who published the seminal “Attention is All You Need” paper in 2017), but they have led the race in rapidly accelerating AI adoption in this recent cycle. No BigTech company has embodied this better than Microsoft, which first invested in OpenAI in 2019 and through this partnership has introduced LLM capabilities across Github, Office, Bing, Teams, and virtually every other one of their expansive product offerings. Perhaps in an effort to avoid their prior failures in embracing the mobile revolution, Microsoft has emerged as the early favorite to lead in AI applications, with other BigTech players also shipping new products and features at a rate not seen in a long time (decades?). But like any other business, success is not guaranteed.
Where They Win:
Distribution → Selling into an existing user base with entrenched habits, and an ability to roll out products to millions (if not billions) instantly.
Partnerships → Incumbents have the capital, team, and resources to quickly spin up and support alliances, particularly now with foundation model companies (think OpenAI-Microsoft, Anthropic-Google, Stability AI-Amazon).
Where They Are Challenged:
Innovator’s Dilemma → If there’s one thing counting against the incumbents, it’s that they may fall victim to the classic innovator’s dilemma, the phenomenon where established players must decide whether to cannibalize their existing products in favor of a new disruptive technology. In the case of AI, BigTech is less about introducing wholly novel products than bolstering existing ones with generative techniques. Benedict Evans captures this well: we should be asking “what new products are enabled by AI?” instead of just “what existing products get better with AI?”
There has been a wide swathe of high-growth, rapidly scaling SaaS products built over the past decade, fueled by secular shifts from on-prem to cloud, access to great talent, and large amounts of venture capital. These products tend to be cloud-native, user-friendly, and chip away at the BigTech players from the ground up. Think of Notion, Coda, Airtable, Canva, Webflow, Zapier, etc. Many of these products originally innovated not through AI but rather through better workflows, speed of execution, and usability. But similar to BigTech, they are rapidly looking to integrate generative AI capabilities into their core products.
A few examples from the past week:
Attentive launched “Attentive AI”, generative AI tools for message creation and marketing campaigns.
Canva unveiled “Magic Design”, a free AI design tool where a user can input text in natural language and Canva will “magically” auto-populate content.
Zapier announced its ChatGPT Plugin, a collaboration between OpenAI and Zapier which allows users to connect 5,000+ apps (like Google Sheets, Gmail, or Slack) from Zapier and interact with them directly inside ChatGPT.
Zoom launched Zoom IQ, a next-gen “AI smart companion” for chat summarization, content drafting, meeting notes, and more.
Where They Win:
Brand Recognition → These companies benefit from users knowing and trusting their brand, often with a high degree of dedication (for example, Zapier has an NPS of 61 with 80%+ customer loyalty).
Product Love → A product-first mindset and ethos has been key to the growth of many of these tools, particularly those with PLG roots. How else could they have survived against the might of established incumbents? Maintaining product supremacy while adding AI features will be a key balancing act.
Where They Might Be Challenged:
Integration → Integrating AI features into existing workflows can be challenging. Companies may need to develop new APIs, modify existing ones, or create custom integrations to enable AI functionality. This can lead to compatibility issues and additional development time and cost. Some of these companies are still focused on reaching profitability; figuring out how to balance existing products with new AI-based workflows and teams can be a tough juggling act in a scaling environment.
Generative AI is a rapidly growing field that has sprouted hundreds of new companies and products in recent years. From Cohere and Anthropic at the foundation model layer, to Fixie and LangChain at the middleware layer, and Runway and Tome at the application layer, generative-native companies are disrupting the space and re-imagining a wide range of applications across businesses and verticals. So, what separates a generative-enhanced company from a generative-native one? And how can a generative-native startup compete against companies like Salesforce, LinkedIn, OpenAI, and Microsoft?
One of the key advantages of generative-native companies is that LLM architecture sits at their core. These startups can employ one or multiple LLMs, drawing on capabilities such as representations and embeddings, text generation and summarization, and co-pilot-style assistance. This architecture enables applications like improved product recommendations, travel itinerary generation, and document retrieval, disrupting a wide range of industries. In contrast, generative-enhanced companies were not natively built on LLMs and have to work backward to incorporate generative elements, as indicated above.
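To make the “LLM at the core” idea concrete, here is a toy sketch of embedding-based document retrieval, the kind of primitive generative-native products are built around. The function names are ours and the hand-written vectors are purely illustrative; in a real product, the vectors would come from an embedding model served by an FM provider.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=1):
    """Return the ids of the top_k documents most similar to the query vector."""
    ranked = sorted(
        doc_vecs,
        key=lambda doc_id: cosine_similarity(query_vec, doc_vecs[doc_id]),
        reverse=True,
    )
    return ranked[:top_k]

# Toy "embeddings": in practice these come from an LLM embedding endpoint.
docs = {
    "contract": [0.9, 0.1],
    "itinerary": [0.1, 0.9],
}
print(retrieve([0.95, 0.05], docs))  # the query vector sits closest to "contract"
```

The point of the sketch: retrieval, summarization, and co-pilot features all reduce to composing a few primitives like this on top of a model, which is exactly what a generative-native architecture makes cheap to do.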
Here are some areas we think Generative-Native startups can win:
Reimagining underserved verticals → In the prior cloud era, new vertical software giants were created to serve specific verticals; think Procore ($8B mkt cap) for construction, Veeva ($28B) for life sciences, and Toast ($9B) for restaurants. However, many large sectors remain underserved by technology, partly because digitized workflows don’t always result in 10x work improvements. That is rapidly changing with generative AI, which is more a “system of action” than a “system of record”, and can create near-finished outputs or supercharged assistance vs. merely faster workflows. We are seeing this play out in the legal industry, a $500B+ market fairly averse to traditional software but now embracing GenAI-Native tools like Harvey and EvenUp.
Novel use cases → LLMs have enabled new applications across fields such as biotech and healthcare. AI tools can now be leveraged to create personalized treatment plans, synthetic data, and even new drugs. AI is revolutionizing drug discovery, reimagining every type of drug from peptides to spatial biology to small molecule drug discovery. We believe that there are other novel use cases that will emerge around climate tech, biodiversity, sustainability, energy, logistics, and more that were previously not possible without LLMs. Some examples include A-Alpha Bio, Atomic AI, and Profluent.
Leveraging multiple LLMs & Embracing Open Source → Partnerships between BigTech and foundation model companies are often exclusive in nature, limiting the number of models they can leverage. However, generative-native startups can stitch together multiple best-in-class models and do not have the same limitation. Given how fast the pace of innovation is in LLMs, it is advantageous for startups to have a flexible architecture that allows them to leverage the best-performing models and embrace open-source models, something BigTech can’t do as easily.
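Here is a hedged sketch of what that “flexible architecture” might look like: a router that picks the best available model behind a common interface. The model names, scores, and cost figures are hypothetical stand-ins; a real system would be calling hosted APIs or open-source models here.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical pricing
    quality: float             # hypothetical benchmark score, 0-1
    generate: Callable[[str], str]

class ModelRouter:
    """Route each request to the highest-quality model within a cost budget."""

    def __init__(self, models: List[Model]):
        self.models = models

    def pick(self, max_cost: float) -> Model:
        affordable = [m for m in self.models if m.cost_per_1k_tokens <= max_cost]
        return max(affordable, key=lambda m: m.quality)

    def complete(self, prompt: str, max_cost: float = 1.0) -> str:
        return self.pick(max_cost).generate(prompt)

# Stub "models" standing in for frontier APIs and open-source LLMs.
models = [
    Model("small-open-model", 0.1, 0.6, lambda p: f"[small] {p}"),
    Model("frontier-api", 2.0, 0.95, lambda p: f"[frontier] {p}"),
]
router = ModelRouter(models)
print(router.complete("Summarize this contract.", max_cost=0.5))  # routed to the cheap model
print(router.complete("Summarize this contract.", max_cost=5.0))  # routed to the best model
```

Because nothing above the `Model` interface knows which provider is underneath, swapping in next month’s best-performing model is a one-line change, which is the structural advantage an exclusively partnered incumbent gives up.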
Tooling → As more open-source and proprietary models are released, developers will need better tools to stitch together, fine-tune, and optimize these models. A new crop of companies is emerging in the middleware space that can be complementary to the model providers, like Fixie, LangChain, and others. In virtually any platform shift, selling the “picks and shovels” can be a lucrative bet.
Of course, it’s not going to come easy. Generative-Native companies also face significant challenges against both BigTech and Generative-Enhanced businesses. Some areas to use caution:
Where high Capex is required → Given the high capital expenditures, compute costs, dataset requirements, and physical server space needed to build and train LLMs, we believe there will be high barriers to entry and high investment costs to enter the model and deep infrastructure layer. As a result, early-stage companies trying to build here should really consider why OpenAI, Microsoft, Meta, and Google will not dominate. This is why the App layer is likely more fertile ground than the FM layer for emerging gen-native startups.
Use cases with little proprietary data → If generative-native apps will have a tough time differentiating at the model layer, where they can win is by having access to proprietary data that others don’t. This could be done through the hard work of stitching together public and private datasets or generating new data to train models in unique ways, producing more relevant outputs for end users. While incumbents may have a head start today in access to data, that competitive advantage can quickly dissipate as the pace of data generation explodes.
There’s no question that both the BigTech giants and high-growth cloud players of the past era are moving as quickly as they can to embrace Generative AI. Amidst all the noise, it's easy to believe that there isn’t room for new startups to thrive. But we believe precisely the opposite: in this new platform shift, a novel crop of companies natively built on top of large language models can and will become successful.
The Ancient Greek philosopher Parmenides believed that “change is an illusion”, and that everything that existed remains “unchanged and eternal”. Then Socrates came along and developed his eponymous Socratic method, using critical thinking and new techniques to challenge conventional orthodoxy.
If Generative AI can be compared to Ancient Greek philosophy, we prefer it to be Socratic: let’s not just use AI to make “better spreadsheets” but challenge why spreadsheets need to exist in the first place…
Below we highlight select private funding announcements across the Intelligent Applications sector. These deals include private Intelligent Application companies who have raised in the last two weeks, are HQ’d in the U.S. or Canada, and have raised a Seed - Series E round.
New Deal Announcements - 03/17/2023 - 03/30/2023:
Numbers Station raised a $17.5M Series A round and is a perfect example of a generative-native startup that is leveraging LLMs to build an intelligent data stack automation platform. You can read more about their TechCrunch announcement here!
Fixie.ai raised a $17M Seed round and is a great example of a generative native start-up in the FMOps/middleware layer. Fixie is a cloud-based platform for building, hosting, and scaling natural language agents that integrate with arbitrary tools, data sources, and APIs. You can read more about Madrona’s investment in Fixie here!
We hope you enjoyed this edition of Aspiring for Intelligence, and we will see you again in two weeks! This is a quickly evolving category, and we welcome any and all feedback around the viewpoints and theses expressed in this newsletter (as well as what you would like us to cover in future writeups). And it goes without saying but if you are building the next great intelligent application and want to chat, drop us a line!