For the first time, AI can create. This is a market inflection on par with the largest we’ve seen – perhaps even bigger than the rise of the internet, mobile or the public cloud. We believe we are in the early innings of the next generational shift. The influence this technology will have on digital transformation worldwide is tremendous. Large language models (LLMs) are on the precipice of accelerating fundamental disruption at both the application and infrastructure levels. Recent breakthroughs in transformers and few-shot learning have allowed model parameter counts, and corresponding accuracy, to grow exponentially without a degradation in loss.
That said, the rising attention on generative AI and its seemingly limitless potential and applicability means it’s important to consider where in the stack value will ultimately accrue. The rise of the public cloud drove a platform shift, but the public cloud was an enabler, not a product itself. Like the public cloud, generative AI, and AI more broadly, is an enabler of digitally transformative software. Some of the most exciting opportunities will emerge where AI is a more efficient and effective way to solve a core business problem and drive durable long-term value – ironically, the same way we have evaluated enterprise software opportunities for over two decades.
New large language model capabilities, combined with higher data quality and availability and declining computing costs, create favorable market conditions for the rise of applications that will fundamentally change how we work daily. Many existing applications will also be retrofitted with AI superpowers. And we are excited to see companies emerge that solve the gaps in the infrastructure stack needed to support this growing suite of applications.
Generative AI has been picking up a lot of steam in VC and tech circles – but where will durable value accrue? We outline below how the generative AI stack will evolve and where we see the most immediate and exciting investment opportunities.
Generative AI models are emerging across a wide range of data types, including text, images, audio and video. Popular generative modeling techniques include GANs, VAEs, flow-based models, and language models, each suited to different types of data. However, we think of foundation models as more than LLMs, and new types of generative models will likely continue to be developed as research in this area advances. Foundation models are progressing at a rapid pace, and their output is already often as good as, or better than, human-generated content.
As data becomes king and more foundation models proliferate, the demand for AI-first applications will drive the emergence of many highly specialized domain- or vertical-specific models. These models may be trained on specific data or curated styles and tailored for specific industries (e.g., e-commerce, logistics). Some will be offered as Model-as-a-Service, while others will likely become full-stack offerings with applications and tooling that sit on top.
We believe tooling will be a critical layer in the value chain. While some infrastructure needs will become more acute with the rise of LLMs, entirely new gaps in the tooling stack will also emerge. Below are four key segments where we believe new tooling will evolve:
- Playgrounds: Allow non-technical individuals to interact with and explore the capabilities of foundation or expert models; this will typically enable prosumer use cases.
- Programming Frameworks: Streamline and automate AI-specific workflow needs to access and build applications on top of LLMs (i.e., enable the programmability of models). Key workflow needs include, but are not limited to, state management, instrumentation and chaining.
- Model Lifecycle: Support the training, deployment and performance management of models relying on complex, unstructured data.
- Management & Safety: Manage the safety, compliance and security concerns and requirements surrounding LLMs.
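To make the programming-frameworks segment above concrete, here is a minimal sketch of chaining with state management and instrumentation. The `fake_llm` function and `Chain` class are hypothetical stand-ins for illustration, not the API of any particular framework:

```python
from dataclasses import dataclass, field


def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., an API request), so the sketch runs offline."""
    return f"response to: {prompt}"


@dataclass
class Chain:
    """Minimal chaining: each step's output feeds into the next step's prompt."""
    steps: list  # prompt templates, each with an {input} slot
    history: list = field(default_factory=list)  # state management: keep intermediate results

    def run(self, user_input: str) -> str:
        current = user_input
        for template in self.steps:
            prompt = template.format(input=current)
            current = fake_llm(prompt)
            self.history.append((prompt, current))  # instrumentation: record each call
        return current


chain = Chain(steps=[
    "Summarize the following request: {input}",
    "Draft a reply based on this summary: {input}",
])
result = chain.run("Customer asks about a delayed shipment.")
```

In a real framework, the recorded history would also drive observability (latency, token counts, failure traces) rather than just storing prompt/response pairs.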
We believe the next generation of software applications will emerge with AI as a first-class citizen. While data-driven applications will enable a more personalized and improved customer experience, AI-first applications, like successful traditional software, will need to solve an acute need and have attributes that suggest long-term durability (e.g., being embedded within a workflow, access to proprietary data, network effects, etc.).
Where Will Durable Value Be Created?
We believe the next generation of applications will be AI-first, and new infrastructure tooling will need to be developed to support this step change in underlying capabilities. At the core, long-term category leadership will come from a few key ingredients: problem, data and product. As with software investing, we believe companies must solve a problem with a clear and material ROI, where there is a tangible data flywheel, and where the product is highly embedded into the customer’s day-to-day operations, creating material customer stickiness.
The rise of foundation models, specifically LLMs, exposes many unserved and underserved gaps in the current infrastructure stack. New tools will need to emerge for developers, data scientists and non-technical users to leverage LLMs within an enterprise. Hence, we are excited first and foremost about products that improve the accessibility and usability of large language models and make the life of the “prompt engineer” easier. They will form the backbone of next-gen model creation, deployment and orchestration. There are numerous challenges for new infrastructure technologies to address – the raw, unstructured nature of image and video data, the static nature of existing LLM interfaces, the fragmentation across different models, rising query and data volumes, and the need for effective governance to ensure unbiased, responsible results.
Companies are already beginning to develop new frameworks that re-imagine prompt engineering used in the design and deployment of LLM apps (Dust.tt) and enable LLM applications to be built through composability (LangChain). Model lifecycle management solutions will take on even greater importance as LLM creation, deployment and collaboration become more complex. Moreover, emerging solutions in the management and safety of datasets will construct important guardrails and processes against bias in line with new regulations and expectations in the ethical uses of AI.
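As a conceptual illustration of the composability idea (a hedged sketch, not the actual LangChain or Dust.tt API), a prompt can be assembled from small reusable fragments with named slots:

```python
def compose_prompt(*fragments: str, **values: str) -> str:
    """Assemble a prompt from reusable fragments, filling named {slot} placeholders."""
    return "\n".join(fragments).format(**values)


# Reusable building blocks an application might maintain in a prompt library.
ROLE = "You are a support assistant for {product}."
STYLE = "Answer concisely and cite the relevant policy."
TASK = "Customer question: {question}"

prompt = compose_prompt(
    ROLE, STYLE, TASK,
    product="Acme CRM",  # hypothetical product name for illustration
    question="How do I export my contacts?",
)
```

The value of such frameworks is precisely that these fragments can be versioned, tested and recombined across applications, rather than rewritten ad hoc for every LLM call.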
We believe every function in an organization with repetitive and/or skill-based work will be reshaped by foundation models, whether it is democratizing coding, generating sales, supporting customers with virtual agents, or creating content for design and marketing. That said, some of these functions will be reshaped more effectively by agile and innovative incumbents. Identifying where large language models are better suited to enabling a feature rather than a standalone platform will be critical when investing in this space. In addition, it will be important to find problems whose solutions are deeply embedded into business workflows and require access to unique data sets.
Vertical-specific applications present some of the most significant opportunities for durable company creation given the existence of proprietary data sets, tailored GTM motions and the ability to embed deep into business workflows. We have been excited about sizable verticals with large and complex data that is mission-critical to businesses, in many cases combined with labor shortages and regulatory or compliance requirements.
While there is a lot of noise around generative AI, and creating durable differentiation becomes more difficult as products become easier to launch – at the core, we look at generative AI the way we looked at the rise of software. Whether it is embedded into existing technology or leveraged more effectively in a standalone new business, it will change the way both incumbents and new companies build, along with the way we live and work daily. This is a transformational, generational shift of a kind we haven’t seen in more than a decade.