11th Jul 2023

What kind of billion-dollar AI company are you building?

Simon Menashy, Partner at MMC Ventures, shares his view as an investor on where we can expect to see AI innovation in the coming years.

OpenAI’s CEO Sam Altman said that the age of giant AI models is over. Whether or not that’s true, I’m more interested in exploring the different layers where innovation can be found within the AI space and what companies are likely to be built in the coming years.

If you’re building in this space, you don’t have to go and raise €105m and start developing your own large language model, competing with the likes of OpenAI, Google and Meta. Whichever players win the race to the best foundational model (there may be more than one), I expect to see a thousand good businesses built around this new ecosystem over the coming years, with a variety of different models and focuses.

Most market maps I’ve seen – and I’m sure you’ve seen your fair share too – focus on the most visible way to break down the generative AI space, by medium (text, code, images, video, speech etc.) or sector. But there is more to explore. I want to delve into five layers where we can expect to see key innovation from AI – the new landscape where founders can establish their presence and investors can find opportunities beyond the most apparent generative AI applications.

  1. Foundational models

  2. Specialised models

  3. Vertical use cases

  4. The orchestration layer

  5. Supporting tools and enablers

Let’s go through them one by one.

Foundational models

OpenAI, Google and Meta dominate this space, building large, foundational models that create human-like general language output and can address a broad set of use cases. The state of the art is advancing at a breathtaking pace, with big leaps every 6-12 months and well-funded start-ups like Anthropic, AI21, Cohere, Mistral AI, and Character.AI investing substantial resources to catch up. Even Elon Musk is throwing his hat into the ring.

In short, it’s a capital-intensive game – and deep pockets are what you’d need to compete. We will certainly see innovation in this space, but it’s likely going to be amongst the biggest tech players and the few scale-ups they sponsor, who have the necessary resources to fuel such efforts.

So you could build your own language model. New AI model designs, architectures, and further refinements based on human feedback are promising areas worth investigating, and every few weeks we meet a professor claiming a superior or alternative methodology. There’s also utility in models built for speed and low cost, or those that run on particular hardware architectures. But unless you have easy access to tens of millions in starting capital, I believe there’s a lot of value to be created elsewhere in the diverse ecosystem of companies that can be built around AI technology.

Specialised models

Start-ups might struggle to compete with the best-funded foundational models, but there are lots of specific areas where big general models may fall short or lack focus.

Some businesses will take foundational models and fine-tune them for specific use cases. Others will develop more specialised models. These could be very narrow but widely applicable – for example, emotion analysis on voice, or interpreting the particular syntax of tax codes or insurance categories. They could also be very deep and focused on one problem, for example collision avoidance for driverless cars. I expect to see particularly interesting and defensible examples where there is sensitive proprietary data or models operating within highly regulated areas.

The crucial question for businesses creating these models is whether there is sufficient applicability to licence or offer their technology to other businesses via APIs. If the model is specialised enough, a company might establish its own vertical use case based on its unique model, carving out a niche in the market (see the next category).

This may also birth new go-to-market and distribution models. Marketplaces and communities, like Hugging Face, are already developing so people can use, build on and stitch together different open-source specialised models. As start-ups develop their own closed specialised models, they will need to think about how to sell into and connect with a diverse range of contexts. Tied to the agentic applications I discuss below, businesses might develop and sell specialised, preset AI agents, functioning like digital employees with a specific skill set.

Vertical use cases

AI models, particularly as part of enterprise SaaS solutions, will also be adapted for specific applications and industries.

AI has potential applications across every business and function – from customer support, sales and advertising to insurance pricing, transaction analysis, drug discovery and countless other examples. We are seeing a new generation of innovation potential in enterprise SaaS, with start-ups building these verticalised tools.

Vertical products are likely to draw on a mix of foundational and specialised models, whether licensed or called via an API from leading players, or in some cases built (or at least tuned) in-house, potentially stitched together with proprietary datasets. But the result still needs to be a polished, sufficiently functional, enterprise-ready product with the right integrations with other systems – it’s not just about showing up with the latest generation of generative AI.

Innovation in the user interface will be the other big enabler. A large part of ChatGPT's success can be attributed to the simple, accessible UI built on top of the underlying technology. Those businesses that can make AI tools usable in the simplest, most streamlined way (like Synthesia’s simple studio UI) will win, because doing so increases adoption across the business, rather than leaving the tools locked away in data science and engineering teams. When starting out with an unfamiliar product, most people know what they want to do, but they don’t know how to do it. A chat-first UI lets people navigate unfamiliar, powerful tools more intuitively. But there’s still a lot of space for innovation in UI design, with scope to redefine how we interact with AI.

This creates an interesting dynamic. The previous generation of AI companies has already built a lot of the enterprise infrastructure and customer base, so I expect we’ll see a lot of innovation come from these established businesses building the new generation of tech into their products. For example, our portfolio company Signal AI is adding additional intelligence layers on top of its unique, web-scale dataset of licensed media and social content, ensuring output is both fact-based (based on verified sources) and compliant with the requirements of its suppliers. Or perhaps we will see new businesses spun out from these existing companies, taking AI use cases as their north star.

It will be interesting to observe how new entrants challenge the previous generation of companies that are just starting to get meaningful penetration and adoption in these different vertical use cases. Who's going to find it easier to build some initial scale in this rapidly evolving landscape?

The orchestration layer

The orchestration layer could be the most crucial area for innovation, as it focuses on deploying AI models and use cases in practice within an enterprise environment. This layer serves as the connective tissue for AI in businesses – linking prompts to outputs, drawing on integrations with other systems and feeding outputs into the appropriate places.

We’ve already seen this with Auto-GPT, an approach to building "AI agents" that suggest tasks and carry them out in sequence – stringing many sub-tasks together into an evolving, self-directing project. But open-source frameworks like LangChain, Hugging Face’s Transformers Agent, and Microsoft’s Guidance take this even further. These frameworks act like a bridge between the language model and specific data sources or capabilities, giving the LLM access to structured information, APIs, and other data outside its original training set, and chaining prompts together – building something that is greater than the sum of its parts.

In this new paradigm, applications don’t just make an API call to a language model, but connect to and interact with various data sources (beyond the model’s training data) in real-time, providing more relevant, accurate, and insightful responses. And by allowing language models to interact with their environment and act on behalf of the user, this pushes the boundaries of their potential applications, enabling them to make decisions and execute actions based on current context, greatly enhancing automation capabilities. Autocomplete goes from “complete this sentence”, to “complete this workflow”.
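The pattern is easier to see in code than in prose. Here is a library-agnostic sketch (the APIs of LangChain and its peers evolve quickly, so nothing below uses a real framework): `call_llm` is a stub standing in for a real model API call, and `lookup_customer` is a hypothetical tool. The shape is the point – route the request, fetch fresh context from outside the model, then chain a second prompt.

```python
# Minimal sketch of the orchestration pattern. call_llm and lookup_customer
# are hypothetical stand-ins, not a real framework's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. a request to a hosted LLM API)."""
    if "Which tool" in prompt:
        return "lookup_customer"  # the "model" picks a tool
    return f"Drafted reply using: {prompt}"

# Tools give the model access to structured data outside its training set.
TOOLS = {
    "lookup_customer": lambda query: {"name": "Acme Ltd", "tier": "enterprise"},
}

def run_workflow(user_request: str) -> str:
    # Step 1: ask the model which tool the request needs.
    tool_name = call_llm(f"Which tool answers: {user_request}?")
    # Step 2: call that tool to fetch real-time context.
    context = TOOLS[tool_name](user_request)
    # Step 3: chain a second prompt combining the request with fresh context.
    return call_llm(f"{user_request} | context: {context}")

print(run_workflow("Summarise the account status for Acme"))
```

Real frameworks add retries, memory, and multi-step planning on top, but they are elaborations of this same route-fetch-chain loop.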

This will become increasingly important as AI models go multi-modal, accepting inputs and producing outputs in various formats (text, speech, images, video, tables, documents, data stores, etc). Different core and specialist models will do different jobs, drawing on data within the modern enterprise data stack and interacting with different kinds of data schema and other software products. The goal is to seamlessly integrate various technologies, including non-generative AI, to address end-to-end problems or fit into enterprise business processes.

By using these tools, developers can extend the functionality of an LLM, customising it to accomplish specific objectives, like solving complex problems, answering specialised queries, or interacting with external systems in a more intelligent and context-aware manner.

Supporting tools and enablers

Tools that support and enable the use of AI within businesses will be crucial for effective implementation, monitoring and measuring outcomes.

Vector databases, specialised in storing and searching high-dimensional vector data, are already emerging as game-changers in AI applications. Rather than relying on conventional methods like tags or labels, they use vector embeddings that capture relevant properties, allowing for similarity-based searches. This fundamentally shifts how we engage with unstructured data, moving from clumsy keyword-based queries to ones that better reflect the intention or meaning behind the query.

This helps reduce false positives, streamline queries, and enhance productivity across the knowledge economy, with better handling of creative and open-ended queries. And it revolutionises real-time personalisation, as our portfolio company Superlinked illustrates. As CEO Daniel Svonava said in a recent podcast, by incorporating vector search into products, companies can tap the power of deep learning to deliver better user experiences in many places – from personalised recommendations to user clustering and bot detection.
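As a toy illustration of similarity-based search, here is a minimal sketch in plain Python. The three-dimensional "embeddings" and document names are made up for illustration; a real system would use a learned embedding model with hundreds of dimensions and an approximate-nearest-neighbour index of the kind a vector database provides.

```python
import math

# Toy corpus of pre-computed "embeddings" (invented 3-d vectors).
DOCS = {
    "refund policy":         [0.9, 0.1, 0.0],
    "holiday opening hours": [0.1, 0.8, 0.2],
    "returning an item":     [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_vec, k=2):
    # Rank documents by vector similarity rather than keyword overlap.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query about "sending something back" embeds near the refund/returns
# documents even though it shares no keywords with them.
print(search([0.85, 0.15, 0.05]))  # → ['refund policy', 'returning an item']
```

This is the shift the paragraph describes: the match is made on meaning in embedding space, not on the literal words of the query.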

It is the companies that solve these sorts of hard, practical problems and provide the underlying tooling that will benefit as AI reshapes value chains. And this perspective guides our view of responsible AI's potential, too.

As my colleague Nitish Malhotra put it in an earlier piece on the opportunity in responsible AI, ‘Deploying models is hard. Trusting predictions is harder.’ Unlike traditional software engineering, it is challenging to build safety guarantees in machine learning (ML) systems, given the black-box nature of algorithms and the dynamic system architecture. But the quest to address these vulnerabilities – interpretability, verifiability, and performance limitations – opens up an opportunity for start-ups and spin-offs to attack various parts of this problem. Too often, responsible AI is considered a hindrance to the pace of innovation, but I think it’s a prerequisite. Data quality; model experimentation and early explainability; testing, QA and debugging; and monitoring and observability – these are all necessary foundations for innovative, scalable, responsible AI.

For example, ensuring that datasets used for training AI models are of high quality, comprehensive, properly audited, and appropriately licensed will be key. The concept of data quality and observability is nothing new, and existing players in the space can use their expertise and customer bases to beat new entrants to the punch. When only 30-50% of AI projects make it from pilot into production, and those that do take an average of nine months to do so, tools that help developers go from zero to one and beyond faster and more effectively are invaluable. And as AI-powered products grow in complexity and scope, observability tools will need to evolve as well. The orchestration layer will require models that not only evaluate their own output, but also scrutinise the performance of other models, systematically checking for hallucinations, data breaches and bias. For the same reason, reporting and analytics tools to facilitate the tracking and monitoring of AI-driven systems will also be crucial, enabling organisations to maximise the potential of their AI stack.

Want to learn more about data observability? Check out MMC's recent report

Tooling up the AI revolution

While competing directly with industry giants may be off the table for many, there is immense opportunity in the ecosystem. By innovating in specialised AI models and vertical use cases, both new start-ups and established AI businesses can carve out their niches, with the latter capitalising on their existing market presence.

As AI goes multimodal, data-aware and agentic, the orchestration layer presents a big opportunity. Tools that integrate varied data sources, connect models and solve end-to-end problems will become essential.

The backbone of this latest AI wave will be developer tools and infrastructure. Vector databases will transform the handling of unstructured data, democratising analytics and business intelligence, and offering better user experiences. At the same time, ensuring high-quality datasets for AI training, model experimentation, and early explainability, alongside rigorous testing, QA, debugging, and observability, will be crucial. Start-ups building in this space will lay the groundwork for future innovation, scaling, and responsible AI.

All these areas offer exciting entry points into the AI market – both for the next and existing generation of AI businesses.

What's your view?

Where do you think the next billion-dollar AI company will come from? Are you the one building it? We want to hear from you!
