Fads fade. We believe machine learning (ML) is a classic of software building. Classics have an undeniable air of timelessness: hard to ignore, impossible to miss. Classics are forever, and we believe ML is forever for software. In fact, at SignalFire, ML is woven into the software we build. We develop our own ML models in-house for our Beacon AI data platform, which we use for sourcing investments and helping portfolio companies with recruiting. That's given us a deep appreciation for how ML tooling can improve a business, so we're eager to invest in and support founders building the future of ML infrastructure.
Hot takes on machine learning and LLMs
- For the time being, large companies will own model training for large language models (LLMs) and foundation models (FMs). Training requires a combination of proprietary, web-scale datasets and costly infrastructure, a combination too expensive for smaller companies to take on.
- Companies can open-source models and methodology without fear of giving away their secret sauce, since the data is often proprietary and the infrastructure is tough to build. The secret sauce is not primarily the recipe, but the rare ingredients and the huge kitchen required. Facebook’s PyTorch may be open source, but you can’t build Facebook’s models on your own.
- Foundation models unlock bigger TAMs for enterprises via additional services. Incumbents can charge higher annual contract values for premium AI features, and there is strong interest in companies like Fixie that help enterprises build with LLMs.
- Foundation models are industry agnostic, making specialization a growth opportunity. To scale commercially, one has to fine-tune on industry-specific data and gain more control and visibility—for example, the ability to filter for brand safety and content moderation standards.
- MLOps will be a critical component of the industry for the foreseeable future. For several years to come, MLOps will matter for tasks where LLMs don’t perform reliably. And even once large models become more efficient, MLOps tooling will still be needed to fine-tune models on a task, gather data, manage artifacts, monitor performance, and update models to address failures.
ML is a core component of the software we build and our portfolio
SignalFire truly understands ML because we've spent 10 years building our own ML models for many uses across venture: sourcing companies, completing due diligence, winning competitive deals, and supporting our portfolio with recruiting data and customer lead lists. We do not build a “robot general partner,” but rather utilize data to empower human decision-makers. For example, we use natural language processing (NLP) models to classify companies and surface them to the appropriate investors, classify potential recruiting leads for our portfolio companies to match their hiring needs, and use graph ML algorithms on the open-source community to find compelling technical projects, many of which are ML projects themselves!
MLOps or MLOops?
Machine learning infrastructure has come a long way from MATLAB, released by MathWorks in 1984, to today’s explosion of tooling to operationalize models, referred to in the industry as MLOps. Through 2021, $3.8B was invested in MLOps. The category has existed for a decade, and while a few companies are valued at $1B+—such as Scale AI, Weights & Biases, and DataRobot—there has been no IPO yet. Which raises the question: MLOps or MLOops? Here's how we think the timeline will play out.
ML infrastructure timeline
We believe MLOps isn’t disappearing anytime soon. LLMs have pushed AI into the collective mindshare. While they can be useful, LLMs are not (yet) suitable for all tasks, such as time series forecasting, or for tasks where end users can’t afford to be wrong in even one percent of cases, such as self-driving cars. While the latest GPT-4 LLM has expanded to accept image as well as text inputs—and other foundation models tackle image (DALL-E 2) and audio (AudioLM) generation—the high costs of inference and model fine-tuning remain a barrier for high-volume, low-margin business applications.
For several years to come, MLOps will be needed for tasks where LLMs don’t perform well or are cost-prohibitive. And even once large models become more efficient, there will still be a need for MLOps to fine-tune models on your task, gather data, manage artifacts, monitor data, update the model when you see failure cases, and maintain the simpler models that handle spam filtering and toxicity detection, among other things.
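To make one of these tasks concrete, here is a minimal sketch of data drift monitoring, a standard MLOps technique for deciding when a model needs updating. The Population Stability Index (PSI) used below, its 0.2 alert threshold, and the synthetic data are illustrative choices for this sketch, not a reference to any particular vendor’s tooling.

```python
# Minimal sketch of drift monitoring: compare a feature's live distribution
# to its training distribution. A common rule of thumb treats PSI > 0.2 as a
# signal to investigate and possibly retrain.
import numpy as np

def psi(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(train, bins=bins)
    expected, _ = np.histogram(train, bins=edges)
    actual, _ = np.histogram(live, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0) in empty bins.
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # feature values seen at training time
stable = rng.normal(0.0, 1.0, 10_000)   # live data, same distribution: low PSI
shifted = rng.normal(0.8, 1.0, 10_000)  # live data, shifted mean: high PSI
print(f"stable PSI:  {psi(train, stable):.3f}")
print(f"shifted PSI: {psi(train, shifted):.3f}")
```

A monitoring job like this runs on a schedule per feature; crossing the threshold triggers the "update the model when you see failure cases" step rather than a human noticing degraded predictions weeks later.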
Within MLOps, we have invested in annotation (Explosion), testing (Kolena), and compute (Saturn Cloud). While there is no common tooling yet, there is a common workflow for building models. The image below is an (incomplete) sketch of the companies operating in each category, illustrating in one snapshot the common workflow for building ML models: notable companies for each step, the different teams involved, and some of their publicly announced total funding.
This graphic shows how operationalizing an ML model requires a wide variety of tools and teams with deep expertise. Enterprise companies with the requisite data may still find it hard to staff all these teams with top AI talent, but they have the funds to purchase MLOps tooling.
MLOps startup opportunities
We want to meet founders solving common pain points for commercial customers who want to add AI capabilities. If you are building tooling that helps companies use ML to benefit the bottom line, please email email@example.com.
Three areas where we are especially interested in meeting more MLOps startups:
- Visibility across the stack: Customers tell us their ML stacks are heterogeneous and hard to review and audit end-to-end. A head of AI at a large retail chain told us, “We have to work with many companies to get to 75% of what Google has internally as their ML platform.” He would pay top dollar for tooling that provides enterprise-grade standardization and visibility end-to-end, vs. opinionated ML tooling for a point solution.
- Closing the skills gap: As the diagram makes clear, a multitude of different builders have to interact throughout the ML lifecycle. We are interested in startups that bridge the gap in skills between these different organizations as well as tooling that helps the engineers who build models to be more productive.
- Data engineering: We have more data than ever, and we are interested in tooling that makes the collection, transformation, and usage of data more efficient.
Large language models are bringing faster time-to-value to enterprises. SignalFire recently invested in this space in Fixie.ai, a cloud-hosted platform-as-a-service that enables anyone to build and integrate smart agents that leverage the power of LLMs to solve problems expressed in natural language. We’re looking to meet more founders building native tooling for commercial LLM use cases, particularly around three areas:
- Inference: Inference (applying an ML model to new data) is expensive, and optimizing it would benefit many companies. Businesses need help both running inference and understanding its cost. Algorithmia pioneered MLOps; we would love to speak with teams building the next Algorithmia and startups helping build industrial-strength inference.
- Business-specific context: Foundation models are industry agnostic; we are interested in tooling that verticalizes a model by supplying vertical-specific context, such as vector databases.
- Commercial control: To scale commercially, one has to have more control and visibility—for example, a tool to filter for brand safety. ChatGPT is cool, but it cannot be adopted and used in production if it uses profanity or risks saying something against brand standards that could be screenshotted and shared widely.
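To make the vector-database idea concrete, here is a minimal sketch of the retrieval pattern underneath it: embed vertical-specific documents, then surface the closest one to a query as context for an LLM prompt. The three-dimensional vectors and document names below are toy stand-ins for a real embedding model and corpus.

```python
# Minimal sketch of vector retrieval for vertical-specific context.
# In production, embeddings come from an embedding model and live in a
# vector database; here they are tiny hand-written stand-ins.
import numpy as np

def cosine_similarity(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of a matrix."""
    return (matrix @ query) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))

# Hypothetical vertical-specific documents with precomputed embeddings.
docs = ["claims policy", "drug interactions", "store hours"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.2, 0.9]])

query_vec = np.array([0.85, 0.15, 0.05])  # e.g. embedding of "how do I file a claim?"
best = int(np.argmax(cosine_similarity(query_vec, doc_vecs)))
retrieved = docs[best]  # this snippet would be prepended to the LLM prompt
```

The foundation model stays generic; the vertical knowledge lives in the retrieved documents, which is why this pattern pairs naturally with industry-specific fine-tuning.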
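As a concrete sketch of brand-safety filtering, here is a simple pre-release gate that withholds model output containing disallowed terms. The blocklist, helper names, and withheld-message string are all hypothetical; real moderation pipelines use trained classifiers rather than word lists, but the gating shape is the same.

```python
# Minimal sketch of a brand-safety gate on LLM output. Illustrative only:
# the blocklist terms are placeholders for whatever a brand disallows.
BLOCKLIST = {"damn", "hell"}

def passes_brand_safety(text: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """Return True if no blocklisted term appears as a whole word."""
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return blocklist.isdisjoint(words)

def moderate(llm_output: str) -> str:
    """Release model output only if it passes the brand-safety check."""
    if passes_brand_safety(llm_output):
        return llm_output
    return "[withheld: failed brand-safety check]"
```

The point of the sketch is the control surface: output is checked before it reaches a customer, so a risky completion never becomes a screenshot.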
A VC engineered to help you scale
At SignalFire, we like to say “think of us as an extension of your team that scales with you.” Beyond our in-house Beacon AI, we built our full-time Portfolio Experience team with world-class operators across a variety of functions, including the former Chief People Officer at Netflix to help develop your engineering hiring strategy, the Chief Marketing Officer at Stripe to optimize your sales process, and the former Editor-At-Large at TechCrunch to help you convert the value you deliver into a persuasive story. Our approach was built around providing value to founders, leading to our net promoter score of 85 among founders, with 85% saying we are the most valuable investor on their cap table.
We are investors, and we don’t have a magic crystal ball; we are open to being proven wrong, and we keep an open mind as we meet founders. If you are building in the LLM space, come talk with us: we will share our full research, we’d like to learn more about what you’re building, and we hope to earn the option to be at the table when you’re raising. We also co-host plenty of AI events at our San Francisco office; reach out if you have ideas. Cold emails welcomed at firstname.lastname@example.org.