Thesis Spotlight: Overlooked Consequences of Foundation Models

The Acrew Team
10 min read · Aug 15, 2023

It’s no surprise that startups and big tech alike have been quick to explore the near-term opportunities of foundation models. But with all the focus on what’s possible today, our team at Acrew Capital had the view that some of the long-range implications and outcomes were being relatively overlooked.

We decided to take this as a prompt to ask founders, investors, and industry executives within our executive community, the Crew of Leaders, the following question…

“In your opinion, what are some under-discussed second order consequences (positive and/or negative) from the rapid deployment of foundation models into the world?”

The perspectives from our community emphasize the profound ways in which foundation models may reshape technology and society in the coming years across three core themes: net new product experiences and business models, large-scale disruption to white collar work, and the emergence of new governance systems.

At Acrew, AI & ML have been key horizontal drivers underpinning our investment theses since our inception, and we continue to have strong conviction in the long-term potential of the category. If you’re building AI/foundation model products, we’d love to meet you.

The Paradigm for Products & Business Models is Shifting

Foundation models represent a paradigm shift in data processing, unlocking a host of new application-layer use cases (e.g. intelligent copilots, answer-first search experiences) that can be powered by infrastructure that is fast to implement, such as in-context learning or larger context windows in lieu of fine-tuning or proprietary models. The powerful yet broadly accessible nature of the foundation model tech stack has yielded a tidal wave of AI product releases and company formations in the past year, and an imperative for every enterprise to carefully reconsider how best to serve its internal and external stakeholders.
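
To make “fast to implement” concrete: in-context learning steers a general-purpose model with examples placed directly in the prompt, so there is no training run at all. Below is a minimal sketch; `call_llm` is a hypothetical stand-in for whatever hosted model API a team uses, and the ticket-classification task is invented for illustration.

```python
# A minimal sketch of in-context (few-shot) learning: the "training data"
# lives in the prompt itself, so no fine-tuning job or proprietary model
# is required. `call_llm` is a hypothetical stand-in for a model API.

FEW_SHOT_PROMPT = """Classify the support ticket as BILLING, BUG, or OTHER.

Ticket: "I was charged twice this month."
Label: BILLING

Ticket: "The export button crashes the app."
Label: BUG

Ticket: "{ticket}"
Label:"""


def classify_ticket(ticket: str, call_llm) -> str:
    """Classify a ticket by prepending labeled examples to the prompt."""
    return call_llm(FEW_SHOT_PROMPT.format(ticket=ticket)).strip()
```

Swapping in a new task means editing a prompt rather than retraining a model, which is why small teams can ship credible demos in days.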

“With the rapid deployment of foundation models, it’s become much easier for small teams (or individuals) to create effective demos of ML applications in a range of verticals. However, these demos don’t necessarily translate into the performance that customers desire in production, especially if they are thin wrappers around foundation models. This is leading to some unique challenges for the go-to-market motion and sales cycle in the space.”
Ali K. Chaudhry, Fmr. SVP Product & Strategy, Caption Health

“Everyone uses the analogy that foundational models are the new cloud compute platform, but I think 1) cloud primitives like S3 and EC2 were always and obviously going to be a commoditized, low-margin service, whereas the latest LLM can command a huge pricing premium, and 2) cloud infrastructure doesn’t directly impact the key value prop of your SaaS app the way that the foundational model you build on does. So I think that companies building on these foundational models 1) very well may not have 80%+ gross margins at scale, 2) have a ton of risk around defensibility as other models advance, and 3) might need to consider a different pricing/business model than the typical seat-based or usage-based SaaS/infra company. I think we’ve sort of taken for granted that software companies get 80%+ gross margin and 120%+ NDR, and I wonder if companies built on foundational models look a lot different.”
John Cowgill, Partner, Costanoa Ventures

“The biggest upside of AI is how cyber companies like Exabeam can incorporate AI into our threat detection. So much of cyber is pattern matching. Back in the AV days that’s what Symantec and McAfee did. They saw an attack and then tried to pattern match prior attacks and write a DAT update. With AI that can be done super fast now.”
Mike DeCesare, CEO & President, Exabeam

“The implications that foundation models in software and language broadly have today on robotics and embodied intelligence are going vastly underappreciated. We’ve been waiting for a robotic revolution for decades but could never have intelligent enough systems; we now might be able to.”
Mike Dempsey, Managing Partner, Compound VC

“We spent a lot of time making it easier to put human thought into computer language, but now we can use these models to turn human language into computer actions. This has many implications like lowering the barrier for system integration, being able to generate “just in time” interfaces for users based on their needs, etc.”
Alessio Fanelli, Partner, Decibel
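
Fanelli’s “language into computer actions” idea can be sketched in a few lines: ask the model for a structured action, then dispatch it to ordinary code. Everything below is illustrative; `call_llm` is a hypothetical stand-in for a model API, and the action registry is a toy.

```python
import json

# A toy "language to actions" loop: ask the model to emit a JSON action,
# then dispatch it to real code. `call_llm` is a hypothetical stand-in
# for any hosted model API; the registry below is purely illustrative.
ACTIONS = {
    "create_ticket": lambda title: f"Created ticket: {title}",
    "send_email": lambda to, body: f"Emailed {to}: {body}",
}

PROMPT = """Translate the request into a JSON object of the form
{"action": "<name>", "args": {...}}.
Available actions: create_ticket(title), send_email(to, body)
Request: "%s"
JSON:"""


def run_request(request: str, call_llm) -> str:
    """Turn a natural-language request into a dispatched function call."""
    plan = json.loads(call_llm(PROMPT % request))
    return ACTIONS[plan["action"]](**plan["args"])
```

The same shape underpins “just in time” interfaces: the model chooses which structured call to make, and ordinary code executes it.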

“Foundation models break some barriers for smaller businesses (or individuals) that would otherwise not have the resources to get the proper data, computing resources, etc. to train a huge AI model themselves. They can now prepare their unique solutions specific to their product, which in turn may result in the flourishing of many different interesting small businesses. It feels like Machine Learning was made for this from the start.”
Gordon Midwood, Cofounder & CEO, Anything World

“I think the rapid release and success foundation models have seen will force all companies to rethink user experiences to some degree and longer-term remove many of our barriers to entry that are defined by domain expertise requirements. Stickiness will go to the better UX and domain gatekeepers could be impacted.”
Rich Mogull, SVP Cloud Security, FireMon

“So much design & engineering effort today goes into creating simple abstractions around information so humans can easily grok it. AI reduces the need for this by making it easier to make sense of unstructured data. Laws can become more complex because human judges don’t need to be able to interpret them. Data doesn’t have to be schematized because human engineers don’t need to write code to convert unstructured data to types. Software products can get more extensive because human support reps don’t need to explain all of it.”
Neil Patil, Sr. Product Manager, Vanta
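
Patil’s schematization point is easy to picture: rather than hand-writing a parser for every document format, you describe the types you want and let the model fill them in. A minimal sketch, again with a hypothetical `call_llm` helper; the receipt fields are invented for illustration.

```python
import json

# Sketch of typed extraction without a hand-written parser: describe the
# schema in the prompt and have the model emit JSON matching it.
# `call_llm` is a hypothetical stand-in for a model API.
EXTRACT_PROMPT = """Extract a JSON object with fields
{"vendor": string, "amount_usd": number, "date": "YYYY-MM-DD"}
from the text below. Output JSON only.

Text: "%s"
JSON:"""


def parse_receipt(raw_text: str, call_llm) -> dict:
    """Convert unstructured receipt text into a typed record."""
    return json.loads(call_llm(EXTRACT_PROMPT % raw_text))
```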

“For use cases that are very binary — e.g. cancer detection with a recommended treatment plan — how will knowledge specialists (in this case, oncologists) interact with this technology? What might happen if/when it improves enough such that an oncologist isn’t needed? And even if it technically could, would a patient ever get comfortable? User sentiment and trust towards the applicability of foundation models in such high-stakes, objective/binary scenarios will have large implications. How long will it take for the requisite trust to develop? And what happens to that trust when there is an accident that garners a lot of publicity?”
Nick Washburn, Sr. Managing Partner, Intel Capital

Knowledge Work of the Future will be Unrecognizable

The emergent capabilities of LLMs have made them deft tools for many kinds of standard knowledge work, from summarizing meeting notes to writing code. And when integrated into digital workflows (e.g. ChatGPT Plugins), LLMs can facilitate automations with minimal implementation work (e.g. rules mapping) and little human oversight. The impacts could be far-reaching: as more white-collar work shifts toward the specialized and the editorial, countless human labor hours currently assigned to now-automatable tasks will need to be repurposed in relatively short order. The scale of disruption from manufacturing automation and offshoring in the late 20th century may pale in comparison to what could come in the next few decades.

“Even if we pause foundational AI development, I think there’s still at least 5 years of the typical enterprise/knowledge worker just catching up to all the ways they could be using/embedding AI in their day-to-day work. I’m finding new uses for GPT-4 as a VC every single day, and I use it *constantly* now. The more I use it, the better I get at using it, and I think there’s still a ton of opportunity purely at the UI/prompt-optimization layer to keep squeezing more juice out of GPT-4 and other LLMs without even getting into fine-tuning.”
John Cowgill, Partner, Costanoa Ventures

“I expect a questioning of what it means to be human when art, culture, and analytic results/“understanding” are widely produced by machines. Perhaps the human value-add will be to understand the models and the different results and make choices there. At least until we have second-level AIs to do this for us!”
Moez Draief, Managing Director, Mozilla.ai

“These models can power agents that end up replacing a lot of the integration / management work that is done today. The IT Services market is 2x the size of the software market!”
Alessio Fanelli, Partner, Decibel

“We just ran an amazing survey on generative AI in the enterprise; see the findings here!”
May Habib, Cofounder & CEO, Writer

“AI will take over a lot of entry level and/or unwanted jobs, but how do we then continue to train individuals? How do we thoughtfully create jobs that augment rather than replace humans with AI, thus yielding exponential results?”
Pam Kostka, Operator in Residence, Operator Collective

“A lot of coding is around optimization, especially as we deploy into cloud where we pay by the millisecond. I suspect we will find that some orgs start feeling the pain when they don’t realize the implications of deploying non-optimized, LLM-generated code.”
Rich Mogull, SVP Cloud Security, FireMon

“Overlap between roles increases — Product managers can be more engineering-y by getting more built with written language. Engineers can operate more like designers by having user interfaces align to their work. Salespeople can synthesize product feedback without needing to communicate it to a PM. The net result is less value from role specialization. We’re already moving to this world today (e.g. Figma was multiplayer-by-default) — this will accelerate it. Instead of a ‘PM, designer, and 6 engineers’, the common software pod will look like ‘8 builders, some of whom specialize in user experience, some who specialize in customer discovery, some who specialize in technology.’ ”
Neil Patil, Sr. Product Manager, Vanta

“I think there are large swaths of good paying jobs that may disappear before people can retrain on anything else. LLMs are powerful already, and improving quickly. Enterprises are heavily incentivized to deploy LLMs — they lower costs & drive revenues across a wide array of use cases. It’s an incredible confluence that I feel will lead to significant + fast adoption.”
Rohan Puranik, Partner, WestWave Capital

“Centralization of influence — The winners in AI will have more power over the world’s thoughts and actions than Microsoft, Google or Facebook ever did.”
Gokul Rajaram, Executive, DoorDash

“I shared my thoughts with Forbes on how geopolitics will shape the production of chips that in turn shape the entire category.”
Rob Toews, Partner, Radical Ventures

“New form factors for entertainment will rise, not limited by today’s constraints in production… We will see an explosion of creation.”
Chang Xu, Partner, Basis Set

AI in Full Swing, Governance in Early Innings

The capabilities and utility of foundation models are strongly linked to the underlying data they were trained on. Even outside the more widely covered problem of model hallucinations, the deployment of third-party foundation models can create significant risk for enterprises when the details of training data (e.g. breadth & depth of coverage, copyright status) are opaque. Although the deployment of foundation models is already in full swing, we’re still in the early innings of hashing out the legal/governance implications and potential hazards.

“I see great epistemological risk. The standards that underpin science, medicine, and law are being challenged by unexplainable AIs that are neither able to cite all their sources nor show how they used that data to come to an answer. We have a choice to make: Is this the information culture we want? Is this the public sphere we want?”
Adam Bly, Founder & CEO, System

“IP & Data Privacy — Foundational models are hungry for data. Every organization will invest in the fortification of its data to protect against a growing army of foundational models seeking to ingest all of it by any means possible.

Source Signatures and Watermarking — Let’s call it the Blade Runner problem: As foundational models proliferate, we need ways to authenticate source identity for both humans and models. Is this a human- or machine-generated artifact I’m looking at?

Audit-trailing and Custody Chaining of Private Data — Data begets new data. Companies and individuals will increasingly want to track how their proprietary or private information is used to create ‘net new data’, especially if/when they withdraw their consent for that data to train foundational models. It’s an absolute hairball of a problem for model creators: How do they certify that a model is gaining no further advantage from a dataset they have been asked to expunge from the nearly limitless sea of data used to create it?”
Tom Chavez, Cofounder & GP, super{set}

“Government stepping in to regulate with little understanding of what to regulate, causing headwinds in future AI development.”
Rayfe Gaspar-Asaoka, Partner, Canaan

“The biggest issue is ingrained bias of the data on which the AI models are being trained. Models are not infallible. So in today’s world of short attention spans, how do we get the consumer of LLM-generated results to pause and reflect on the veracity of outputs?”
Pam Kostka, Operator in Residence, Operator Collective

“The security implications will be with us for a long time. Why? Because the models were trained on public code and examples, which generally show the fastest way to do something but not the secure way. Many examples in documentation and public sites/code flat out say “don’t forget to secure this”. I’m pretty sure generative code models don’t understand that warning, and have no route to do anything with it even if they did.”
Rich Mogull, SVP Cloud Security, FireMon
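
Mogull’s warning maps to a pattern anyone who has read tutorial code will recognize. A hypothetical example (ours, not his): the first function below is the shortcut public sample code usually demonstrates, and therefore what a model trained on that code will happily reproduce; the second is the “don’t forget to secure this” step.

```python
import sqlite3

# What public example code usually shows: building SQL via string
# interpolation. A model trained on such examples reproduces the
# pattern, and with it the SQL-injection vulnerability.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # vulnerable
    return conn.execute(query).fetchall()


# The step the documentation tells you not to forget: a parameterized
# query lets the driver handle escaping, closing the injection hole.
def find_user_secure(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```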

“There is a need for AI safety and guardrails on the outputs of the models to ensure they are accurate, appropriate, on topic and secure.”
Astasia Myers, Partner, Quiet Capital

“I don’t think we’ve truly understood the consequences of having biased data in generative AI models. When we start giving these models decision making abilities based on biased data, there will be consequences that are difficult for us to comprehend given how new this technology is, how rapidly it’s getting adopted, and how easy it can be to trust rather than question the outputs if they seem to make sense.”
Amanda Robson, Partner, Cowboy Ventures

“Continuing from the landmark genAI legal cases unfolding now, I think people will come to realize that a large portion of the data being ingested by these models comes from consumers who have been adding their personal data to the web and not controlling where it goes or how it’s used. As companies like Reddit try to monetize their data heading into AI projects, I suspect individuals will realize they are ultimately the source of value and raw data to create these foundational models, bringing renewed focus back to consumer data, privacy and monetization.”
Riley Rodgers, Principal, Valia Ventures

“When generative AI can now be used to create information that, barring significant fact checking, is false/fake/misleading, and it can do it at a scale 10X, 100X, 1000X more than we have now, what are the consequences from a local, national, international safety perspective? I fear overall “trust” across these spectrums will continue to go down, political instability will continue to arise, and unfortunately conflict will follow.”
Nick Washburn, Sr. Managing Partner, Intel Capital
