What 50 HBS students, 10 tech executives, and a week in the Valley taught me about where AI is headed.

As part of a course I teach at Harvard Business School on Entrepreneurial Management, I take 50 students on an annual field trip. This is easily the best part of the course — I learn as much as my students. Last quarter, the annual field trip took us to Silicon Valley to study specific disruptive effects of AI. The guest speakers included: 

  1. Jay Puri (EVP Global Sales of NVIDIA)
  2. Tekedra Mawakana (CEO of Waymo)
  3. Dmitry Shevelenko (Chief Business Officer at Perplexity)
  4. Ryan Roslansky (CEO of LinkedIn) and Dan Shapero (COO of LinkedIn)
  5. Yamini Rangan (CEO of HubSpot)
  6. James Dyett (Head of Enterprise Sales at OpenAI)
  7. Sabrina Farmer (CTO at GitLab)
  8. Scott Belsky (fmr. CSO at Adobe)
  9. Yoav Shapira (Director of Engineering at Meta)

The students spent the four months leading up to the trip in research teams studying incumbent and AI-native strategies across B2B and B2C categories.

Then, during closed-door sessions, they presented their findings to the executives we met in each category. We spent bus rides, coffees, and hikes reflecting on our takeaways from each session and trying to distill overarching themes and guiding principles from what we’d learned. 

Here are a few narratives that we took away (ironically not written with AI ;)

Tomorrow’s Organizations Will Run on Sequences of Highly Specialized AI Agents Coordinated by Cross-Functional Operations Executives

As we transition from the co-pilot AI era to the agentic AI era, many people are increasingly convinced that we are moving past the "hype" peak of the Gartner Hype Cycle and into a phase of sustainable disruption. Our guest speakers described a future organizational structure built on a series of highly specialized AI agents, each responsible for a specific task and passing its output to the next agent in the workflow. 

For instance, in a go-to-market prospecting sequence, one agent might calculate the current Ideal Customer Profile (ICP) based on metrics such as customer segment retention, usage, customer acquisition cost (CAC), and demand generation. The next agent would identify the optimal 50 unpenetrated accounts that fit the ICP. Following that, another agent would pinpoint five employees within the decision-making unit at each account. Next, an agent would create a 20-step personalized outreach sequence for each contact, while another agent would execute each step of the sequence through the appropriate communication medium, and so on. 

Humans will oversee the overarching strategy, train off-the-shelf AI agents on the specific business context, and handle sub-tasks that the agents are currently unable to manage. We are seeing early evidence of this vision coming to life in rapidly scaling vertical AI-native startups, such as Harvey in the legal sector.
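To make the hand-off structure concrete, here is a minimal sketch of the prospecting sequence described above, with each "agent" modeled as a plain Python function and a thin coordinator chaining their outputs. Every name, data field, and scoring rule here is a hypothetical placeholder for illustration, not any vendor's actual API; in practice each step would wrap an LLM or agent-framework call trained on the company's own context.

```python
# Hypothetical sketch: a prospecting pipeline of specialized "agents",
# where each agent's output becomes the next agent's input.

def icp_agent(accounts):
    # Agent 1: derive the Ideal Customer Profile from retention metrics.
    # (A real agent would also weigh usage, CAC, and demand generation.)
    segments = {}
    for a in accounts:
        segments.setdefault(a["segment"], []).append(a["retention"])
    return max(segments, key=lambda s: sum(segments[s]) / len(segments[s]))

def account_agent(accounts, icp, limit=50):
    # Agent 2: pick the top unpenetrated accounts that fit the ICP.
    fits = [a for a in accounts if a["segment"] == icp and not a["is_customer"]]
    return sorted(fits, key=lambda a: a["fit_score"], reverse=True)[:limit]

def contact_agent(account):
    # Agent 3: identify contacts in the decision-making unit (stubbed).
    return [f"{role}@{account['name']}.example" for role in ("ceo", "cfo")]

def outreach_agent(contact, steps=3):
    # Agent 4: draft a personalized outreach sequence (stubbed).
    return [f"step {i + 1}: message to {contact}" for i in range(steps)]

def run_pipeline(accounts):
    # Coordinator (the human-designed strategy layer): chain the agents.
    icp = icp_agent(accounts)
    plan = {}
    for acct in account_agent(accounts, icp):
        for contact in contact_agent(acct):
            plan[contact] = outreach_agent(contact)
    return icp, plan
```

The point of the sketch is the modularity the speakers emphasized: because each agent has one narrow responsibility and a simple input/output contract, any single stage can be swapped for a better model without touching the rest of the sequence.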

As we played with this concept, a few follow-up hypotheses surfaced. 

  1. Future C-suite leaders resemble today’s Ops leaders: Future functional leaders may look less like today’s C-level executives, who are arguably highly skilled at people management (hiring, managing, holding accountable, etc.), and more like today’s Operations leaders, who are skilled at accelerating business outcomes with process, data, and technology. 

  2. AI agents become “hyper-specialized”...: During the Industrial Revolution, workers became highly specialized to improve organizational productivity. However, this specialization has drawbacks within a human-centric organizational structure, including coordination costs, employee motivation, and the potential for local maxima, among other factors. Many of these risks can be minimized when AI agents perform frontline tasks, suggesting these agents will be deep experts in one specific task rather than broadly skilled across many.

  3. …while human executives become cross-functional generalists: As functional boundaries blur, a generalist mindset becomes essential. An AI agent organization reduces the negative impacts of functional silos. AI agents can facilitate cross-department data exchange and collaboration between customer support and product, sales and marketing, engineering and finance. Executives with broad business acumen and cross-functional abilities are better positioned to exploit these opportunities and run these organizations. 

  4. Organizations benefit by being modular, agile, and interoperable. AI innovation progresses at lightning speed. Best-in-class foundational models and AI agents seem to evolve nearly every month. Organizations that design their tech stack to quickly replace outdated agents or models with superior versions as technology advances will benefit significantly. Speed of adaptation becomes a competitive advantage.

  5. The magnitude of this shift may present an innovator’s dilemma for native-AI startups to take advantage of. Many founders are primarily focused on how the AI era will transform the products in their industry. However, few are considering how AI will impact their organization's internal operations. Startups can implement new processes much more quickly than larger companies. In the future, we may look back and conclude that more disruption was driven by startups rethinking their organizational processes using AI, rather than just their product offerings. While both aspects are critical, today's founders seem far more focused on product than on process.

  6. “Revenue per employee” may become the critical north star for all businesses. This shift reminds me of the early days of the Cloud era when it took years for the ecosystem to elevate customer retention from a secondary metric to a company-wide guiding principle. In the on-premise era, customer retention was not mission-critical, as customers were locked into physical server implementations and paid upfront for perpetual licenses. However, everything changed with the move to the Cloud, where customer retention became a crucial north star metric. In the post-AI era, organizations that can rethink their strategies and quickly implement organizational efficiencies—potentially measured by revenue per employee—may emerge as the ultimate winners.

Outcome-Based Pricing Represents a Major Disruption Opportunity for Native AI Startups. However, It Comes With Operational Risk.

The leaders we visited seemed to have differing opinions on optimal pricing models, which became one of the most discussed topics during our trip. Will the market transition to outcome-based pricing? Could pricing become an innovator's dilemma that ultimately leads to incumbents losing their market share, similar to what we observed during the shift from on-premise solutions (perpetual licenses) to cloud services (subscriptions)? 

I’ll summarize the discussions around consumption-based versus outcome-based pricing models.

Consumption-Based Pricing

I see an analogy between the transition from perpetual to subscription licenses during the Cloud era and the possible transition from subscription to consumption-based pricing in the AI era. 

At the outset of the Cloud era, SaaS-native startups almost had to adopt subscription pricing to align their revenue with their costs. In the early days of the Cloud, storage was expensive, and unlike legacy on-premise providers, Cloud-native startups effectively absorbed storage costs for their customers. Subscription pricing let Cloud providers align the price they charged customers with their cost to deliver the service, of which monthly storage was a large part. Over the next decade, storage costs fell dramatically, eliminating the original rationale for the model. By then, however, customers had come to favor subscription pricing, and it was difficult for legacy providers to adopt it without major disruption to their business models. Cloud-native companies embraced the approach. 

Similarly, native-AI startups may be motivated to adopt consumption-based pricing to align their revenue with their costs. Compute represents a significant expense for these companies. However, our student research found that compute costs are declining ten times faster than storage costs did in the early days of SaaS. The compute cost dependency may not persist for long. 

Does consumption-based pricing present the same innovator's dilemma opportunity for these startups that subscription pricing did during the Cloud transition? We did not reach conviction on this point.

Outcome-Based Pricing

Outcome-based pricing is another pricing innovation embraced by several native AI startups. This model aligns nicely with the transition from the co-pilot era, where we viewed AI as an iterative supplement to the existing worker and workflow, to the agentic era, where we view AI as a full-cycle owner of tasks and their corresponding outputs. 

Outcome-based pricing could have a disruptive impact similar to the one subscription pricing had in the early days of cloud computing. When structured properly, it can be highly beneficial for customers because it guarantees a fixed return on investment (ROI).

For instance, if a company spends an average of $50 to process each support ticket while a native AI agent only charges $10 to fully process a ticket, the pricing model makes this purchase an easy decision for a CFO. If the pricing model gains traction, established companies would struggle to adopt it without significantly disrupting their current business models.

That said, outcome-based pricing carries certain risks, even for businesses that are adept in AI. This pricing strategy creates a heavy reliance on the customer's performance from one quarter to the next, which can lead to significant revenue volatility for the vendor, often beyond their control. Additionally, challenges related to forecasting, attribution, and implementation add to the risks associated with this model.
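The support-ticket example and the volatility concern can both be seen in a toy calculation. The sketch below uses the $50 internal cost and $10 per-ticket fee from the example above; the quarterly ticket volumes in the usage note are invented numbers purely for illustration.

```python
# Toy illustration of outcome-based pricing economics.
# Hypothetical figures: $50 average in-house cost per support ticket,
# $10 vendor fee per ticket fully resolved by the AI agent.

INTERNAL_COST = 50.0      # customer's average cost to process one ticket in-house
PRICE_PER_TICKET = 10.0   # vendor's outcome-based fee per resolved ticket

def customer_savings(tickets_resolved):
    # The buyer's guaranteed ROI: $40 saved on every resolved ticket.
    return tickets_resolved * (INTERNAL_COST - PRICE_PER_TICKET)

def vendor_revenue(quarterly_ticket_volumes):
    # The vendor's side of the trade: revenue tracks the customer's
    # quarter-to-quarter ticket volume, which the vendor does not control.
    return [volume * PRICE_PER_TICKET for volume in quarterly_ticket_volumes]
```

For example, `customer_savings(1000)` returns `40000.0`, an easy CFO decision; but `vendor_revenue([1000, 400, 1200, 700])` returns `[10000.0, 4000.0, 12000.0, 7000.0]`, showing how the vendor's revenue swings with the customer's ticket volume rather than with anything the vendor does.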

Similar challenges emerged when cloud-native startups adopted subscription pricing two decades ago. They addressed these challenges by utilizing up-front contracts and other strategies. Perhaps native AI startups can also reduce operational risks through pricing innovations, allowing them to capitalize on this opportunity created by the innovator's dilemma.

Early “Moat” Theories Are Challenged. New Hypotheses Surface.

The development of sustainable competitive advantages in the post-AI era is still a debated topic with little consensus. In the Fall, one of the students' assignments was to assess opportunities for sustainable advantages among native AI startups. Our work primarily concentrated on the B2B and B2C application layers, rather than on the infrastructure layer. 


Moat evaluations were conducted through three lenses:

  1. Foundational Models moving up into the application layer
  2. Current application incumbents expanding horizontally by adding the AI capability into their platform
  3. Internal competition from other native AI startups

Many of the early hypotheses about sustainable moat development are not unfolding as predicted.  

| Moat and Description | Arguments For | Arguments Against |
| --- | --- | --- |
| Access to Proprietary Data: having exclusive access to data of higher quality and/or quantity that can be used to develop AI. | Enables the development and training of superior algorithms that others are unable to copy. | 1) Synthetic data allows companies with less data to engineer their own. 2) Diminishing returns to more data and better algorithm development. |
| Economies of Scale: in an AI context, companies with more compute and data can develop models more efficiently and at lower cost. | Compute is a scarce resource. Companies with large funding and scale may get unique access and can negotiate favorable pricing terms. | 1) The cost of compute is decreasing faster than the cost of storage did in the Cloud era. 2) Companies are delivering competitive, if not better, models without this access. |
| System of Record: storing critical customer data needed for day-to-day operations, creating switching costs as customers become dependent on it. | Once implemented, the software is difficult to “rip and replace”, leading to strong customer retention and a barrier for new entrants to capture market share via displacement. | While this moat can drive strong retention among current customers, it can hurt new customer acquisition as word spreads about system lock-in. For this reason, it’s my least favorite type of moat. |
| Controlling the User Interface: owning the user interface through which the user accesses the product capability. | Opportunity to develop user habits and comfort with a proprietary interface. | AI application UIs are currently simple (chat, voice interfaces, etc.) and are expected to get simpler. |
| Pricing Model Innovation: new payment structures, like outcome-based pricing, where companies charge based on outcomes rather than flat subscription or license fees. | Creates an innovator’s dilemma through business model re-invention. | 1) Less protection from other native AI startups. 2) Implementation challenges. |
| Successful Enterprise Case Studies: a track record of delivering on the success outcomes promised by the product in a complex, enterprise setting. This moat resembles the “brand” or “reputational” category in early barrier-to-entry theory. | 1) Large enterprises in conservative industries (healthcare, legal, finance) require significant effort to adopt new AI technology and may opt for a proven brand in their sector over a lower price. 2) Early success stories in the AI application layer support this hypothesis. | Maturity of the tech and buyer comfort over time may weaken the advantage. |
| Workflow Re-invention: re-defining and streamlining a legacy workflow by leveraging advancements enabled by the new tech stack. | Potential to make the technical architecture of incumbents obsolete. | 1) Uncertainty around feasibility and user response to the new workflow. 2) Less defensible against other native AI startups. |

AI Adoption Barriers Shift from Legal/IT to the End User

Just two years ago, in 2023, finding successful implementations of AI was a challenge. Most of the activity took place in innovation labs, and there were significant concerns about AI hallucinations. 

By 2024, comfort with the technology had increased. However, rolling out AI into production was hindered by legal and security issues, as these departments lacked established AI policies and were concerned about protecting their intellectual property. 

As we move through 2025, we are seeing greater success in integrating AI into the core operations of businesses. Nonetheless, a new barrier to adoption seems to be emerging, driven by end users and their department leaders. Front-line workers involved in AI adoption are anxious that they may be training a tool that could eventually replace them. Similarly, department heads worry that endorsing such a tool could shrink their workforce, undermine their political authority within the organization, and diminish the importance of their skill sets in hiring and managing personnel. Handling this concern may be an important ingredient of category leadership in 2025. 

As we move deeper into the AI-driven future, organizations must evolve to harness the full potential of specialized AI agents — not merely as products to sell, but as drivers of operational agility and innovative pricing models. Nimble startups have a unique opportunity to rethink not only their products but their entire internal operations, positioning themselves to disrupt incumbents. With rapid advancements come new challenges, from shifting adoption barriers to redefining sustainable moats. The real winners will be those who not only embrace AI but do so in a way that aligns technology with business strategy while optimizing organizational efficiency in the process.