
Mira Murati's Thinking Machines Lab: Inside the Stealth AI Startup of 2026

A year after leaving OpenAI as CTO, Mira Murati's Thinking Machines Lab has become the most scrutinized new AI lab in the field. Here's what's publicly known, who's there, and what their bet appears to be.

By EgoistAI

The Most-Watched New Lab Since Anthropic

When Mira Murati left OpenAI in September 2024 after six years as CTO and a brief turn as interim CEO during the November 2023 board crisis, the speculation about her next move began immediately. In February 2025 she confirmed it: a new AI lab called Thinking Machines Lab, based in San Francisco, with many of her senior OpenAI colleagues joining her.

A year later, Thinking Machines has become the most scrutinized new AI lab since Anthropic’s founding. It raised what Reuters reported as one of the largest seed rounds in tech history — figures above $1B have been cited in multiple publications — at a valuation approaching $10B pre-launch. It has recruited an unusually senior team. And it has said very little publicly about what it’s actually building.

This profile collects what is publicly documented as of April 2026.


Murati’s Path to the Lab

Mira Murati joined OpenAI in 2018 and rose to become CTO in 2022, overseeing the ChatGPT launch, the DALL-E line, the GPT-4 rollout, and the early research behind Sora. During the November 2023 board crisis she was briefly appointed interim CEO before Sam Altman’s return. That episode, later reporting suggested, contributed to a broader leadership reshuffling at OpenAI that eventually saw Murati, Ilya Sutskever, Jan Leike, and several other senior researchers depart within a year.

Murati’s public statements throughout her OpenAI tenure emphasized deployment safety, product reliability, and the importance of building AI that is “not just powerful but predictable.” That framing — product engineering discipline married to capability research — became the early signal for what Thinking Machines would focus on.


The Team

Thinking Machines Lab’s founding team, publicly disclosed through the company’s launch announcement and follow-up reporting, is unusually senior. Co-founders and early leaders named in public sources include:

  • Mira Murati — Founder and CEO
  • John Schulman — Co-founder of OpenAI, lead researcher on RLHF and ChatGPT alignment, previously at Anthropic after leaving OpenAI
  • Barret Zoph — Former VP of Research at OpenAI
  • Jonathan Lachman — Former head of special projects at OpenAI
  • Lilian Weng — Former head of safety research at OpenAI

Additional researchers from Google DeepMind, Anthropic, and Character AI have been reported as joining throughout 2025, per The Information and Bloomberg. The team size, as of the company’s last public statement, was around 30 people — unusually small for the capital raised, and consistent with the lab’s stated preference for talent density over headcount.

The concentration of former OpenAI leadership — Murati, Schulman, Zoph, Weng — led some observers to describe Thinking Machines as “OpenAI’s shadow cabinet.” That framing is incomplete but captures the pattern.


What Thinking Machines Has Said Publicly

The lab’s public communication has been deliberately minimal. Its original blog post (thinkingmachines.ai) articulated a few stated principles:

  1. Research on human-AI collaboration, not autonomous agents operating without human oversight
  2. Scientific openness, including publishing research and sharing model details more broadly than is now standard at frontier labs
  3. Reliability and predictability as first-class engineering goals, not afterthoughts
  4. Customizable and steerable models, emphasizing that users should be able to align model behavior to their needs

Those principles read as a deliberate contrast with both OpenAI’s perceived closedness and Anthropic’s strict constitutional framework. The emphasis on “collaboration” and “steerability” suggests Thinking Machines may target model architectures that are more transparent in their decision-making and more responsive to user preferences than the current defaults.

Beyond the launch blog post, the company has made few public statements. It has not released a model as of April 2026, and it has not published research papers under the Thinking Machines name. That silence is typical of a lab’s first 12-18 months, when foundation model training consumes enormous compute and engineering effort before producing anything visible.


The Funding Story

According to multiple reports (Reuters, The Information, Bloomberg), Thinking Machines raised a seed round of more than $1 billion in mid-2025 at a valuation reportedly near $10 billion. Andreessen Horowitz led with participation from other large venture funds. The round was extraordinary on several dimensions:

  • It was effectively a seed round for a company with no product and no public research.
  • The valuation placed Thinking Machines above most mid-stage AI companies before it had shipped anything.
  • The check sizes were among the largest ever written for a frontier AI lab, reflecting both the compute costs of training frontier models and the premium investors are willing to pay for ex-OpenAI leadership.

The size of the round created skepticism in some corners of the industry — “investors are buying the roster, not the product” was a common refrain — but it also gave Thinking Machines roughly 3-5 years of runway to train frontier models and develop differentiated research without needing to ship a commercial product immediately.


What They Might Be Building

Since nothing has been officially announced, any analysis of Thinking Machines’ direction is speculation informed by public signals. Several inferences can be made:

Training a foundation model. A lab of this size, with this funding, with researchers of this background, is almost certainly training at least one frontier-capable foundation model. The team has the expertise and the capital. The compute commitments required for such training typically become visible through public cloud spend or GPU-order reporting, and Thinking Machines has been rumored in trade press to have secured significant Nvidia H200/B200 capacity.

Focusing on steerability and interpretability. Murati’s stated principles, Schulman’s background in RLHF, and Weng’s safety experience all point toward a research focus on models that are more controllable and more transparent than the current defaults. This would differentiate Thinking Machines from the raw-capability chase at OpenAI and Google.

Selling to enterprises, not consumers. The emphasis on “customizable” and “reliable” models is more compatible with enterprise deployment than consumer products. Thinking Machines has no known consumer app strategy and no B2C brand efforts.

Possibly research-first commercialization. Some observers have compared Thinking Machines’ posture to early Anthropic: release research papers and high-quality models, publish details, charge API rates for access. Whether they will ultimately take a similar path is an open question.


What Murati Herself Has Said Since the Launch

Murati has done very few interviews since founding Thinking Machines. In the small number of public appearances, she has emphasized themes that are consistent with her OpenAI persona:

  • AI needs to be built with product discipline, not just research ambition
  • Reliability matters as much as raw capability
  • Humans should be in the loop, especially for consequential decisions
  • The AI industry has under-invested in the application layer and over-invested in scale

None of this is novel framing, but coming from the former CTO of OpenAI and backed by $1B+ in capital, it represents a bet on a specific version of how AI should evolve.


Why This Matters

For the broader AI field, Thinking Machines matters for two reasons beyond its own product roadmap:

It’s a credibility test for the “second frontier lab” model. Anthropic succeeded by staffing up with ex-OpenAI researchers who disagreed with OpenAI’s direction. Thinking Machines is trying to do the same thing a second time. If it succeeds, the pattern becomes normal. If it fails, it suggests the first-mover advantage in AI labs is bigger than investors currently believe.

It’s a signal about what direction the industry’s most thoughtful people are moving. When Murati, Schulman, Weng, and Zoph all agree on a single new lab’s direction, it reflects something about where they think the field is headed. The stated focus on steerability, reliability, and collaboration is worth taking seriously as an implicit bet against the pure scale-and-capability race.

By the end of 2026, Thinking Machines will likely have released something public — a research paper, a model, a product, or some combination. Until then, it remains one of the most consequential silent companies in AI.


Where to Follow

  • thinkingmachines.ai — occasional official updates
  • Murati’s rare public appearances (typically industry conferences and university talks)
  • Reuters, The Information, and Bloomberg for hiring and funding leaks
  • arXiv listings filtered by author affiliation (a leading indicator for any lab about to publish)
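For readers who want to automate the last of those signals, a minimal sketch follows. It polls arXiv’s public query API (export.arxiv.org) for recent papers by a named researcher. Note the assumptions: the API does not support searching by affiliation directly, so tracking known researchers by name is the practical proxy, and the author name and result count below are illustrative.

```python
# Sketch: watch the public arXiv API for new papers by a given researcher.
# The author name used here is illustrative; swap in whoever you track.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed

def build_query_url(author: str, max_results: int = 10) -> str:
    """Build an arXiv API query URL for papers by `author`, newest first."""
    params = {
        "search_query": f'au:"{author}"',
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urllib.parse.urlencode(params)}"

def latest_titles(author: str) -> list[str]:
    """Fetch the Atom feed and return the titles of the most recent papers."""
    with urllib.request.urlopen(build_query_url(author)) as resp:
        feed = ET.fromstring(resp.read())
    return [
        entry.findtext(f"{ATOM}title", "").strip()
        for entry in feed.iter(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for title in latest_titles("John Schulman"):
        print(title)
```

Run on a schedule (a daily cron job is plenty), this surfaces new affiliations and papers well before press coverage does.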

A year in, Thinking Machines is still mostly a hypothesis. But it’s the hypothesis with the most accomplished researchers and the most capital in 2026, which makes it worth watching closely regardless of how it resolves.
