
Jensen Huang: The Man Behind NVIDIA's AI Empire

From a $40,000 startup in a Denny's booth to a $3 trillion AI empire. How Jensen Huang bet everything on a future nobody else could see — and won.

By EgoistAI

There is a leather jacket that has become the most recognizable piece of clothing in the technology industry. It is not a hoodie. It is not a turtleneck. It is a black leather jacket, worn by a Taiwanese-American engineer in his early sixties who somehow turned a company that made graphics cards for video games into the most important business on the planet.

Jensen Huang does not look like a man who controls the supply lines of the AI revolution. He looks like a guy who rides a motorcycle to a rock concert. But make no mistake — every major AI lab in the world, every hyperscaler building the future of computing, every government racing to secure AI sovereignty — they all need what Jensen is selling. And he knows it.

This is the story of how a kid shipped off to a Kentucky boarding school built a $3 trillion empire on a bet that nobody else was willing to make.

Denny’s, 1993

Every great company has an origin story. Most of them are embellished. NVIDIA’s is weirdly specific: three engineers sitting in a Denny’s restaurant in East San Jose, sketching out a plan to build chips that could render 3D graphics.

Jensen Huang was 30 years old. Born in Taiwan, he had spent part of his childhood in Thailand before his parents sent him and his brother to the United States at age nine; a relative enrolled them at Oneida Baptist Institute, a boarding school in rural Kentucky, and the family later settled in Oregon. He’d been a dishwasher, a busboy, and a nationally ranked junior table tennis player. He got his electrical engineering degree from Oregon State, then a master’s from Stanford. He worked at AMD and then at LSI Logic — both chip companies, both useful training grounds for what came next.

His co-founders were Chris Malachowsky and Curtis Priem, both veterans of Sun Microsystems. Between the three of them, they had $40,000. They also had a conviction that 3D graphics were about to become a massive market, driven by gaming and multimedia computing.

“I had no different ambition than building a really great company,” Huang has said. “We just believed that accelerated computing was going to be important.”

They were right. But it almost killed them first.

Near Death and the GPU Revolution

NVIDIA’s first chip, the NV1, was a commercial disaster. It rendered quadratic surfaces at a time when the industry was settling on triangle-based rendering, and the market rejected it. The company nearly went bankrupt. Huang later said that NVIDIA came within 30 days of running out of money.

They survived by pivoting hard. The RIVA 128, launched in 1997, was NVIDIA’s first real hit — a chip that could compete with 3Dfx, the dominant graphics company at the time. Two years later, the GeForce 256 arrived, and Huang made a marketing decision that would prove prescient: he called it a “GPU” — a graphics processing unit. It was a new term, invented by NVIDIA to distinguish what they were building from generic graphics chips.

NVIDIA went public in January 1999. The stock opened at $12 per share. If you’d bought $10,000 worth that day and held through every split, you’d be sitting on tens of millions of dollars today. Not bad for a company that had been 30 days from death just a few years earlier.

But the GPU was only the beginning. Jensen Huang was already thinking about something much bigger.

The CUDA Bet

In 2006, NVIDIA did something that confused Wall Street, annoyed its shareholders, and set the stage for everything that followed: it launched CUDA.

CUDA — Compute Unified Device Architecture — was a software platform that allowed developers to use NVIDIA GPUs for general-purpose computing, not just graphics. The idea was simple in hindsight and radical at the time: GPUs have thousands of cores designed for parallel processing. Graphics rendering is just one application of parallel processing. What if you could use those cores for scientific computing, financial modeling, signal processing — anything that required crunching massive amounts of data simultaneously?
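The hindsight-simple idea — thousands of small cores, each handling one element of a large dataset — is easiest to see in code. Here is a minimal, illustrative CUDA sketch (not an official NVIDIA sample) of the classic SAXPY operation, where every GPU thread computes one element of y = a·x + y in parallel:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread handles exactly one array element — the
// "thousands of parallel cores" model CUDA exposed to developers.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // bounds guard
}

int main() {
    const int n = 1 << 20;          // one million elements
    size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);   // unified memory: visible to CPU and GPU
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();        // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);    // 3*1 + 2 = 5
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same pattern — index into the data by thread ID, guard the bounds, launch a grid of blocks — generalizes from graphics to exactly the scientific, financial, and signal-processing workloads described above.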

Wall Street didn’t get it. CUDA cost hundreds of millions of dollars to develop and generated zero direct revenue. Analysts asked why NVIDIA was burning cash on a software platform when it was a hardware company. Some investors bailed.

Huang didn’t care.

“We invested in CUDA for years and years — a billion dollars or more — before it became obvious to others why we were doing it,” Huang told 60 Minutes in 2024. “We were building a computing platform. We just had to wait for the applications to arrive.”

It was arguably the single most important strategic decision in the history of the semiconductor industry. CUDA created an ecosystem — a moat made of software, not silicon. Researchers who learned CUDA wrote code that only ran on NVIDIA GPUs. Universities taught CUDA in their computer science programs. By the time the AI boom arrived, an entire generation of developers was locked into NVIDIA’s platform.

It was a flywheel that took a decade to start spinning. Once it did, nobody could stop it.

Timeline: Key Milestones

1993 — Jensen Huang, Chris Malachowsky, and Curtis Priem found NVIDIA in a Denny’s in San Jose with $40,000.

1999 — NVIDIA goes public. Launches the GeForce 256, the world’s first GPU.

2006 — CUDA launches. Wall Street is confused. Jensen doesn’t care.

2009 — Stanford’s Andrew Ng and his collaborators begin using NVIDIA GPUs to train neural networks, demonstrating speedups of up to roughly 70x over CPUs for deep learning workloads.

2012 — AlexNet, a deep neural network running on two NVIDIA GTX 580 GPUs, wins the ImageNet competition by a staggering margin. The deep learning revolution officially begins.

2016 — Jensen personally delivers the first DGX-1 AI supercomputer to OpenAI. “I’m going to hand-deliver this to you because this is the future,” he tells the team.

2017 — NVIDIA launches the V100 (Volta architecture), the first GPU designed specifically for AI training. Tensor Cores debut.

2020 — The A100 (Ampere) launches. Data center revenue begins its vertical ascent.

2022 — The H100 (Hopper) ships. ChatGPT launches in November, and every company on Earth suddenly needs as many H100s as they can get. Waitlists stretch to months.

2023 — NVIDIA’s market cap crosses $1 trillion in May. Jensen becomes one of the 20 richest people on the planet.

2024 — The B200 (Blackwell) architecture is unveiled at GTC 2024. NVIDIA’s market cap crosses $2 trillion in February and $3 trillion in June; by late in the year it briefly becomes the most valuable company in the world.

2025 — Blackwell Ultra ships, and the Vera Rubin architecture is announced for 2026. Annual data center revenue exceeds $100 billion.

Deep Learning Finds Its Engine

When Alex Krizhevsky used two NVIDIA GTX 580 GPUs to train AlexNet in 2012, it was a watershed moment for both AI research and NVIDIA’s future. The model crushed the ImageNet competition, cutting the top-5 error rate to 15.3 percent — more than ten percentage points better than the runner-up. The key insight: GPUs, with their thousands of parallel cores, were dramatically better at training neural networks than traditional CPUs.

Overnight, every AI research lab started buying NVIDIA GPUs. Not because NVIDIA had the best marketing, but because CUDA had made their hardware the only practical option. The software ecosystem — the libraries, the frameworks, the years of accumulated developer knowledge — created a switching cost that was nearly impossible to overcome.

Jensen saw the wave coming before most people knew there was an ocean.

“Deep learning is the most important computing revolution in our lifetime,” Huang declared at GTC 2015. “This is the big bang of modern AI, and we are at the epicenter.”

It was not hyperbole. It was a mission statement.

The Data Center Juggernaut

NVIDIA’s transformation from a gaming company to an AI infrastructure company is one of the most dramatic pivots in business history — except it wasn’t really a pivot. Jensen had been building toward this for twenty years.

The numbers tell the story. In NVIDIA’s fiscal year 2020, data center revenue was $2.98 billion — respectable, but still smaller than gaming. By fiscal year 2024, data center revenue had exploded to $47.5 billion. By fiscal year 2025, it was well north of $100 billion. Gaming revenue, once the core of the business, became a rounding error.

The A100 was the chip that started the stampede. Launched in 2020, it was the first GPU built from the ground up for AI training and inference at scale. When ChatGPT detonated in late 2022, the demand for A100s — and then their successor, the H100 — became insatiable.

Stories from that era border on the absurd. Tech CEOs were personally calling Jensen Huang to beg for GPU allocations. Mark Zuckerberg reportedly ordered 350,000 H100s. Sovereign wealth funds were buying GPUs as strategic assets. The H100 became the most sought-after piece of hardware since the iPhone, except each one cost $30,000–$40,000 and you needed thousands of them.

“The world’s data centers are being retooled for generative AI,” Huang said at GTC 2024, standing on stage in his leather jacket like a rockstar headlining a tech festival. “We’re at the beginning of a new industrial revolution.”

Why NVIDIA Won

The question every competitor, analyst, and tech journalist has been trying to answer for years: how did NVIDIA get such an overwhelming lead, and why can’t anyone catch up?

The answer is CUDA. It has always been CUDA.

The hardware matters, obviously. NVIDIA’s chips are excellent. The A100, H100, and B200 represent genuine engineering achievements — each generation delivering roughly 2–3x performance improvements for AI workloads. The interconnect technology (NVLink, NVSwitch) that allows thousands of GPUs to work together in massive clusters is best-in-class.

But hardware alone doesn’t explain NVIDIA’s dominance. AMD makes competitive GPUs. Intel has been trying to break into the AI chip market for years. Google has its TPUs. Amazon has Trainium and Inferentia. Microsoft is developing Maia. Dozens of startups — Cerebras, Graphcore, SambaNova, Groq — have raised billions to challenge NVIDIA’s grip.

None of them have made a meaningful dent.

The reason is the ecosystem. CUDA has been in development since 2006. It has millions of developers. Every major AI framework — PyTorch, TensorFlow, JAX — is optimized for NVIDIA first and everything else second. The tooling, the documentation, the community support, the years of accumulated optimization — it’s a moat that money alone cannot cross.

“Our moat is a twenty-year investment in an ecosystem,” Huang has said. “You can’t replicate that in a couple of years. You’d have to replicate all of the world’s developers deciding to adopt your platform.”

He’s right. And he knows it. And the leather jacket swagger comes from knowing it.

The Competition Tries Anyway

That hasn’t stopped people from trying.

AMD, under CEO Lisa Su (who is, in a delightful twist of fate, a relative of Jensen Huang’s — the two are reportedly first cousins once removed), has been making the most credible hardware challenge. The MI300X and its successors are powerful chips that compete with NVIDIA on specs. AMD’s ROCm software platform is meant to be a CUDA alternative. Some cloud providers have started offering AMD-based AI instances at lower prices.

But adoption has been slow. Developers don’t want to rewrite their code. The ecosystem gap is a chasm.

Google’s TPUs (Tensor Processing Units) are the other serious contender, but Google mostly uses them internally. They’re not available for purchase; you rent them through Google Cloud. That limits their reach.

Intel, once the undisputed king of computing, has been the most conspicuous failure. Its Gaudi AI chips have struggled to gain traction, and the company’s foundry ambitions have been plagued by delays and billions in losses. The company that powered the PC revolution is watching the AI revolution from the sidelines.

Custom chips from the hyperscalers — Amazon’s Trainium, Microsoft’s Maia, Meta’s MTIA — represent a longer-term threat. If the biggest buyers of NVIDIA GPUs can build their own, they can reduce their dependence on Jensen’s supply chain. But designing competitive AI chips is brutally hard, and these efforts are years away from matching NVIDIA’s performance at scale.

For now, Jensen’s throne is secure.

The Man in the Jacket

What makes Jensen Huang tick? He’s not the typical Silicon Valley founder-CEO. He’s not a dropout visionary like Jobs or Zuckerberg. He’s not a physicist-turned-entrepreneur like Elon Musk. He’s an engineer — a real one, with decades of chip design experience — who happens to also be an extraordinary businessman and showman.

His GTC keynotes are legendary. He does them alone, no teleprompter, for two to three hours. He demos products, explains architectures, and makes the semiconductor industry feel like a stadium rock tour. The leather jacket is his costume. The GPU roadmap is his setlist.

People who work with him describe a leader of extreme intensity. He reportedly manages 60 direct reports — an absurdly flat org chart designed to keep him close to every part of the business. He has said publicly that NVIDIA’s culture is built around a “mission over comfort” mentality.

“If you want to do extraordinary things, it’s not going to be easy,” Huang told Stanford business school students in 2024. “I wish upon you ample doses of pain and suffering.”

It’s a strange thing to wish on anyone. But coming from a man who nearly watched his company die multiple times and came out the other side controlling the most valuable real estate in computing, it carries weight.

What’s Next

NVIDIA is not standing still. The Blackwell Ultra architecture, unveiled at GTC 2025 and shipping through the year, delivers another generational leap in AI performance. The Vera Rubin architecture, named after the astronomer whose work provided key evidence for dark matter, is on the roadmap for 2026.

But the bigger play is software and systems. NVIDIA is pushing hard into AI inference (running models, not just training them), networking (its acquisition of Mellanox in 2020 for $7 billion looks like highway robbery in retrospect), and full-stack AI platforms. The company now sells entire AI data center designs — not just chips, but the racks, the cooling systems, the networking, and the software to run it all.

Jensen calls this “AI factories.” The framing is deliberate. He wants customers to think of AI infrastructure the way they think of manufacturing plants: essential, capital-intensive, and non-negotiable.

Sovereign AI is another frontier. Countries from France to Japan to Saudi Arabia are building national AI infrastructure, and NVIDIA is the primary supplier. Jensen has become a head-of-state-level figure in international tech diplomacy, meeting with world leaders and positioning NVIDIA as the arms dealer of the AI age.

There are risks, of course. US export controls have restricted NVIDIA’s ability to sell its most powerful chips to China — a massive market. The custom chip efforts from hyperscalers could eventually erode the monopoly. A fundamental architectural shift away from GPU-based computing could render NVIDIA’s moat irrelevant.

But if you’ve learned anything from the last three decades, it’s this: don’t bet against Jensen Huang.

The man in the leather jacket has been playing a game that most people didn’t even realize existed. He bet on accelerated computing before the word “GPU” was in the dictionary. He bet on CUDA when Wall Street wanted him to stick to gaming. He bet on AI when “deep learning” was an academic curiosity.

Every single time, he was right. Every single time, he was early. And every single time, the rest of the industry eventually showed up to find Jensen already there, leather jacket on, arms crossed, grinning.

“The more you suffer, the more you appreciate what you’ve built,” he has said.

NVIDIA has suffered. And what Jensen Huang has built is the most consequential technology company of the AI age. There’s no close second.
