The AI Arsenal That Could Stop World War III | Palmer Luckey | TED

Original Video Content
  • The scenario of a surprise invasion of Taiwan by China highlights vulnerabilities in U.S. military capacity.
  • Taiwan's semiconductor industry is essential to the global economy, and its collapse could lead to severe economic repercussions.
  • The rise of AI in defense technology is crucial for maintaining an edge against adversaries like China.
  • Building autonomous systems can foster peace by deterring potential conflicts.
  • The past and future of warfare demand innovative technology to protect national security.

I want you to imagine something.

In the early hours of a massive surprise invasion of Taiwan, China unleashes its full arsenal. Ballistic missiles rain down on key military installations, neutralizing air bases and command centers. Before Taiwan can fire a single shot, the People's Liberation Army Navy moves in with overwhelming force, deploying amphibious assault ships and aircraft carriers. While cyberattacks cripple Taiwan's infrastructure and prevent emergency response, the Chinese Rocket Force's long-range missiles shred through our defenses. Ships, command and control nodes, and critical assets are destroyed before they can even engage.

The United States attempts to respond, but it quickly becomes clear we don’t have enough. Not enough weapons, not enough platforms to carry those weapons. American warships, too slow and too few, sink to the bottom of the Pacific under anti-ship missile swarms. Our fighter jets, piloted by brave but outnumbered human pilots, are shot down one by one. The United States exhausts its shallow arsenal of precision munitions in a mere eight days. Taiwan falls within weeks. And the world wakes up to a new reality—one where the world's dominant power is no longer a democracy.

This is the war US military analysts fear most. Not just because of outdated technology or slow decision making, but because our lack of capacity, our sheer shortage of tools and platforms, means we can't even get into the fight. When China invades Taiwan, the consequences will be global. Taiwan is the undisputed epicenter of the world's chip supply, producing over 90% of the most advanced semiconductors. The high-performance chips that power today's AI, GPUs, and robotics are also the chips that power your phones, computers, cars, and medical devices. If those factories are seized or destroyed, the global economy will crash overnight—tens of trillions of dollars in losses, supply chains in chaos, and the worst economic depression in a century.

And the danger is more than economic; it's ideological. China is an autocracy, and a world where China dictates the terms of international order is a world where individual freedoms erode, authoritarianism spreads, and smaller nations are forced into submission. Before anyone shrugs this off as the plot of Michael Bay's latest movie, we've seen this film before. Just ask Ukraine.

At this point, you might be wondering why a guy in a Hawaiian shirt and flip-flops is up here talking about a potential world war. My name is Palmer Luckey. I'm an inventor and an entrepreneur. At 19 years old, I founded Oculus VR while living in a camper trailer and brought virtual reality to the masses. Years later, I was fired from Facebook after donating $9,000 to the wrong political candidate. That left me with a choice: either fade into irrelevance and obscurity or build something that actually mattered.

I wanted to solve a problem that was being ignored, one that would shape the future of this country and the world. Despite the incredible technological progress happening all around us, our defense sector was stuck in the past. The biggest defense contractors had stopped innovating as fast as they had before, prioritizing shareholder dividends over advanced capability and prioritizing bureaucracy over breakthroughs. Silicon Valley, home to many of our top engineers and scientists, had turned its back on defense and the military at large, betting on China as the only economy or government worth pandering to.

Tech companies that once partnered with the military had decided that national security was someone else's problem. The result? Your Tesla has better AI than any US aircraft. Your Roomba has better autonomy than most of the Pentagon's weapon systems. And your Snapchat filters rely on better computer vision than our most advanced military sensors.

Now, I knew that if both the smartest minds in technology and the biggest players in defense deprioritized innovation, the United States would forever lose its ability to protect our way of life. With so few willing to solve that problem, I decided to give it my best shot. So, I founded a company called Anduril. Not a defense contractor, but a defense product company. We spend our own money building defense products that work, rather than asking taxpayers to foot the bill.

The result is that we move much faster and at lower cost than most traditional primes. Our first pitch deck to our investors, who are very aligned with us, said it plainly: We will save taxpayers hundreds of billions of dollars a year by making tens of billions of dollars a year.

While we make dozens of different hardware products, our core system is a piece of software, an AI platform called Lattice that lets us deploy millions of weapons without risking millions of lives. It also allows us to make updates to those weapons at the speed of code, ensuring we always stay one step ahead of emerging threats.

Another big difference is that we design hardware for mass production using existing infrastructure. Unlike traditional contractors, we build, test, and deploy our products in months, not years. In less than eight years, we have built autonomous fighter jets for the United States Air Force, school bus-sized autonomous submarines for the Australian Navy, and augmented reality headsets that give every one of our warfighters superpowers, just to name a few.

We also built counter-drone technology like Roadrunner, a twin turbojet-powered reusable counter-drone interceptor that we took from napkin sketch to real-world combat validated capability in less than 24 months. And we did it using our own money.

Now, coming from a guy who builds weapons for a living, what I’m about to say next might sound counterintuitive to you. At our core, we’re about fostering peace. We deter conflict by making sure our adversaries know they can’t compete. Putin invaded Ukraine because he believed that he could win.

Countries only go to war when they disagree over who the victor will be. That's what deterrence is all about, not saber rattling. It's about making aggression so costly that adversaries don't try in the first place.

So how do we do that? For centuries, military power was derived from size. More troops, more tanks, more firepower. But the defense industry has spent far too long handcrafting exquisite, almost impossible to build weapons. Meanwhile, China has studied how we fight and invested in the technologies and the mass that counter our specific strategies. Today, China has the world’s largest Navy, with 232 times the shipbuilding capacity of the United States, the world’s largest coast guard, the world’s largest standing ground force, and the world’s largest missile arsenal.

With production capacity growing every single day, we’ll never meet China’s numerical advantage through traditional means. Nor should we try. What we need is fundamentally different capabilities. We need autonomous systems that can augment our existing manned fleets. We need intelligent platforms that can operate in contested environments where human-piloted systems simply cannot. We need weapons that can be produced at scale, deployed rapidly, and updated continuously. Mass production matters.

In a conflict where our capacity is our greatest vulnerability, we need a production model that mirrors the best of our commercial sector—fast, scalable, and resilient. We know how to win like this. We rallied our industrial base during World War II to mass produce weapons at an unprecedented scale. It’s how we won. The Ford Motor Company, for example, produced one B-24 bomber every 63 minutes.

To actually achieve the benefits of these mass-produced systems, we need them to be smarter. This is where AI comes in. AI is the only possible way we can keep up with China’s numerical advantage. We don’t want to throw millions of people into the fight like they do. We can’t do it, and we shouldn’t do it. AI software allows us to build a different kind of force—one that isn’t limited by cost, complexity, population, or manpower, but instead by adaptability, scale, and speed of manufacturing.

Now, the ethical implications of AI in warfare are serious. But here's the truth: if the United States doesn't lead in this space, authoritarian regimes will, and they won’t care about our ethical norms. AI enhances decision making, increases precision, reduces collateral damage, and can hopefully eliminate some conflicts altogether.

The good news is that we and our allies have the technology, human capital, and expertise to mass produce these new kinds of autonomous systems and launch a new golden age of defense production.

With all that information in mind, let’s go back to Taiwan.

Imagine a different scenario. The attack might begin the same way. Chinese missiles streak towards Taiwan, but this time the response is instant. A fleet of AI-driven autonomous drones, already stationed in the region by allies, launches within seconds. Swarming together in coordinated attacks, the drones intercept incoming Chinese bombers and cruise missiles before they ever reach Taiwan.

In the Pacific, a distributed force of unmanned submarines, stealthy drone warships, and autonomous aircraft that work alongside manned systems strike from unpredictable locations. Our AI-piloted fighter swarms engage Chinese aircraft in dogfights, responding faster than any human possibly could.

On the ground, robotic sentries and AI-assisted long-range fires halt China’s amphibious assault before a single Chinese boot reaches Taiwan. By deploying autonomous systems at scale, we prove to our adversaries that we have the capacity to win. That is how we reclaim our deterrence.

To do so, we just have to stand with our allies across the world, united by the shared values and common resolve that we've held for the better part of a century. Our defenders, the men and women who volunteer to risk their lives, deserve technology that makes them stronger, faster, and safer. Anything less is a betrayal because that technology is available today.

This is how we prevent a repeat of Pearl Harbor. We could be the second greatest generation by rethinking warfare altogether. Thank you.

Thank you, Palmer. You painted a very vivid picture of the future of warfare and deterrence.

I want to ask you a couple of questions. I think one that’s on a lot of people’s minds is autonomy in the military kill chain. With the rise of AI, are we contending with fundamentally a new set of questions here? Because some advocate that we shouldn’t build autonomous systems or killer robots at all. What’s your take on that?

I love killer robots. The thing that people have to remember is that this idea of humans building tools that divorce the design of the tool from when the decision is made to enact violence is not something new. We’ve been doing it for thousands of years.

Pit traps, spike traps, a huge variety of weapons—even into the modern era. Think about anti-ship mines. Even purely defensive tools are fundamentally autonomous. Whether or not to use AI is the only genuinely modern part of the problem, and treating it as brand new is a trap that people who haven't examined the history usually fall into.

And there are people who say things that sound pretty good. Well, you should never allow a robot to pull the trigger. You should never allow AI to decide who lives and who dies. I look at it in a different way. I think that the ethics of warfare are so fraught and the decision so difficult that to artificially box yourself in and refuse to use sets of technology that could lead to better results is an abdication of responsibility.

There’s no moral high ground in saying I refuse to use AI because I don’t want mines to be able to tell the difference between a school bus full of children and Russian armor. There are a thousand problems like this. The right way to look at this is problem by problem.

Is this ethical? Are people taking responsibility for this use of force? The wrong way is to write off an entire category of technology, tie our hands behind our backs, and hope we can still win. I can’t abide by that.

You’re right. If the information is available to you, why not create systems that actually take advantage of it? If you blind yourself to it, the result could be far more catastrophic.

Precisely. And people will say things, usually non-technical people, like why not just make it all remote control? They don’t recognize that the scale of the conflicts we’re talking about doesn’t lend itself to a one-to-one ratio of people to systems. That’s to say nothing of the fact that with a remotely piloted system, all an adversary has to do is break the remote part, and everything falls apart.

There’s no moral high ground either in saying all you have to do is figure out how to jam us and you win.

It sounds like a lot of defense systems that exist today kind of have this type of autonomous mode.

And I mean, this is another point. It’s usually not one that I make on a stage, but I get confronted by journalists who say, “Oh well, you know, we shouldn’t open Pandora's box.” My point to them is the Pandora's box was opened a long time ago with anti-radiation missiles that seek out surface-to-air missile launchers. We’ve been using them since the pre-Vietnam era.

Our destroyers' Aegis systems are capable of locking on and firing on targets totally autonomously. Almost all of our ships are protected by close-in weapon systems that shoot down incoming mortars, incoming missiles, and incoming drones. I mean, we've been in this world of systems that enact our will autonomously for decades.

And so the point I would make to people is you’re not asking to not open Pandora’s box; you’re asking to shove it back in and close it again. And the whole point of the allegory is that such cannot be done.

I gotta ask you one more question. Going back to your roots, many folks were obviously introduced to VR because of Oculus. And in a twist of fate, Anduril recently took over the IVAS program, essentially building AR/VR headsets for the US Army. What’s your vision for the program and what does that feel like?

We need all of our robots and all of our people to be getting the right information at the right time. That means they need a common view of the battlefield. The way that you can present that view to a human is very different from the way that you present it to a robot.

Robots are great. They have very, very high I/O and very low error rates in connectivity. People, we have to try to figure out how to strap stuff onto our appendages like our hands and our eyes and our ears and present information in a way that allows us to collaboratively work with these types of tools.

So superhuman vision augmentation systems like better night vision, thermal vision, ultraviolet vision, hyperspectral vision—those are the things that people focus on when they look at IVAS. But there's a whole other layer: we need to be able to see the world the same way that robots do if we’re going to work closely alongside them on such high-stakes problems.

I love it—human plus machine intelligence. Palmer Luckey, everyone.