Elon Musk: How to Build the Future

Key points:

  • Elon Musk discusses the significance of working on useful problems for society.
  • He identifies AI and genetics as the most pressing issues impacting humanity's future.
  • Musk emphasizes the importance of democratizing AI technology to mitigate risks.
  • He shares insights on his journey and decision-making process in pursuing innovative projects.
  • Musk highlights his focus on engineering and design at SpaceX and Tesla.

Today we have Elon Musk. Elon, thank you for joining us.
Yeah, thanks for having me.

So we want to spend the time today talking about your view of the future and what people should work on.
So to start off, could you tell us: you famously said that when you were younger there were five problems you thought were most important for you to work on. If you were 22 today, what would be the five problems you would think about working on?

Well, first of all, I think if somebody is doing something that is useful to the rest of society, that's a good thing. It doesn't have to change the world. You know, if you're doing something that has high value to people, and frankly even if it's just a little game or some improvement in photo sharing, if it has a small amount of good for a large number of people, I think that's fine. Stuff doesn't need to change the world just to be good.

But in terms of things that I think are most likely to affect the future of humanity, I think AI is probably the single biggest item in the near term that's likely to affect humanity. So it's very important that we have the advent of AI in a good way. It's something that, if you could look into a crystal ball and see the future, you would like that outcome, because it is something that could go wrong, as we've talked about many times. So we really need to make sure it goes right. That's, I think, working on AI and making sure it's a great future. That's the most important thing right now, the most pressing item.

Then obviously anything to do with genetics. If you can actually solve genetic diseases, if you can prevent dementia or Alzheimer's or something like that with genetic reprogramming, that would be wonderful. So I think genetics might be the second most important item. Then I think having a high-bandwidth interface to the brain. We're currently bandwidth-limited. We have a digital tertiary self in the form of our email capabilities, our computers, phones, applications; we're effectively superhuman. But we are extremely bandwidth-constrained in that interface between the cortex and that tertiary digital form of yourself, and helping solve that bandwidth constraint would be, I think, very important for the future as well.

So one of the most common questions I hear ambitious young people ask is: I want to be the next Elon Musk, how do I do that? Obviously the next Elon Musk will work on very different things than you did. But what did you do when you were younger that you think set you up to have a big impact?

Well, I think first of all, I should say that I did not expect to be involved in all these things. The five things that I thought about at the time in college, quite a long time ago, 25 years ago, were making life multiplanetary, accelerating the transition to sustainable energy, the Internet broadly speaking, and then genetics and AI.

I didn't expect to be involved in all of those things. At the time in college, I actually sort of thought helping with the electrification of cars was how I would start out. And that's what I worked on as an intern: advanced ultracapacitors, to see if there would be a breakthrough relative to batteries for energy storage in cars. And then when I came out to go to Stanford, that's what I was going to do my grad studies on, working on advanced energy storage technologies for electric cars.

And then I put that on hold to start an Internet company in '95, because there does seem to be a time for particular technologies, when they're at a steep point in the inflection curve. And I didn't want to, you know, do a PhD at Stanford and watch it all happen. And I wasn't entirely certain that the technology I'd be working on would actually succeed. You can get a doctorate on many things that ultimately do not have a practical bearing on the world. I really was just trying to be useful. That's the optimization.

It was like, what can I do that would actually be useful? Do you think people that want to be useful today should get PhDs? Mostly not. So what is the best way to do it?

Some yes, but mostly not. How should someone figure out how they can be most useful? Whatever this thing is that you're trying to create, what would be the utility delta compared to the current state of the art, times how many people it would affect? That's why I think having something that makes a big difference but affects a small to moderate number of people is great, as is something that makes even a small difference but affects a vast number of people. The area under the curve would actually be roughly similar for those two things. So it's really about just trying to be useful.
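A loose way to express the "area under the curve" heuristic Musk describes here, as a small Python sketch. The function and the numbers are purely illustrative assumptions, not anything from the interview:

```python
# Rough sketch of the heuristic: usefulness ~ utility delta times people affected.
# All figures below are invented for illustration.

def usefulness(utility_delta: float, people_affected: float) -> float:
    """Total good done: improvement over the state of the art times reach."""
    return utility_delta * people_affected

# A big improvement that reaches a modest number of people...
niche_breakthrough = usefulness(utility_delta=100.0, people_affected=1e5)

# ...versus a small improvement that reaches a vast number of people.
mass_market_tweak = usefulness(utility_delta=0.01, people_affected=1e9)

print(niche_breakthrough, mass_market_tweak)  # both ~1e7: roughly similar areas under the curve
```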

And when you're trying to estimate the probability of success, so you say this thing will be really useful, a good area under the curve.

I guess to use the example of SpaceX, when you made the go decision that you were actually going to do that, this was kind of a very crazy thing at the time. Very crazy, for sure. Yeah, I'm not shy about saying that, and I agreed with them that it was quite crazy. Crazy, that is, if the objective was to achieve the best risk-adjusted return; starting a rocket company is insane. But that was not my objective.

I had simply come to the conclusion that if something didn't happen to improve rocket technology, we'd be stuck on Earth forever. And the big aerospace companies had just no interest in radical innovation. All they wanted to do was try to make their old technology slightly better every year, and in fact sometimes it would actually get worse. And particularly in rockets, it's pretty bad.

In '69 we were able to go to the moon with the Saturn V, and then the space shuttle could only take people to low Earth orbit, and then the space shuttle retired. That trend basically trends to zero. People sometimes think technology just automatically gets better every year, but it actually doesn't. It only gets better if smart people work like crazy to make it better. That's how any technology actually gets better.

And by itself, if people don't work at it, technology will actually decline. You can look at the history of civilizations, many civilizations. Look at, say, ancient Egypt, where they were able to build these incredible pyramids, and then they basically forgot how to build pyramids. Even hieroglyphics: they forgot how to read hieroglyphics. Or look at Rome and how they were able to build these incredible roadways and aqueducts and indoor plumbing, and then they forgot how to do all of those things. There are many such examples in history.

So I think you should always bear in mind that, you know, entropy is not on your side.

One thing I really like about you is that you are unusually fearless and willing to go in the face of other people telling you something is crazy. And I know a lot of pretty crazy people; you still stand out. Where does that come from? How do you think about making a decision when everyone tells you this is a crazy idea? Where do you get the internal strength to do that?

Well, first of all, I'd say I actually feel fear quite strongly. So it's not as though I just have an absence of fear; I feel it quite strongly. But there are times when something is important enough, you believe in it enough, that you do it in spite of the fear.

Speaking of important things, people shouldn't think, well, I feel fear about this and therefore I shouldn't do it. It's normal to feel fear. You'd have to have something mentally wrong with you if you didn't feel fear. So you just feel it and let the importance of it drive you to do it anyway.

Yeah, you know, actually something that can be helpful is fatalism, to some degree. If you just accept the probabilities, then that diminishes fear. When starting SpaceX, I thought the odds of success were less than 10%, and I just accepted that I would probably lose everything, but that maybe we would make some progress. If we could just move the ball forward, even if we died, maybe some other company could pick up the baton and keep moving it forward, so that we'd still do some good.

Yeah, same with Tesla. I thought the odds of a car company succeeding were extremely low.

What do you think the odds of the Mars colony are at this point today? Well, oddly enough, I actually think they're pretty good. So when can I go? Okay, at this point I am certain there is a way. I'm certain that success is one of the possible outcomes for establishing a self-sustaining Mars colony, in fact a growing Mars colony. I'm certain that is possible. Whereas until maybe a few years ago, I was not sure that success was even one of the possible outcomes. Some meaningful number of people going to Mars.

I think this is potentially something that can be accomplished in about 10 years, maybe sooner, maybe nine years. I need to make sure that SpaceX doesn't die between now and then, and that I don't die, or if I do die, that someone takes over who will continue that. So you shouldn't go on the first launch.

Yeah, exactly. The first launch will be robotic anyway. But I want to go, except for the Internet latency. Yeah, the Internet latency would be pretty significant. Mars is roughly 12 light minutes from the sun and Earth is 8 light minutes. So at closest approach, Mars is 4 light minutes away; at furthest approach, it's about 20, a little more because you can't talk directly through the sun.
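A quick sketch in Python of the light-delay arithmetic implied here, using the approximate figures quoted in the conversation. Real distances vary continuously with orbital position; this only checks the rough numbers:

```python
# Back-of-the-envelope one-way signal delay between Earth and Mars,
# using the approximate light-travel times quoted above.
earth_from_sun = 8    # light-minutes
mars_from_sun = 12    # light-minutes

closest = mars_from_sun - earth_from_sun    # planets on the same side of the sun
furthest = mars_from_sun + earth_from_sun   # planets on opposite sides of the sun

print(f"One-way delay at closest approach: ~{closest} minutes")
print(f"One-way delay near furthest approach: ~{furthest}+ minutes (signal must route around the sun)")
```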

Speaking of really important problems, AI. So you've been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?

Okay, I do want to emphasize that this is not really something that I advocate, and it's not prescriptive. This is simply, hopefully, predictive, because some people will say, well, this is something that I want to occur, as opposed to something I think probably is the best of the available alternatives.

The best of the available alternatives that I can come up with, and maybe somebody else can come up with a better approach or a better outcome, is that we achieve democratization of AI technology, meaning that no one company or small set of individuals has control over advanced AI technology.

I think that's very dangerous. It could also get stolen by somebody bad. You know, some evil dictator of a country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation.

I think if you've got any incredibly powerful AI, you just don't know who's going to control that. So it's not that I think the risk is that the AI would develop a will of its own right off the bat.

The concern is that someone may use it in a way that is bad. Or even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger. So I think we must have democratization of AI technology and make it widely available.

And that's the reason that obviously you and the rest of the team created OpenAI: to help with the democratization, to help spread out AI technology so it doesn't get concentrated in the hands of a few.

But then of course that needs to be combined with solving the high bandwidth interface to the cortex. Humans are so slow. Humans are so slow. Yes, exactly.

But we already have a situation in our brain where we've got the cortex and the limbic system. And the limbic system is kind of the primitive brain; it's your instincts and whatnot. And then the cortex is the thinking upper part of the brain. Those two seem to work together quite well.

Occasionally your cortex and limbic system may disagree, but they generally work pretty well together. And it's rare to find someone, in fact I've not found anyone, who wishes to either get rid of their cortex or get rid of their limbic system.

Very true. Yeah, that's unusual. So I think if we can effectively merge with AI by improving the neural link between your cortex and your digital extension of yourself, which, like I said, already exists but has a bandwidth issue, then effectively you become an AI-human symbiote.

And if that is then widespread, and anyone who wants it can have it, then we solve the control problem as well. We don't have to worry about some sort of evil dictator AI, because collectively we are the AI. That seems like the best outcome I can think of.

So you've seen other companies in the early days that start small and get really successful. I hope I don't regret asking this on camera, but how do you think OpenAI is going, as a six-month-old company?

I think it's going pretty well. We've got a really talented group at OpenAI. Yeah, a really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) nonprofit, but, you know, many nonprofits do not have a sense of urgency.

It's fine, they don't have to have a sense of urgency, but OpenAI does, because I think people really believe in the mission. I think it's important, and it's about minimizing the risk of existential harm in the future. So I think it's going well. I'm pretty impressed with what people are doing and the talent level, and obviously we're always looking for great people to join who believe in the mission.

Close to 40 people now.

Yeah. Well, all right, just a few more questions before we wrap up. How do you spend your days now? What do you allocate most of your time to?

My time is mostly split between SpaceX and Tesla. And of course I try to spend a part of every week at OpenAI. I spend basically half a day at OpenAI most weeks, and then I have some OpenAI stuff that happens during the week.

But other than that, it's really...

And what do you do when you're at SpaceX or Tesla? What does your time look like there?

Yeah, so that's a good question. I think a lot of people think I must spend a lot of time with media or on businessy things. But actually almost all my time, like 80% of it, is spent on engineering and design, developing the next-generation product. That's 80% of it.

You probably don't remember this; a very long time ago, many, many years ago, you took me on a tour of SpaceX, and the most impressive thing was that you knew every detail of the rocket and every piece of engineering that went into it. And I don't think many people get that about you.

Yeah, I think a lot of people think I'm kind of a business person or something, which is fine. Business is fine. But really, at SpaceX, Gwynne Shotwell is chief operating officer; she manages legal, finance, sales, and general business activity.

And then my time is almost entirely with the engineering team, working on improving the Falcon 9 and the Dragon spacecraft and developing the Mars colonial architecture. At Tesla, it's working on the Model 3 and being in the design studio, typically a day a week, dealing with aesthetics and look-and-feel things.

And then most of the rest of the week is just going through the engineering of the car itself as well as the engineering of the factory. Because the biggest epiphany I've had this year is that what really matters is the machine that builds the machine, the factory, and that is at least two orders of magnitude harder than the vehicle itself.

It's amazing to watch the robots go here and these cars just happen.

Yeah. Now this actually has a relatively low level of automation compared to what the Gigafactory will have and what Model 3 will have.

What's the speed on the line for these cars? Actually, the average speed of the line is incredibly slow. Including both X and S, it's probably about 5 centimeters per second.

And what can you get to? This is very slow. What would you like to get to? I'm confident we can get to at least 1 meter per second, so a 20-fold increase. That would be very fast.

Yeah, at least. I mean, I think quite a bit above that. One meter per second, just to put that in perspective, is a slow walk, or like a medium-speed walk. A fast walk could be 1.5 meters per second, and then the fastest humans can run at over 10 meters per second.

So if we're only doing 0.05 meters per second, that's a very slow current speed. Even at 1 meter per second, you could still walk faster than the production line.
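A quick check of the line-speed comparison above, sketched in Python. The figures are the approximate ones mentioned in the conversation, not actual production data:

```python
# Sanity check of the quoted production-line speeds.
current_line_speed = 0.05   # m/s, roughly 5 cm/s for the S/X line
target_line_speed = 1.0     # m/s, the stated goal

speedup = target_line_speed / current_line_speed
print(f"Target is a {speedup:.0f}x increase over the current line speed.")

# Human reference speeds quoted in the interview, for comparison.
medium_walk = 1.0   # m/s
fast_walk = 1.5     # m/s
sprint = 10.0       # m/s, fastest human sprinters

print(f"Even at {target_line_speed} m/s, a fast walker ({fast_walk} m/s) still outpaces the line.")
```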