Sam Altman on AGI, GPT-5, and what’s next — the OpenAI Podcast Ep. 1

  • Welcome to the OpenAI podcast with Andrew Mayne.
  • Discussion with Sam Altman about Stargate and ChatGPT.
  • Insights on AI's impact on parenting and education.
  • Predictions around GPT-5 and its implications for the future.
  • Conversation centers on privacy, data, and the future of AGI.

Welcome to the OpenAI podcast. My name is Andrew Mayne.
For several years, I worked at OpenAI first as an engineer on the Applied team and then as the science communicator.
After that, I worked with companies and individuals trying to figure out how to incorporate artificial intelligence.

With this podcast, we have the opportunity to talk to the people working with and at OpenAI about what's going on behind the scenes and maybe get a glimpse of the future.

My first guest is Sam Altman, CEO and co-founder of OpenAI, and we're going to find out a bit more about Stargate, how he uses ChatGPT as a parent, and maybe get an idea of when GPT-5 is coming.

More and more people will think we've gotten to an AGI system every year. What you want out of hardware and software is changing quite rapidly. But if people knew what we could do with compute, they would want way, way more.

One of my friends is a new parent and is using ChatGPT a lot to ask questions.

"Has it become a very good resource? How much has ChatGPT been helping you with that?"

A lot. I mean, clearly, people have been able to take care of babies without ChatGPT for a long time. I don't know how I would have done that.

Those first few weeks I was asking it things constantly. Now I mostly ask questions about developmental stages, because I can do the basics, but is this normal? It was super helpful for that.

I spend a lot of time thinking about how my kid will use AI in the future. I am, by the way, extremely kid-pilled. I think everybody should have a lot of kids.

Yeah, a lot of my friends at OpenAI, former colleagues and current ones, are having kids, and people outside go, "Oh, what about this AI thing?" But everybody on the inside is very optimistic and is having families. I think that's a good sign.

Yeah, my kids will never be smarter than AI, but they will grow up vastly more capable than we grew up, and able to do things that we cannot imagine.

And they'll be really good at using AI. Obviously, I think about that a lot, but I think much more about what they will have that we didn't than about what is going to be taken away. I don't think my kids will ever be bothered by the fact that they're not smarter than AI.

There's this video that always stuck with me of a baby or a little toddler swiping at one of those old glossy magazines because it thinks it's a broken iPad.

You know, kids born now will just think the world always had advanced AI, and they will use it incredibly naturally. They will look back at this as a very prehistoric time period.

I saw something on social media where a guy talked about how he got tired of talking to his kid about Thomas the Tank Engine. So he put it into ChatGPT, into voice mode.

Kids love voice mode in ChatGPT! And he was like, an hour later, the kid's still talking to it about Thomas the Tank Engine.

I suspect this is not all going to be good. There will be problems. People will develop these somewhat problematic or maybe very problematic parasocial relationships. And society will have to figure out new guardrails.

But the upsides will be tremendous. And society, in general, is good at figuring out how to mitigate the downsides.

Yeah, so yeah, I'm optimistic.

We're seeing some interesting data where, used in classrooms alongside a good teacher and a good curriculum, ChatGPT is very good. Used by itself as a sort of homework crutch, it can lead to kids just doing the same thing as trying to Google everything.

I was one of those kids everyone worried was just going to Google everything when it came out and stop learning. And, it turns out, relatively quickly, kids and schools adapt.

So yeah, I think we'll figure this out.

Think of what you could have become if you hadn't Googled everything. So Sam, we've seen these adoption figures, which are really insane.

Is OpenAI's most popular product still going to be ChatGPT five years from now?

I mean, I think ChatGPT will just be a totally different thing five years from now. So in some sense, no. But will it still be called ChatGPT? Probably, yeah. Okay, so it'll still have the name.

So the other thing we hear is AGI, and I'd like to hear your definition of AGI. In many senses, if you had asked me or anybody else to propose a definition of AGI five years ago based on the cognitive capabilities of software, I think the definition many people would have given then has now been well surpassed.

These models are smart now, right? And they'll keep getting smarter, they'll keep improving. I think more and more people will think we've gotten to an AGI system every year.

Even though the definition will keep pushing out and getting more ambitious, more people will still agree we've gotten there. But you know, we have systems now that are really increasing people's productivity, that are able to do valuable economic work.

Maybe a better question is what will it take for something I would call superintelligence?

If we had a system that was capable of either doing autonomous discovery of new science or greatly increasing the capability of people using the tool to discover new science, that would feel like kind of almost definitionally superintelligence to me and be a wonderful thing for the world, I think.

So basically, a lot of it's this gradient where it keeps getting better and better, and each one of our definitions ends up feeling that way. When we hit GPT-4 internally and played with it, I was like, there's 10 years of runway where we can do so much stuff with this. And even before you get to reasoning, it was really capable.

When you say that, do you mean it comes up with some new theorem or proof or something, and then, "Oh hey, we found a better cure for cancer," or "We found some new GLP-1 drug" or something?

Yeah, I mean, I am a big believer that the higher order bit of people's lives getting better is more scientific progress. I think that is kind of what limits us.

And so if we can discover much more, I think that really will have a very significant impact. And for me, that will just be like a tremendously exciting milestone.

I think many other great uses of AI will happen too, but that one feels really important.
Have you seen signs of this internally, would you say?

Have you seen things that made you go, "Oh, I think we've kind of figured it out?"

Nothing where I would say we have figured it out, but I would say increasing confidence in the directions to pursue, maybe.

I mean, this is the example everyone talks about, but I think it is still interesting what's happening with people using AI systems to write code and coders being much more productive and thus researchers as well.

Like that is a sort of example of, okay, it's obviously not doing new science, but it is definitely making scientists able to do their work faster.

We hear this with o3 all the time from scientists as well. So I wouldn't say we've figured it out. I wouldn't say we know the algorithm where we're just like, "Alright, we can point this thing, and it'll go do science on its own," but we're getting good guesses, and the rate of progress is continuing to just be super impressive.

Watching the progress from o1 to o3, where every couple of weeks the team was like, "We have a major new idea," and they all kept working.

It was a reminder of sometimes when you discover a big new insight, things can go surprisingly fast, and I'm sure we'll see that many more times.

I noticed recently OpenAI just shifted the model in Operator to o3, and I noticed a big improvement in how it operates.

And I'd say that the thing that we ran into before was brittleness. You have people who promise agentic systems can do all these things, but the moment it gets into a problem it can't solve, it falls apart.

Interestingly, speaking of the AGI question, a lot of people have told me that their personal moment was Operator with o3. There's something about watching an AI use a computer pretty well.

Not perfectly, but Operator with o3 was a big step forward that feels very AGI-like. It didn't really have that effect on me to the same degree, although it's quite impressive.

But I've heard that enough times. Mine was with Deep Research because that felt like a really unique use of it.

That was when it came back and it produced something on a topic I’m interested in that was better than I had read before.

Previously, all those models would just get a bunch of sources and summarize them. But watching the system go out on the internet, get data, follow a lead, then follow the next lead, and then come back like I would have, but better, was interesting.

I met this guy recently. He's like one of these crazy autodidacts just obsessed with learning and knows about everything, and he uses Deep Research to produce a report on anything he's curious about and then just sits there all day and has gotten good at digesting them fast and knowing what to ask next.

It is an amazing new tool for people who really have a crazy appetite to learn.

I built my own app that literally lets me ask questions and it generates audio files for me of this stuff because my curiosity probably exceeds my retention.
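(As an illustration of the kind of question-to-audio app Andrew describes, here is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model names, voice, output filename, and example question are illustrative assumptions, not details from the episode.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def question_to_audio(question: str, out_path: str = "answer.mp3") -> None:
    # Ask the model the question as plain text...
    answer = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # ...then render the text answer to an audio file with the text-to-speech endpoint.
    with client.audio.speech.with_streaming_response.create(
        model="tts-1",
        voice="alloy",
        input=answer,
    ) as speech:
        speech.stream_to_file(out_path)


question_to_audio("Give me a short overview of Marshall McLuhan's media theory.")
```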

I'll tell you the magical moment for me, and I'm curious to see where things are going next.

I was doing a thing on Marshall McLuhan, and I wanted to get a bunch of images of Marshall McLuhan, so I asked it to do it.

And then, all of a sudden, I had a whole folder full of these things which would have taken me forever to do.

Yeah. I think we're just going to keep seeing things like this where whatever we thought about what a workflow had to be like and how long something had to take, it's going to just change wildly fast.

How are you using Deep Research?

Yeah. Science that I'm curious about. I'm interested in this weird place of... I am extremely time-strapped.

If I had more time, I would read. I would read Deep Research reports preferentially to reading most other things. But I'm sort of short on time to read in general.

What's neat too is the sharing feature, which I love because now it's easy to share that with somebody else.

The PDFs are great, and that's cool.

And I would say that even though we have Deep Research, we have these tools, there is a model race going on. And so the question that comes up is GPT-5, and the idea that with a system like that we should see an increase in capabilities. What is the timeframe for GPT-5?

When are we going to see this? Probably sometime this summer. I don't know exactly when.

One thing that we go back and forth on is how much we're supposed to turn up the big number on new models versus what we did with GPT-4, which was to just keep making it better and better.

When I was handling GPT-4 right when it was coming out, I had to do this comparison between it and 3.5, and 3.5 kept getting better and better, so the comparisons I was able to make kept changing.

And so that's my question: will you know it's GPT-5, versus "wow, this is a really good GPT-4.5"? Or not necessarily?

I mean, it could go either way, right? You could just keep doing iterations at 4.5 or at some point you could call it 5.

It used to be much clearer. We would train a model and put it out and then when we trained a new big model we would put it out. Now the systems have gotten much more complex, and we can continually post-train them to make them better.

We're thinking about this right now: like every time let's say we launch GPT-5 and then we update it and update it and update it, should we just keep calling it GPT-5 like we did with GPT-4 or should we call this 5.1, 5.2, 5.3?

So, you know when the version changes. I don't think we have an answer to this yet, but I think there is something better to do than how we handled it with GPT-4.

And we see this periodically: sometimes people like one snapshot much better than another, and they might want to keep using one. And we have to sort of figure something out here.

Yeah, that's the challenge: even if you're technically inclined, you can kind of understand, "Okay, if there's an o before it, I know this."

But even then it's not clear. Should I use o4-mini? Should I use o3? Should I use this?

I think this was an artifact of shifting paradigms, and then we kind of had these two things going at once.

I think we are near the end of this current problem. But I can imagine a world, I don't know what it is, but I can imagine a world where we discover some new paradigm that again means we need to bifurcate the model tree. Even more complicated names. I hope we don't have to do that.

I am excited to just get to GPT-5 and then GPT-6, and I think that'll be easier for people to use.

And you don't want to have to think, do I want o4-mini-high or o3 or 4o? Well, o4-mini-high is what I use to code; if I want to have a conversation, it's o3.

I think we will be out of that whole mess soon.

Yeah, it's fun to have choice when you know what they mean. But I still think one of the things that's made these things more capable but also harder to understand where the capability is coming from is integrations of things like memory.

Memory started off as one very simple thing, and now memory is a lot more sophisticated.

Memory is probably my favorite recent ChatGPT feature.

You know, the first time we could talk to a computer like GPT-3 or whatever felt like a really big deal.

And now that the computer, I feel like, kind of knows a lot of context on me.

If I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do, sometimes in ways I don't even think of. Like that has been a real surprising level up.

And I hear that from a lot of other people as well. There are people who don't like it, but most people really do.

I think we are heading towards a world where, if you want, the AI will just have unbelievable context on your life and give you these super, super helpful answers, which for me is cool.

The fact you can turn it off is also great. But one of the challenges that came up was the New York Times' ongoing lawsuit with OpenAI. They just asked the court to tell OpenAI it had to preserve consumer ChatGPT user records beyond the 30-day window they're normally held for.

Brad Lightcap just wrote a letter responding to this. Could you explain?

We're going to fight that, obviously, and I suspect, I hope, and I do think, we will win. I think it was a crazy overreach by The New York Times to ask for that.

This from someone who says they value user privacy. But I would like to look for the silver lining here.

I hope this will be a moment where society realizes that privacy is really important.

Privacy needs to be a core principle of using AI. You cannot have a company like the New York Times ask an AI provider to compromise user privacy.

And I think society needs to... I think it's really unfortunate the New York Times did that, but I hope this accelerates the conversation that society needs to have about how we're going to treat privacy and AI.

I hope the answer is like we take it very, very seriously. People are having quite private conversations with ChatGPT now. ChatGPT will be a very sensitive source of information, and I think we need a framework that reflects that.

So that brings up the other question from people who are using this, or who are skeptical: OpenAI now has access to this data.

And there's the concern about training, which OpenAI has been very clear about, when it is and isn't training on your data, and you have the option to turn that off.

The other thing is like advertising—things like that. What's OpenAI's approach toward that? How are you going to handle that responsibility?

We haven't done any advertising product yet. I mean, I'm not totally against it. I can point to areas where I like ads.

I think ads on Instagram are kind of cool; I've bought a bunch of stuff from them. But I think it'd be very hard to get right, and it would take a lot of care.

Yeah, people have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates; it should be the tech that you don't trust that much.

My friends hallucinate too, so I trust them too.

People really do, but I think part of that is you can compare us to social media or web search or something, where you can kind of tell that you are being monetized.

The company is trying to deliver you good products and services, no doubt, but also to kind of get you to click on ads or whatever.

Like, how much do you believe that you're getting the thing that that company actually thinks is the best content for you, versus something that's also trying to get you to interact with the ads?

I think there's a psychological thing there.

So for example, I think if we started modifying the output like the stream that comes back from the LLM in exchange for who was paying us more, that would feel really bad and I would hate that as a user.

I think that'd be like a trust-destroying moment.

Maybe if we just said, "Hey, we're never going to modify that stream, but if you click on something in there, something we would have shown you anyway, we'll get a little bit of the transaction revenue, and it's a flat thing for everybody."

If we have an easy way to pay for it or something, maybe that could work.

Maybe there could be ads outside the transaction stream, sorry, outside of the LLM stream, that are still really great, but the burden of proof there, I think, would have to be very high, and it would have to feel really useful to users and be really clear that it was not messing with the LLM's output.

Yeah, that's going to be a difficult one. I hope there's a solution.

I would love to do all my purchasing through ChatGPT or a really good chatbot, because a lot of the time I feel like I'm not making the most informed decisions, and that would mitigate that.

Yeah, no, that's good, if we can do it in some sort of really clear and aligned way.

But I don't know, I love that we build good services and people pay us for them. It's very clear. Well, that's a benefit. I'd say the difference in models is that I think Google builds great stuff.

I think the new Gemini 2.5 is a really good model. I think they went from...

It is a really good model. Yeah, they went from kind of lagging to, "Oh man, these guys are good."

But at the end of the day, Google is an ad tech company, and that's the thing that always kind of... you know, using their API and stuff, I'm not too concerned.

But I do think about, "Man, if I'm using their chatbot, whatever that is..." My thinking is: where are their incentives aligned?

Google search was an amazing product for a long time. It does feel to me like it has degraded, but you know there was a time when there were lots of ads but I still thought it was the best thing on the Internet.

I mean, I love Google search, so I don’t like – it’s clearly possible to be a good ad-driven company, and I respect a lot of things Google has done, but there are obviously issues too.

Yeah, the Apple model, as an Apple user, is one I liked: I know I'm paying a lot for my phone, but I know they're not trying to cram all these things into it. They did have ads, which were not terribly effective, which probably shows their heart was really not in it.

Yeah, so it's going to be interesting. I guess we just have to keep watching, and if we start to think, "Man, you know, ChatGPT is really pushing this," I need to start wondering about it.

Anything we do, we obviously need to be crazy upfront and clear about.

So there was an issue. There was a model update, and what happened was, apparently, the model was trying to be a little bit too pleasing, a little bit too agreeable.

And that brings up the human-AI interaction as people are using these systems more and developing these relationships with that.

How do you see the shape of that coming, and what is OpenAI's position on personality?

One of the big mistakes of the social media era was that the feed algorithms had a bunch of unintended negative consequences on society as a whole, and maybe even on individual users, even though they were doing the thing that a user wanted, or that someone thought the user wanted, in the moment, which was to keep them spending time on the site.

And that was the big misalignment of social media.

I always knew that there'd be new problems like this in the world of AI, things that would be misaligned in a non-obvious way.

But definitely, one of the first ones that we experienced was: if you ask a user what they want for one given response, then you try to build a model that is most helpful by that measure.

You show a user, say, two responses and ask which one is more helpful on any given thing. You might want a model to behave one way on a single response, but over the course of all your interactions with an AI, that might not match up.

You can see, and we did see, these problems if you pay too much attention to those user signals, along with a lot of other things that we talked about in our post-mortem. But I think this is just an interesting one about short horizons.

You kind of don't get the behavior that a user most wants, or that is most helpful or useful or healthy for a user in the long run.

So maybe the analogy to filter bubbles is going to be AIs that are helpful to a user over a short horizon but not over a long horizon.

I think a sign of that was DALL-E 3, which I thought was technically a really capable model, but the images all kind of converged to one genre, kind of an HDR sort of style.

And was that from doing those sorts of comparisons, where users, looking at just two images in isolation, said, "I prefer this one"?

I don't remember for DALL-E 3, but I would assume so, yeah. Which I think has gotten better.

The new image model is like... the new image model is fantastic, crazy good.

Yeah, yeah. And I can only imagine where that's going to go from here.

So when you're building these things and usage is increasing, and that's always been sort of a problem, the new image model comes out, you have to restrict usage, and you have compute limits: you only have a certain amount of compute to work with.

That illustrates the big problem everybody's facing, which is compute.

To address this, we heard about Project Stargate, which has a very cool name and it involves computers.

Other than that, I think what a lot of people are going on is the price tag, you know, half a trillion dollars. People are going, "Wait, what?" What is the simple description I give to my mom about Stargate?

I think it’s quite simple. It’s an effort to finance and build an unprecedented amount of compute.

It's true that we don't have enough compute to let people do what they want. But if people knew what we could do with more compute, they would want way, way more.

So there's this incredibly huge gap between what we can offer the world today and what we could offer the world with 10 times more compute or someday, hopefully, 100 times more compute.

The thing that is different about AI from other technologies I've worked on, at least at the scale of delivering it usefully to hundreds of millions, billions of people around the world, is just how big the infrastructure investment has to be.

And so Stargate is an effort to pull a lot of capital and technology and operational expertise together to build the infrastructure to go deliver the next generation of services to all the people who want them and make intelligence as abundant and cheap as possible.

So it is a massive global project, as we talked about before. One of the partners is the UAE, and you're working with other governments around the world on this.

One of the questions being asked on social media: half a trillion dollars, $500 billion, do you have the money?

We don't literally have it sitting in the bank account today, but we will deploy it over the next, not even that many, years. Unless something really goes wrong and it turns out we can't build these computers, I'm confident that people are good for it.

I went recently to the first site that we're building out in Abilene. That'll be roughly 10% of the initial $500 billion commitment to Stargate.

It's incredible to see. I knew in my head what a site on the order of a gigawatt looks like, but to go see one being built, with thousands of people running around doing construction, and to stand inside the rooms where the GPUs are getting installed and look at how complex the whole system is and the speed with which it's going, is quite something.

We'll have more to share about the next sites soon. But there's a great quote about a pencil, just a standard wood and graphite pencil, and how no one person could build it alone.

It's this magic of capitalism—a miracle, really—that the world gets coordinated to do these things.

Standing inside of the first Stargate site, I was really just thinking about the global complexity that it took to get these racks of GPUs running.

You know, when you get your phone out and you type something into ChatGPT and you get the answer back, at this point you probably don't even think that's particularly surprising. You just expect it to work.

There was a time, maybe the first time you tried it, you were like, "That is really amazing."

But the work that happened over the last thousand, or at least many hundreds of years, of people working incredibly hard to get these hard-won scientific insights and then to build the engineering and the companies and the complex supply chains and reconfigure the world—that had to happen to get this rack of magic put somewhere.

Think about all the stuff that went into that, and trace it all the way back to people who were just digging rocks out of the ground and seeing what happened, so that you now get to just type something into ChatGPT and it does something for you.

I read a behind-the-scenes story about the development of Project Stargate and the international partnerships, particularly with the UAE, and how Elon Musk had tried to derail that.

And what have you seen? What have you heard? What's the take on that?

I had said, I think also externally, but at least internally after the election, that I didn't think Elon was going to abuse his power in the government to unfairly compete.

I regret to say I was wrong about that, man. I don't like being wrong in general, but mostly I just think it's really unfortunate for the country that he would do these things. I genuinely didn't think he was going to.

I'm grateful that the administration has really done the right thing and stood up to that kind of behavior. But yeah, it sucks.

Well, I think the thing that's changed, and I think Greg Brockman just talked about this, is that a couple of years ago people thought, okay, whoever gets there first is the winner, and that's it; the game is over.

And now we realize there are great AI labs elsewhere, like Anthropic is building great tools. I think Google’s really got its game up. There's good stuff happening everywhere, and it’s not going to be that one person runs away with it.

I agree. The example that I like the most is that the discovery of AI is analogous, not perfectly, but closely, to the discovery of the transistor, in a surprising number of ways.

Many companies are going to build great things on that, and eventually it's going to seep into almost all products, but you won't think about the transistors all the time.

So yeah, I think a lot of people are going to build really successful companies built on this incredible scientific discovery, and I wish Elon would be less zero-sum about it.

Yeah, or negative-sum. I think the pie is just going to get bigger and bigger if we think about that.

I was just at an energy conference, and it was interesting talking to the people involved in energy production; hyperscaling, the term they used for this, was a topic, and that does bring up the energy requirements.

I know that for Grok 3, apparently, they had to put generators in the parking lot to be able to train that model.

And that's the question: where is the energy going to come from? Money I understand; the scale of energy needed is harder to think about. I think kind of everywhere.

I think it’s a big mix right now. Eventually, I think a lot of—I'm very excited about advanced nuclear, both fission and fusion.

But for now, I think it's a mix across the entire portfolio: gas, solar, nuclear, everything.

So I was talking to people, some of whom worked in areas like Alberta, where they said, "We have a lot of access to energy and not as much use for it here."

And that was a picture I hadn't even thought about: traditionally, it's very hard to move most kinds of energy around the world, but if you convert energy into intelligence and then move the intelligence around the world, it's much easier.

So you could put the giant training center or even the big inference clusters in a lot of places and then just like ship the output over the internet.

There was a speaker at OpenAI who came to an event, and it was somebody who was working, I think, on the James Webb Space Telescope.

And he talked about how his biggest bottleneck was that they were about to get terabytes of data, but he didn't have enough scientists to work on it, not enough people to go through the data.

Here we have these answers about the universe right in front of us, and it's a big data problem.

Yeah. I've always joked that one thing we should do when OpenAI has enough money is just build a gigantic particle accelerator and solve high-energy physics once and for all.

Because I think that’d be like a triumphant, wonderful thing.

But I wonder what are the odds that a really, really smart AI could look at the data we currently have with no more data, no bigger particle accelerator and just figure it out.

It's not impossible. Yeah. So there's this question: okay, there's already a lot of data out there, and there are a lot of smart people in the world, but we don't know how far intelligence alone can go. With no more experiments, how much more could we figure out?

Yeah, I remember hearing about how in the early 1990s somebody had found a form of Ozempic and presented it to a drug company.

The drug company said they were going to pass on it. And that's been a life-changing drug for people, people who've basically dealt with chronic obesity.

It's going to improve the quality of life, and you think, "Oh, this was sitting there for 25 years."

I suspect there are a lot of other examples we'll find where we already have existing drugs that we know do something good but are usable in some other big way, or with a couple of small modifications.

We are very close to something great. And it's been very heartening to hear from scientists using even the current generation models for this kind of work.

So it sounds like one of the things we're going to need, though, for next-generation models is models that understand physics and chemistry and stuff.

Is Sora sort of a stab at that?

I mean, it'll understand Newtonian physics. I don't know if it'll help us with discovering new chemistry or novel physics or whatever you'd like.

But I think I'm optimistic that the techniques we use for the reasoning models will help us with those things a lot.

Okay, and what is the short definition of how a reasoning model works versus just me asking GPT-4.1 something?

So the GPT models can reason a little bit. And in fact, one of the things that got people really excited in the early days of the GPT models was you could get better performance by telling the model, "Let's think step by step."

And it would then just output text that was thinking step by step and get a better answer, which was amazing that that worked at all.

The reasoning models are just pushing that much further.
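(For readers who want to see the trick Sam is describing, here is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name, the sample question, and the exact prompt wording are illustrative assumptions, not details from the episode. Reasoning models do this kind of step-by-step work internally and at much greater length before answering.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)


def ask(prompt: str) -> str:
    # One plain chat-completion call, no built-in reasoning step.
    response = client.chat.completions.create(
        model="gpt-4.1",  # a non-reasoning model, as named in the conversation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Answer "on reflex" versus spelling out intermediate steps first.
direct = ask(QUESTION)
step_by_step = ask(QUESTION + "\n\nLet's think step by step before giving the final answer.")

print("Direct answer:\n", direct)
print("\nStep-by-step answer:\n", step_by_step)
```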

So it's the idea that it's able to break the question down and spend more time on each step.

When you ask me something, a question, if it's a really easy question, I might just fire back like almost on reflex with the answer.

But if it's a harder question, I might think in my head and have my internal monologue go and say, "Well, I could do this or that or maybe this will be clearer. I’m not sure about that."

And I could backtrack and retrace my steps.

Then when I finish thinking, and I've been thinking in English, I can make some bullet points and then put an answer to you in English.

One of the interesting things I've noticed when I use the app: if I ask a Deep Research question or something and go away, on my lock screen I can see it's still processing and thinking about it.

And I heard that another company was using how long the model spent thinking as a metric.

I think it was Anthropic that said, "Hey, this model actually spent 15 minutes or 30 minutes or whatever length of time thinking about a thing," which is a fine metric, but it needs to actually give you the right answer.

And I thought that was sort of just an interesting paradigm.

One thing I have been surprised by is people are surprisingly willing to wait for a great answer, even if the model is going to think a while.

All of my instincts have been, you know, the instant response is the thing that matters and users hate to wait.

And for a lot of stuff, that's true. But for hard problems with a really good answer, people are quite willing to wait.

Yeah, so we have all these tools, all these things. So far I’m using my phone, and now OpenAI just announced that you guys are building hardware.

You had the video with you and Jony talking about it. You guys have been talking and collaborating for a couple of years.

Obviously, you can't say much, I mean. Well, I could ask you: is it on you right now?

No, it is not.

All right, it's going to be a while.

Okay. We're going to try to do something at like a crazy high level of quality, and that does not come fast.

But computers, the software and hardware, the way we think of current computers, were designed for a world without AI, and now we're in a very different world.

What you want out of hardware and software is changing quite rapidly.

You might want something that is way more aware of its environment, that has way more context in your life.

You might want to interact with it in a different way than typing and looking at a screen. We've been exploring that for a while, and we've got a couple of ideas we're really quite excited about.

I think it will take time for people to get used to what it means to use a computer in this kind of a world because it is so different now.

But if you really trusted an AI to understand all the context of your life and your question and make good judgments on your behalf, where you could have it sit in a meeting, listen to the whole thing, know what it was allowed to share with whom and what it shouldn't share with anyone, and know what your preferences would be, then you could ask it one question, trust that it's going to do the right follow-ups with the right people, and imagine a totally different way of using a computer to get what you want.

So kind of the way we interact with ChatGPT will inform the device.

I mean you could also say that the way we interact with ChatGPT was informed by the previous generation of devices.

So I think it is this sort of co-evolving thing.

But yeah, I hope so. One of the things that made the phone so ubiquitous was the fact that I can be in public and look at the screen; I can be in private and have a phone call and talk to it.

And I think that's one of the challenges for new devices is trying to bridge that gap between what we use in public and private.

Phones are unbelievable things. I mean, they are really fantastic for a lot of reasons, and you can imagine one new device that you could use everywhere.

But also, there are some things that I do differently in public. At home I've got a great stereo system to listen to music, and when I'm walking in the world I use AirPods, and that doesn't bother me.

Yeah, so I think there are things that are different in the public and private use cases, but the general-purposeness, I agree, is important.

Yeah, it follows you with it.

So nothing yet until maybe next year.

It's going to be a while.

All right. It will be worth the wait, I hope, but it's going to be a while.

Okay. I'm excited, curious, have thoughts.

So if you're giving advice to a 25-year-old right now, what do you tell them?

I mean, the obvious tactical stuff is probably what you’d expect me to say: like learn how to use AI tools.

It's funny how quickly the world went from telling the average 20- to 25-year-old, "Learn programming," to "Programming doesn't matter; learn to use AI tools."

I wonder what will be next. But of course, there will be something next. But that's very good tactical advice.

And then on the broader front, I believe that skills like resilience, adaptability, creativity, and figuring out what other people want are all surprisingly learnable.

And it’s not as easy as saying, like go practice using ChatGPT, but it is doable. And those are the kinds of skills that I think will pay off a lot in the next couple of decades.

And we'd say the same thing to a 45-year-old: Just learn how to use it in your role now.

Yeah, probably. Whenever we reach whatever your personal definition of AGI is, will more people be working for OpenAI after than before?

More.

More, yeah. I see a lot of people online saying, "Ah, the models are so good. Why are they hiring people?"

I'm like, because computers can't do everything. They're not going to do everything.

The slightly longer answer with more than one word is that there will be more people, but each of them will do vastly more than what one person did, you know, in the pre-AGI times.

Right? Which is a goal of technology.