The Future of AI Research with Sam Altman of OpenAI
Sam Altman is the co-founder and CEO of OpenAI. Sam is also the Chairman of Y Combinator, a leading Silicon Valley startup accelerator.
Alexandr Wang is the founder and CEO of Scale AI, the data platform accelerating the development of artificial intelligence. Alex founded Scale as a student at MIT at the age of 19 to help companies build long-term AI strategies with the right data and infrastructure. Under Alex's leadership, Scale has grown to a $7bn valuation serving hundreds of customers across industries from finance to e-commerce to U.S. government agencies.
Sam Altman, co-founder and CEO of OpenAI, joins Scale AI CEO Alexandr Wang to discuss OpenAI's latest research, from DALL·E to GPT-3, and AI research more broadly.
Alexandr Wang: Hey, everyone. Really excited to jump into our next fireside chat with Sam Altman. Sam is the CEO and co-founder of OpenAI and the former president of Y Combinator. Sam and I first met when Scale was going through the Y Combinator program back in 2016. Sam is one of the people in technology who works on the most interesting set of problems in both software and hard tech: AI, nuclear energy, and synthetic biology. I'm always excited to chat with Sam about the future of AI. Sam, thanks for coming.
Sam Altman: Thanks, Alex.
Alexandr Wang: One thing I wanted to start on is that we mentioned this list of diverse and wide-ranging topics that you work on, from AI, to nuclear energy, to synthetic biology. What inspires you to work on these problems?
Sam Altman: I mean, I do like trying to be useful, and I wish I could give some really sort of selfless answer, but honestly, I like to have an interesting life and I think these are the most interesting problems in the world. I think if AI even gets close to where we think, if it really is this technological revolution on the scale of the agricultural revolution, the industrial revolution, the computer revolution, those don't come along every lifetime. And so the ability to work on that is … I feel super lucky, and it's amazing to work on.
Research at OpenAI
Alexandr Wang: OpenAI has had some incredible breakthroughs in research over the past few years, and it's been truly incredible to watch from the outside. What do you think sets OpenAI apart from other AI research organizations?
Sam Altman: I think we have maybe a unique, or at least a rare, combination of we are good at research, engineering, and the sort of safety policy planning thing that think tanks usually do. And we have all of those in one small organization that on a per headcount basis is extremely well-resourced and extremely focused. So we're willing to concentrate a lot of resources into a small number of bets and bring these three different pieces together to do that. We have a plan that we think goes from here to AGI. I'm sure it will be wrong in important ways, but because we have such a plan and because we're trying to think of how the pieces fit together and we're willing to make high conviction bets behind them, that has let us make, I think certainly relative to our size and capital, outsized progress.
Alexandr Wang: Super interesting. One question I have is how intentional was this magical mix of multidisciplinary interests on your team, as well as the strategy, or is this sort of emergent from assembling a group of very smart people who you enjoyed working with?
Sam Altman: I mean, I would say both. We intentionally thought that to do this well, you would need to put everything together. And then when we looked at the landscape out in the world before OpenAI, most of the groups were really strong in one of the areas, maybe one and a half, but no one in all three. And so we very consciously, like we call those the three clans of OpenAI, have always wanted to be good at all three.
Sam Altman: But then the other thing is I just, I think really talented people that are focused together on not only one vision, but one plan to get there, that is the rarest commodity in the world. And the best people are, you know, Steve Jobs used to say this, I think more eloquently than anyone else, but the best people are so much better than the pretty good people, that if you can get a lot of them together in a super talent-dense environment, you can sort of surprise yourself on the upside.
Sam Altman: The central learning of my career so far has been that exponential curves are just super powerful, almost always underestimated, and usually keep going. And so in some sense, by the time I started OpenAI, it was clear that these curves were already going. It was clear that I think the biggest miracle needed for all of AI, which was an algorithm that could learn, was behind us. And we can have better algorithms that can learn. We can learn more efficiently. But once you can learn at all, once a computer can learn at all, then if you think about that from first principles, a lot of things are going to happen. And so that, the miracle was already behind us when we started, and it then became a process about doing a really good job and just executing on the engineering, figuring out the remaining research breakthroughs and then thinking about how it all comes together in a way that is good for the world, hopefully.
Alexandr Wang: Sam, you're such a good student of history, especially in terms of the situations and the events that led to many incredible innovations in the past, like the internet or GPS, or even the computer originally. What lessons do you learn from the histories of these incredible technologies, and how do you try to apply those into your work at OpenAI?
Sam Altman: Okay. I have a non-consensus answer here. I do study all of those things. They're all super interesting. I do love reading about history. But I think most people sort of over-rotate on that. It's always tempting to try to learn too much. It's always tempting to say, “What did the atomic bomb people do? What can we learn about climate change?” And I think there are themes, there are some similarities, and you would be very stupid not to try to take those into account, but I think the most interesting learnings, and the most interesting way to think about it, is: what about the shape of this new technology, and the way this is likely to go, is going to be super different from what's come before?
Sam Altman: And how do we solve this with all of the benefit of what we've learned in the past, but without really trying to apply it directly, really trying to think about the world today, the quirks of this particular technology, and how what's going to happen is going to be different? I think AI will be super different from nuclear. I think it'll be super different from climate. I think it'll be super different from Bell Labs. And I think most people that try to do something like this take too much inspiration from efforts in the past, not too little.
Alexandr Wang: Yeah. That's super interesting. So how do you pick these? You mentioned that part of OpenAI's strategy has been to be relatively concentrated, to pick a small number of bets that you have high conviction on. How do you go about picking those bets, and what represents a good bet versus a bad bet?
Sam Altman: Do more of what works is part of the answer. And I think, weirdly, most research labs have a do less of what works approach. You know, there's this thing of like, oh, once it works, it's no longer innovative, we're going to go look for something different. We just want to build AGI and figure out how to then safely deploy it for maximum benefit. But if something's working, even if it then is kind of like a little bit more boring and you have to just put in a lot of grunt work to scale it up and make it work better, we're really excited to do that.
Sam Altman: We don't take the approach that personally makes very little sense to me, but seems to be what most research labs in most fields, not just AI, do, of “do less of what works”. So we have some thoughts. We may turn out to be wrong, but so far we've been right more than we've been wrong, about what it takes to make general purpose intelligence. And we try to pursue a portfolio of research bets that advance us to those.
Sam Altman: When we have scaled something up as much as we can, when we have gotten the data to be as good as we can get it, and we still can't do something, that's a really good area to go do novel research in. But again, the goal is to build and deploy safe AGI, share the benefits maximally well, and whatever tactics it takes to get there, we're excited about. And sometimes it's surprising. Sometimes you can just really scale things a lot. Sometimes what you thought would work needs a really new idea. But we keep finding new ideas, and we keep figuring out how to make bigger computers and get better data.
Alexandr Wang: Yeah. So one of the super interesting intellectual questions that many people engaged in AI have pondered is that AGI, at least theoretically, is certainly possible, because we accomplish it through our brains. And there's this interesting question of what the technological path to arrive at AGI actually looks like. And obviously this is almost a philosophical question more than a real technical question, but based on what you know today, all the research you all have done at OpenAI and what you've learned through that research, what do you think is the most likely path from here to something that represents AGI?
Sam Altman: Creating AGI in a computer is certainly possible. Either physics all works like we think, we're living in the physical universe, and consciousness, or intelligence, is sort of this emergent property of energy flowing through a network, in your case a biological one, in the computer's case silicon, and then it's going to happen. Or we are living in some sort of weird simulation, or we're like a dream in a universal consciousness, or nothing is like what we think. And in any case, you should live as if it's certainly going to happen. And so I still find it odd that people talk about maybe it's possible. I think you should certainly live as if it's possible and do your work as if it's possible.
Sam Altman: In terms of what we need, we don't talk too much about unannounced research, but certainly I think most of the world, ourselves included, have been surprised by the power of these large unsupervised models and how far that can go. And I think you can imagine, combining one of those that sort of understands the whole world and all of human knowledge with enough other ideas that can get that model to learn to do useful things for someone, that would feel super general purpose.
Alexandr Wang: Yeah. In that answer, you brought up one thing that I'm always very impressed by with you, which is this thought: you might as well believe that the technology is possible, because if it is, it changes everything.
Sam Altman: Yeah. There's this old philosophical debate, which is: either, like, let's sort of say Descartes was right. You can say that I have this certain knowledge that I am, I exist, my own subjective experience is happening, and you can't be certain of anything else. So maybe this is all like, you're in a virtual reality game, you're dreaming, it's some apparition of a God, whatever. Or it really is just physics as the consensus understanding has it. But in that case, it's totally possible to recreate whatever this subjective experience of self-awareness is. And so it's like, either you believe that physics is physics, or not, but in the “or not” case, then something else is very strange, so who cares?
Alexandr Wang: Yeah. Well, the other part I was going to mention is that part of what you mentioned, this belief in exponentials, has, I think, defined your career. And if you have a strong belief in exponentials, then the question for these great technologies is never yes or no. It's usually when.
Sam Altman: Yeah. For sure. That is another one. I think if you can train yourself to overcome one cognitive bias to sort of maximize value creation in your own life, this is the one: understanding these exponential curves. For whatever reason, evolution didn't prioritize this. We're very bad at it, and it takes some work to overcome, but if you can do it, yeah, it's super valuable. It makes you look like a visionary.
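As a back-of-the-envelope illustration of how far apart linear intuition and exponential reality end up, here is a toy comparison (the numbers are illustrative only, not anything from the conversation):

```python
# Toy comparison: thirty steps of "add one" versus thirty steps of "double".
# Both start from the same place and end up differing by a factor of tens of millions.
linear = 1
exponential = 1
for _ in range(30):
    linear += 1        # linear growth: ends at 31
    exponential *= 2   # exponential growth: ends at 2**30 = 1,073,741,824

print(linear, exponential)
```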
Alexandr Wang: There's actually a lot of neuroscience research showing that people have circuits in their brains to do all sorts of mental operations, like addition, subtraction, et cetera. But we're very bad at exponentials.
Sam Altman: If we can catch a ball or throw an arrow or something, yeah.
Alexandr Wang: We can do parabolas.
Sam Altman: We can do parabolas, apparently.
Alexandr Wang: Parabolas. Yep. That's where it ends. And this is, in some sense, also a philosophical question, but how obvious do you think it will be when we develop our first system that represents AGI? There are a few beliefs. There's one belief where it may be this emergent circuit in the middle of this giant soup of stuff. And we might not even understand that it exists when it emerges.
Sam Altman: I don't think we'll understand it when it emerges. I also think that it won't, this is pure speculation, I think it won't be this sort of single moment in time. It'll just be this exponential curve that keeps going. But there will be something that emerges that's quite powerful, that takes us a little while to really understand.
Alexandr Wang: Do you have a personal Turing test, something where, if it happened, it would be evidence that we've achieved it?
Sam Altman: In terms of something that's like, people always use this term slightly differently, something that's self-aware, or something that's just really generally intelligent and can learn fast? What do you mean by it?
Alexandr Wang: Generally intelligent, can learn fast, and, given enough education, can do anything that humans could do.
Sam Altman: Yeah. I think that's actually not a super hard test. For what you were just saying, that would be so economically valuable to the world that it will show up that way. And so once it can start doing some significant percentage of human labor really well, that would pass the test for me.
Ethical AI
Alexandr Wang: Yep. Very cool. One topic that's really come up a lot, especially recently, is this topic of responsible and ethical AI. I think any powerful technology that will change the world has the ability to be responsible, ethical, and good for the world overall, or bad for the world overall. How do you all at OpenAI think about ensuring that the benefits of AI are equally distributed?
Sam Altman: Yep. Two principles here. One, I think that people that are going to be most affected by a technology should have the most say in how it's used and what the governance for it is. I think that this is something that some of the existing big tech platforms have gotten wrong. And I believe most of the time, if people understand the technology, they can express their preferences for how they'd like that to impact their lives, how they'd like that to be used and how to maximize benefits, minimize harms.
Sam Altman: But in the moment, people, me included, don't always have the self-discipline to not get led astray. So I can certainly say that my best life is not scrolling all night on my phone, reading Instagram or whatever, but then on any given night, I have a hard time resisting it. And so I think if we ask people, like show people, here's this technology, how would you like it to be used, what do you want the rules of this advanced system to be, that's pretty good. And I think we'll get pretty good answers.
Sam Altman: And the second is, I really believe in some form, and there's a lot of asterisks to be placed here, but in democratic governance. If we collectively make an AGI, I think everyone in the world deserves some say in what that system will do and not do, how it's used, how we share the benefits, and how we make decisions about where the red lines are. I think it would be bad for the world, it would be unfair, and it would lead to a not very good outcome if a few hundred people sitting in the Bay Area got to determine the value system of an AGI.
Sam Altman: On the other side, sometimes in the heat of the moment, democracy doesn't make very good decisions. So figuring out how to balance these seems really important. But what I would like is sort of a global conversation where we decide how we're going to use these technologies, and then my current best idea, and maybe there's a better one, is some form of universal basic income, or basic wealth sharing, or something where we share the benefits of this as widely as we can.
Alexandr Wang: Definitely. One thing that's super interesting about AI is just that it's a very different paradigm from a lot of technologies that came before.
Sam Altman: Yeah. I think that always makes it hard. That's hard with any new technology, but for me it seems, and maybe everyone thinks this in their own era, but it seems particularly hard with this one to reason about, because it's so different.
Investment in Data
Alexandr Wang: Yeah. And one of the super interesting things that has really come up in a lot of recent instances of AI is this problem of bias that arises from the datasets. And if you talk to some folks, like Andrej Karpathy, who has been very public about this, there's a belief that data really does sort of 80, 90% of the programming, the true quote-unquote programming, of these systems. How much scrutiny do you think we should put, as a community, into the datasets versus the code and the algorithms in the development of responsible systems?
Sam Altman: I mean, I think what we care about is that we have responsible systems that behave in the way we'd like them to. And again, back to this sort of do more of whatever works, if we can get there with data alone, that'd be fantastic. If it requires some intersection of data and algorithms, which I suspect will be the case, plus sort of real-time human correction and user correction, that's fine, too.
Sam Altman: So I think we should have a design goal of responsible systems that are as fair as possible and do what the user wants as often as possible. And I think it will take all of the above. Certainly, I think there's a very long way to go with better data. And if you sort of think about the Holy Trinity here, data, compute, and algorithms, I'd say data has still been the most neglected.
Alexandr Wang: Yeah. And I think out of OpenAI, there was this amazing paper on scaling laws for large language models. And I think that was about the sort of scientific understanding of how that Holy Trinity interacts. I think it was…
Sam Altman: Also, you know, someday we're going to get to models that can tell us the kind of data they need and what data they're missing. And then I think things can get better very quickly.
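For readers who want the shape of the result being referenced here, the scaling-laws work describes test loss as falling roughly as a power law in model size, dataset size, and training compute, when the other two factors aren't the bottleneck. A minimal sketch of that form, with the caveat that the constants are empirically fit and depend on the setup:

```latex
% Rough shape of the scaling-law results: L is test loss, N is parameter count,
% D is dataset size, C is training compute. N_c, D_c, C_c and the small
% exponents \alpha_N, \alpha_D, \alpha_C are empirically fit constants.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
```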
Alexandr Wang: You know, one of the people we spoke with in one of our other fireside chats was Drago Anguelov, who's the head of research at Waymo. And one of the things that he discussed was this natural, almost built-in misalignment: neural networks are very good at optimizing average loss functions. They're just incredible at that. That's what they're naturally good at. But the loss function is not representative of what your design goal is, as you mentioned. So how do you all at OpenAI think about this misalignment, created by how the technology is developed, between what you actually want the system to do and what your loss function tells the system to do? And how do you think about aligning those over time?
Sam Altman: Yeah, I mean, this touches on the earlier question about bias. This is one example of why I think it's not only about datasets. I think this is an interesting example because precisely how we design these systems and what we optimize them for has a bunch of effects that are not super obvious, and depending on what you want the system behavior to be, the algorithmic choices you make are really important.
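To make the gap between the loss function and the design goal concrete, here is a minimal sketch with hypothetical numbers (nothing here reflects OpenAI's systems): a model that ignores a rare slice of the data can still score an excellent average loss, which is the metric training actually optimizes.

```python
import numpy as np

# Hypothetical toy setup: 95% of examples come from a "common" group and 5%
# from a "rare" group. A model tuned only for the common group still looks
# great on average loss while failing the rare group badly.
rng = np.random.default_rng(0)
n_common, n_rare = 9_500, 500

# Per-example squared errors: small on the common group, large on the rare one.
err_common = rng.normal(0.0, 0.1, n_common) ** 2
err_rare = rng.normal(1.0, 0.1, n_rare) ** 2

avg_loss = np.mean(np.concatenate([err_common, err_rare]))
rare_loss = np.mean(err_rare)

print(f"average loss (what training optimizes): {avg_loss:.3f}")           # roughly 0.06
print(f"loss on the rare group (a possible design goal): {rare_loss:.3f}")  # roughly 1.0
```

The average looks healthy only because the rare group is swamped by the common one, which is one version of the mismatch between loss functions and design goals being discussed here.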
AI Research
Alexandr Wang: So there's been a number of incredible breakthrough results in the AI research community over the past few years, many of them coming out of OpenAI, like GPT-3 and CLIP and DALL·E. One of the trends has been that these breakthroughs require more and more data, more and more compute, and more and more concentrated engineering resources. And so it seems to be the case that the effort required to do great research is increasing by quite a bit, and the resources required are increasing. How do you think about this impacting the landscape of AI research?
Sam Altman: I don't think I entirely agree with that. I would say that to build the most impressive, sort of useful systems, that does require huge amounts of data and compute. So to make GPT-3 or four or whatever, that requires a large and complicated effort and a high amount of various types of expertise. And there's only a handful of companies in the world that can do that. But the fundamental research ideas that make those systems possible can still be done by a person or a handful of people and a little bit of compute. In fact, most of OpenAI's most impressive results have started that way and only then scaled up.
Sam Altman: So it sort of makes me sad to hear researchers saying, “Well, I can't do research without a 10,000-GPU cluster.” I really don't think that's true. But the part that is true is that scaling it up into the maximally impressive system that someone else can use, that is going to be a narrower and narrower slice that can do it.
Alexandr Wang: Yeah, so I think it's interesting. I do think it's empowering for the researchers and prospective researchers in the audience today to believe that; you should certainly believe that it is possible to do great research independent of all these resources. But given what you just mentioned, that creating the most advanced, highest-performing systems for other organizations to use requires lots of resources, what do you think that means for what the right collaboration between the research community, industry, and government needs to be for the maximum benefit of AI technology?
Sam Altman: I don't think we really know that yet. I mean, there's clearly going to need to be some, but collaboration is always tough, right? It's always a little bit slower and a little bit more difficult to get to work than it seems like it should be. And so what I am most optimistic about is that there will be organizations like OpenAI that will be at the forefront of creating these super powerful systems. And then we'll work with the government, other governments, and experts in other fields to figure out how we answer these hard questions. What should we do with this system? How do we decide how we all get to use it? So my guess is it'll be something like that.
Alexandr Wang: One question that I think is interesting to ask regarding OpenAI is what are the bottlenecks that you all have experienced in scaling up OpenAI? And do you think those are reflective of the bottlenecks you will continue seeing?
Sam Altman: Like scaling up the organization itself?
Alexandr Wang: The organization and the research and results all together.
Sam Altman: Honestly, very standard, boring. There are these things that work in a 20-person organization that don't work in a 150-person organization, and you just have to accept somewhat more process and planning and slowness in exchange for dramatically more firepower. But I don't think there's a deep thing unique to OpenAI here.
Alexandr Wang: One thing that you mentioned before is how OpenAI works and why you think it's been so successful, but oftentimes the North Stars of these organizations are so important for how they'll develop over decades. How would you describe the mission and the overall North Star of OpenAI?
Sam Altman: I think I've said it a couple of times without meaning to, but our mission is to build and deploy safe AGI and maximally spread the benefit. That's simple, it's easy to understand. It's really hard to figure out how to build safe AGI, but at least it's clear what we're going for. And if we miss, it won't be because of a vague mission. I really do believe that good missions fit in a sentence and are pretty easy to understand. And I think ours is, and that's very clarifying whenever we need to make a decision.
Sam Altman: The thing that I think we could do better at, that I think almost all organizations could do better at, even people who get the mission right, is the tactics. I recently heard of a CEO who wore a t-shirt to the office every day with his top three or five priorities printed on it. And I was like, that is a good idea. Not only should the CEO do that, but everybody at the company should wear a t-shirt with that every day, so everyone's looking at it when they're talking to somebody else. That's the thing that I think people don't get quite as right.
Alexandr Wang: I know that CEO, and there's a remote version of that, which is you set your Zoom background for your top priorities.
Sam Altman: Interesting.
Alexandr Wang: OpenAI just crossed its five year anniversary. Is that right?
Sam Altman: Yeah, I think so.
Alexandr Wang: And so it was founded a little more than five years ago. I think my assumption would be that in the past five years, it's accomplished a lot more than what you'd expected. Maybe when you started it, what were your expectations for what would be possible within this timeframe, and how have you done with respect to those?
Sam Altman: I mean, this probably speaks to just an absolutely delusional level of self confidence, but basically I thought it would go exactly like this.
Alexandr Wang: Exactly like this. You thought GPT-3 would
Sam Altman: I mean, not like, I didn't know it was going to be called GPT-3, but I thought we would be about here by now, and thanks to a lot of incredibly hard work from a lot of very smart people, here we are.
Alexandr Wang: So where do we get to in five years from now?
Sam Altman: I don't like to make public predictions with timelines on them.
Alexandr Wang: Where do we get to next? Vaguely.
Sam Altman: I think one very fair criticism of GPT-3 is that it makes a lot of stupid errors. It's not very reliable. And I think it's pretty smart, but the reliability is a big problem. And so maybe we get to something that is subjectively a hundred times smarter than GPT-3, but 10,000 times more reliable. And then that's a system where you can start imagining way more utility. Also, something that can learn and update and remember in this way that GPT-3 doesn't. The context is gone. All of the background of you is gone. If the system can really sort of remember its interactions with you, I think that'll make it way more useful.
Future of AI
Alexandr Wang: I think one thing that's kind of happened within the field of AI research is this incredible upleveling of what it means to do AI research. Originally, if you think back maybe 20 years ago, building world-class AI systems would involve a lot of hand feature engineering and a lot of manual parameter tuning to do things correctly. And then with more modern machine learning methods, that kind of went away, and it was maybe more about hyperparameter tuning and identifying the right architectures. And that was where 89% of the work went. And then with the recent breakthroughs with transformers, all of a sudden the architectures are just copy-paste.
Alexandr Wang: And so it's been this like leveling up, leveling up, leveling up. Where do you think this goes? What do you think are the things that we do today that take up a lot of our time with machine learning research that in the future are going to be meaningfully automated?
Sam Altman: We still have to write the code. I mean, at some point the AI will be writing the code for us or helping us write the code.
Alexandr Wang: Do you think it'll happen soon?
Sam Altman: Again, no time predictions. I think it'll happen at some point. And I think that will meaningfully change people's workflows.
Alexandr Wang: What are some of the short term use cases of AI that you think are sort of right around the corner, that you believe are going to be very impactful for the world on the whole, that people maybe aren't thinking about or aren't expecting?
Sam Altman: I don't think I have anything deeply new or insightful to say here, but if you think about the things in the world that we just need a lot more access to high-quality versions of, everybody should have access to incredible educators. Everybody should have access to incredible medical care. We can make a long list of these. But I think we can get there pretty soon. I think I can imagine sort of like, you know, GPT-7 doing these things incredibly well and that having a really meaningful impact on quality of life. So I think that's awesome.
Alexandr Wang: Yeah, it's really great. And I think we can see glimpses of that in GPT-3.
Sam Altman: Yeah, for sure.
Alexandr Wang: The power of it to understand-
Sam Altman: Super early glimpses, but it's clearly there.
Alexandr Wang: The ability to sort of distill human knowledge is really incredible. Today, there are more and more people going into the field of machine learning research than ever before, and if you were to give a few words of direction to this community of people who are all coming into machine learning, all looking to do incredible work in the field, what would be a few vectors of direction that you'd give this community to be maximally impactful to humanity?
Sam Altman: I will pick only one piece of advice for a young researcher, which is: think for yourself. I think that this is pretty good advice for almost everyone, but something about the way that the academic system works right now, it feels like everyone should be doing this really novel research, yet it seems so easy to sort of get sucked into working on what everybody else is working on, and that's what the whole reward system optimizes for.
Sam Altman: And the best researchers I know in this field, but really in most others, too, are the ones that are willing to trust their own intuition, to follow their own instincts about what's going to work, and to do work that may not be popular in the field right now. They just keep grinding at it until they get it to work. And that's what I would encourage people to do.
Alexandr Wang: Yeah. To wrap up, with respect to AI and everything that's been happening today, as we discussed before, there are very few mental models that people can use to actually understand how to think about AI and how it will change the world, just because it's a new technology with new fundamental characteristics. One topic that really interests me is what are the changes to the physics of economics that AI will encourage?
Alexandr Wang: I think when software first came out, there was an interesting change, where software cost a lot to develop but was basically zero cost to reproduce. That was an incredible thing. What do you think are some of the qualities of AI technology, or some of the characteristics of AI, that are going to meaningfully change how we think about the physics of our economic systems?
Sam Altman: Well, the cost of replicated goods went to zero with software. I think the cost of labor, for many definitions of the word labor, should go to zero. And that makes all the models very weird. So if you had to pick one input to the economic models that is not supposed to be zero, my expectation is that's it. And there's been this long-standing shift of, in my opinion, too much power from labor to capital. But my intuition is that should go way further. And I think most of the interesting economic theory to go figure out is how to counteract that.
Sam Altman: There are all these arguments about, like, is it going to be deflationary or inflationary? It seems obvious it should be deflationary, but I think there are these other things that are weird, like what it does to the time value of money. I actually don't know if he really said this, but I've always heard attributed to Marx the quote that, “When the interest rates go to zero, the capitalists have run out of ideas,” which is sort of interesting in a world where we've been at zero rates for so long. But maybe we get a lot more ideas really quickly once we have AGI, and maybe something crazy happens with interest rates. I think all of that stuff is hard to think about.
Alexandr Wang: Yeah. Super cool. Thank you so much for joining us, Sam.
Sam Altman: Sure.
Alexandr Wang: It's always interesting to talk to you about these ideas, and we're very thankful for how thoughtful you are about them.
Sam Altman: Thanks for having me.