Scale Events
October 28, 2021

Eric Schmidt Discusses the Geopolitics of AI

A TransformX Highlight

At TransformX, we brought together a community of leaders, visionaries, practitioners, and researchers across industries to explore the shift from research to reality within Artificial Intelligence (AI) and Machine Learning (ML).

In this TransformX session, Eric Schmidt, a co-founder of Schmidt Futures and a former CEO of Google, discusses how AI will shape our global future. Eric joined Scale AI CEO Alexandr Wang in this fireside chat at TransformX.

Introducing Eric Schmidt

Eric Schmidt is the co-founder of Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better, and a former CEO of Google. He also hosts the podcast “Reimagine with Eric Schmidt”, where he explores how society can build a brighter future after the global coronavirus pandemic.

<br/> <br/> <div style="position: relative; padding-bottom: 56.25%; height: 0;"><iframe src="https://fast.wistia.com/embed/medias/6h4bf8sh0p" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>

- Eric Schmidt, co-founder of Schmidt Futures and former CEO of Google, sits down with Alexandr Wang, CEO of Scale AI

What Are Eric Schmidt’s Key Takeaways?

Shaped by AI - The New World Order

As AI grows more widespread, Eric explains, the primary competition within the field of AI will not be between companies but rather between the United States and China. Because both countries are large enough to make their own markets, we'll likely see the internet splinter into two separate directions as the United States and China follow different approaches to AI development.

The Internet as a Strategic Asset

Eric shared that, unlike when he started at Google, the internet is no longer optional. It has become a powerful tool for people around the world, providing nearly limitless ways of communicating information, driving business, and improving modern standards of living. China, Eric says, has approached this challenge by using surveillance to make it nearly impossible to remain anonymous online. Online speech is recorded, and certain actions can be criminalized and prosecuted. By surveilling citizens and controlling the information available online, China is building an internet that supports its government's strategic goals. The question remains: what similarities will we see in the way China uses AI?

197 Countries Face a Critical Choice

The United States, on the other hand, tends to prioritize free access to information. While it's clear how the ideologies of the United States and China could shape the future of AI, the unanswered question is, what direction will the other 195 countries go?

Eric suggests that most democracies will likely work with the United States, while authoritarian countries and countries with weaker governments will likely follow China. Additionally, Eric shares how Belt and Road Initiative countries will likely become clients of China’s information space. Some countries, especially those that are highly dependent on China economically but aligned with the United States ideologically, will likely be more divided.

What Does This Mean for the United States?

Eric states that the global growth of AI technology is a national security issue for the United States. In March, the National Security Commission on AI assessed that the United States was one to two years ahead of China in AI, but China quickly showed that assessment was optimistic. By June, they had demonstrated a model capable of producing human-like text, comparable in scale to OpenAI's GPT-3.

How Can We Stay Competitive?

To ensure that our AI technology advances at a competitive pace, Eric suggests that we need to focus on improving our own technology infrastructure now by building a national research network. This could involve free university tuition for students focusing on AI and working with our allies to develop technologies. We also need to establish a set of guidelines and ethics around the development of AI and create a national plan for AI development that includes leadership at the presidential level.

How Much Time Do We Have?

Eric explains that AI advancement is an issue we cannot wait on. TikTok, for example, illustrates a high-quality platform from China that is successful because of an advanced AI algorithm that matches users with content. The algorithm performs well enough that Eric would not have expected it for another 5 or 10 years. We have relatively little time to get ourselves organized and advance our own technology.

According to Eric, we need to focus on developing AI technology now, and we need to do so by establishing guidelines and ethics rules that apply to everything in AI.

The Next (AI-Driven) Industrial Revolution Is Coming

Change is coming quickly through AI technological advancements, and we need to think about how AI can change the world. AI can damage the information space by perpetuating fake news, but it can also help us to learn new and important things. For example, AI can be used to enhance drug development by simulating synthetic biology or to address climate change by generating advanced climate models. AI can even be used to generate information when we don’t have labeled data on hand. For example, AI can be used to detect when mice are asleep, essentially labeling the raw data without the limits of expensive testing equipment. This can be vital for speeding up scientific breakthroughs.
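
The mouse-sleep example above is a form of pseudo-labeling: a model fit on a small hand-labeled sample assigns labels to the rest of the raw data, replacing expensive instrumentation. A minimal sketch using a nearest-centroid classifier on synthetic one-dimensional "recordings" (the data, feature, and function names here are illustrative, not from the talk):

```python
import numpy as np

def fit_centroids(X, y):
    # Compute one centroid per class from the small labeled set.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def pseudo_label(X, classes, centroids):
    # Assign each unlabeled sample the class of its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Tiny synthetic example: one feature per recording (hypothetical amplitude).
rng = np.random.default_rng(0)
labeled_X = np.array([[0.1], [0.2], [0.9], [1.0]])   # hand-labeled recordings
labeled_y = np.array([0, 0, 1, 1])                   # 0 = awake, 1 = asleep
unlabeled_X = rng.normal([[0.15]] * 50 + [[0.95]] * 50, 0.05)

classes, centroids = fit_centroids(labeled_X, labeled_y)
labels = pseudo_label(unlabeled_X, classes, centroids)
print(labels[:5], labels[-5:])
```

Once the cheap pseudo-labels exist, a larger model can be trained on the full dataset without the original labeling bottleneck.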

Eric explains that the AI-driven industrial revolution will impact different fields at different paces. Fields with less regulation will adapt more quickly to these changes, while more regulated industries will take longer. In all industries, however, we'll need to figure out how to build a better-educated workforce that's empowered to work with technology. Otherwise, AI will lead to fewer well-paying jobs, and the Gini coefficient, a statistical measure of income inequality across a population, will continue to increase.
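
The Gini coefficient mentioned above summarizes inequality across an entire income distribution: 0 means everyone earns the same, and values approaching 1 mean one person earns nearly everything. A minimal sketch of how it can be computed, using a standard closed form over sorted incomes (function name and sample data are illustrative):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, values near 1 = extreme inequality.

    Uses the closed form over sorted values x_1 <= ... <= x_n:
        G = (2 * sum(i * x_i) / (n * sum(x))) - (n + 1) / n
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * x) / (n * np.sum(x))) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # perfectly equal incomes -> 0.0
print(gini([0, 0, 0, 10]))   # one person has everything -> 0.75 (max for n=4)
```

Note that for small samples the maximum value is (n - 1) / n rather than exactly 1, which is why four people with one earner yields 0.75.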

The rate of change in technology is faster than it has ever been, and it compounds with each new technology. Eric suggests that the key to breaking into AI is to build a product ten times better than what came before it. By building a much better product, you can disrupt any industry.

The Greatest Challenge to AI Is Its Usability

Because of the complexity of the systems involved, AI models are typically built by people with PhDs. These models are difficult to create and even harder to interpret: the mathematics behind the scenes is involved enough that developers often need doctoral-level training just to understand what a model is doing.

To truly take advantage of AI technology, Eric proposes that we need to build up tools and infrastructure for AI so that people with a standard technical education can master the art of building and interpreting these models. AI should help developers figure out how to solve a problem rather than obfuscating how a particular system works. By making AI more accessible to people with technical backgrounds, we can greatly increase the amount of progress we make in the realm of AI development.

Want to Learn More?

  • See more insights from AI researchers, practitioners and leaders at Scale Exchange

About Eric Schmidt

Eric Schmidt is an accomplished technologist, entrepreneur and philanthropist. He joined Google in 2001 and helped grow the company from a Silicon Valley startup to a global leader in technology. He served as Google’s Chief Executive Officer and Chairman from 2001 to 2011. Under his leadership, Google dramatically scaled its infrastructure and diversified its product offerings while maintaining a strong culture of innovation.

In 2017, Eric co-founded Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better. Eric currently serves as Chairman of The Broad Institute Board of Directors and the National Security Commission on Artificial Intelligence. He also hosts “Reimagine with Eric Schmidt,” a podcast series of conversations with leaders to explore how society can build a brighter future after the global coronavirus pandemic.

Session Transcript

Nika Carlson (00:24):

For our first speaker of the day, we're honored to welcome Eric Schmidt. Eric Schmidt is an accomplished technologist, entrepreneur and philanthropist. Eric served as Google's chief executive officer and chairman from 2001 to 2011, as well as executive chairman and technical advisor. Under his leadership, Google dramatically scaled its infrastructure and diversified its product offerings while maintaining a strong culture of innovation. In 2017, Eric co-founded Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better. Eric currently serves as chairman of the Broad Institute board of directors and the National Security Commission on Artificial Intelligence. He also hosts Reimagine with Eric Schmidt, a podcast series of conversations with leaders to explore how society can build a brighter future after the global coronavirus pandemic. Eric is joined by Alex Wang, CEO and founder at Scale. Alex, over to you.

Alex Wang (01:35):

Thank you so much for sitting down with us, Eric. We're super excited to be chatting with you and really glad you're taking the time to discuss AI in the future with us.

Eric Schmidt (01:44):

And Alex, thank you for all the help you've given me in strategy and AI over the last few years.

Alex Wang (01:50):

Well, so I want to really dive into two main topics. The first is AI in the new world order and the second is the AI industrial revolution. But before we get there, you were at Google through many formative years of AI development. Many modern techniques of AI were actually sort of invented or strongly developed at Google in your time there. And I'm kind of curious, the past decade or decade plus has obviously been this incredible boom for AI and its emergence, but what was your first a-ha moment with AI? When did you first realize the potential of the technology and what the sort of massive potential in the world could be?

Eric Schmidt (02:32):

Well it was really 2011. What happened was a group of people, including I think the founders of this movement, did an experiment where they analyzed YouTube trying to figure out what they could find. And they discovered the concept of cats using essentially what we now know as unsupervised learning. And it was quite interesting because we always thought there were a lot of cat pictures on YouTube, but the fact that that's what it would discover after looking at the corpus of YouTube information was rather disturbing.

Eric Schmidt (03:07):

But all of a sudden that started the prospect: a group inside of X, Google X, created a group called Google Brain. And that group began to build what you know today as BERT and Transformers and the other systems. Google had been using various forms of machine learning for advertising for a long time. But the fact that we then had a language model and we had an ability to do predictive text and we could actually look at binary synonyms and so forth, allowed us to materially improve both search and advertising. Today, my opinion, I'm no longer there, is that much of the gains that they're seeing in revenue and in search quality are actually coming from the technologies built starting with Google X and the discovery of cats in YouTube 10 years ago. Around the same time, we also purchased a company called DeepMind, which seemed impossibly expensive when we bought it. And today it looks like one of the smartest decisions ever made because of course they're the leader in reinforcement learning globally.

Alex Wang (04:17):

Yeah, no, it's absolutely incredible. I think to your point, the impacts of these technologies, they're so deeply embedded now within these large technology products that it's almost impossible to imagine what the gains would look like without these incredible technologies.

Eric Schmidt (04:34):

Well, I can give you a simple way of thinking about it, which is AI companies are defined as companies that do something involving learning. And if you think about it, if you're a consumer tech company, the more clicks you have, the more learning opportunities you have. So here you have a situation where with every action you take at Google, you have a learning opportunity for both the quality of the answer, but also the quality of the ad. Now the goal 20 years ago was to end up with one search result, one ad, the perfect search result and the perfect ad. Well, this technology may get us close to that in the sense that the understanding that you get from learning what works in this situation, in this context, is so powerful. And the interesting story inside of Google was that in both cases, the traditional teams, the normal teams, if you will, who were doing traditional algorithmic programming, who were brilliant, I might add, right?

Eric Schmidt (05:37):

They brought the company to this point. We created a competitor in the form of a machine learning model, typically built on top of what you would know as TensorFlow, using the same database and so forth, and on our fast CPUs. And we would see what would happen. Often, the interesting thing is that the computer did not come up with a very different result, but it got there in a very different way. So what would happen is you'd run the tests, and then the traditional team would look at what the AI was sampling and would say, it never occurred to us that there was a correlation between A and B. So if you believe that at its core, in the last decade, AI was good at looking for patterns that humans couldn't see, just mathematically couldn't see, then that produced a lot of the gains.

Eric Schmidt (06:31):

Now the same is actually true in inverse, which is that generative models, which is what everyone is doing now, have the property that they can both be used in things like GANs, but you can also use them to generate something new, which is the next thing. So think of that as prediction. So we went from deep understanding, and because we had the understanding, we could predict the likely next outcome with some certainty. Well, that's very powerful too. So if you go back to the consumer tech companies, just to close it out, why have the consumer tech companies gotten so excited about this? Because it directly improves customer quality, directly improves customer revenue, and it also allows individual targeting to the person, or the cohort that the person appears to be part of, without having to profile them. Now, it's important to say that it's not perfect. They make mistakes and so forth, but that's driving this huge increase across the board.

Alex Wang (07:32):

Yeah. I actually really want to, this will be a discussion topic later, because one thing that I want to dive in with you on is what is the fundamental change to business models that AI helps enable? So, but to kind of dive into our first primary topic around AI in the new world order, one of the things that we've spent a lot of time talking about is what does AI do to governments, how governments need to interact with one another, how they interact with their citizens. And there's a lot of pretty deep sweeping changes. And so kind of taking a big step back, what geopolitical trends do you see that will ultimately impact the long-term benefits of AI and what are sort of the important shifts that we should be paying attention to kind of like the global level today?

Eric Schmidt (08:20):

Well, I would say to start with that the most important competition is not going to be Google versus Apple or whatever. It's going to be China versus the United States. And let me make the case for it. We all understand the power of America, our innovation model; we're the envy of the world. This was an accomplishment from people over 70 years, essentially since Vannevar Bush, maybe 80 years, to build a system of research scientists, innovation, and so forth. It brought us GPS. It brought us semiconductors. It brought us the internet, social networks, et cetera. Everything that you could imagine came out of that maelstrom. China has a different model, but an equally powerful one. They've got four times as many engineers. They've got a huge number of companies. As you know, because we've looked at this, the companies work 996: 9:00 AM to 9:00 PM, six days a week. By the way, that's illegal in the United States.

Eric Schmidt (09:19):

Don't do it here. So you have a work ethic, you have a profit motive, and you have a scale of platforms that China can exploit. They'll be exploited differently. But the important thing is that both the US and China are large enough markets to make their own weather, right? They're so big, they generate their own storms around each other. They create platforms that are unique within their own domain. That is a stable structure for the rest of our lives. And unless something changes materially, I don't think we're going to see one internet. I think we're going to see this splintering, at least at the applications layer. And the fundamental reason this is occurring in retrospect is the internet when I was doing this at Google was essentially optional. You could do everything without it. You could do it with it. People like you and me thought it was really cool.

Eric Schmidt (10:11):

It worked really well. Today, the internet is no longer optional. The internet is fundamental to everything. So every form of human activity occurs on the internet, including the ones you don't like. And if you look at China, the Chinese have solved the internet problem in their own way. They've criminalized or made it impossible to be anonymous on the internet. Well, that cuts down a whole bunch of stuff. So all of your speech is recorded, and the police can prosecute you using their laws, which are quite vague in these areas. So it's a very different internet. They also don't allow the American companies to operate. And we typically don't allow them to operate in our country, with the notable exception of TikTok. And so the result is two different information spaces, and those information spaces will shape the outcomes, because people take cues from the information space. And in the tech industry, you and I have done this for a very long time.

Eric Schmidt (11:08):

The tech industry has its own sort of sociology. It's kind of a certain worldview. It's somewhat libertarian. It's quite liberal in terms of personal behavior. It's relatively conservative financially. It has its own little political zeitgeist, and it's big now. But I learned a while ago, and I'll say it to everyone: we're not the same as everyone else. The fact that you and I and everyone watching this are tech people, we are a minority compared to most societies, probably thank goodness, by the way. And we need to respect the fact that they operate in other ways. I think that, to answer this one level more specifically, China is building an internet that makes sure that China remains in power, that it maintains what it considers an appropriate level of control over its citizens. You see this in the surveillance and social credit systems, all of which are at various levels of deployment.

Eric Schmidt (12:07):

Now I'm not a Chinese expert and you know more about it than I do, but I would bet that their strategy is going to work, that the technology that they're deploying will be successful. I'm not endorsing it by saying that, please don't take me out of context. I'm simply saying, I think that they have a strategy. They have the resources and the directive. The last time, well, two times ago, when I was in China, I was with the minister who regulated us. And he gave a speech, which I attended, where he explained that the only solution to the problems in the internet was regulation.

Eric Schmidt (12:38):

They're not hiding what they're doing. They know what they're doing. I'll give you another example. It's very hard to get a VPN now, because the great firewall uses various forms of machine learning to try to find them, right? So that's another example of a small thing, but added all up, they're very different. So having said that, the next question is: what do the other 195 out of 197 countries do? And a reasonable prediction is the democracies will form around this US-Western consensus, and the democracies, by the way, include Japan, South Korea, India is a democracy, and the countries that are fundamentally authoritarian, or so weak they can't tell what they are, are highly likely to be influenced by the Chinese architecture, because they benefit from it.

Alex Wang (13:30):

Yeah, well, one thing that's pretty curious is that China is the biggest lender to a huge percentage of the countries in the world. They're also the biggest trade partner to many, many countries, the biggest lender too. It's something like 70% of countries or whatnot. What do you think happens to countries that are sort of in the middle, ones that are highly dependent on China for economic activity, but also ideologically are more aligned with democratic values in the United States?

Eric Schmidt (14:03):

Well, a good example is Australia, and Australia over the last year has taken a very tough stand against the demands of the Chinese to essentially squelch certain kinds of dissent and other, if you will, interference in Australian affairs. And they've done that knowing that by far their largest trading partner is China and China is central to their future growth. So far that strategy has worked. I think it's reasonable to expect that the BRI countries, the so-called Belt and Road Initiative countries, will all become essentially clients, if you will, of the Chinese information space. So it'll have the Chinese information rules, which includes surveillance and some forms of censorship, because literally they're going to get their technology from China. So they sort of get it for free. To me, the hard question is what does Germany do? The number one business partner of Germany is not the US, it's China, right?

Eric Schmidt (15:03):

Right. The companies that I have spoken with, in Europe most recently, their number one international market is not the U.S., it's China, and their number one supply base, the things they buy come from China not the U.S. Now, I think they're going to stay in the Western fold for a hundred zillion reasons, but they're in a tough spot as this division occurs. I'm not suggesting that we're going to fully decouple. What I'm suggesting is that we're going to have an incredibly uncomfortable competition with China, where China will uncomfortably force people into these spots and they'll have to make tough decisions.

Eric Schmidt (15:41):

So if you go to the leading countries, the leading Western partners, we're going to keep them. You start to wonder about countries like Hungary, as an example, which is having trouble deciding how closely it wants to follow EU norms. I think they'll probably stay in the West. But let's pick my favorite example, Tunisia, which is a fantastic country. They don't make much, and they're very nice people. It's very well run by comparison to its peers. It will have more money and more opportunities with China than with the U.S. How does it decide?

Alex Wang (16:13):

Right. Right. And especially when we think about how this pertains to AI, and we've talked about this, there are two ways to do AI. There's one way in which you do AI which is authoritarian in nature. You know, you're focused on massive data centralization in a relatively unambiguous way. And then there's ways to do AI which are far more mindful of bias, and ethics, and privacy, and all the things that I think in the Western world we think are really important.

Alex Wang (16:46):

One interesting front of this battle has been that China has been leading the creation of global norms on AI, and there's certainly this sort of conflict around the standards of AI. One thing I'm curious about is, how do you think this ultimately will develop? Do you think this is just yet another front of a general bifurcation, and what can be done to avoid a paradigm where maybe the more authoritarian regime of AI becomes the dominant one globally?

Eric Schmidt (17:18):

Well, I think the fight is underway as you said, and everyone focuses on the fact that China has more data and fewer privacy rules. I will tell you that the commission that I had the privilege of leading, which finished its work a few months ago, called the National Security Commission on AI, appointed by Congress, found that algorithms are just as important as data. And we recommended a number of things, and perhaps the most important of them for universities, in addition to doubling research funding, which is sort of a no-brainer, is to build a national research network. The idea is that the big companies, Google being an obvious example, have tremendous hardware, tremendous scalability. But little startups, smaller research groups, those sorts of things, don't have it.

Eric Schmidt (18:05):

So a number of universities have put together a proposal, I'm familiar with the one from Stanford, and we're working hard to try to get that funding into the NDAA. The NDAA is the yearly appropriations bill for the military and national security along with a number of other things. But what I would say is, we can't win by adopting somebody else's good practices. We have to take our good practices and make them stronger. How did we get here? First, we allowed high-skills, high-value immigration. Sort of a no-brainer. Most people would prefer to work here than in China. If they're not from either, they'd prefer to work in the West. Why don't we let them in? Why don't we work hard to increase our research funding and also work hard to get more talent into our government?

Eric Schmidt (18:58):

Government has a lot to say about how these things are done. Remember, the American government is very complicated. You have all the states, all the federal regulatory bodies which are allegedly independent, and then you have the White House, and the military, and so forth. These are huge operations. They have relatively little technical talent to even understand these debates. Most of the AI work that you see coming out of them has been done by volunteers, outsiders. I worked for the defense department for five years as a consultant, and as part of that we produced an AI ethics proposal for the DOD, which it adopted, which is one of our better wins. So it can be done, but it requires all of us ... Let me just get on my soapbox for a sec. Sorry. This is a national security challenge for the United States.

Eric Schmidt (19:44):

If you want, for the next 20 or 30 years, American technology, American values, American startups, to be global platforms, we need to get our act together now, because our competitor, by the way, they're not our enemy but they're our competitor, China, is busy doing exactly that. They're doing it in energy, transportation, electronic commerce, where they're already leading, surveillance, where they're already leading. They're working very hard in quantum, where we're still leading, I think, and they're working very hard to catch up in AI. I'll give you an example. In March, we said that we were one to two years ahead of China in AI. In June, they demonstrated a universal model of a size similar to that of GPT-3, OpenAI's GPT-3, which is a significant accomplishment on China's part. Now, maybe it's not as good, but the important point is they know what they're doing and they're on their way.

Alex Wang (20:34):

Yeah. You know, one question that naturally comes out of this ... I think you've kind of said in the past, "Hey. The government is not prepared for the ways in which digital technologies and AI are going to completely change the way that systems work." And I think to your point, it's very important that governments have more technical talent so they're able to see around the corner and understand the overall implications of how the system will evolve in tandem with these digital technologies. What do you think are some of the key ways the government needs to be thinking about how the system will evolve, and therefore what are the key initiatives that matter a lot, outside of just ensuring that we have an innovative environment where we're building the great technologies?

Eric Schmidt (21:15):

Well, let's start by saying we need to have an innovative environment where we're building the great technologies. Let me observe that the vast part of the innovation is occurring in private companies. 50 years ago the vast amount of this kind of innovation was being done in government labs and in universities. So it's crucial that the tech industry be allowed, if you will, to go as far as it can with these technologies. And one of the issues is people like to prematurely regulate things that have not occurred yet. So why don't we wait until something bad happens, and then we can figure out how to regulate it. Otherwise, you're going to slow everybody down. Trust me, China's not busy stopping things because of regulation. They're starting new things. That's the first point.

Eric Schmidt (22:02):

I think the second point is that we have to remember that in a national security situation like this, we actually have to have a coherent national plan. If you take a look at Operation Warp Speed, you had a situation where universities invented mRNA, the private sector built the vaccines, and the government guaranteed the market whether it worked or not. I don't know whether you want to call that industrial policy. You call it what you want to. And under Trump. And it was a national emergency where we were very, very, and correctly, worried about it, and we innovated. By the way, BioNTech, which is the source of Pfizer's vaccine, is a European, actually German, company. So there's lots of examples where if we put our mind to it, we can do this.

Eric Schmidt (22:48):

Now, it requires leadership at the presidential level. It also requires shifting resources. What I found in my work with the government is that everyone gives speeches all day about what the government should do, but the government only does what it's supposed to be doing at the moment, and it tends to self-propagate. So if you want to change a company, you change it from the customers and bottom up. If you want to change the government, you have to start from the very top, and it's very directed. Another example with the DOD was they have a process which is internally known as the POM process. The funding for AI was proposed, and then you have to wait two years for the money to show up, and then you have to get the deployment plan, and so forth. It's built around 15-year weapon systems.

Eric Schmidt (23:33):

Now, we managed to find a way to get around that, but it's a good example of how the systems aren't capable of rapid change. And if AI is anything, it's something which normalizes an awful lot of data and provides new insights. We should spend a minute and talk about how powerful AI will be, for example, for science. Right? It will transform our understanding of biology, and chemistry, and material science, and all of those kinds of things. Those are the basis of the next trillion-dollar industries. You know, the trillion-dollar industries that exist today are in software. Thank goodness. That was what I was doing. And the next generation will be in the application of digital technology and AI in these other industries which are huge. Right? So think about drug discovery, healthcare, 18% of GDP today. Anything that you can do that materially affects that is a huge company.

Alex Wang (24:28):

Yeah. No, I'm really excited to talk to you about it. And in a sec here we'll talk about the AI industrial revolution. I think it'll be a great topic. But just to motivate what you're saying right now: as part of the National Security Commission on AI, you came up with a list of recommendations. And I think one of the things that you just noted, which I'm very sensitive to, is that the timelines upon which governments implement these sorts of recommendations can be quite long. It can be years at a time, not months or faster. What do you think the urgency here is? What is the time window in which we need to be operating to be successfully competitive?

Eric Schmidt (25:11):

Well, I'll give you an example: TikTok. TikTok is taking the world by storm. It's extremely popular in the United States. President Trump, using a series of tactics, tried to get it changed, tried to get its U.S. operation hosted and owned, most recently on the Oracle platform. None of that actually happened. TikTok is a good example of the first real breakout platform from China. By the way, it's a high-quality platform. And much of its apparent success is because it has a different AI algorithm for matching. It actually matches not to who your friends are, but rather to what your interests are, using a very, very special algorithm.

Eric Schmidt (25:58):

That's an example where I would've told you that would not occur for another five years. So we have relatively little time, maybe a year or two, not five or 10, to get ourselves organized around the initiatives that we raised. To repeat, more money for research, building a national research network, working with our allies, establishing guidelines and ethics rules that apply to everything that are consistent with American values, hiring people into the government. In our report we make a set of very specific recommendations for how the defense department and the intelligence communities should work. They're typically of the form, take this function and make it more senior, and give it more resources.

Eric Schmidt (26:46):

We also make a couple of other suggestions, including creating a civilian university for technical talent that would be free in return for up to five years of work in any form of government, not just the military. We also have a proposal for a reserve corps, modeled basically on ROTC, where people could spend up to 30 days inside the government helping them and then go back to their jobs in a legally supported and promoted way. There are plenty of people who want to help our government get to the right outcomes here. They want national security. They care about doing things the right way. They want to do it with the right ethics. They want to be involved. We can do this.

Alex Wang (27:27):

Yeah, and then just as a closing thought here: I agree with you. I think the urgency is really there. It is a question of the next few years, not a question of the next decade, in terms of the initiatives we need to drive. What are some of the most critical and existential problems that could arise from a mismatch in AI technology between the U.S. and China? For example, it's not a fun day if China builds an AI that's able to silently execute complex and difficult-to-track cyber attacks, and so-

Eric Schmidt (28:03):

I think there are simple ones that are obvious. It's a balance of power in terms of cyber. So you could imagine an offensive cyber weapon or a defensive cyber weapon that was stronger than anyone else's. That's kind of an obvious argument. One of the ways the military thinks about this is, they think, "Okay. Well, we'll take the people who build that and we'll put them in the equivalent of Los Alamos, and we'll keep it a secret." But one of the things that's different about AI is that there's almost no containment of AI. The technology leaks. Literally, the ideas leak so fast that you don't have much advantage on one side or the other, which is sort of a new grand-strategy problem, the stable paradigm problem. That's the way I would describe it.

Eric Schmidt (28:50):

Let me give you an example with AGI, which we'll talk about. Let's assume that there are 10 groups that are working on AGI: three or four in China, three or four in the U.S., and a couple sprinkled around, including one in Israel, a couple in Europe, and maybe one in Russia. You get the idea. What happens if one of them invents something that the rest of them want that's really hard to steal? Right? I don't know, it's not a good example, but say they figure out how to cure cancer in a way that nobody else can. That would be bad.

Eric Schmidt (29:31):

Well, now let's take it one more to the extreme. Let's imagine that one of them builds a system that's so dangerous that we don't want even that country to use it. For example, it can answer questions like, "How do I kill a million people tomorrow?" It's hugely dangerous. You clearly don't want to make that happen. It's reasonable to expect that again in our lifetimes, that there will be the equivalent of nuclear non-proliferation discussions under a new regime where we're trying to say, "These things can be built-

Eric Schmidt (30:03):

... new regime where we're trying to say, these things can be built, but we don't want very many of them. And we want to keep them under some form of guard and we don't want extreme terrorists using them. And we only want them to be used in certain situations even by their owner governments. The thing that's changed in the last 10 years is the aggressive use by governments of cyber and influence and the Russian interference in 2016, that's all new. It didn't happen 10 years ago. If you take it to its logical extreme, then you're going to end up with things which are as dangerous as nuclear or close to it.

Eric Schmidt (30:39):

And we don't have any language by which to discuss what balance of power means. How do we keep it under control? How do we make sure that the equivalent of plutonium doesn't get leaked? There is no analogy. We haven't figured out that doctrine yet. And if we don't, then you're going to end up in a situation where China has a poor level of security around its AGI, somebody copies the network inside of it onto the equivalent of a USB stick, and they take it over to another country where they're not subject to that rule, and something bad happens. So we have to have this conversation. I'd rather have it now.

Alex Wang (31:19):

I think it's a great call to action: AGI, and AI in general, as a technology that is potentially as powerful as nuclear weapons were in the previous age, but with very different properties. It's very easy to replicate. It's hard to contain. And these cause very real challenges for the new world order. With that, I wanted to segue into this topic that I can tell you're really excited about, which is the application of AI to all industries in the world. We like to use this term, the AI industrial revolution. And one of the incredible things about AI is just how broadly applicable the technology is. As you mentioned, it can be applied to most industries in some way that is deeply transformative.

Alex Wang (32:06):

And it can be used across most of these domains to really great benefit, almost uniquely among technologies. If you think about the internet or software, most of the value there was actually generated by creating almost entirely new domains or new ecosystems. But AI has this ability to be maybe more foundational, maybe more cross-cutting into many of the existing industries. What technology analogy do you think is best suited for thinking through AI and its implications? Do you think it's more like the computer, more like the internet, more like electricity? What do you think the right analogy is?

Eric Schmidt (32:47):

It's hard to know. Electricity was pretty important, too. And I lived through the mainframe revolution and the personal computer revolution, and each revolution has seemed bigger and more impactful than the last. So it's fair to say that AI and ML are world-changing, norms-changing, especially because they tend to work on information spaces. One of the ways to understand how society works is that society has mores and values that are embedded in information space, which each of us is raised in and lives in. So let's imagine a situation where you have AI systems that are shifting it. How do we want to deal with that? A simple example: you have a two-year-old, and you get the two-year-old a toy that can speak to them. The three-year-old gets a smarter toy. By the time the kid is 10, this toy, which has been upgraded, of course, is by far his or her best friend.

Eric Schmidt (33:51):

What happens when the best friend tells them something wrong, or encourages bad behavior, or observes bad behavior? We've never had a situation where we've had a human kind of intelligence that's on par with how humans raise their children, operate their society, and so forth. There are obvious examples from misinformation. Today we already have a misinformation problem. But because of targeting, AI systems will be able to learn which human biases, recency bias and so forth, to exploit to get you even more passionate about something which is false. And that cannot be good. So in the same sense that you have the damage that's possible in the information space because of this targeting, and because we live in an information space, you have these extraordinary gains that will occur with savants that can help you understand specific fields.

Eric Schmidt (34:53):

Let's look at synthetic biology. Synthetic biology can be understood as being like ECAD, way back when: you're basically building biological organisms, but you're not making copies; they work differently. Biology is a lot about coming to, I may not say this correctly, the lowest energy state in any of the formulations of the atoms and molecules and the things that are synthesized around them. And machine learning is very, very good at doing that. Machine learning is very, very good at guessing which set of compounds, for example, will improve or worsen an outcome. And I'll give you an example. There's a drug from MIT called halicin. It's a collaboration between synthetic biologists and computer scientists. And the synthetic biologists said, we want to build a new general-purpose antibiotic like the ones we already have. There hasn't been one in decades.

Eric Schmidt (35:54):

And people have obviously been looking. So what they did, collectively, is they organized the system to generate as many compounds as they could that had some kind of antibiotic activity. And then they used a further network that looked for the ones that were farthest away from the current antibiotics. And they came up with a compound, which is now in various forms of trial, called halicin. Now, that's something that humans couldn't do. And I think it's going to work, because you know the story of AlphaGo and AlphaZero's chess: not only did they beat the humans, which was a surprise in the case of AlphaGo in 2016 and caused a massive, massive reaction in China, but more importantly, they discovered new moves in games that are 2,000 years old.
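The halicin-style screen Eric describes, score a large candidate library with an activity model and then keep the high scorers that are structurally farthest from known antibiotics, can be sketched in a few lines. Everything here is a toy stand-in: the random "fingerprints" and the linear "activity model" are invented for illustration, not the actual MIT pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary "fingerprints": rows are compounds, columns are substructure bits.
known = rng.integers(0, 2, size=(20, 64)).astype(bool)        # known antibiotics
candidates = rng.integers(0, 2, size=(500, 64)).astype(bool)  # generated library

# Stage 1: score candidates with a (stand-in) activity model.
w = rng.standard_normal(64)              # hypothetical learned weights
activity = candidates @ w                # higher = more predicted activity
shortlist = np.argsort(activity)[-100:]  # keep the top 100 scorers

# Stage 2: among the shortlist, prefer compounds structurally farthest
# from every known antibiotic (lowest maximum Tanimoto similarity).
def tanimoto(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

novelty = np.array([
    1.0 - max(tanimoto(candidates[i], k) for k in known)
    for i in shortlist
])
picks = shortlist[np.argsort(novelty)[-5:]]  # 5 most novel active candidates
print(picks)
```

The two stages matter: scoring alone rediscovers compounds close to existing antibiotics, while the novelty filter is what pushes the search away from them.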

Eric Schmidt (36:43):

Now, that's extraordinary. So there's example after example, and the simple ones are things like parts management in your inventory. That's an easy one; it's essentially a prediction. But why don't we do one where, when I go to the hospital, it predicts what I'm showing up at the hospital for? Let's see how well that works. That'll help the doctors, because although I think of myself as distinct, I'm genetically very similar to 999 other humans. I just don't know them. But the computer can find them and say, they all had this problem. America has the same problem.

Alex Wang (37:20):

Totally, yeah. I want to dive into two pieces that you just mentioned: the first is the scientific implications, and the second is the economic implications. One of the things that I'm equally excited about is how AI is actually being successfully applied to science, whether it be biology, drug discovery, material science, et cetera. To exactly what you just mentioned, AI has enabled us to do significantly more of the science digitally and very, very efficiently on a computer, which, in many fields like biology or physics or material science, takes out a lot of the costly components of actually doing the science. The alternative is that you have a bunch of scientists and they have to do the work in a lab.

Alex Wang (38:03):

And it's just very, very costly. It's meaningfully different. What do you think are the scientific implications of this? One thing that's been noted is that over the past few decades, scientific discovery has actually slowed over time. Do you think that this results in a Renaissance, a re-acceleration of scientific discovery?

Eric Schmidt (38:26):

It should be a Renaissance. I'll give you an example. I'm one of the funders of an important project at Caltech which is trying to build, essentially, a new forecasting model for climate change. This particular group focused on clouds. And I was not aware that it's impossible to simulate clouds directly using Navier-Stokes because of the number of equations and the computation; not even all the computers in the world would be fast enough to really simulate them. But it turns out that you can learn an approximation of how clouds work that's quite workable. I was in another meeting where there was a need to understand, it's hard to describe, whether a mouse was sleeping or not. I won't bore you with why they needed to know this. And it's very difficult to tell if a mouse is sleeping, so they didn't have very good labeled data.

Eric Schmidt (39:27):

So what they did is they built a natural model of the physics of a sleeping mouse, and then they generated training data for synthetic mice sleeping. And they successfully built a model that would tell them whether the mouse was asleep or not. Now, this is humorous, but it's really quite an accomplishment. They knew the laws of physics, they had no training data, so they generated the training data, and they were able to do it. So a simple formula for you is that sometimes in science, AI is used to approximate a function that we don't know at all, or that we can't compute. Quantum chromodynamics is another good example. There just aren't enough computers in the world, nor will there ever be, to get this stuff exactly right. And so you need an approximation, and the approximation works really well.
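The mouse story is an instance of a general pattern: when real labels are scarce, simulate the known physics to generate synthetic labeled data, then train an ordinary classifier on it. Below is a minimal sketch; the "physics" (sleeping mice move less and breathe more slowly) and all the numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_mouse(asleep, n):
    """Invented toy 'physics': sleeping mice move less and breathe more slowly."""
    movement = rng.normal(0.2 if asleep else 1.0, 0.15, size=n)
    breaths = rng.normal(1.5 if asleep else 3.0, 0.4, size=n)
    return np.column_stack([movement, breaths])

# Generate synthetic labeled data from the simulator (no real labels needed).
X = np.vstack([simulate_mouse(True, 500), simulate_mouse(False, 500)])
y = np.array([1] * 500 + [0] * 500)  # 1 = asleep

clf = LogisticRegression().fit(X, y)

# The trained model can now label new observations (here, also simulated).
X_new = np.vstack([simulate_mouse(True, 50), simulate_mouse(False, 50)])
y_new = np.array([1] * 50 + [0] * 50)
print(f"accuracy on held-out simulated mice: {clf.score(X_new, y_new):.2f}")
```

The value of the trick is that the simulator, not a human annotator, supplies the labels; the classifier then transfers to real measurements to the extent the simulated physics is faithful.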

Eric Schmidt (40:24):

The other aspect of science is this generative part, where you can generate new things and you can try them. They're both very important. Most scientific progress, it seems to me, occurs after the development of a new instrument, a new microscope, if you will. And the problem with things like spectrographs and so forth is that they're very, very expensive to operate at scale. So with these techniques, we can use existing data, existing databases, natural data, physics simulations, and so forth, and we can really break through that limit. So that's why we should imagine that what you said is true and that you'll see these breakthroughs. And if you care about climate change, which I do a lot, this will probably be the thing that allows us to address it through innovation, because the current approaches are just not working well enough.

Alex Wang (41:16):

Yeah. And you had alluded to this before, where you mentioned that if AI is actually able to insert itself into each one of these industries and each one of these scientific pursuits and drive breakthroughs, that's going to result in trillions of dollars of value over the next few decades. And I'm curious to dig into that model a little more deeply. What do you think are the economic implications of AI being scalably applied to every single industry and scientific area?

Eric Schmidt (41:48):

So let's talk about industries that are not regulated or lightly regulated. Those industries will fairly quickly be disrupted by these techniques, because either an existing company or a new company will adopt them and really will solve the problem better. And that will then create a crisis for the number two, number three, number four. And that's called capitalism. When that happens, there's an awful lot of destruction of jobs and of shareholder wealth, as well as winning, and so forth. And so the question is, in aggregate, do you create more jobs or fewer jobs? And unless we make a few changes in the way we operate, we'll probably end up with fewer, at least fewer high-quality jobs. And the reason is that the winners tend to be concentrated and they tend to get more of the spoils. There are other jobs available, but they're not very interesting ones.

Eric Schmidt (42:44):

And so I think that, along the way with this technology, it's super important that we figure out a way to use AI to solve some of the problems that have bedeviled us. I'm familiar with a company that is using AI to try to determine why certain parole officers put everyone they see into jail for parole violations and others don't. That's an example of a real human crisis for the people involved, and an inefficiency in our market. I'm just using that as an example; there are a thousand such examples. We need to figure out a way to use these tools to produce a better-educated, more empowered, and higher-income workforce. If we don't, the Gini coefficient, literally the gap between the richest and the poorest in every society, is going to increase. And that's clearly not good.

Alex Wang (43:37):

And what do you think are some of the... One of the very interesting things about many of the recent advancements in AI is that, if you look at where the advances are happening, it's actually cognitive work that is being more successfully automated. You look at the Codex or Copilot systems out of OpenAI and Microsoft, you look at AlphaFold coming out of DeepMind, and these are esoteric skills, or skills that are difficult to train humans on. But they're being automated very well, partially because they're so digital in nature. What do you think the implications of this are, when, at the same time, more manual skills are actually much harder to automate? Robotics has been this devilishly hard problem for a very long time. Self-driving has been a hard problem. How do you think this plays out over the course of the next five to 10 years?

Eric Schmidt (44:29):

I think it's reasonable to expect we're going to see, in purely digital industries, extremely rapid change from the AI platforms and the universal models. If you take a look at GPT-3, and then the products that Microsoft is offering now around coding, that's a good example where, in programming, you get a reward signal: did they like my suggestion? So it's reasonably obvious that if you have a big enough model and you have a big enough training set, you should be able to build quite a good product. And I'm sure that...

Eric Schmidt (45:03):

... big enough training set, you should be able to build quite a good product. I'm sure that that's their strategy.
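The reward signal Eric mentions (did the user accept the suggestion?) is enough to drive a simple learning loop. As a minimal, hypothetical sketch, an epsilon-greedy bandit over three candidate completions drifts toward the one users accept most often; the completion names and accept rates below are invented.

```python
import random

random.seed(0)

# Hypothetical candidate completions with unknown true accept rates.
true_accept_rate = {"completion_a": 0.7, "completion_b": 0.4, "completion_c": 0.1}
counts = {k: 0 for k in true_accept_rate}    # times each was suggested
rewards = {k: 0.0 for k in true_accept_rate} # times each was accepted

def pick(eps=0.1):
    # Epsilon-greedy: usually suggest the best-so-far, sometimes explore.
    if random.random() < eps or not any(counts.values()):
        return random.choice(list(counts))
    return max(counts, key=lambda k: rewards[k] / counts[k] if counts[k] else 0.0)

for _ in range(2000):
    choice = pick()
    accepted = random.random() < true_accept_rate[choice]  # simulated user feedback
    counts[choice] += 1
    rewards[choice] += float(accepted)

best = max(counts, key=counts.get)
print(best)  # usually converges to the highest-accept-rate completion
```

Production systems are far more sophisticated, but the core loop, suggest, observe accept/reject, update, is the reward signal he is describing.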

Eric Schmidt (45:05):

I also know that they have competitors coming. Not only do you have them, but you have three or four competitors who will be well funded. That collectively will move the industry forward. Who wouldn't want the computer to help them write their code, anyway? That would benefit everybody. I think you're going to see that much more quickly than you'll see some of these other industries.

Eric Schmidt (45:25):

The industries that move the slowest are the ones that are not subject to commercial or regulatory pressure. Most regulated industries, the companies and the regulators essentially have worked it out. They tend to agree on everything, and they tend to be difficult for new entrants to enter. If the result is a partitioning between regulated industries, where it's very hard to enter, and then this incredibly fast evolution on the unregulated side ... That will create other problems and other disparities.

Eric Schmidt (46:02):

The simplest way to think about it is: if you're not offering the equivalent of an Android or iPhone app to do anything, what's wrong with you? Today, someone of your age and your generation says, "Why do I have to carry a passport? Why do I have to go to a bank? Why do I have a vaccination card?" These all seem like dinosaur strategies, and yet they made sense 20 years ago.

Eric Schmidt (46:28):

I think you're also seeing a situation where the rate of change is outstripping humans' ability to adapt to it. My own view is that the rate of change in the next 10 to 20 years will be much faster than the last 10 years, and, boy, what a 10 years that has been, because of the compounding. You have a situation of combinatorial innovation: each layer builds on top of the last, and those layers are getting filled in very, very quickly. If you start with the presumption that in every business and in every task there's some learning function, and we just have to find it, then you can probably build a killer app.

Eric Schmidt (47:06):

The most obvious one that I would like to emphasize here is education. You have all the things you need. You have a lot of students, you have a lot of learning. You have a lot of quick clicks or equivalent of clicks. Why have we not figured out the optimal way, literally the scientifically optimal way, to teach English, math, physics, science, and so forth?

Eric Schmidt (47:27):

I'm busy funding some of this activity, but it's amazing to me that you have this immense education industry which is many percentage points of GDP, and almost no innovation in the underlying science, literally the computer science of learning, in the field. It's an oversight that we need to correct.

Alex Wang (47:47):

Yeah, 100%. I think I agree with you. From a physics-of-business perspective, one of the fundamental shifts that happened when we had software was that all of a sudden you had this thing that was free to replicate. That was a big change. It changed how we thought about many things.

Alex Wang (48:09):

One thing that you're mentioning is that now, with AI or with machine learning algorithms, you have this new physics: systems that just get better very, very quickly, that have compounding improvements to quality, effectiveness, and functionality. That's, maybe, the underlying node that's changing, but [crosstalk 00:48:36]-

Eric Schmidt (48:36):

Can I just add ... Let me just add that, when we started all of this, and I've been doing this now for 45-plus years, it never occurred to me that we would end up with this amount of concentration of power in countries, in the form of China, and in companies, in the form of the US leading companies. The entire vision of technology, which, of course, the internet all came out of the Vietnam era and the anti-war movement, was decentralized control, not centralized control. Freedom of the individual; remember the end-to-end principle of the internet.

Eric Schmidt (49:22):

It's important to understand that the structure we have now, where you have this concentration of power which is both economic, social, moral, regulated, and so forth, again, using China and the US, that may not ultimately be the state. This may be a situation where the technologies go from centralized, to decentralized, to centralized, to decentralized.

Eric Schmidt (49:46):

It's presumed that the Chinese model of authoritarian control is going to be the dominant one. But I can imagine that, with the empowerment of these models, of giving each person their own supercomputer, their own human-like friend, individuals are going to be a lot more powerful too. I don't think we understand this.

Eric Schmidt (50:08):

It's canonical to say, "Everything will be structured. Everything will be hierarchical, and these big companies will be formed." But let's imagine that the next company you found builds the assistant that helps everyone get through life. Is that company going to be a bigger company than the current companies? If you pulled it off, yeah, you would be. And you would also be regulated to death.

Eric Schmidt (50:36):

What happens instead is, let's say that you do this after your current company, and there are 100 such companies, and no single one becomes dominant because they're all specialized; in fact, people are different enough that there's no single market. I just don't think we know.

Alex Wang (50:52):

Yeah. This is a great segue to, I think, one of the last questions, which is: if we think really far out, what are the changes to how business is done? My view is that if you look at the last generations of technology, the internet, the personal computer, the phone, et cetera, there are effectively three business models that, over time, dominated. There's ads, which you know a lot about; there's enterprise software, which you also know a lot about; and there's e-commerce, which is huge in the United States and massive in China. Those have been the three dominant business models that technology has really enabled. What do you think are the new sorts of business models that could potentially exist because of the advent of AI and these new technologies being integrated?

Eric Schmidt (51:43):

We've talked about for decades that there will eventually be micropayments of one kind or another, and we're still waiting for those micropayments. It makes sense to me that advertising sponsorships, subsidies, all of those, will be part of it.

Eric Schmidt (52:03):

But, at the end of the day, I have a different way of thinking about it, which is: just build a product that's 10 times better than the incumbents'. I didn't say 10% better, I said 10 times. If it's 10 times better, none of these things matter, because it'll all come to you.

Eric Schmidt (52:23):

If it's 10% better, the incumbents have such strong incumbency benefits, including regulatory capture, brands, and so forth, that the standard is high. I think that, with AI, you have an opportunity across most fields to build a better mousetrap, sorry to use mice as an example again, in pretty much every industry.

Eric Schmidt (52:49):

It's not just AI. Most industries do not do digital design. For example, there's a concept called digital twinning, where you build an entire software version of the thing you're going to build. This is now done in the car industry, for example. Most of the manufacturing industries are not doing it. Most industries still do things the way they were done 10 or 20 years ago at the human level. For every one of those, there is a new way of coming in with a sharp technical team, using the data and learning the outcome quicker.

Eric Schmidt (53:26):

Tech is the first one, because we're the tip of the spear. We have the best training environment, we don't have the regulatory requirements, and we don't have the capital costs. But the same applies to all these others. I'm not worried about the monetization; I'm worried about getting users. If you can get users and build a growing business, trust me, we can make money. We can make money by licensing, selling, or transferring the technology. We can build widgets. We can build another widget and sell that, and so forth. People always blame the revenue model, saying they didn't have a good revenue plan. Why don't you just build a great product? If you have a great product, your customers will come to you.

Alex Wang (54:04):

Yep. This has been super wonderful. I want to close on one important question, one that I know that you've thought a lot about. This could be viewed as almost like a call to action to the audience we have today.

Alex Wang (54:17):

The common moonshot associated with AI is often AGI, which could be very, very far in the future. Who knows exactly what it means, anyway? But one of the things that I'm really excited about is, what are the other grand challenges of AI that might be sooner that we can all get really excited about? You mentioned climate change earlier, which is one that you're personally very excited about. What do you think are the big, grand challenges of AI that are soon, but also deeply important?

Eric Schmidt (54:47):

What I've noticed is that the simple formulations, basically traditional supervised learning, self-supervised learning, partially supervised learning, and so forth, those were last year's scenario. What people are doing now is building extremely sophisticated multi-model reasoning systems. They generate a set of candidates, then they get rid of some of those candidates in some other way, and then they do something else.

Eric Schmidt (55:14):

There's typically a two or three-stage pipeline. Getting that pipeline right is the job of, whether we like it or not, PhDs in those areas, because they really have to understand at a very deep level what this network is doing. I wish the network could figure itself out, but we don't know how to do that yet.

Eric Schmidt (55:34):

It seems to me that the greatest short-term opportunity is to build enough of an infrastructure that these powerful models can be done by master's students instead of PhD students.

Eric Schmidt (55:47):

Historically, in tech, this stuff started with PhD students building it, and then it became common. You won't remember this, but there was a time when email was considered a vertical. It was something that you added, that you had to buy. Then, as we used to say, it went from vertical to horizontal; it became part of the platform.

Eric Schmidt (56:08):

I want, in software, for the tools that we're describing now to be so commonplace that people of, let's just say, relatively normal technical education can master them. The problem in our field today is it fundamentally takes a PhD in math, or physics, or computer science to understand what these things are doing. That's not a good long-term situation.

Eric Schmidt (56:33):

The digitization, the process of digitizing the world is a massive business. It's a massive calling. There's lots and lots of opportunities. We're too reliant now on these very rare specialists, men and women who are really, really good at this, and we need to make it more common.

Eric Schmidt (56:54):

It's the old thing of ... We talk about the top universities, and they certainly have a big impact. But the universities that have the biggest impact are the large state schools that generate so many people that fill the companies of the country. Let's focus on that too. Let's focus on making the tools for them such that they can really do sexy stuff, but they don't have to understand it.

Eric Schmidt (57:17):

When I started as a computer scientist, the first thing they taught me was bubble sort. My guess is that there are about a zillion bubble sorts and other sorting algorithms on GitHub. Why do I really need to understand that, except that I had to get a good grade in my class? Why do I need to understand how to make modems work, which I used to do? All of that stuff should be elided.

Eric Schmidt (57:39):

The notion of progress is platform progress. We used to worry about things like language scanners, and translation, and so forth. All of that stuff should be abstracted away so that the programmers we work with are working on the really hard problems. Furthermore, ideally, with these universal programming models, coding models, the computer will help you figure out how to solve the problem.

Alex Wang (58:05):

Yeah. This is a great call to action. It's something that I think many people in the community are really passionate about, is, how do we democratize AI and machine learning and make it accessible to all? With that, that's a-

Eric Schmidt (58:16):

Look. Just take the stuff you're doing and show it to your roommate, who'll go, "What? What are you doing?" Try to get it so that you can even explain to a reasonably normal, intelligent person how these systems work, and then figure out a way to build models that ... That's always how we work.

Eric Schmidt (58:38):

By the way, I mentioned open source and GitHub. I was part of the open source movement when it was started. Open source is critical for the knowledge sharing that goes on, because people share the stuff, and we really do move faster because of sharing.

Eric Schmidt (58:53):

To go back to your earlier point about scientific discoveries slowing down: how do we accelerate that? We work together to build very powerful knowledge platforms so that the next generation, which in your industry is every year or two, can build the next generation of apps on top, and those apps solve really important problems.

Alex Wang (59:12):

Amazing call-out. Thank you so much for taking the time today, Eric. This was a really interesting conversation. We covered a lot. Excited to chat next.

Eric Schmidt (59:23):

Okay. Thanks, [Alex 00:59:24]. I'll see you soon.
