Below is an excerpt that originally came from an email today, which in turn was drawn from an audio interview: [links to both at end]
BW: In just a few years, your company, OpenAI, has gone from being a small nonprofit that few outside of Silicon Valley paid much attention to, to having a multibillion-dollar arm with a product so powerful that some people spend more time on it than they do on Google. Other people are writing op-eds warning that the company and technology you’re overseeing have the potential to destroy humanity as we know it. For those who are new to this conversation, what happened at OpenAI that led to this massive explosion in just a few short months?
SA: First of all, we are still a nonprofit; we have a capped-profit subsidiary. We realized that we needed far more capital than we could have raised as a nonprofit, given the compute these models need to be trained. But the reason we have that unique structure around safety and the sharing of benefits is only more important now than it used to be. The last seven years of research have really paid off. It took a long time and a lot of work to figure out how we were going to develop artificial intelligence, AI, and we tried a lot of things. Many of them came together, some of them turned out to be dead ends, and finally we got to a system that cleared a bar of utility. Some may argue about whether the product is or isn’t intelligent, but most people would agree that it has utility. After we developed that technology, we still had to develop a new user interface. Another thing that I have learned is that making a simple user interface that fits the shape of the new technology is important, and usually neglected. We had the technology for some time, but it took us a little while to figure out how to make it really easy to chat with. We were very focused on this idea of a language interface, so we wanted to get there. We then released it to the public, and it’s been very gratifying to see that people have found a great deal of value in using it to learn things, to do their jobs better, and to be more creative.
BW: ChatGPT is the fastest-growing app in the history of the internet. In the first five days, it got a million users. Then, within two months of its launch, it had amassed a hundred million users by January. Right from the beginning, it was doing amazing things. It was all anyone could talk about. It could take an AP test, it could draft emails, it could write essays. . . . Most recently, before I went on Bill Maher, I knew we were going to talk about this subject, so I asked ChatGPT for a Bill Maher monologue, and it churned it out in seconds. And it sounded a whole lot like Bill Maher! He was not thrilled to hear that. Yet, you have said that you were embarrassed when GPT-3 and 3.5, the first iterations of the product, were released. Why is that?
SA: Well, Paul Graham, who ran Y Combinator before me and is a legend in Silicon Valley, once said to me, “If you don’t launch a Version One that you’re a little embarrassed about, then you waited too long to launch.” There are all of these things in ChatGPT that still don’t work that well, and we make it better and better every week.
BW: What are you using ChatGPT for right now?
SA: Well, this is the busiest I’ve ever been in my life, so at the moment, I am mostly using it to help process inbound information. Summarizing emails, summarizing Slack threads. I take a very long email that someone writes and it gives me a three-bullet-point summary. That may not be its coolest use case, but that’s how I’m personally using it right now to help my day-to-day.
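[Aside, not part of the interview: for the curious, here is a minimal Python sketch of the kind of three-bullet summarization Altman describes, using OpenAI’s chat-completions client. The model name, prompt wording, and input file name are illustrative assumptions, not anything specified in the interview.]

# Minimal sketch of a three-bullet-point summary request.
# Assumed details: model name, prompts, and input file name.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_to_bullets(text: str) -> str:
    """Ask a chat model to condense a long message into three bullet points."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Summarize the user's message as exactly three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize_to_bullets(open("long_email.txt").read()))  # hypothetical input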
BW: What is its coolest use case?
SA: Well, I get heartwarming emails from people every day telling me how they use it to learn new things and how much it has changed their lives. I hear from people in all different parts of the world. It takes very little effort to learn how to use it, and it can become someone’s personal tutor for any topic they wish. A lot of programmers rely on it for different parts of their workflow. That’s kind of my world, so we hear about that a lot. There was a Twitter thread recently about someone who said they saved their dog’s life by entering a blood test and symptoms into GPT-4.
BW: I’m curious where you see ChatGPT going. You use the example of summarizing long-winded emails or summarizing Slack. These are menial tasks, like ordering your groceries, sending emails, making payments. But then there are different tasks—tasks that are more foundational to what it is to be a human being. For example, things that emulate human thinking. Someone recently released an hour-long episode of The Joe Rogan Experience with you as the guest. Yet it wasn’t actually Joe Rogan. And it wasn’t actually you. It was entirely generated using AI language models. So, is the purpose of AI to do chores and mindless emails, or is it for the creation of new conversations, new art, new information? Because those seem like very different goals with very different human and moral repercussions.
SA: I think it’ll be up to individuals and society as a whole to decide how they want to use this technology. The technology is clearly capable of all of those things, and it’s clearly providing value to people in very different ways. We also don’t yet know perfectly how it’s going to evolve, where we’ll hit roadblocks, which things will be easier than we think, and which will be much, much harder. What I hope is that this becomes an integral part of our workflow across many different tasks. It will help us create. It will help us do science. It will help us run companies. It will help us learn more in school and later on in life. I always like swapping out the word AI for software: ask instead, “Is software going to help us create better?” or “Is software going to help us do menial tasks better, or do science better?” And the answer, of course, is all of those things. If we understand AI as just really advanced software, which I think is the right way to think about it, then the answers may be a little less mysterious.
BW: Sam, in a recent interview, when you were asked about the best- and worst-case scenarios for AI, you said this of the best case: “I think the best is so unbelievably good that it’s hard for me to imagine.” I’d love for you to imagine it: what is the unbelievable good that you believe this technology has the potential to do?
SA: I mean, we can take any sort of trope that we want here. What if we’re able to cure every disease? That would be a huge victory on its own. What if every person on Earth can have a better education than any person on Earth gets today? That would be pretty good. What if every person a hundred years from now is a hundred times richer in the subjective sense? Maybe they’re happier, healthier, have more material possessions, and have more ability to live the good life, in the way that’s meaningful to them, than people do today. I think all of these things are realistically possible.
BW: So, what’s the other side of it? You said the worst-case scenario is “lights out for all of us.” I’m sure a lot of people have quoted that line back to you. What did you mean by it?
SA: I understand why people would be more comfortable if I talked only about the great future here, and I do think that’s what we’re going to get. I think this can be managed. I also think that the more we talk about the potential downsides, and the more we as a society work together on how we want this to go, the more likely it is that we end up in the upside case. But if we pretend there is no pretty serious misuse case here and just say, “Full steam ahead! It’s all great! Don’t worry about anything!” then I just don’t think that’s the right way to get to the good outcome. When we were developing nuclear technology, we didn’t just say, “Hey, this is so great, we can power the world! Oh yeah, don’t worry about that bomb thing. It’s never going to happen.” Instead, the world really grappled with it, and I think we’ve gotten to a surprisingly good place.
BW: There are a lot of people sounding the alarm bells about what’s happening in the world of AI. Recently, several thousand leading tech figures and AI experts, including Elon Musk, who co-founded OpenAI but left in 2018; Apple co-founder Steve Wozniak; and Andrew Yang, whom you backed in the last election, signed an open letter calling for a minimum six-month pause on the training of AI systems more powerful than GPT-4. They wrote, “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: should we let machines flood our information channels with propaganda and untruth?”
SA: We already have Twitter for that.
BW: [laughs] “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
That’s what they wrote. And I think there are several ways to interpret this letter. One is that it is a cynical move by people who want to get in on the competition, so the smart thing to do is to tell the guy at the head of the pack to pause. The other cynical reading is that stoking fear around this technology only makes investment flood the market further. I also see a pure version, which is that they really think this technology is dangerous and needs to be slowed down. How did you understand the motivations behind that letter? Cynical or pure of heart?
SA: You know, I’m not in those people’s heads, but I always give the benefit of the doubt. Particularly in this case, I think it is easy to understand where the anxiety is coming from. I disagree with almost all of the mechanics of the letter, including the whole idea of trying to govern by open letter, but I agree with its spirit. Some of the stories I hear about new companies trying to catch up with OpenAI, and their discussions around cutting corners on safety, I find quite concerning. I think we need an evolving set of safety standards for these models: before a company starts a training run, and before it releases a new model, there should be evaluations for the safety issues we’re concerned about, and there should be an external auditing process. Whatever we agree on, as a society, as a set of rules to ensure the safe development of this new technology, let’s get those rules in place. Airplanes, for example, have a robust system for this. But what’s important is that airplanes are safe, not that Boeing doesn’t develop its next airplane for six months or six years or whatever.
BW: There were some people who felt the letter didn’t go far enough. Eliezer Yudkowsky, one of the founders of the field, or at least he identifies himself that way, refused to sign the letter because he said that it actually understated the case. Here are a few words from an essay he wrote in the wake of the letter: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ . . . If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.” How do you understand that essay? Why are some of the smartest minds in tech this hyperbolic about this technology?
SA: Look, I like Eliezer. I’m grateful he exists. He’s like a little bit of a prophet of doom. Before this, it was that the nanobots were going to kill us all and the only way to stop it was to invent AI. And that’s fine. People are allowed to update their thinking, and I think that actually should be rewarded. But if you’re convinced that the world is always about to end and, in my opinion, you’re not close enough to the details of what’s happening with the technology, I think it’s hard to know what to do. So, I think Eliezer is super smart. But the field of AI in general has been one with a lot of surprises. I think this is the case for almost any major scientific or technological program in history. Things don’t work out as cleanly and obviously as the theory would suggest. You have to confront reality, you have to work with the systems, you have to work with the shape of the technology or the science, which may not be what you think it should be theoretically. You deal with reality as it comes, and then you figure out what to do about that. Many people never thought we would be able to coexist with a system as intelligent as GPT-4, and yet here we are. So I think the answer is we do need to move with great caution and continue to emphasize figuring out how to build safer and safer systems and have an increasing threshold for safety guarantees as these systems become more powerful. But sitting in a vacuum and talking about the problem in theory has not worked.
BW: You’ve compared the ambitions of OpenAI to the ambitions of the Manhattan Project. And I wonder how you grapple with the kinds of ethical dilemmas that the people who invented the bomb grappled with. One of the things that comes to mind is the pause letter. Many people are asking you to pause research. Meanwhile, China, which is already using AI to surveil its citizens, has said it wants to become the world leader in AI by 2030. It is not pausing. So, let’s discuss your comparison to the Manhattan Project. What were the ethical guardrails and dilemmas they grappled with that you feel are relevant to the advent of AI?
SA: I think the development of artificial general intelligence, or AGI, should be a government project, not a private company project, in the spirit of something like the Manhattan Project. I really do believe that. But given that I don’t think our government is going to do a competent job of that anytime soon, it is far better for us to go do it than to just wait for the Chinese government to do it. So, that’s what I mean by the comparison. I also agree with the point you were making, which is that we face a lot of very complex issues at the intersection of the discovery of new science and its geopolitical or deep societal implications, which I imagine the team working on the Manhattan Project felt as well. Sometimes it feels like we spend as much time debating the issues as we do actually working on the technology, and that’s a good thing. It’s a great thing. And I bet it was similar with people working on the Manhattan Project...
Full interview text at https://www.thefp.com/p/is-ai-the-end-of-the-world-or-the.
Audio interview at https://open.spotify.com/episode/4dyaQmPIMq5Bas54kgZJTf.
==========
All these comparisons with the Manhattan Project make yours truly a bit nervous. It didn’t work out exactly as foreseen by its research director, UC Berkeley professor J. Robert Oppenheimer (or by anyone else). So, regard the predictions of the current developers of AI with that lesson in mind:
Oppenheimer: Part 1
https://ia601509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_1.mp4
Oppenheimer: Part 2
https://ia801509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_2.mp4
Oppenheimer: Part 3
https://ia801509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_3.mp4
Oppenheimer: Part 4
https://ia601509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_4.mp4
Oppenheimer: Part 5
https://ia801509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_5.mp4
Oppenheimer: Part 6
https://ia601509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_6.mp4
Oppenheimer: Part 7
https://ia801509.us.archive.org/28/items/sacramento-city-at-risk/Oppenheimer_1980_Episode_7.mp4