
Alliance Discussion with Dr. Vincent Conitzer: The Call to Pause Training of Certain AI Systems

In May, Vincent Conitzer, PhD, Professor of Computer Science at Carnegie Mellon University and Director of the Foundations of Cooperative AI Lab (FOCAL), joined us for an alliance discussion to share his thoughts on the concerns that prompted the Future of Life Institute’s open letter calling for a pause in the training of AI systems more powerful than GPT-4. Here are some highlights from his remarks: 

On the concerns that prompted the Future of Life Institute open letter: 

“I think the motivation behind [the letter] is that [AI training] is racing ahead so fast that there’s a new version of these models every half a year, and each one can do new things that we hadn’t seen before; we haven’t had a chance to catch up. As scientists, there’s so much that we could be studying about these models, but before we even get the chance to do that, the next one is already out. Similarly, from a societal perspective, these things are now starting to impact society in various ways.

“My colleagues in the humanities are very concerned about their essay assignments – whether those are still meaningful, or how they need to adapt them. We’re seeing in many areas of life that people are starting to use these models in their day-to-day work. Is that really a good thing for us to be doing? There are other concerns – that maybe these models will start to be used for misinformation. You could generate misinformation very tailored to a particular person, as long as the model has some idea of who that person is.”

On the reality of an equitable pause in the training of AI systems: 

“… Maybe the most responsible players are the ones that are going to abide by the pause and then we have the less responsible players making progress; however you interpret more or less responsible, that could be a legitimate concern. I think the letter called for [all key players] to abide by this pause. Well, what happens if some of them agree, and some don’t agree? I admit that in some ways, I saw the letter somewhat cynically … that [the pause] was probably never going to happen. I think this was a good thing to get done and sign just so people would see that it may not be feasible, but in principle, I would have been happy to see that pause if it was really followed by all key players as the letter asked for.” 

On the rapid advancement of AI systems: 

“My colleague at the University of Texas at Austin, Peter Stone, had this nice example – imagine we were in the situation where we had the Model T, an automobile in the early 20th century. From there to the late 20th century, we get to a world with highways, where everybody has a car. It took us a long time to get there – the technology took a long time to develop, and the infrastructure took a long time to build up. In the process, we had a lot of opportunities to adjust rules and expectations about how to drive – driver’s licenses, all these types of things. There’s a concern that here we don’t have time for that, because it’s moving too fast; we don’t really know how to adapt society to it.”

Watch the full discussion here. 
