During a Historic Weekend at Bowdoin, Panel of AI Experts Discusses Technology that Will Change History

By Rebecca Goldfine
As part of the celebratory inauguration of President Safa Zaki, a diverse panel of artificial intelligence experts reflected on how society might best position itself to embrace the new technology and generate the most good.
Left to right: Moderator Eric Chown and panelists Rumman Chowdhury, Daniel Lee, Carlos Montemayor, and Peter Norvig.

The four panelists were quantitative scientist Rumman Chowdhury, electrical and computer engineering professor Daniel Lee, philosophy professor Carlos Montemayor, and computer scientist Peter Norvig.

The discussion was moderated by Eric Chown, Bowdoin's Sarah and James Bowdoin Professor of Digital and Computational Studies.

In her introduction to the Friday afternoon event, Senior Vice President and Dean for Academic Affairs Jen Scanlon said it's imperative that higher education equips its students with the tools and contexts to engage with "the sweeping" technology, "to ask the kinds of questions that lead to imaginative and humane ways of reaping its benefits, exploring and containing problems, and advancing just futures."

The following is an edited and condensed version of the discussion that followed:

Chown: Please talk a bit about what got you interested in AI and what you’re excited about doing in that world today.

Chowdhury: What is exciting to me right now about AI is the increased accessibility of the kinds of models that have generally been housed in Silicon Valley and exclusive tech environments.

Lee: My interest in AI is in how we can get AI systems to work in the real world. By the real world, I mean the grungy physics and messiness of our everyday lives. How can we interface with the new intelligence we’re building so it actually improves people’s lives?

Chown: What should a Bowdoin student know about AI when they graduate?

Norvig: They should know how to use the current tools. They should be aware of what’s coming, what’s possible, and what’s science fiction. They should be aware of the various threats and promises, as well. So if you’re thinking of a medical future, how likely is it that there will be amazing discoveries that will lead to cures? If they go into politics, they should be worried about disinformation. You don’t have to know all the technical details, how AI works behind the scenes, but you have to know the impact.

Chowdhury: The critical thing here is to be smart about the output and input—what is the data you’re giving up, and what is being put in front of you? As it becomes more realistic, more hyper-targeted to you, hitting all the things that make you feel emotional, how can you, as a thinker, step back, look at the situation, and ask, "Is this real? Is it driving the kind of change I want to see? Or is it manipulating me?"

A lot of the language of AI by design asks us to give up thinking...and I think that is what drives a lot of the fears people have about the technology—that they will no longer have agency.

Montemayor: It should be asked, "How will AI help everyone? How can it be democratic? How can we use the potential of AI to benefit most people, or ideally all people?"

Norvig: This idea of giving up control: I think if you’ve given up control, the system is poorly designed. We should think of two dimensions and say, "If I want, I should have more autonomy." But there’s a separate dimension where we can also have more control. There should be a system where we have both more autonomy and more control, and if I want to give up control I can, but I shouldn’t be forced to make that trade-off.

Chown: The history of technology and inequity is not fantastic, right, and there are a lot of people who don’t have access to AI and won’t in the near future. So how can AI be the driver that reduces inequity instead of just another technology that creates more haves and have-nots?

Chowdhury: We still have vast swaths of the world that don't even have internet connectivity...and entire parts of the country that aren't participating in what we would consider the rudimentary digital world. The starting point is how do we get internet connectivity to more parts of the world [if they want it]?

Lee: These systems are built on massive amounts of data. If you have a data set that has a bias for whoever is in power, whatever comes out of AI systems will amplify those biases. There is a lot of current work to eliminate those biases, both in training sets and the learning algorithms, but it is still a nascent field, and people don’t know how to do it properly yet.

Norvig: A good thing is that a rising tide lifts all boats, but the tech tide tends to raise the yachts rather than the rowboats. A recent study looked at giving call center workers access to AI tools and found that less-skilled workers got more benefit than high-skilled workers…So I think here there may be a potential to lift up more people than in most cases of technology.

Montemayor: There needs to be a dialogue. The more the public and the more universities are involved in this dialogue, the better the chances the technology is used correctly.

Chown: The users of AI are asked to trust that it will do good things for us and trust that the companies making AI have our best interests in mind. But generative AI and what it produces are eroding our trust in each other and in the information we get. Could you all speak to trust from your own point of view?

Lee: How do you build trust? In a societal sense, you demonstrate it: if you deliver a product to me that I agree with or can validate in some way, that builds trust. If you build an artificial system, it will also have to go through some sort of protocol like that, where humans can validate that it is doing the right thing at the right time.

Chowdhury: I don’t think we should trust the technology blindly or trust the companies to automatically do the right thing. The company's incentives are not necessarily your incentives, and not necessarily the incentives that lead to humans or society flourishing. You know the brakes on your car will work not because you automatically trust that Ford will do the right thing, but because there are laws. If companies break the laws, they are fined, there are recalls, there are methodologies that have been built—all these things lead to you trusting the technology. For AI to be integrated into the systems of everyday life there is a massive trust barrier to overcome, which is good. It is good that the bar to being considered trustworthy is high for technology that could be telling you what you should think, or making decisions, or writing things for you. Trust only comes when there are standards and recourse if things go wrong, and right now none of that exists in our field.

Norvig: I want to describe a possible future where I have a phone with a bunch of icons on it. When I push one, I have given away all my information to Uber or Dunkin Donuts, or to some company, and I have to trust them to do good for me. I know they want to keep me as a customer, so they have some incentive to be on my side. But for the most part they're on their own side, they’re not on my side. What if we replaced all those buttons with one button that says, "I’m your agent, and I'm on your side." So you only need to trust this one program, and then it will go out to Uber and make a car arrive or a pizza arrive, and now the amount of trust that you have to divide is less. AI still has to earn that trust, but there is a possible future world where the agent is on my side, not the company’s side.

Chown: How will AI in the future help Bowdoin graduates and people in the audience flourish? 

Chowdhury: Over 100 years ago, during the last Industrial Revolution, we had conversations like this...about what we would do with all this free time. People talked about only working three days a week, and spending four days learning to play an instrument, writing novels, taking care of families. We also had these conversations with email and digital telephones. But they didn’t make our lives faster or easier; they made us work more. In order to have a flourishing and actually productive life, first we need to define productivity as more than contributing to the GDP. Second, we need to be very intentional…we have to collectively agree that this freed-up time and ease of getting and doing things will not lead us to spend time doing more work; we will spend that time doing something else. It is very hard, because we have been hyper-trained to reach for that golden ring, and the next golden ring a little higher up.

Montemayor: As for flourishing, the last Industrial Revolution, and the recent digital revolution, didn’t turn out to be great for all humanity. With this new technology, we at least have the opportunity to think through how it can help solve problems. There has to be a political discussion about how to use this technology…It has more potential than just helping workers and releasing us from constantly chasing the next opportunity, but we need to think about exactly what that means...I think that is what is interesting about universities—these are places where we should think about these things and use the tools we have and the understanding we have across disciplines to help achieve these larger goals and spur the social dialogue we need to have.

Norvig: One way to look at it is we want our systems to be aligned with what we really want, and I see a change over time in how we have [addressed that]. Originally, software was all about algorithms, and then it was about big data, and now in this third shift, we have great algorithms to optimize things and we have great data, but we don’t have great ways to specify what it is we are trying to optimize. What is it that we want? What’s fair all across society? Every engineer assumes somebody will find that answer, so they don’t have to worry about it. Maybe we should pay more attention to that at the societal level; at the governance level (what should be legal and not legal); at the engineering level (I want to build a product, how do I make good decisions?); and at the individual level (do I really want to spend my time watching cat videos? Maybe I should be doing something else with my life?).

Chown: This sounds like the purview of the humanities, talking about values and so forth. How are humanities connected to these projects and determining the alignment of AI systems? How can humanities play a bigger role?

Chowdhury: The alignment problem is, if you build a sufficiently competent AI system, will we be able to control it? Hundreds of millions of dollars are being spent on this problem. My take on it, if I put on my humanities hat, and I put on my philosophy hat for a second, is that there are three questions to ask when addressing the alignment problem: What is competence? What is control? What is consciousness?

How do we define competence when we have AI systems that can pass a bar exam or a medical licensing exam? ChatGPT can pass the bar exam, so how are we defining competence for someone who has a license to practice law? If AI achieves a level of competence such that I am now scared of it, is this thing conscious? Answering those questions will tell us about ourselves and tell us more about our values and how we’re trying to thrive with AI systems.

Montemayor: How do we align our values in general, and what does that mean about our dignity and freedom? That is an Enlightenment way of putting it, but the idea is, that is where our dignity resides—in our values and how we protect them…However we end up using these technologies, we need to connect it to the larger project of the humanities. We need to think very deeply, not just about which goals we are trying to reach but why we should care more about some problems than others and what issues will be most central in the project of enhancing freedom and dignity with technologies and knowledge.

Lee: If you look at history, the biggest application of every new technology has been the military—dynamite, nuclear weapons. And I’m afraid we haven’t overcome that, which we know from current events. All of these things will be repeated unless we understand the humanities, the human condition, sociological drives, and historical antecedents.

Chown: What has AI taught us about humanity? We talk about AI learning from humans, but what have we learned from AI about ourselves?

Norvig: With these large language models that are so popular, everyone agrees there is amazing technology there. But to me it is not so much the deep learning networks that are amazing or the data centers that are amazing—the most amazing thing is we wrote all this down, and there is so much there! Our ability to create language is doing so much more than I ever thought. We know we’re better at language than any other animal by a long shot, but I don’t think anyone realized how much is there. We thought language was a transport mechanism, and you needed a person at either end to make sense of it, but now a computer that has no direct connection to the world, that can only look at language, can take so much out of it. That tells me we did something amazing in inventing language.

Lee: I am still amazed by how great our human brains are. We still don’t understand ourselves. A great question for the future is, what is special about our brains that we haven’t yet been able to replicate?

Chowdhury: All the things we give monetary value to—all the things we think of when we think of productivity or GDP—are the things that AI is very good at. And all the things we put no value on, AI can’t do well. It can’t care for a baby. It can’t calm a crying child. It can’t create some of the most beautiful songs I’ve ever heard. It makes me think it may be time to reprioritize what we consider to be of value. Maybe what it is teaching us is that we may have a moment for recalibration…and can say maybe those days when we're not working aren’t wasted time. Instead, that is what it means to be human. That is what it means to provide high value.

Question from the audience: Have any of you thought about the regulations that the government should be working on, or have you looked into what the EU is doing with its AI Act and thought that is something the US should follow?

Montemayor: One thing I think about is that some regulations are international, and AI is going to be international; it will be used by everyone. So I wonder if the framework that emerges should be an international framework.

Norvig: In addition to government regulations, there are other regulations. There is self-regulation (companies have regulations for training and release of products). There are [potential] professional regulations for software engineers (much like civil engineers). There could be third-party regulations. The last time there was a technology that developers talked about killing everybody, it was electricity. Underwriters Laboratories jumped in and said, "We’re going to put a little sticker on your toaster saying it won’t kill you," which was great, because consumers trusted it, and that meant the brands were willing to undergo certification. And Underwriters Lab has a new program in AI.