We’re a Long Way from Killer Robots, but AI Still a Serious Threat

By Tom Porter

Talk of killer robots and other existential threats to humanity is overblown, and such scenarios are a long way from being a reality, said Eric Chown. Furthermore, he added, they are a distraction from the real threat that artificial intelligence (AI) already poses.

Digital and Computational Studies scholars answer questions on artificial intelligence: Crystal Hall, Eric Chown, Fernando Nascimento.

“AI is affecting our lives right now every day in ways most of us are unaware of,” said the Sarah and James Bowdoin Professor of Digital and Computational Studies (DCS), “whether it's the algorithms that are choosing what to show us on social media or algorithms that are determining who gets a loan... Our focus right now should be on uncovering that kind of thing,” he added.

Chown was a guest on Maine Public Radio’s call-in show, Maine Calling, along with his Bowdoin colleagues Crystal Hall and Fernando Nascimento, also from the DCS program. The show, broadcast in front of a live audience on June 1, 2023, at a venue in Brunswick, looked at the ethical issues associated with AI, such as the controversial ChatGPT technology now being widely used, including among students.

Can AI be a beneficial tool? Or does it open up dangerous avenues for fake news, plagiarism, and other negative applications? How can it be regulated? These were some of the questions asked by host Jennifer Rooks and members of the public.

Ironically, said Chown, some of the worries over programs like ChatGPT, which can generate written content in the style of a real person, are mitigated by the fact that such systems learn from their own data. That means they are not going to get much better as time goes on, because they will continue to regurgitate their own information.

There is, however, an urgent need for a serious conversation about how to regulate AI, said Professor of Digital Humanities Hall, who is also director of the DCS program. The pace of change is fast outstripping our ability to react, she explained, and when it comes to forming policy around AI, Hall said we can learn from the way we have handled other developing technologies in the past. “I’m thinking about all the ways that we regulate pharmaceuticals and label the batch number, their provenances attached to every vial and bottle,” said Hall, also citing similar labeling techniques in the automobile industry.

Assistant Professor of Digital and Computational Studies Nascimento, who has a background in philosophy, said the European Union has made encouraging progress in the regulation of AI. “The EU is far ahead of other places,” he observed, referring to the proposed AI Act recently approved by the European Parliament, which takes aim at technologies like ChatGPT by requiring certain types of AI-generated content to be labeled as such. “So, we already have legislation,” said Nascimento. “The problem is how can we implement it?”

Amid the concern over AI, the panelists also stressed the good uses the technology can be put to in many areas, including health care, education, combating climate change, and promoting equity. Furthermore, said Nascimento, the emergence of AI has made us “rediscover our humanity.” For example, he explained, DCS classes at Bowdoin now place more emphasis on face-to-face discussions and oral examinations, not dissimilar to how learning and teaching would have taken place in ancient Greece.

Last year, Bowdoin was selected as one of a handful of schools from across the nation involved in a project sponsored by Google and the National Humanities Center aimed at developing academic courses that tackle the ethical issues raised by AI technologies. The College also participates in Computing Ethics Narratives, another national initiative involving Bowdoin faculty, aimed at integrating ethics into undergraduate computer science curricula at American colleges and universities.