New Rules

By Bowdoin Magazine

John Wihbey ’98 works at the intersection of media and technology, with increasing focus on AI’s power to shape and influence media and communications.

His new book, Governing Babel, places these high-stakes questions in the context of history with an eye toward new rules and smart regulation.

John Wihbey ’98 is director of the AI-Media Strategies Lab and associate professor at Northeastern University and is also cofounder of Northeastern’s Institute for Information, the Internet, and Democracy and a faculty researcher at the Ethics Institute. Photo by Jared Leeds.

You’ve been studying the intersection of media and technology for longer than AI has been the focus of the conversation. How has your work changed and developed over the years?

I started out as a practitioner—a digital editor, producer, reporter—and was, in the early 2000s, thinking a lot about how media spreads and how those patterns of spreading were starting to change, particularly with the advent of social media and peer-to-peer networks and other kinds of technologies that facilitated the sharing and reuse of media. The moment we came fully into what you would call the algorithmic era with data—maybe around 2015—was precisely when I became a full-time academic.

So, I had been teaching and writing and doing journalism and editing and producing, but then I began devoting all my time to being a tenure-track faculty member building a research agenda. And I was taking very seriously this idea that reaching publics, helping inform publics—kind of the original mission of what professional news media tries to do—was changing radically, because the delivery mechanism and the ways that people can be informed and the ways that reporters can inform themselves were all changing very rapidly.

So, I started thinking about what the purpose of the press and journalism was in a networked world. That was my tenure research, which led to my first book, The Social Fact: News and Knowledge in a Networked World. Bowdoin Magazine was kind enough to mention it in one of those little blurbs, so thank you. I still have the issue.

Well, that’s good!

Oh, it’s very meaningful, right? In any case, around that time I started thinking a lot about AI as well. There’s a chapter in that book that is at least half wrong about where AI would go as it transitioned out of what you might think of as classical AI or narrow AI, which is essentially advanced machine learning, and accelerated into some new phase. I did not know that it would be manifested in generative AI and hit the consumer market in late 2022, but that’s what happened.

I was trying to think through what that would mean for the practice of journalism, but also for those managing social media platforms and social networks. Right after I published the book, I started doing more direct consulting and engagement with social media companies, primarily Meta and Twitter at that time. I actually served for a couple of years as a research consultant to Twitter in kind of the golden age, the heyday of classical Twitter, pre-Elon Musk purchase and then conversion to X. 

And so I got to see that up close and participate in that world, and I got really interested in social media governance questions, which were very much bound up with algorithms and this increasing use of deep learning and neural networks. So, you could sort of see the seeds of what would become generative AI starting to bubble up through the big tech companies. So, that’s the whole trajectory. Thanks for allowing me to unfold it all! Around the time of my sabbatical, I was reading really deeply about the fundamentals of AI. I wanted to understand it.

Once it became a phenomenon through ChatGPT, I pointed my research agenda more squarely toward it, and I specifically wanted to understand what its implications were for media industries. So, news media, social media, entertainment media, strategic communications, public relations. These are all things I teach. I teach a pretty broad set of courses and competencies, and I knew that I needed, both for teaching purposes and for research purposes, to understand this new, huge variable in the equation of how media was changing.

In making that pivot, I also realized that there needed to be an ethical, kind of critical, stance. A lot of scholars and people who are in the AI safety community, the responsible AI community, have taken that stance and that posture, because there’s a lot of risk associated with these new generative technologies. But I also wanted to understand the practical dimensions for media practitioners of these new technologies.

And, actually, the ethics and the practice go together. What I do with my students, what I’ve done the last two semesters, is sort of kick the tires on lots of the different generative AI tools and think about different use cases within the different media industries—in public relations or news media or investigative reporting or social media management or media product development—and think about where the ethical boundaries are, where we can use these tools and feel confident that they’re doing what we want them to do, where they’re aligned with our intentions and where they’re fulfilling things accurately, and where we need to disclose to our audiences how they’re used.

So, that’s sort of where I’m at now. I’m one of four faculty developing this new institute called the Institute for Information, the Internet, and Democracy, and we’re pursuing these different pathways within the AI and media and communication space. 

How do you think that journalism as a field has responded to this moment? Do you think there are things it could do better?

The overarching narrative for media is embedded in this idea that the internet happened to us—and we don’t want it to happen to us again. I think a lot of journalists and a lot of editors and news outlet owners are taking very seriously the idea that we are seeing a new platform emerging. I mean, if you think about it, there was this sort of classical static internet, and then there was the interactive web, and then social and mobile. This is probably the next phase, and maybe it’s an even more important phase than, let’s say, mobile or cloud. Maybe it’s truly revolutionary. I think a lot of people are taking it really seriously.

But, at the same time, I think people are quite concerned that the technologies are not truly ready to render the kind of accuracy, fidelity, and trust you need for these technologies to do lots of automated things. They’re not rule-based like a spreadsheet; they’re probabilistic prediction models that can make mistakes. They also have some tendencies to kind of shave truth in certain ways—they sometimes have mean or average tendencies, which can skew things. So, news judgment and journalistic judgment from humans is still required.

But, on the tech side, lots of big media, like The New York Times or The Economist, are working fast and furious on new kinds of products and new kinds of tools for their journalists. Most of the big news outlets that are syndicated or have some capacity have teams that are working on new tech to empower reporters and editors to be more productive, faster, more accurate, whatever their objectives are. So, it’s sort of a mixed bag. But I will say it’s still a very nascent field, and at the conferences that I attend, you see lots of flowers blooming.

It’s not clear exactly how bright they will ultimately be. But I think journalists appreciate the ability to quickly, on deadline, attain knowledge and information, and these tools clearly hold that possibility. It’s just that they may not deliver with an accuracy rate yet that can be fully trusted—though I think we’re getting closer all the time. 

Some of the people that these journalists are reporting on don’t seem to care as much about where the technology is in its accuracy trajectory as they do whether it’s effective at convincing people of their points of view. Do you think that’s something we can solve?

Probably not through journalism alone. Some of these are sort of cultural, political norms that I think have to be reinforced by many different actors in civil society, the political community, and local communities too. Mis- and disinformation have always been with humans and human societies, but it’s gotten to scale in such a way that I don’t think journalism alone can necessarily turn the tide. I think it has to work in combination with other forces that validate journalism and fact-based storytelling.

Do you think we kind of have to pass some kind of peak moment of mistrust before we can, as you say, turn that tide? 

Going back to the ’70s in some ways, civic trust has seen a steady decline. There’s just been a secular, long-term decline in trust in institutions. But the press is a particular outlier, in terms of having very low ratings by the public—though there are some questions about what those ratings actually reflect. Is it a kind of drumbeat from political elites who have said over and over again that the press can’t be trusted and is terrible and is fake news or whatnot? It needs to be said that it predates Trump.

It goes back at least to Nixon, if not further. And some of the Democratic presidents have been quite critical of the press as well. But whether we have to get to a peak moment, it may be that trust has to be measured in other kinds of ways, that we’ve gotten to a point where refracting national institutions through the broad lens of media—including social media—is the wrong way of doing it, and what needs to happen is sort of a renewal at the local and the state levels. You need a sort of grassroots, bottom-up surge of civic renewal.

I think there’s a pretty good case that that’s what we have to do. I don’t know what peak mistrust would look like except maybe the collapse of the republic, and I’m trying not to contemplate what that would look like. I guess I’m a little less dystopian or into catastrophizing than maybe some of my colleagues.

You put the advent and the rise of social media into a historical context with other types of media. Tell me how you see all this related to the FCC and other historical moments. 

I do try to do that. I try to put social media in this larger context, what I refer to in the book as the 100-year journey. That begins, really, at the end of World War I, when the courts first start to define the First Amendment. I mean, it’s sort of unbelievable, but we had a First Amendment through the eighteenth and nineteenth centuries, and it really didn’t have a lot of specific meaning. It was often left to localities and to the local courts and local leadership to decide. So, you have that, then you have the rise of broadcast technologies, and through the FCC, the first inkling of regulation, really, in 1934, when the FCC is set up.

I then try to look at what the deeper principles are that have informed technology and communications regulation over the past century, and how some of them could be rendered or made visible again in the social media era. And it’s not an easy thing to do. Technology scholars often say that there’s a particularism to each technological revolution that just doesn’t track or map to the prior eras, and that’s often said about social media. The speed and the scale, the user-generated content that’s not advanced by professional journalists or media producers, make it all not just qualitatively different, but some order of magnitude different.

And that’s probably all true, but there are some deep principles that I try to surface around how we value speech in the public sphere, what we think of as fair, what we think of as responsible, and how we think about people’s ability to respond to claims and counterclaims in the public sphere. And I try to bring that forward, to think about what could be some light-touch regulation. Around the world, we’re seeing all kinds of different regulatory ideas come to fruition, from the Australians restricting young people from even having a social media account, to the Europeans saying that there should be these elaborate mechanisms for reporting content that seems to violate national law, to, you know, more draconian forms of regulation in the more authoritarian countries or in countries with mixed regimes where there are very powerful governments.

So, I think there’s a big question about what the United States is going to do. My thesis is that there are some deep principles that we can pipe into the current moment that might allow us to put some regulation on the books. It probably needs to be under a new type of digital regulator. I think there’s a chance that a new presidential candidate or set of candidates might run on something like that. Nobody quite has. First, it polls really well to say that you’re going to try to get social media under control. Whether you can actually do it is another question.

But we know how campaigns go, and coming into office with some new energy around this issue—maybe there needs to be a new digital regulatory agency. I float that in the book, but I’m just building on the work of others there. But that’s kind of where I’m coming from. It should be said that the FCC and our current agencies really don’t have a lot of regulatory authority over the social media platforms and the tech world generally. They have a little bit, but not much.

You suggest a fundamental American tension between our desire for freedom, to do whatever it is we want to do, and a sense that there are things that are dangerous to spread without regulation.

What do you think we can do about that tension?

Let’s take, to be specific, disinformation about elections. Well, disinformation is not illegal under the United States Constitution, nor is hate speech. When disinformation tries to deceive people about the time, place, and manner of voting, when it tries to materially disenfranchise a class of people in a really aggressive way, it starts to feel like something that we might want public policy to intervene on. The same could be said, I guess, of the use of AI to carry out any of those things.

When people are abused online or feel very unsafe and are threatened and doxed—leaking people’s home address and whatnot—that tends to push people out of the public sphere, which has a consequence for their free speech rights. So, there are tensions and tradeoffs, I think. These have always been there. The use of hate speech and the use of different kinds of threats have been part of our political byplay for a long time. We have a very wide-open speech environment relative to most countries around the world.

I guess I’m not saying that we need to wholesale change that, but the power and the virality of some things online have the potential, I think, to damage the public interest. So, I think we have to be very careful about trying to censor or regulate anything in particular. But my argument is that we should ask the hosting companies to show a duty of care and to show some kind of responsibility and responsiveness to large patterns of harm. Now, how that would be translated into law is a very difficult question. What would the rules be? 

But my preference would be to create a regulatory forum where you could begin to create a record of what is known. And we tend to know things through anecdote. Something comes out in a Congressional hearing; we read something in the Wall Street Journal. But there isn’t a consistent record of what the harms are of things like child sexual abuse material, Russian disinformation. Whatever it is, it tends to be very unstructured. I think what we have to first do is structure a bunch of evidence, create some baselines—in a very concrete way of putting it, make the relevant companies fill out some paperwork. 

And then proceed from there to see if we could get some kind of bipartisan consensus about what we would expect from the companies in terms of preparation and responsiveness. And it’s a fast-moving space, a really fast-moving space, just because of the speed of technology. I don’t think we can have any confidence that what we learned ten years ago, anecdotally, is necessarily applicable now. I think we need something that’s more systemic, more current, and agile.

The title of your most recent book, Governing Babel, suggests that there’s some force that doesn’t want a common language, because speaking a common language means you can get something done.

Where did that come from for you and why is the idea that people all speak different languages relevant now?

The story comes, of course, from Genesis—the quick summary is that there was a city where the people tried to build a tower toward God, and because they showed hubris in doing that, God decided to scatter them and make them speak many different languages. So, they stopped building the tower. It seems sort of a parable for many things, but communication scholars and artists have been very interested in the parable of the tower. Incidentally, Babel is from a Hebrew word that means to confuse. So, there’s sort of a witty dimension to it.

In the Renaissance, there were lots of folks who painted the tower, because they were fascinated by it—Bruegel is the classic artist whose painting everyone knows. But communication scholars through the twentieth century have been really interested in the tower as a metaphor. There’s a great work from the ’60s—I think it’s called A Tower in Babel, by Erik Barnouw—and I quote that to begin my book. And then Jonathan Haidt, who’s been writing a lot about youth and social media, riffed on it in The Atlantic a couple of years ago. It sort of recurs. I thought it was an apt metaphor particularly because of the global dimensions now, which has this sort of biblical echo.

There’s this world of many languages and lots of chaos accelerated by these digital technologies. I also thought—and maybe this is a little bit of author’s fancy—the big platforms now, seven or eight of them, around the world have more than a billion, you know, monthly active users. And so, they’re networks, but in some ways they’re kind of large tower-ish things that are bestriding the world, right? So, I was kind of playing with the metaphor.

I think there’s something deep in that parable, and it does evoke this kind of world government idea. You know, who would govern all of this? But that’s not really the point. I think the point is that the platforms now stretch across national lines, whether it’s Meta or Google or TikTok, ByteDance, X, WhatsApp, WeChat, et cetera; they have this cross-national dimension insofar as they cross nations. There’s an implication that we need rules that are somewhat aligned, because the content proliferates across borders. And we’re coming to a moment globally, as I said before, where I think there is a lot of public appetite for getting this under control.

But how the platforms would embody getting it under control is really unclear. I think, frankly, a lot of countries are looking to the Americans, because so many of the platforms have their headquarters in California. And the fact that we have really nothing to say on the regulatory front is I think deeply frustrating for the Brazilians, for the Indians, for the Australians, for the Japanese. So, my book is, in some ways, a quiet manifesto for getting our act together and helping societies around the world grapple with this platform power that we’ve sort of unleashed from our shores. 

Why do you think the US is so slow and or so silent on this? 

A couple of reasons. One is we’ve always had a very libertarian ethos around regulating speech and media content in any way, shape, or form. We also, in 1996, passed a law that allowed for online websites, and subsequently platforms, that host user-generated content to not be liable for content that they host but did not themselves produce. And we’re sort of stuck in the ’90s, in some ways—which is funny, because that’s when I was at Bowdoin, and I vaguely remember the Telecommunications Act being signed then, which was a very historic moment for President Clinton and for Congress.

It was passed overwhelmingly. But we have not gotten the momentum to pass anything that would update these increasingly archaic notions and laws that are still on the books. It took from the ’30s to the ’90s to get meaningful reform. Now we’re thirty years on, and I think this is bipartisan—most people think that there has to be some kind of reform soon. It’s not clear which direction, though. Various states have tried to pass reforms that are more conservative.

Regarding the recent Texas and Florida state-level social media regulatory efforts, which were both conservatively motivated laws to force the platforms to avoid “censorship,” the US Supreme Court said recently, “You can’t do that. You can’t force the platforms to do things like that.” More liberal efforts have also hit headwinds. So, it’s not clear where we could land. I do think it has to be bipartisan. You think about the Take It Down Act that President Trump and the First Lady and Ted Cruz championed, for instance, which was passed overwhelmingly by Congress and signed into law earlier this year.

That’s the one that allows people who have had revenge porn and material like that posted about them to act?

Exactly. And, you might say that’s not a concept that extends to other things and is a sui generis thing. But my view is that we need to get lots of little wins in order to build toward something new. The other thing I try to argue is that, whether it’s court decisions in the early part of the twentieth century or the decision to create the FCC or the decision to create the Fairness Doctrine or the decision to repeal the Fairness Doctrine, none of these acts were inevitable or came out of nature. They are a combination of jurisprudence, political leaders coming to a new set of ideas and forming new concepts and then enacting them in a creative way to meet a moment.

It’s often said about social media that there just is no remedy here. It doesn’t map onto anything in our prior history. There’s no way we could imagine something that would be First Amendment-respecting but also bring things roughly into line with broadly shared norms. And I say, most of the past moments where we’ve made big decisions about speech and the public sphere were moments of invention and creative response to a new set of circumstances. And I believe we could do it again. I don’t know when that’s going to be, when the political momentum will be right, when the cultural moment will be right.

But I do think building a structure of evidence so that we could start to think about rule making would be an important first step toward getting some small points of consensus. Back to the Take It Down Act, that was a small moment of consensus. There are others, too, I think, particularly around youth harms. I think that’s probably the next frontier, and you see a lot of states kind of moving in that direction. A lot of states are moving against deceptive AI. The states are our laboratories of democracy, and there’s a lot of action there. So, maybe those are the green shoots where it all starts.

How do you encourage your students about all this? 

It’s funny, because sometimes the views of the professoriate or the talking heads and the things we’re most interested in are totally not the things young people are interested in. And sometimes they align. I try to meet my students where they are. One thing that connects to Hastings and connects to my thinking and my conversations about the future, particularly in the media and communications industries, is the rapidly changing workplace, particularly around entry-level jobs and the expectations of employers for young graduates and the competencies they need to have to be credible job candidates.

Many businesses and institutions need a kind of reverse mentoring. They need very tech-savvy people to come in and to help them, whether in helping older workers adopt these technologies in ways that are responsible, safe, and effective, or in other ways. For instance, almost every institution has an AI committee, right? Young graduates who have good AI readiness, good AI literacy could plausibly step right in and serve on AI committees of any institution or business and bring a fresh set of eyes and a new voice. I think there’s tremendous opportunity there for AI-ready students. 

That’s not to say—and I say this to my students, as well, because I teach business students, communication students, journalism students, computer science students—that you have to major in AI. It just means you need to use the tools enough that you have a sophistication about what they can do and what their limits are, that you’re paying attention to the trends in the industry. I think that’s really important. Sometimes new models will come out and the fidelity of the multimodal visuals that they produce is an order of magnitude better, for example—now it’s time to say, “You know what, maybe we could use that in our design sketches and our product design approach.”

So, it’s important to pay attention to the trends, to have a set of habits where you’re committed to keeping up with what’s going on in AI. That doesn’t mean it has to overwhelm your other interests. In some industries, it’s just going to be a tool—a very powerful tool maybe, but there are many other tools, too. I think that the Hastings investment is exactly right.

I went to Bowdoin with totally brilliant, amazing colleagues, and I’m sure this next generation is even more brilliant. I’m not sure you have to necessarily be the world’s greatest user of APIs and code and Python and all of that. That’s important for people who are going to go into engineering or computer science or whatnot. But I think just knowing how to use the tools in an office setting, in an institutional setting, is sufficient. Then, as you get into an industry, you’re going to find applications and new uses, and the tools will change.

I read somewhere that you said there’s a tendency to think of this as a young person’s world, but this is not in fact their revolution. I’m interested in what that means to you.

I don’t know what it means, exactly. I do know that, at least in my anecdotal experience—in some of the survey research I’ve done—you start to see a degree of skepticism, a kind of reticence around some of the technologies. I think in part these are media narratives that are scaring a lot of young people: there are going to be no entry-level jobs, you’re going to be replaced, there’s going to be no work. For liberal arts majors—and I was an English and philosophy major—the things we really care about are being diminished or maybe cast aside in favor of this kind of headlong race toward a technology-first world.

Think back to how the social media revolution in the early 2000s was largely youth driven, whether by founders like Mark Zuckerberg or the early adopters coming out of dorm rooms. It was very much a youth-led or youth-powered revolution. 

We even saw that in some of the political revolutions that took place around the world in the late first decade of the twenty-first century and with the Arab Spring in 2011. It’s not clear that the AI revolution is a youth revolution. It seems more of a managerial class that’s adopting these technologies, imposing them on society. I don’t think that’s quite right, but I sense a reticence among young people. Maybe it’s just an anxiety about where the job market is going. But your question was also about the implications of that. I am hoping that students—and this is what I tell my students—will be both users and powerful skeptics at the same time.

This is the ideal, I think, of the liberal arts education, actually—to sort of hold contradictory things in one’s mind and still proceed.

So, I’m hoping that they can bring a degree of skepticism and reticence to the technology so that we don’t give up some of the most important things—in terms of cultural values, in terms of the value of great books and great ideas and literature and art and music and history and all the things I studied. But they still recognize that there’s tremendous power in these technologies to help us synthesize large amounts of information, to bring in new ideas, to blend ideas, to give us the basis of new ideas.

One thing that, I think, most people would say about generative AI is that it is a tremendous way of solving for the blank-page problem that we have as thinkers, as professionals. It can really stimulate our thinking. And we may cast aside whatever six ideas Claude or GPT or Gemini give us, but just having something—a thought partner—that can give us a starting point can often get us to a higher altitude more quickly. So, thinking about AI technologies as empowering and amplifying and accelerating our power, I think, is the right way to think about them, rather than as a replacement, per se. There’s this saying that you may not be replaced by AI, but you may be replaced by someone using AI. I use that, and usually I get nods from students and they sort of understand. 

You seem to me to be a bit more optimistic than some people who write and talk about all of this. What gives you that optimism, and what would you say to people who are reading this that would give them some optimism too?

Thanks for saying that. I’m not sure my family always thinks I’m that optimistic about everything! I think the story of our country, in particular, is a story of grappling with new technologies and making mistakes but ultimately figuring out rules of the road and norms that can help us start to grapple with harms and disparities and inequities. And, if we look back at history, we see it often takes quite a while for that to happen. 

But as long as we keep our institutions strong and healthy—and that means public institutions but also private institutions, cultural institutions—we have a really strong foundation for trying to figure out these technology problems. That’s no guarantee that in the short term we will, but I think we stand a good chance of coming up with reasonable norms and rules if we believe in the power of human beings to come together collectively, to deliberate, and to try to come to some rough running consensus about how to help humans flourish.

That sounded extremely hopeful. 

I know, right? I’m trying to be a little Pollyanna-ish at a time that feels like a winter for a lot of people.


This story first appeared in the Winter 2026 issue of Bowdoin Magazine.