
Wonks and War Rooms


Regulating Big Tech with Taylor Owen


Taylor Owen is a professor of public policy at the Max Bell School of Public Policy at McGill University, and his research focuses mainly on tech regulation. In this episode, he and Elizabeth define and categorize types of regulation. They discuss what tech regulation looks like, how lobbying impacts tech regulation, why regulating tech is difficult, and the balance governments (both in Canada and internationally) must strike between regulatory options and the public good. This episode, recorded in January 2022, does not explore current legislation or the nitty-gritty of the regulatory options. However, Elizabeth and Taylor provide the background for understanding what the options are and why regulation and self-regulation happen.



Episode Transcript: Regulating Big Tech with Taylor Owen


Read the transcript below or download a copy in the language of your choice:



Elizabeth: [00:00:04] Welcome to Wonks and War Rooms, where political communication theory meets on-the-ground strategy. I'm your host Elizabeth Dubois. I'm an associate professor at the University of Ottawa, and my pronouns are she/her. In today's episode, we're talking about regulating big tech, and my guest is Taylor Owen, who's just been appointed to the Expert Advisory Group on online safety for the federal government. Now we recorded this episode back in January before we knew about this advisory group, before we saw what kinds of legislation the government was considering. So a lot of our discussion is broader and not specific to the current policy environment, but I do think it offers some really useful context. So I hope you enjoy it. And with that, I'll hand it over to Taylor to introduce himself.

Taylor: [00:00:50] Sure. Hi. My name is Taylor Owen. I'm a professor of public policy at the Max Bell School of Public Policy at McGill University. And I work broadly on tech regulation at the moment. Most of my work is on how we should or shouldn't, or may or may not, or can or can't regulate the big tech platforms, and what the consequences of doing that might be.

Elizabeth: [00:01:12] Amazing. Thank you. I'm so excited to have you here today. We're talking all about regulation and its various forms as applied to big tech or not applied. So I'll kick things off with some basic definitions from some academic literature and we'll see whether or not that makes sense.

Taylor: [00:01:28] Okay.

Elizabeth: [00:01:29] So first off, one of the terms we hear a lot is "self-regulation", which is this idea that companies can create and apply their own regulations, right? They're like, "We know our tech best. We know what our users need most. Let us do it, government. Don't worry."

Taylor: [00:01:46] Yep.

Elizabeth: [00:01:47] Then there's this idea of quasi-regulation, where the government creates broad guidelines for companies, but then there's no real way of forcing companies to follow those guidelines. It's just like, "Hey, it'd be great if you did this. Could you please?"

Taylor: [00:02:02] Yeah.

Elizabeth: [00:02:02] Then we've got co-regulation, which would be this idea of, like, tech companies are creating and applying their own regulations, but the government does create some level of regulation that enforces those regulations. And there's kind of a cooperation between the government and the companies to create that regulatory environment. And then there's some straight, like, no, this is the regulation - the government has told you this is what's going to happen. Top down, companies go implement. So are these making sense? Would you modify definitions for any of them? Add any other missing categories?

Taylor: [00:02:36] I mean, I think those are absolutely the descriptive categories for different arrangements that we end up with between public sector and private sector actors in this space. But I sometimes wonder if we really only need one category, which is regulation. I mean, really what we're talking about is a government that desires private actors to behave in a certain way and so creates rules for how they behave. Now, in the absence of that, there is no regulation.

Elizabeth: [00:03:05] Right.

Taylor: [00:03:06] And companies can act how they want within the laws that we've defined elsewhere in our economy and our society. And so they can set rules for how their products are used. But those aren't really regulations. Those are just terms of service for how people can or can't use their services. So, for example, I can't post nudity to Facebook. That's not really me being regulated by Facebook. That's just a rule I have to abide by if I want to use the service Facebook is providing [to] me. Now, if the government tells Facebook that nudity is not allowed to be posted, or that if it is, they need to share data on who's consuming it and who's sharing it, or they need to be taxed for some extra harm they might be doing to society, or whatever the government might want to do - that's a regulation. So yes, all those categories describe sort of different arrangements, but at the end of the day, I think it's a little bit more binary.

Elizabeth: [00:04:01] Yeah, that really rings true for me, because one of the things that I find frustrating in this space is this idea of living without regulation as some sort of actual option, which is not possible. Because these platforms, in order to exist, in order to do any sort of filtering of information for us - which we rely on and which is largely helpful in our lives - like, they have to make rules for how to use their tools. That's what their tools are.

Taylor: [00:04:29] Sure. And they are also regulated in all sorts of ways by governments formally, even just to exist as a company, or exist as a media company, or exist as a technology company, or a company that shares data - there are all sorts of rules they already abide by. So the idea that this space is unregulated in any meaningful way is wrong. Like, we can debate how it's regulated and whether it's sufficiently regulated. But no, this is a heavily regulated space. Companies abide by all sorts of rules democratic governments and non-democratic governments have set for them to exist in their markets.

Elizabeth: [00:05:03] Totally. Yeah. And you're kind of hinting at the international aspect, which we'll come to in a second. But let's spend a little more time with this idea of the types of regulation that already exist - like, you know, you've just mentioned a few different ones. What are the kind of main ways you think of when we're thinking about regulating big tech? What are the biggest kind of government levers or prods that we've got?

Taylor: [00:05:28] Yeah. I mean, part of the challenge is that these companies are so many things. So we tend to think of regulatory regimes as being highly siloed as sectors of the economy. So we regulate the health care sector one way, we regulate banking with another set of mechanisms, we regulate media companies and broadcasters another way. And that's traditionally how we've built up this kind of apparatus of the regulatory state. And that's because in the industrial economy, these things were all different. Health care companies and hospitals were fundamentally different than banks. And you needed different tools and different oversight regimes and different laws and the whole thing to regulate them differently. The challenge we face now is these platforms often are all these things and something new. So Facebook tried to launch a currency. Amazon's getting into health care systems. Amazon's a marketplace and a media company, right? Like, Google is 20 different things all at once. So we don't have a regulatory metaphor or analogy that fits well for these companies. And so it's actually hard, I think, to say what existing regulations apply to them, because it depends what version of them we're talking about and what use case of them we're talking about. And this is really hard for governments to get their heads around, because governments really want to know what category do we put these companies into. And there just isn't one. So it's actually ten different categories, and maybe a new category as well, because they do new things. And that's why you get this sort of governance confusion, I think, on the government side, because they just don't know what to do with these things.

Elizabeth: [00:07:12] Yeah, I think you're right. And there's just so much overlap. And then there's also, you know, like I think back to some of the earlier conversations about how to deal with tech companies, and the big thing was like, you know, we're a neutral conduit for information. We're not a publisher. We don't treat ourselves like the broadcasters and the publishers. We just make it possible to share information.

Taylor: [00:07:35] Yeah.

Elizabeth: [00:07:36] And even the companies that really heavily relied on trying to categorize themselves as at least distinct from that have now, in the last few years, acknowledged that, like, actually sometimes we are in that category, but also we're a lot of the time not in that category.

Taylor: [00:07:51] Yeah. And that whole debate gets at something really kind of foundational to the development of these platforms, which is how they were going to be seen and viewed and regulated by governments as they grew in their early stages. And the whole debate about whether they should be considered a neutral platform that is not liable for the things their users said and did on them, versus being seen as a media company or a publisher - in which case they would be held liable, like a newspaper or a broadcaster, for the things that were said and done on their platform - was an existential debate for them. I mean, there is no question that if early versions of platforms had been held liable and treated like a publisher, we would not have the companies we have now. They grew on the backs of that protection from liability. And that itself was a governance decision. That was an act of governance, to put in a law that gave them liability protection. And so, while it's not really a regulation, it's a rule they have to abide by that was created by governments and that gave them that freedom to allow users to say and do things that a normal publisher probably wouldn't.

Elizabeth: [00:09:02] Yeah, totally. Totally. And it really speaks to how there are so many aspects of this that all fit together. There are different kinds of rules that are put in place or not. Like sometimes there's intentional choices not to put a new rule in place -

Taylor: [00:09:17] Absolutely.

Elizabeth: [00:09:18] Sometimes whole new regulatory systems are imagined and implemented, and sometimes it's little tweaks to existing ones. There's also this lobbying effort. So I've heard in your podcast you talk about the tech lobby before and you kind of hinted in your last response there about how these companies wanted themselves framed.

Taylor: [00:09:40] Yeah.

Elizabeth: [00:09:41] And maybe you can speak just a little bit about what constitutes the tech lobby, what's their role? How does that play into whether or not one kind of regulation shows up or another?

Taylor: [00:09:53] Well, I mean, in many ways it's no different than any other sector, industrial sector, where these are some of the largest companies that we've known in human history.

Elizabeth: [00:10:06] Yup.

Taylor: [00:10:07] And their profit margin or their profit or their business model is very dependent on a certain set of rules. And so they have a very strong interest in the way they are or aren't regulated, which is totally understandable.

Elizabeth: [00:10:29] Yeah. You have a business, you're trying to make profit, you've got shareholders.

Taylor: [00:10:33] Right.

Elizabeth: [00:10:33] You want to have a say if you can.

Taylor: [00:10:34] Of course. And we have, for better or worse in our democratic system, created rules for businesses or private entities to lobby and speak to governments to represent their interests. And so it's not surprising in any way whatsoever that these companies would also do that. I think for a long time, though, we generally saw these companies as not just being machines for making money for shareholders. We saw them as doing other things, as providing social goods, as private and democratic goods, as solving all sorts of social problems. And that's part of the sort of imagination that these companies put forward, right.

Elizabeth: [00:11:21] Yep.

Taylor: [00:11:21] They were going to be all these other things. They weren't just like a car manufacturer. They were changing the world and society. And in many ways they did, in really meaningful ways. But because we had all that image tied up in them, I don't think we saw their lobbying as sort of crass corporate lobbying. We thought it was just them trying to help us understand the Internet, or them trying to help silly politicians who don't understand technology understand the way it works. But I personally don't think that's ever what it was. I think it was always exactly what it should look like on the surface, which is large corporations defending their interests in the face of government regulation that will affect their business. And I think we now see that in a much more clear-eyed way. But it took some time to get there. And now, like, the biggest lobbyists in Brussels, in Paris, in Ottawa, and in DC are tech lobbyists. There's no doubt about it. It's more than any other industry.

Elizabeth: [00:12:22] Yeah. Why would we expect any difference? Major companies have major lobbying efforts - that makes a lot of sense. It took a certain kind of education to get to the point where these tools could be used and could be understood enough to actually generate the kind of questioning of, like, "Oh, should it be okay for salespeople to also be lobbyists in government at the same time? Should it be okay that the public affairs manager is also, you know, the one who's the tech support?" Like, you know, there was a lot of overlap, particularly in Canada, which is a pretty small market.

Taylor: [00:13:01] Yeah. Like, you've done amazing work on this - how the people selling ads and helping governments sell ad campaigns were the same people who were also shaping laws, right? So, like, that's a challenge. And I mean, you point to what we know about these companies, what we know about the effects of these companies. A big part of why the lobbying and the views of the companies were so persuasive for governments is because we just didn't know very much about the effects on society. I mean, we've embedded a new set of tools in almost all of our social and economic interactions across entire societies in the course of ten years. And so it's taken time for research to catch up, and for us to understand what the effects were, and for that understanding to be communicated to governments. So of course governments were asking the tech companies, like, what's going on here and how do you work and how should we think of you? Because there wasn't really an alternative.

Elizabeth: [00:14:04] Yeah. There was no other way to know.

Taylor: [00:14:06] It's such a great way of putting it. Yeah. No other way to know.

Elizabeth: [00:14:08] And like, sometimes there's this critique of, like, a lack of transparency and accountability, which I agree with in a lot of areas when it comes to how big tech companies make their choices. Like we don't know enough about what's going on inside. But I think it's also important to recognize that in the early days for most tech companies, even if they wanted to be transparent and accountable, things moved so fast and different parts of a single company didn't know what was happening.

Taylor: [00:14:38] No, like they weren't collecting these data, they weren't storing them properly. Nobody was asking for it. Governments had no capacity to deal with it. And so all of that is understandable - that we kind of let this thing grow to a certain extent with a pretty light touch. But we're in a different place now, though.

Elizabeth: [00:14:55] Yeah, exactly. It's a different world right now. We are all very, very dependent on many of these companies that are pretty foundational to our information environments, which underpin our politics, our economy, our culture, like all of it. Right.

Taylor: [00:15:11] Yep.

Elizabeth: [00:15:11] So where does that leave us then for what regulation looks like now and where it needs to go?

Taylor: [00:15:18] How long do you have to talk this through? So, look, I mean, I think some things have foundationally changed from that narrative we just talked about. I mean, one is we do know a lot more about the social, economic, behavioral, democratic effects of the use of these technologies at large in our society. Like, we just know more. We know more because of researchers. We know more because of journalists. We know more because of civil society. We know more because of regulatory investigations. Right. We just know that, in addition to all the amazing things that these technologies afford, there are also some negative externalities - negative externalities that aren't necessarily being self-corrected by the market itself. And that is when we expect governments to regulate, to use the term of this conversation.

[00:16:03] I mean, that is precisely when we, in fact, expect governments to regulate: when a market is not correcting for a perceived negative externality that democratic society thinks should be minimized. That's why we use regulation or laws or whatever government action. And so that's where we are. So we know there are some negative things. We also know there are a lot of good things, and we need to design a new set of tools, or apply existing ones that are at the sort of disposal of governments, in order to reduce those negative costs. And that's the conversation we're now in. We're beyond whether they should be regulated. We're beyond even what the harms are. I think everybody sees them and knows them and accepts them. The question is what tools do democratic governments have - and we can talk about non-democratic governments, too, but what tools do democratic governments have to minimize those harms while maximizing those benefits? And there's a lot of challenges here, because these are new companies, as we say, these are large companies, these are companies that are global and not necessarily based in Canada. They, as we said, span all sorts of different sectors. That's a challenge.

Elizabeth: [00:17:12] Yeah.

Taylor: [00:17:12] But that's the task at hand. That's the governance task at hand. And I think governments are trying to figure it out and learning from each other and experimenting. And it's a pretty exciting time for that conversation.

Elizabeth: [00:17:22] Yeah, absolutely. And I would add, you know, you've listed off all of these different kind of ways that things vary. I would add that right now there's a tendency to just talk about online harms, at least in Canada. That's the term that seems to be most used right now.

Taylor: [00:17:38] Hmm.

Elizabeth: [00:17:38] So we've got online harms. I think there's a recognition that it's a variety of harms, but we still throw them together. Like, even the bill itself that the federal government is dealing with, it's just online harms - writ large - and not all online harms are going to be able to be dealt with in the same way, I would say.

Taylor: [00:17:58] Absolutely not. Absolutely not. Like, you can think of a spectrum of all sorts of different harms, but even the things that are, like, clearly illegal, like -

Elizabeth: [00:18:07] Mm hmm.

Taylor: [00:18:08] - Forget all the bad stuff that isn't illegal that we would like to have less of on the Internet, which is a big part of this. But even just the things the federal government has said are illegal offline - we need to figure out how to get rid of them online. So child exploitative sexualized content, and hate speech, and threats, incitement to violence - things that we all accept as a society, to varying degrees, are illegal. How do you enforce that online? Even that doesn't have a one-size-fits-all answer. Like, we deal with those different types of content and those problems really differently in the physical world, and we probably need to digitally as well, right? So that's even leaving aside all the other stuff that is even much harder to get at.

Elizabeth: [00:18:59] Yeah. Looking at the things that should be the simplest problems, they're still really difficult.

Taylor: [00:19:05] Absolutely. Absolutely. And no government's figured out how to do that. In fact, there's no perfect way to do it at all, which is a hard thing for governments to accept. I think for governments, if something is illegal and dangerous, the impetus is to stop it and to reduce it to zero.

Elizabeth: [00:19:25] Right.

Taylor: [00:19:25] And on the Internet, that's just not possible. So we're dealing not with absolutes, but with risk and percentages and likelihoods and diminishing over time - with minimizing risk. And those are concepts that just aren't always satisfactory for regulators.

Elizabeth: [00:19:43] Yeah. It's difficult for regulators. It's also difficult for users. It's difficult to understand why, for you as an individual, it might be okay that you get hate speech thrown at you, but when somebody who's a public figure gets hate speech, that becomes a major problem; or why one person gets deplatformed and another person doesn't get deplatformed for similar content. Like, those are really difficult things to understand when you don't know the whole big picture.

Taylor: [00:20:15] Yeah. And this is one I struggle with a lot here. The scale at which we're talking about these potential harms and abuses online is so vast. I mean, there are a hundred billion pieces of content posted to Facebook services every day. Like, the amount - and when you look at some of their takedown reports, which to their credit they're increasingly releasing, and Twitter's reports on the type of content they're deleting - like, it's billions and billions and billions of pieces of harmful speech. And as a government engaging in that, you have to deal with that scale, and at that scale you're not going to get rid of it all. You have to minimize, and to minimize, you have to put in things that aren't absolute. And that's a regulatory approach you have to take, I think. But that doesn't absolve the fact, to your point, that each one of those pieces of hate speech is directed at a person, and that person has received it. And to them, they're not a statistic on this broad scale. And they don't care that Facebook serves billions of people a day, because they were harmed by that speech.

Elizabeth: [00:21:26] Yep.

Taylor: [00:21:26] And that's a really difficult thing to reconcile. And I struggle with that one, frankly. I mean, particularly when you see the abuse, see who is targeted with that abuse disproportionately, and who is being silenced by that kind of harm. It's very difficult to say to those people and those groups and those constituencies in our society that you have to accept that degree of harm in order for us all to get the benefits of the Internet. That is not a value proposition that I'm comfortable making.

Elizabeth: [00:21:57] Yeah. It gets even more complicated, as you say, when we get out of the situations where it's like largely societally, we can agree that hate speech shouldn't be accepted. And then we get into things like mis- and disinformation -

Taylor: [00:22:11] Hmm.

Elizabeth: [00:22:11] - where, yeah, there is great potential for public harm. And I think, you know, the COVID-19 pandemic anti-vax kind of conversations, anti-mask conversations.

Taylor: [00:22:21] Yeah.

Elizabeth: [00:22:21] And the way that health data gets portrayed or doesn't get portrayed by different groups - like, that is very clearly a public health issue. And there's real lives on the line. But also it's not necessarily illegal, right?

Taylor: [00:22:42] I mean, absolutely. And this is probably the much bigger problem with the nature of our public discourse or our public sphere: there are a whole bunch of things that we can broadly agree are not positive for democratic society, whether that's false information circulating widely, or harmful speech about public health that is lowering vaccine adoption rates, or speech from foreign actors that is undermining election integrity initiatives. Right. Like, these things are generally not good, but they're not illegal, and they circulate widely, and they may be incentivized in ways online that they aren't in the offline world.

[00:23:25] And in many ways, that's a much bigger problem, I think. But that doesn't mean that it can't be regulated. And regulation does not just mean binary takedowns or making things illegal. Regulations also create incentives and provide accountability over systems and industries. And so when we look at the financial sector, for example, or the health sector, we don't just ban outcomes and we don't just penalize people for doing bad things. We regulate the way in which products are developed, or the incentives that are underlying them, or the design of the systems that we build. Right.

[00:24:11] And that's where we're starting to move, I think, on a regulatory front with these digital platform systems - not just talking about the bad things that are said and done and the outcomes of them, but actually the structure itself. Are there incentives in this system that we can get at through regulation that would minimize the amplification of harmful speech, or the financial motives for being an anti-vax grifter, or the data collection that allows micro-targeting and lets foreign actors target our voting behavior more efficiently? Like, what are those structural things sitting in this system that regulations can get at to change that incentive structure? And that's also regulation.

Elizabeth: [00:24:58] Totally, totally. Like don't let political advertisements be bought with foreign currency.

Taylor: [00:25:03] Right. You'd think that would have been one we could have done right from the start. But, you know, sometimes it takes a couple of runs to learn this.

Elizabeth: [00:25:10] You know, eventually you get there.

Taylor: [00:25:12] You do on some things. Yeah, some things.

Elizabeth: [00:25:14] When we're thinking about this case of disinfo specifically, are there approaches that you think are particularly promising?

Taylor: [00:25:24] So the two broad approaches that have emerged for harmful speech and disinformation are ex post takedown regimes - for things we might not want, where we as regulators or governments say, okay, we want platforms to take those down - or these ex ante structural incentives that I was talking about. And that bucket of things, I think, is far more effective. And our government made the mistake of going with the former in their initial proposal. And I think that's being backpedaled pretty quickly, and we're now going to see a different approach that focuses much more on these structural things, on the structural stuff. There's a few things on disinformation that I think work really well. The EU has essentially put a risk assessment at the core of it. So they've said that you, platforms, have a huge amount of power over what information is disseminated through society. You have all the data on how it's circulating and how people are consuming it and what your algorithms are amplifying or not. And you also, as a smart company that engages in this space, can probably do some analysis of what could be causing more harm or not, and whether your product, when you deploy it, is amplifying really bad things or positive things.

Elizabeth: [00:26:49] Mm hmm.

Taylor: [00:26:49] And we are going to put some of that onus on you to do that risk assessment.

Elizabeth: [00:26:54] Right.

Taylor: [00:26:54] And those risk assessment reports are going to be demanded of you if we, as regulators, see undue harm in our society due to a product. So we're not going to hold you accountable for every bad thing that is said. But if we start seeing that anti-vax content is running rampant on your platform and is being driven into groups through algorithmic recommendations or next-up algorithms or whatever, and we flag that, you need to tell us and show us - the onus is now on you to show that your products are not creating that problem, and that you've actually done the thoughtful risk assessment to show that the design decisions you put in place are not causing that problem. And that's a real mentality change, I think, right? The onus is not on the regulator to show it's happening; it's on the company to show it's not.

Elizabeth: [00:27:48] Yeah.

Taylor: [00:27:49] And I think that kind of shift is really important. So that's one that I think is a big deal. I think just data collection limitations are going to be a big one. Like, we've tended to think of privacy modernization law as a separate thing - that we should update our privacy laws, like we are talking about doing in Canada, because it is bad that this data is collected, and that's our starting point, right? That the social harm is that too much data is being collected. And I don't think that really gets at the challenge. The challenge is what that data is being used for. And I think we're starting to see a more nuanced view of that: that things like building a facial recognition algorithm using data about our faces has a different level of risk, and therefore needs to be regulated in a different way, than collecting data about things we've liked on social media. Those are two totally different things, with different types of consent, with different types of implications for the content that circulates through society, and so on and so forth. So I know I'm rambling on a bit, but I think there's just different nuanced approaches in that ex ante approach - governing the structure - that governments are starting to land on. And the stuff is tricky, but they're getting there.

Elizabeth: [00:29:10] Yeah.

Elizabeth: [00:29:11] Yeah, yeah. I think that's really interesting. And the idea of shifting the onus to be on them - show us that you're not making this a bigger problem than it needs to be - I think is really helpful, because it goes back to ideas like technological affordances, right? I did an episode last season about technological affordances, talking about how the actual design of a tool impacts how we choose to use it, and what kind of information is likely going to flow through it, and who's going to be able to get data from it and who's not, and all of those things. It has so many knock-on effects. And so the idea that, as you're designing, you need to be thinking about potential harms, is really useful. It does make me wonder, though: okay, but what if different companies do it better than others? How does the government decide when they need to go do the check? What stops them from unfairly targeting certain companies and not others?

Taylor: [00:30:07] Who has the resources - what companies have the resources to do it, too? I mean, there's just a vastly different capacity level on the company side, for sure.

Elizabeth: [00:30:14] Yeah. And we already talk about, you know, Meta being this complete, like, everything kind of company at this point. Alphabet. You know, regulation like that can be responded to, I guess, reasonably if you are a massive company with tons of resources. If you're the tiny competitor who's probably about to get bought out by one of those? Not so much.

Taylor: [00:30:39] Yeah. Yeah. Although, I mean, the Digital Services Act in the EU has differential responsibilities depending on your size of company and depending on what kind of sector. So, I mean, in some ways they're really smart in how they go about this, but they have all sorts of differentiation, both for size of company, but also for what your tools are used for. So if you're a platform in the health sector, you're going to be regulated much more stringently than if you're a social platform, right? If you're targeting kids, you're going to be regulated differently than if your primary users are adults. If you develop an AI that's used in the policing sector, you're going to be regulated totally differently than if you're doing an AI for, whatever, a chatbot for an app, you know. So I think part of this is getting to different layers of differentiation across all these different categories.

Elizabeth: [00:31:33] Yeah. And I think that really goes back to what you had talked about earlier. Like, there are so many different contexts in which regulation can and cannot be helpful, the types of regulation, the types of approaches. There isn't this one-size-fits-all, like, "Great, we've solved it. Now the internet's going to be beautiful again."

Taylor: [00:31:56] Totally.

Elizabeth: [00:31:56] I say again as if it ever was fully beautiful.

Taylor: [00:32:01] We can have a bit of nostalgia, but best not to be overwrought with it. Yeah.

Elizabeth: [00:32:05] Yeah. But yeah, it's really heartening to think that our governments have gotten to the point where we're able to think about this in a really nuanced way. And despite the fact that it doesn't jibe super well with the way that at least the Canadian government, but many, many Western democracies, have, you know, a bureaucracy that's really divided - you need to know which department or agency something goes to - to know that there are some movements towards dealing with this in a way that's more representative of what the Internet actually is, is good.

Taylor: [00:32:43] It's good. Yeah. I mean, I think we don't want to be overly optimistic about our chances here, but -

Elizabeth: [00:32:52] Yeah. I don't think it's solved. I think we've just taken a baby step in the right direction.

Taylor: [00:32:56] But look, if you were to have said five years ago that most Western democracies were going to be actively thinking through how to broadly define governance of the Internet in totally new and imaginative ways, I don't know, people probably wouldn't have believed you. So we are in a very different place.

Elizabeth: [00:33:17] Everyone would be like, that's rather optimistic.

Taylor: [00:33:19] Exactly. Exactly. So whether we're building the right capacity to keep up with where these companies are going and the new things they're building - that's an open question.

Elizabeth: [00:33:33] Yeah. We'll find out. All right. We are at time. Unfortunately, there's so much more that we could talk about, but we'll have to end it there. I end each of my podcasts with a little pop quiz, though.

Taylor: [00:33:47] Oh, God.

Elizabeth: [00:33:49] So it's easy. You got this. It's a short answer: how do you define what regulation versus self-regulation is?

Taylor: [00:34:00] Regulation is the set of rules that the government imposes on private actors. Self-regulation is the set of rules that private actors decide for themselves - the terms for those using their services.

Elizabeth: [00:34:18] Awesome. That was so concise. Beautiful. Thank you.

[00:34:25] All right. That was our episode on regulation of big tech. I hope you enjoyed it. To learn more about this or any of the other concepts or theories we talked about today, you can check the show notes or head over to Polcommtech.ca. We've got transcripts in English and French completely annotated. This special season on mis- and disinformation is brought to you in part by a grant from the Social Sciences and Humanities Research Council of Canada and the Digital Citizen Initiative.

