
Wonks and War Rooms


Meta Oversight Board with Julie Owono

Julie Owono is the executive director of Internet Sans Frontières (Internet Without Borders) and a fellow at the Berkman Klein Center at Harvard University. She is also an inaugural member of the Meta Oversight Board, an independent group of experts who review appeals of content moderation decisions on Facebook and Instagram, which is the topic of today’s episode.

Elizabeth and Julie talk about the background of the board and the appeals process, how cases are chosen, the possibilities and limits of what can be accomplished with content moderation, and the lessons Julie has learned in her time serving on the board. They also discuss what counts as content moderation and who gets to be involved in that process, the paradox of tolerance, and where the line is drawn when it comes to limiting people's freedom of expression.

Additional resources:


Episode Transcript: Meta Oversight Board with Julie Owono

Read the transcript below or download a copy in the language of your choice:

Elizabeth: [00:00:04] Welcome to Wonks and War Rooms, where political communication theory meets on-the-ground strategy. I'm your host Elizabeth Dubois. I'm an associate professor at the University of Ottawa and fellow at the Berkman Klein Center at Harvard University. My pronouns are she/her. Today we're talking about content moderation, platform governance, and the Meta/Facebook/Instagram Oversight Board. And we've got an Oversight Board member, Julie, here to chat about it. Julie, can you introduce yourself, please?

Julie: [00:00:31] Yes. My name is Julie Owono. I have several hats—I'm leading a nonprofit based in Paris called Internet Sans Frontières, or Internet Without Borders, that works on freedom of expression online. And I'm also a member of the Meta Oversight Board, which we will talk about today. And I'm also an affiliate at the Berkman Klein Center for Internet and Society, researching, for the past three years, the intersection between free expression, harmful content, and how to find democratic solutions to those two issues.

Elizabeth: [00:01:04] That is wonderful. I am so excited to have you as a guest on the podcast today. Most episodes I have a specific topic and I do a little intro on that topic at this point and then we get into the questions. But with this episode, we're doing it a bit differently, because I've done a previous episode on content moderation with a former Googler who did content moderation for them, and I did an episode on platform governance with Taylor Owen, who's a professor at McGill University. So for listeners, if you haven't heard those episodes before, you can go back and have a listen. They definitely help fill out this conversation. And what I'm excited to do today is kind of bring those two different things together and talk to you about what the Oversight Board does, the relationship between content moderation and freedom of expression, how that connects with what Meta (the company) does, and how that impacts us as users. So there's a whole bunch to get into. But I thought I would start off with: can you explain what the Oversight Board is for?

Julie: [00:02:06] Yes, absolutely. So for those who don't know, the Oversight Board comprises independent members from all around the world who come together to make binding decisions on what content Facebook and Instagram should allow or should remove from their platforms, based on respect for freedom of expression, freedom of speech, and respect for other human rights as well. The Board is independent. It's an independent body with people from around the world, as I said. And basically, the Board was set up to give users a unique opportunity to appeal the content moderation decisions that until now were made only by a handful of executives in Silicon Valley or by content moderators in the background. We were created to hold Meta accountable for the biggest challenges, really, in content moderation. And of course, we see it as a complement, not a substitute, for other efforts in this space to hold social media accountable. We, of course, need governments, civil society, tech industry experts, and researchers to play a part. But what we believe is that an independent oversight process can absolutely help produce stronger and more principled decisions than what Meta has been doing so far and anything that Meta can achieve on its own.

Julie: [00:03:42] So, to give you briefly a short history: the Board started taking cases in October 2020. We were created in May 2020, but started taking cases in October of the same year. And we usually look for cases that represent significant and difficult content moderation issues that have the potential not only to affect the individual user concerned by the individual content at stake, but also to affect many more users. And we have taken cases from all over the world—from the United States, from India, from Colombia, from Brazil, from Nigeria—and half of them came from what I personally like to call the "global majority", what others will call "global South countries".

Elizabeth: [00:04:30] Mm hmm.

Julie: [00:04:32] But through these cases, we have examined a range of Facebook community standards and Instagram community guidelines, ranging from hate speech policies to violence and incitement or health misinformation. And yeah, I can say that after two years—over two years now—of doing this process—

Elizabeth: [00:04:53] Mm hmm.

Julie: [00:04:54] There have been plenty of lessons learned. And yes, very happy to share them in detail as we discuss.

Elizabeth: [00:04:59] Thank you. That's a really fantastic overview of what's happening. And one thing that I just kind of want to highlight about what you said is, my understanding is the Oversight Board takes on these cases well after the initial decision was made to leave a piece of content up or take it down.

Julie: [00:05:18] Yes.

Elizabeth: [00:05:19] And then the idea is like, let's take the time to look in depth, even though a lot of the content moderation we think of is very on the go, very fast, very "in the moment," because stuff spreads really fast on the Internet. Right. And so it's this sort of, like, bit of a pause afterwards, to more deeply reflect. Right.

Julie: [00:05:40] Yes, that's it. That's exactly what it's about. It is true in most cases—because, to tell you about the process, that's probably something also very important for your listeners. The standard process is that a user files an appeal—either a user or even Meta itself, because cases can come from those two conduits: the company itself struggling to apply its own rules, and users who are not happy with a moderation decision. Well, when we receive an appeal, the Board has 90 days to publish its decision. It has exceptionally happened that it has taken slightly more for very contentious cases.

Elizabeth: [00:06:26] Mm hmm.

Julie: [00:06:27] But we also have the possibility of a more expedited process. We haven't used that process yet, but it exists. But it is true that, in general, the Board's decision comes after the fact. Nevertheless, with the decisions that we make, as I was saying, we're trying to make sure that they impact not only the individual user, but also future potential cases that resemble the one that we have received. And what I mean is, the binding part of the decision concerns the obligation for Facebook to take down or leave up the individual content. So that's the binding part of our decision. And then we have a non-binding part, which allows us to make what we call "recommendations to the company"—recommendations on the policies of the company. And what is interesting is that although these recommendations are non-binding, the company nevertheless must respond to them. And the responses can range from "No, we will not implement this because it's not implementable, or for X reason," to "We will implement this," or "We will look into the possibility of implementing this." And I think one of the interesting parts of the work of the Oversight Board is precisely that public conversation that happens with a gigantic platform like Meta—well, Facebook and Instagram specifically—having, for the first time, to explain publicly its internal processes: how it makes a decision, and how recommendations are technically feasible or not.

Julie: [00:08:17] This is, I would say, a big leap forward from where we were just three years ago—and I can speak from experience, having been part of an organization that has made countless recommendations to companies without necessarily having a follow-up, engaging a lot with those platforms without necessarily seeing the effect of that engagement. Well, here we finally see, live, an online discussion on, yes, the possibilities of content moderation and, sometimes, the limits of what is possible to do. And yes, for me, that's where the impact is particularly interesting. Not only does it bring more transparency into content moderation processes, but it also provides accountability, because, as I was saying, there is a binding part of the decision that forces the company to apply whatever the Board says. And finally, I would say it places content moderation not just as an object that only belongs to platforms, but really as a society-wide conversation: where do we want to draw the lines, and how do we want to do it in a manner in which everyone is informed of, yes, the processes and of content moderation in general?

Elizabeth: [00:09:44] Yeah, I really like that point that you're emphasizing about how this creates new transparency, and it also creates a new understanding, really, of what counts as content moderation and who gets to be involved in that process. Thinking about it as a thing that we as users are not just impacted by, but can be involved in to a certain extent. For a long time, conversations around any of the information control, filtering, and prioritization that different social media platforms did went like, well, "That's our secret sauce, that's our black box. We can't let you in. That's going to let all our competitors know; we're never going to be able to make it!" And now we're seeing new approaches that allow for some transparency, allow for some engagement from different kinds of stakeholders, like you're describing. And that means that we can all just have a much better understanding of how our information is showing up on our screens or not, which is useful in a whole bunch of different ways.

Julie: [00:10:43] Absolutely. And what I can say specifically about that level of transparency—which I hope the Oversight Board's processes and decisions bring to this whole debate on social media accountability—is that this was expected. Right? People were waiting for that. And I can just mention one number.

Elizabeth: [00:11:07] Mm hmm.

Julie: [00:11:08] We have received, so far, more than 2 million cases submitted by individuals who would like to see their content reviewed by an external body and to have a decision published on it. 2 million. That is—that is—I cannot even begin to imagine.

Elizabeth: [00:11:28] How do you pick?

Julie: [00:11:29] We usually, as much as possible, try to pick cases that highlight recurring problems on the platform, recurring problems with different community standards. So, I was talking about hate speech earlier. Hate speech is a big source of cases—I mean, content moderated based on hate speech is a big source of appeals—but also bullying and harassment; we've had cases related to that. Dangerous organizations: what organization can you talk about, or not talk about, on platforms? We've had cases, for instance, related to discussing the human rights of the leader of the PKK (Kurdistan Workers' Party), the Kurdish group—its leader being detained by the Turkish authorities in conditions that have been criticized by the UN. Can you discuss that on Facebook? Or can you discuss the Taliban and what they're doing in Afghanistan without being seen as someone who is praising those organizations? So, a very different range of policies. So that's, on the one hand, one of the criteria.

Julie: [00:12:45] The second criterion is, of course, how it can affect not only the individual user, but also potentially other users and potentially other cases—I mean, avoiding other cases, I should probably say, to be more precise. And also cases that pose significant questions on the tension around respecting fundamental human rights, because that is something that Meta has committed to do. Meta has a whole publicly published policy on how it will try—well, on how it will respect the human rights of its users while offering its services, in line with the principles outlined in the United Nations Guiding Principles on Business and Human Rights—which explain that, of course, the parties primarily responsible for upholding and protecting human rights are governments, are states.

Julie: [00:13:44] That is their responsibility. But we know that other actors can inadvertently negatively impact human rights. And those actors are companies, because many companies, not only in the tech sector but really in general, do have an impact on society and on human rights. And that, of course, is even more visible with social media platforms: we've seen how they can impact elections, how they can impact health responses, and a lot of other issues. So we also try to take cases that will give the Board the opportunity to come in with a proposed solution for resolving a potential tension between protecting and upholding certain human rights while at the same time allowing people to continue to express themselves. So this is the way we try to pick cases. It's not always easy. This is the task of a special committee that the Oversight Board has set up, called the Case Selection Committee. And yes, we try as much as possible to have diversity: geographic diversity of cases; policy diversity, so cases touching on different community standards; and also different human rights at stake in each case. Yes.

Elizabeth: [00:15:05] That makes sense when you can't take all of it on. You take those little bite-sized pieces and do the targeted ones as well as you can, so that you can build out over time.

Elizabeth: [00:15:16] So I want to kind of switch gears a little bit now that we've got a good basic understanding of what the Oversight Board does and what kinds of choices are being made, to talk about content moderation as it relates to this idea of freedom of expression. And so often the conversation kind of ends there. But the next line is that it's balanced against freedom from harm and hate and whatever else we want to protect, in terms of what we think constitutes a space in which people are going to want to engage. Right? And when you're a government, you're thinking about the rights of your citizens. When you're a company like Facebook, you have to think a little bit about those rights, but also a little bit like, what do we want our users to experience so that they stay on our platform? And I imagine at the Oversight Board, you're thinking a bit about that. So what I like to do when I'm thinking about freedom of expression, and how it's balanced against potential harms, is bring up the idea of tolerance when I'm working with students in my Political Communication classes. And we talk about how, you know, you don't need to agree with all of the ideas, but you need to be tolerant of a wide variety of ideas in order to have a strong democratic system. But there's this paradox that philosopher Karl Popper pointed out, the Paradox of Tolerance, which says that in order to maintain a tolerant society, you have to be intolerant of intolerance. And that becomes problematic because then you're just back to that point of, well, where do we draw the line of what's too much and what isn't?

Julie: [00:16:51] Yeah. Yeah.

Elizabeth: [00:16:52] Do you see this connected to the kinds of questions of content moderation that you are grappling with?

Julie: [00:16:59] I have to say, first of all, that the thinking of philosophers and thinkers like Karl Popper has really influenced my understanding of freedom and freedom of expression, particularly in a liberal society. But I have to say that the role of the Oversight Board is not necessarily to tell a company like Meta what it should tolerate or not. Rather, what we think our added value is, is helping the company have clear internal processes, clear rules, public rules, principles that the company upholds—that allow it to strike the necessary balance between sometimes very conflicting rights, right? The right to health versus the right to free expression, the right to life versus the right to free expression, et cetera. And striking the balance in a way that is respectful of international human rights principles. And what are those principles? Let's remind ourselves of the International Covenant on Civil and Political Rights, which was adopted in the framework of the United Nations and which virtually all member states of the United Nations have agreed to respect. Well, that Covenant talks about a lot of rights, but specifically freedom of expression. And what does this covenant say? The covenant says states should not interfere with people's right to express themselves, and interference should happen only very exceptionally, in a manner that is strictly defined—precisely, in a manner that is legal. That means there should be rules that clearly outline what you are allowed to do or not. In a manner that is proportionate: that is to say, an interference should not have disproportionate effects on the right to free expression, or on the other human rights of the citizen, the individual concerned.

Julie: [00:19:23] And last but not least, the measure of interference should be necessary to achieve the legitimate aim that is being pursued. Right. This is the framework of analysis of the Facebook Oversight Board—the Meta Oversight Board. This is what you will find in all our decisions. Are the rules clear? Are the rules in this case clear? Was the user—was the average user, and this specific user in particular—in a position to know exactly what is allowed or not?

Julie: [00:19:51] Secondly, if the content was taken down, was the censorship applied by Facebook proportionate? Should we only take down? Are there other ways that can make things a little bit more proportionate? Was this interference applied by Facebook necessary? Was there another way to achieve the goal that you wanted to achieve? If the goal was to protect the life of a certain group, was there another way you could have done that? These are questions that we are grappling with, and these are answers that we try to bring to the company and also to the general public. And what we have noticed is that by doing that, we are offering a framework for resolving this very complex issue—because before, we didn't have any framework; before, it was, "Oh, do I agree with what X, Y, Z said?" And when I say we, I mean executives in Silicon Valley and other content moderators. There were, of course, internal guidelines. But as many Meta Oversight Board decisions show, many of those guidelines would benefit from greater clarity.

Julie: [00:21:07] And this is something that is also admitted by Meta itself, and that's precisely why they have requested the help of the Meta Oversight Board. So this framework that we're trying to, you know, normalize in the industry is how we try to resolve this tension of: yes, where do we draw the line? Should we even draw it? And if we do, how should we do it in a manner that respects fundamental rights, that respects the principles that sustain the liberal democracies we have been trying to build and to safeguard for the past, let's say, three, four hundred years, more or less—the modern liberal democracies, of course, I'm talking about here. So, yes, this is how—and it's not far from what Popper himself wrote. He did write that there should be interferences, but that it's not for us individuals to decide. I'm really badly paraphrasing, and I apologize to Popper especially, but he did also mention the need for the rule of law, the need for institutions to make sure that this balance is safeguarded.

Julie: [00:22:19] And all of this boils down to checks and balances. Everyone, every corporation, every individual, every group that has a little bit of power, must have checks and balances in different forms, whether these are processes, whether these are other institutions. There should be ways to limit the power held by a very small group of individuals, because no matter your good intentions, you could be the greatest wise person in the world. But if you have the whole power of deciding what is wise and what is not wise, you will abuse it. That—

Elizabeth: [00:22:58] Yeah.

Julie: [00:22:58] Is obvious. And so in democratic societies, it's important to have those checks and balances. And this is, I would say, probably what the Board is trying to contribute to: what are the processes that will ensure that those guardrails and checks and balances exist in content moderation?

Elizabeth: [00:23:14] Yeah, that all makes a lot of sense to me. And I think you're right about this idea that any amount of power in the hands of a small group of people can be abused, and probably eventually will be abused, if only because that small group of people are only going to have one perspective, or very few perspectives, from which they see the whole world. And they can't possibly know all of the impacts immediately. They need to have processes in place to help them understand what all of those knock-on effects are.

Elizabeth: [00:23:46] It makes me wonder about the idea of these online spaces and what we should expect of them in terms of what we should be allowed to post and share and what we shouldn't. So one way of thinking of this question is like, should all spaces online—or let's take social media, to narrow it a little bit—should they all be tolerant? Do they need to live up to that sort of democratic ideal? Are there certain spaces where it's okay that they are meant for very particular kinds of opinions to be shared, and other ideas might get pushed away and moderated out? There's a bunch of tricky questions in there. So I'll just leave that little tiny thing there for you to just solve.

Julie: [00:24:33] Wow. I wish I could. But no. What I will say is, even if we had—or despite the fact that we have those platforms that you're talking about, the platforms that allow certain types of ideas that probably would not be tolerated in other spaces—even in those spaces, I think there is a thirst, in general, for rules and for clarity and for process. And this is what we see in response to lots of—for instance, in the United States, some states want to force companies to accept every content, no matter what, or at least not to moderate based on political affiliation, whatever that means. But even for those proponents of unfettered free expression and unfettered moderation—or, rather, an absence of moderation—when some of these proponents of unfettered freedom come to get involved in the business of actually managing a social media platform... Oh my God. I think it's a great exercise in humility. For me, it has been, ever since I've been able to have a deeper look into how social media such as Facebook or Instagram actually operate. It is daunting, accommodating the opinions and expression of thousands, millions of users. Even among users who are probably sharing some of the same ideas, there will be disagreement. So how do you adjudicate those disagreements in a way that allows the platform to continue to thrive? We've seen with Twitter that there is no easy answer, that you cannot do without clear rules, without clear processes in place. Otherwise there are consequences: first of all—and this is what's interesting for platforms—advertisers do not want to be associated with a messy platform.

Elizabeth: [00:26:51] Mm hmm.

Julie: [00:26:52] That already is probably an argument for making sure you have those rules in place. But even beyond that, even if advertisers are not there anymore, there are still—I mean, there are consequences for society, and society has, at several times, expressed its willingness not to experience certain things online. When you take very specific content such as child abuse, no one—no one—wants to see his or her or their timeline filled with child abuse material. Very few. I mean, unfortunately, of course, there are criminals, but the majority of Internet users don't want to see that. You also have some people who don't want to see nudity. Whether we agree or not, it is a reality. What do you do? Are you going to tell them, "Don't use social media platforms, don't use the internet at all"?

Elizabeth: [00:27:47] Mm hmm.

Julie: [00:27:47] Because—yeah, so all this to say: no matter what type of platform, no matter what type of opinions you allow on your platforms, a very minimum level of moderation will be required. Not just because advertisers do not want to be associated with some types of content, but more broadly because society, increasingly, is debating how much impact it wants the online world to have on the offline world.

Elizabeth: [00:28:22] Yeah. And I would add to that and say most of these tools we use, these Internet-enabled tools—social media being the kind of main example we're talking about today—their base function is to sort and filter and organize information. And we don't always think of that as content moderation. We tend to think of, you know, "Okay, the way that they prioritize content, so that stuff that I might want to click on and favourite shows up in my timeline or news feed before stuff that I'm not going to care about." Or in search, have a search result that's useful to me pop up first, right? We think about that sort of filtering and organizing work separately from a lot of the content moderation work, because the content moderation work, I think, ends up being really connected to our view of when human rights might be violated, or what we think is socially acceptable to have there, rather than the practical functionality that we see.

Elizabeth: [00:29:27] And so one of the things that I just like to add to the content moderation conversation is, it doesn't matter whether or not we think content should be moderated. The point of search and social media is, in fact, to moderate, in terms of curation. And so, I like to start from that premise and be like, "Okay, so we need transparency, we need to understand how it's happening, and then we need to layer on all of those things you were talking about"—about the things that society kind of roughly agrees on already or even where we want to go, the kind of social progress we might want to see or not.

Julie: [00:30:02] Right, no, absolutely. And you know, it helps me come in with some very important lessons that the Board experience has taught me, and I think also my colleagues, so far. There are five key lessons, really, that I would like to share.

Elizabeth: [00:30:23] Please. Let's hear 'em.

Julie: [00:30:24] Yes. The first one is, of course, governance. Any social media platform—since we're talking about those, but I would say the tech industry in general; that's very personal, not the Board's opinion—any social media platform needs to have bodies that are separate from the company. We are separate from Meta, for instance, which allows us to independently assess the existing content moderation processes and also separates us from all the commercial and PR interests, which are absolutely of no concern to us. If we make a decision that's going to be bad for the business of the company—that's not, this is not something we think about. We think about human rights by design.

Julie: [00:31:11] The second lesson is transparency. And you rightly reminded us of that: in order to trust a company's decisions, people need to understand how those decisions are made. And so far, tech platforms have operated in a very opaque way, and that has fuelled, you know, theories that, "Oh, some platforms might be more liberal or, by default, against conservative viewpoints." So to avoid that, it's important to be transparent. Third, very important lesson: diversity. Most Meta users are located outside of North America and Europe. When you think about that, it helps you put in perspective all the reporting that we're receiving when we talk about social media harms. Because most of the reporting tends to focus a lot on, you know, those two regions of the world—which are extremely important, absolutely, but which are not disconnected from what's happening elsewhere. And we tend to forget that. We tend to think that whatever happens in the US happens in the US. No, it doesn't. Even if—

Elizabeth: [00:32:22] Mm hmm.

Julie: [00:32:22] You take the war in Ukraine—a very good example. The disinformation operations that have, you know, flooded this conflict, well, they go beyond the borders of Ukraine. They go beyond the borders of the EU. They go to Africa. I think this conflict is a very interesting case study of how interdependent we are in the information age. I use that example to explain the importance of diversity going beyond just North America and Europe. And this is really what is at the heart of the DNA of the Board. I myself come from Africa originally—I'm Cameroonian, but I'm also French. I've lived in France, and now I live in the US. I have colleagues who are in Pakistan, in Taiwan, in Australia, etc., in the United States and other places.

Julie: [00:33:17] The fourth key lesson learned is that we need principles. People need to know that, as a principle, everyone is subject to the same rules. That is an idea that was not—that is still not—common, because there's still this assumption that some rules are applied to some groups and not to certain other groups. But once you have those principles clearly outlined, and once you have the other three—diversity, transparency, and governance—you build an environment of trust, an environment where people are more likely to respect the rules and potentially harmful content will decrease. Last but not least, partnership—a very key lesson. No one can solve this alone. The platforms can't solve this alone. Governments cannot solve this alone by just regulating.

Julie: [00:34:10] And the Board cannot solve this alone by making its decisions. Independent oversight bodies, such as the one that I'm working on, will only have a lasting impact if companies not only listen to the proposals made by those independent oversight bodies based on consultations with different stakeholders—I did mention the number of 10,000 stakeholders with whom we interacted in the course of the past two and a half years since we started taking cases. So the impact of those independent bodies will only last if companies not only listen to our proposals, but actually act on them. And that's another key lesson, and I'll end there.

Julie: [00:34:53] Implementation. We often receive, as Board members, the question: "Okay, you make decisions, but the only binding part is putting the content back up or taking it down. How do you actually make sure your recommendations have an impact?" And that's why we've set up an implementation committee which tracks those recommendations, tracks the responses of the company, and tracks the actual implementation of those recommendations by Meta. All of this in a very public manner, to continue to allow for the public conversation that I was talking about. And so far I can say that Meta has responded to 128 of our 180 recommendations. I think that's a really good ratio. I'm really bad at math, so I wouldn't be able to tell you the figure in terms of performance, but it's an encouraging sign that the company is following up on the conversation. Definitely.

Elizabeth: [00:35:55] Yeah, absolutely. And that summary of those key lessons is really helpful. I mean, there are like 17 more podcast episodes we could do, but we don't have the time for that. I would just say that in the show notes we always add lots of links, so we'll add links to some of the places where you listeners can go check out those implementation statistics and some of the other things we've talked about today. There's so much more to dig into here, but the thing that I'll leave us with is this: when we think about the role of content moderation and the relationship between oversight boards like this [Meta Oversight Board], companies, governments, users, and other stakeholders, I think one of the things we need to remember is that it's going to continue to evolve and change. We're going to keep needing to update our understanding of what that balance is and what the best ways of striking it are, because social norms change, because technology changes, because rules and regulations change. And so rather than thinking of this as an endpoint, it's really part of an ongoing process.

Julie: [00:37:09] The picture you paint is so accurate in terms of how the technology changes. Here we are, really focused on our worries about 2.0 social media platforms.

Elizabeth: [00:37:21] Mm hmm.

Julie: [00:37:22] But, apparently, we're on the brink of a bigger revolution. I think we're already in it. We're seeing generative A.I. becoming increasingly popular and normalized. There are certainly great things about it. But when you think about disinformation content, it—

Elizabeth: [00:37:43] Mm hmm.

Julie: [00:37:44] It does raise a lot of questions. We're also probably on the brink of another revolution in the form of the Metaverse, which will allow more direct interactions, I mean, interactions that will feel even more real than the ones we already have in the 2.0 version of social media. So, yes, changes happen all the time, and that is precisely why we need those principles in place, the things we will never abdicate on. And it's really timely to have this conversation, to set those principles and values now, to have those processes clearly outlined now, and to do this exercise of balancing different, sometimes conflicting human rights. All of this is required right now, so that we can avoid the, yeah, the pitfalls that—

Elizabeth: [00:38:44] Mm hmm.

Julie: [00:38:45] We've seen in the past few years. So, yes.

Elizabeth: [00:38:49] Absolutely. Thank you. That's wonderful. All right. We're coming up to time. I've got my final question, which is a little pop quiz. So it's going to be an easy one for you, though, because today's short answer question is, can you, in just one sentence, summarize what the Oversight Board does?

Julie: [00:39:11] The Oversight Board helps Meta treat its users in a fairer and more transparent manner.

Elizabeth: [00:39:18] That's fantastic. Thank you so much. I really appreciate it.

Julie: [00:39:21] Thank you, Elizabeth.

Elizabeth: [00:39:26] All right. That was our episode on the Oversight Board. We talked about content moderation. We talked about freedom of expression. We talked about finding a balance between freedom of expression and various online harms and human rights threats. It was a full episode, and I hope you enjoyed it. We've got tons of links in our annotated transcripts, available in French and English, and you can check out a variety of resources in the show notes. Head over to for more. Thanks and have a great day.


