
Wonks and War Rooms


AI and Democracy with Seher Shafiq

This week Elizabeth talks with Seher Shafiq, a program manager at the Mozilla Foundation and expert in civic engagement, particularly in the context of elections and engaging marginalized people in the vote. They discuss how AI is impacting Canadian elections, civic engagement, and democracy. They look at helpful and not so helpful uses of AI tools in elections and chat about ways these tools could be used to increase voter engagement. Seher concludes the episode with suggestions for how we can deal with the lack of trust in AI, including an emphasis on digital literacy. 

Side note: We are collecting examples of the podcast's impact and we'd love to hear from you. Could you take two minutes to fill out this short questionnaire with feedback on the podcast?

Additional Resources:


Episode Transcript: AI and Democracy with Seher Shafiq

Read the transcript below or download a copy in the language of your choice:

Elizabeth Dubois: [00:00:04] Welcome to Wonks and War Rooms where political communication theory meets on the ground strategy. I'm your host, Elizabeth Dubois. I'm an Associate Professor and University Research Chair in politics, communication and technology at the University of Ottawa. My pronouns are she/her. Today we're talking about AI and elections with Seher Shafiq. Before we get into it, though, one more call out: If you've got any feedback for us about the impacts and outcomes of the podcast, if you've used it in your work or your volunteer life in some way, please let us know. We've got a link to a short survey in the show notes, or you can email us. All right, let's get into the episode. Seher, can you introduce yourself please?

Seher Shafiq: [00:00:51] Hey, thanks for having me. My name is Seher Shafiq. I'm a program manager at the Mozilla Foundation, and I have a deep background in civic engagement with a focus on elections and voter engagement for those that are typically underrepresented in civic life.

Elizabeth Dubois: [00:01:05] Thank you so much for being here. I am very excited to talk about the various ways AI is impacting our elections and our civic engagement, our democracy. I'm going to start off with- Well, normally it's a pretty academic definition. I'm actually going to Wikipedia for this first one, "Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems, as opposed to the natural intelligence of living beings." That's the first sentence in the Artificial Intelligence Wikipedia English page, and that includes a lot of different things. That's really broad. In some of the work that I've done- so there's a report that myself and Michelle Bartleman, who's a PhD student at uOttawa, wrote, looking at AI in politics in Canada [Consult: The Political Uses of AI in Canada]. We talk about there needing to be these core components of data, computing, and decision making as our way of framing what counts as artificial intelligence. But the reality is there's a ton of different definitions, and it's kind of changing as the technology evolves. So I want to start out with some level setting. What do we think is AI or at least what are the AI tools or instances that matter most in election contexts? Are there particular kinds that jump to mind for you?

Seher Shafiq: [00:02:21] I think, in my view- and I'm not a technical expert, so keep that in mind. But from where I stand, AI is anything that takes data and then spits out a statistical average of that data. The first thing that comes to mind for everybody is ChatGPT. How it can be used in elections to generate campaign content really quickly. Social media content, voter information content for election management bodies or community based organizations that are trying to reach voters or campaign offices that are trying to reach voters. The other thing that comes to mind is synthetic audio and images. So robocalls that impersonate a candidate or any community leader [Consult: The impact of generative AI in a global election year, a recent article that provides examples of AI candidate impersonations]. Even generated images that could be used, potentially, to deceive voters or to show an image that doesn't actually exist for the purposes of furthering a political campaign.

Seher Shafiq: [00:03:11] I also think about augmented analytics, which is something that you mentioned in your report [The Political Uses of AI in Canada] and how even before we had AI helping us to produce analytics about voter behavior and election prediction results, even at that time, that impacted how voters perceived certain races and their voting behaviors. And so what does it look like now that we can get those analytics even quicker and even earlier on in a race? And how does that impact voter behavior? And also what is the data going into those analytics and what's represented there? And are those analytics a true representation of the sentiment that voters are having in a particular election cycle? That's another big one.

Seher Shafiq: [00:03:53] And then kind of along the lines of ChatGPT, which I mentioned earlier, how chatbots [For example, City of Markham’s virtual assistant] can be used by campaign offices and strategies to share information with voters, but also how they can be used by community based organizations and election management bodies to share voter information and make voting more accessible. So there's a lot there, both good and bad. But the short of it is this is very impactful technology that, if harnessed well, can have an amazing impact on engaging voters, especially those that are underrepresented. But if mismanaged and manipulated, it could increase the distrust that a lot of people have in our democracy and elections here in Canada. So there's a lot there.

Elizabeth Dubois: [00:04:39] There's so much there. Thank you for laying out that map for us. I think you've hit on so many really good examples and I'm excited to dig into them. One thing I wanted to do before we start digging in more is talk a little bit about the idea of generative AI. So you mentioned ChatGPT and, you know, creating synthetic images and audio and those kinds of things. And those all fall into this category of generative AI, sometimes called gen AI. And it's a type of artificial intelligence system that can generate text, images, other media in response to prompts. So it's that ChatGPT, you send it a prompt and it spits something back at you. These models, they don't just generate the content out of nowhere. They've been trained on massive amounts of data that's been collected from the internet, typically. And they learn patterns and the structure within all of that training data. So these models have been taught to look for those patterns and then they look for them. That's that kind of statistical analysis you were talking about in your definition of AI. And then they generate new content that has similar characteristics. So I just wanted to intro that term because it's one that we sometimes hear about in the news, but it's not necessarily explained in greater detail.

Elizabeth Dubois: [00:05:58] All right. So let's dig into what all of this looks like. Obviously there's been a whole big conversation about deepfakes [Consult: The Evolution of Disinformation: A Deepfake Future, a report published by CSIS], this fear of a politician being put in a video to appear as if they said something they never said, or appear as if they've spent time with people that they've never met or they're in some other kind of compromising position. And we've seen this with world leaders all over the place. There's also been examples with celebrities and tons of what's usually framed as pretty negative examples of impersonation with these kind of obviously negative consequences. Do you think that deepfakes are a major threat to elections? Are there ways that we could use these generated videos that would be helpful in elections and in campaigning? Because right now it's really framed as, "any video like that is going to be bad."

Seher Shafiq: [00:06:53] So the perspective that I bring and the experience that I have is working with folks who are already marginalized by many systems, including our democracy and elections. I worked with immigrants and refugees to do voter engagement, and with community based organizations across Canada that focus on populations like homeless people, youth, LGBTQ communities, and women. And the common thread amongst all of these groups is that there's a lack of trust in the system. We know that there's a rise in polarization in Canada and in the world, really, but there's also this weakened trust and this sentiment that, "the system doesn't work for me," and rightfully so, because it doesn't, for a lot of reasons. So when I think about deepfakes and generative AI and other generated videos and images, especially in the context of elections, I think about how even for people who have a high level of civic and digital literacy, there is so much to learn to be able to spot the differences between something that is real and something that is fake. And so you take that and place that burden onto people who already have language barriers, literacy barriers, a distrust of the system. And then during an election cycle, when they already feel like their vote doesn't make a difference, there's no point in engaging. When you layer on top of that something like deepfakes that are impersonating political candidates, or celebrities endorsing a political candidate, or all the dirty games that happen in election cycles, I am so fearful that those people who already feel like there's no point engaging in the system will just give up, because there's no way of knowing what is real and what is not. And right now in Canada, we don't have the infrastructure in community based organizations to provide programming that bridges that gap of civic and digital literacy and brings people up to the level where we can all spot this material and make informed decisions.
Even without deepfakes, do you know how much time it takes to learn about what is happening in an election cycle? And think about people who are in survival mode. People who are working several jobs, raising several children, who have so much mental, physical, and emotional stress in their lives. Who has the time to watch a debate and be informed about all of the political parties, especially when they're not knocking on your door because they have data that says you're less likely to vote? How are those people going to then also engage in this intricate exercise of telling what is real and what is fake? It's very complex. I think it complicates a lot of the barriers that already exist, it makes a bigger gap in terms of digital and civic literacy, and it disproportionately impacts those that are already the most underrepresented in our civic spaces and voting.

Elizabeth Dubois: [00:09:46] Yeah. You know, we think about the role of media and digital literacy. And I did a whole season of the podcast about media and digital literacy [Episode 9 of Season 3: Mapping theories for media and digital literacy]. And yeah, it's great. Like, let's increase media and digital literacy. Let's put effort towards it. But that puts a new burden on people. And as you're saying, the people who are least likely to get targeted with correct information are the ones unlikely to vote, so the major campaigns aren't paying attention to them. And the people who are most likely to already lack trust, or lack the time and energy to figure it out, they're maybe in a new system trying to work within a language that they don't speak. It's just layer upon layer; it's stacked against them. And then we also have seen that these generative AI tools have been used specifically to target communities who speak different languages or are from different cultures and have different cultural backgrounds and expectations. These tools are often getting used specifically to create disinformation campaigns among those underrepresented communities.

Seher Shafiq: [00:10:51] Yeah. I mean, it's like the WhatsApp disinformation campaigns, but on steroids.

Elizabeth Dubois: [00:10:56] [Laughter]  

Seher Shafiq: [00:10:56] When you add in the idea of deepfakes and generated content that literally is fabricated- Like, it's one thing to have a WhatsApp forward where somebody just wrote an essay with capital letters and very alarmist language, and it's convincing enough that it's forwarded. But now imagine that with an image that depicts something that isn't true. The potential for that to spread even faster than disinformation already spreads is huge.

Seher Shafiq: [00:11:23] I also want to talk a bit about augmented analytics, because we were just speaking about how those who are least likely to vote already aren't having political candidates engage with them. I've volunteered for political campaigns and gone through the whole experience: you get a voter list and you go into a neighborhood. There are certain doors that you have to knock on because they're flagged as most likely to vote. And the other ones, you don't waste your time there, because you don't have enough time to go to every door. So why would you? Elections are kind of gamified in a way. It's literally a game. It's a game of how much people power you have that can go physically to the doors of people who will vote, and how well you can do that compared to other political parties. And so when you have augmented analytics and you have even more micro data on who is going to vote, why would you go to a Toronto housing building where there's nobody on that list that's flagged as having voted in the past? Why would you waste your time there? That's already happening. I volunteered for a candidate who still went to those buildings, and every single person whose door we knocked on said, "No one ever comes here. There's no point. You're only here to take my vote, and you're not going to do what's right for our communities." So it can further perpetuate this gamification of political campaigns and campaign strategies and further entrench these inequalities, these inequities that exist in the system. Because campaign strategies are this game of door knocking and pulling the vote. And the fact is, you just can't go door to door to every single person.

Elizabeth Dubois: [00:12:58] Yeah, exactly. You know, unless you're maybe in Yukon, where there's so few people that you could physically drive to every single door. In a downtown Toronto riding or in any other major areas, we aren't going to be able to see campaigns get to every single door. They have to make choices, and they make choices, as you're saying, in this way that really further entrenches the divides between those who are likely to vote and those who are not likely to vote. And that has all of these systemic repercussions. And it's really a problem when you have particular groups of people who have particular needs, who are consistently not being invited into the process. And I think one thing that's really interesting is with AI, usually the conversations about bias and discrimination built into AI tools is at the level of how the tool was trained to do the automation of the analysis and what training data was used. But what you're bringing up, I think is really important in that the way the AI tools are getting used and get incorporated into campaigns, even if the tool itself was designed in a way that was really attentive to having a wide variety of training data, having a wide variety of perspectives involved in building the tool, when it's implemented in our campaign structure, it's still going to be used in a way that amplifies these kinds of divides.

Seher Shafiq: [00:14:29] Yeah. And even when those analytics are used not for campaign strategies but to predict election results, the same thing can happen with these communities. So I think about the 2022 Ontario election, and it had the lowest voter turnout of any Ontario election in history at 43%, and I remember that race. And I remember most people that I know [had] this general sense of there's no point in voting because we know who's going to win the election. And then you have these AI bots like Polly, which predicts outcomes of elections. And I think about- Okay, let's say Polly had predicted that Doug Ford was going to win, right? Polly is analyzing public opinion, public sentiment. It's analyzing who's tweeting about what, what language they're using. Of course, it's a very intricate process, but it's only inputting data that exists online. It's only inputting data from people that are engaging in conversations about the election. What about the people in those TCHC buildings [Toronto Community Housing] that I mentioned, who are talking about this with their friends and just know that there's no point in engaging because the election is already won?

Elizabeth Dubois: [00:15:41] Yeah. Or who are talking about it online in a language that Polly has not been trained on particularly well. And so the data maybe gets ignored because it's not in a major language, and so it seems like it's such a small proportion. But if you have a whole bunch of small proportion languages, you eventually miss out an important chunk of that population.

Seher Shafiq: [00:16:04] Yeah, I mean, it's language, but it's also whether they're engaging at all online and where they're engaging. And so the twisted part here is that it could very well be- and actually we have the numbers to show it. It could very well be that the majority of Ontarians did not want to vote for Doug Ford, but that data was not captured. And that sentiment was not reflected in all of the media interviews that said, "Here's where the party stands. Here's the popularity of each party." And so there is deceptive messaging going out to the public, which then led people to feel like their vote doesn't matter. There's no point.

Elizabeth Dubois: [00:16:43] Yeah. And that idea of polling during a campaign impacting the choices people are going to make, that's not new. But I think one thing that is new about how these AI tools are being used to try and do, essentially a version of polling is that they deal with so much data and they get presented as this, "Wow, look how amazing this technology is. It's a black box, but it's so cool and you should just trust it." And then there's news reports using information from those systems. And we don't even understand necessarily how the systems work, who was missed out in the calculations that were being made. And yet they kind of hold more power than a traditional phone poll because they're new and technology-driven, and we just don't understand them well enough to critique them.

Seher Shafiq: [00:17:34] Yeah. And that's what happens when you have shiny AI that there's a lot of hype around it. But who is really sitting down and unpacking that black box and shining a light in it and seeing what's inside? And that's a lot of the work that Mozilla does. We recently put out a report about the Common Crawl corpus, which is the database that is used by a lot of large language models to train them, and all of the inherent biases that are in there. And I know that's a bit different from the Polly example that we're talking about right now, but it's a similar concept where if you open up the black box and really shine a light inside, what are you going to find? And how does that [impact] voter behavior?

Elizabeth Dubois: [00:18:13] Absolutely. And we'll link to any publicly available information you guys have on that. But I want to change gears slightly, because we've been really hating on AI so far. And I think we've done a very good job of highlighting a lot of the problems. But there's also potential benefits, right? You mentioned kind of in the intro there, some of the ways that it could be used for good. Let's talk about some of those. What are some of the ways you see that these kinds of tools might be used to invite people into their political system, and get more people voting from those underrepresented groups?

Seher Shafiq: [00:18:46] Yeah. I mean, there's so much potential to use this technology for good as well. I'll start by talking about how people end up voting. So someone doesn't just land in Canada or turn 18 years old and then become civically engaged and want to vote in every single election for the rest of their life. There's a long journey to get there, and that involves feeling like you belong in Canada, feeling like you can trust the system, engaging in civic life and having a positive experience, and then deepening that engagement to the point where you eventually vote. And then later on in that spectrum of engagement, you maybe run for office, or you do more than just voting. And so, especially with marginalized communities, a lot of them don't see themselves represented in the political system. And we now have early academic research showing that when that happens, those communities are less likely to engage. So when we talk about deepfakes and generated content, we now have the power to very quickly produce material that resonates with diverse communities. You can take voter information materials, like, let's say it's Elections Canada, for example. They should do this.

Elizabeth Dubois: [00:20:04] [Laughter]

Seher Shafiq: [00:20:04] Hope they're listening. So they could take voter information material and very quickly generate culturally adapted materials that are then shared with different communities. And community based organizations and nonprofits can do the same thing.

Elizabeth Dubois: [00:20:19] Yeah, absolutely. And that kind of use of these tools, I think a lot of organizations, particularly an organization like Elections Canada, where trust in their system is of the utmost importance. They're hesitant to start making use of these kinds of tools for fear of backlash. Right. And so one of my questions then is how do we signal appropriate use and how do we ensure that it's appropriate use of these tools in order to design those culturally appropriate pieces of content and messaging, for example? So the case that comes to mind that really, I think for me, highlights some of the issues here is [when] there was a candidate running for mayor in New York City, I think, in their last mayoral election, who did not speak Spanish, but had a video of himself speaking Spanish and delivering the same messages that he had been delivering in English. So it wasn't a matter of like, "Oh, you're saying one thing to one community and something else to a different community." But on the one hand, it was reaching a population of folks who probably very much appreciated having this content delivered in their first language, but it also made it seem like he actually spoke Spanish and therefore was more connected to that community. And the major thing that I think is a problem is he didn't disclose that this was AI generated and that they had used this AI tool. So I think that disclosure is a core component of it. What do you think are the other core components of making sure that these tools are used in a way that is still trustworthy and is ethical, I guess?

Seher Shafiq: [00:21:57] Yeah. So it's so interesting that you brought up a political candidate, and I'm here thinking about voter information from Elections Canada, because that's the lens that I look at it from. But you're right, there have to be some guardrails around that as well. Huge potential to harness this technology for good by benevolent actors, if you will. Like electoral management bodies, community organizations to reach those audiences. But for sure, there are so many issues with the example that you just gave. The fact that he doesn't speak Spanish, the non-disclosure aspect of it, and then also just, communities aren't just their language. Like, you can't just take a video of a candidate and have them speak Spanish and convince those communities that this candidate is there for you. The communities will see right through that. But to answer your question, I think there need to be- In this era where it's so easy and quick to produce this material, and the potential for harm is so, so high, there need to be some updates to political campaign rules. There has to be some sort of a disclosure. There have to be some rules around the level of "fakeness," is the word that comes to mind. You can't just fabricate things that literally do not exist and then use that as campaign material. There's an ethics element to that. And I don't know what that looks like, but there have to be some updates, and really quickly too, because this technology is advancing so quickly and it's so accessible to everybody. There have to be some updates to campaign rules that prevent this mass deception of people who are already vulnerable to being manipulated and deceived by misinformation and intentional disinformation. So I don't know what that answer is, but...

Elizabeth Dubois: [00:23:40] Yeah, I think we're all kind of going to be working towards that. We're going to be setting some norms. And I actually think your mention of electoral management bodies like Elections Canada, for example, or civil society groups that are not affiliated with a specific political party, those kinds of groups can do some standard-setting, lead-by-example approaches. They could start using these tools in a transparent way that is clearly explained and understood and does it well, so that it does engage groups that aren't normally engaged, but not in a way that is deceitful the way the disinfo[rmation] campaigns are.

Seher Shafiq: [00:24:21] Yeah, especially community based organizations. When I worked with them, there was such an appetite for voter information and for us to come in and do presentations with their clients in a way that really was adapted to that demographic. And just imagine how much more could be done if some of that content was easily generated, easily customized, more easily accessible. Community based organizations could save a lot of money by harnessing some of this technology. Not fully giving it over to AI, but just using it as a tool, without necessarily having to hire more staff.

Elizabeth Dubois: [00:24:58] Yeah, absolutely. I think that there are tons of opportunities. I think one thing that I always try and mention at this point in a conversation like this is a lot of these tools seem like they're free because they're still training the tools, and the data you put in gets used. So blanket rule: don't upload personal data as you're trying to use these tools to inform your campaign strategy, your approach, that sort of thing. But definitely there are ways to experiment with these tools that could be really, really helpful.

Seher Shafiq: [00:25:30] For sure.

Elizabeth Dubois: [00:25:31] All right. One of the themes of the podcast this season has been how personal influence in politics is shifting with technology. And so, you know, when I say personal influence, I'm harking back to political communication theory, the idea of opinion leaders, anybody listening who doesn't know what that is, you can listen to our previous episodes [Consult: The Two-Step Flow and Opinion Leaders with Nick Switalski and Personal Influence in Politics], which we'll link to in the show notes. But the basic gist of it is I've been really interested in how the role of people sharing information with their friends and family, as they influence their opinions, behaviors, and attitudes, is changing with technology. So we had an episode on the one step flow of communication, looking at things like augmented analytics and micro-targeting, and how data about people can be used to kind of target them in a way that means that maybe their friends and family don't need to be the "targeters," right? You don't need as much of the friends and family role because you can reach them directly, which is questionable whether or not that actually works. I'm wondering if you see ways that these generative AI tools that we've been talking about might be replacing or augmenting or changing what that kind of personal influence, social pressure, social support looks like. Can it do some of the job that friends and family do in convincing people to vote, or to care about climate change, or whatever other political thing is on the table?

Seher Shafiq: [00:26:56] I think there's a bit of a spectrum in the level to which those influencers can impact views and behavior. So if you look at TikTok and all of the content that's out there about so many niche topics that you know are fed to you based on your interests, [it's] very convincing. I learned so much on TikTok. I learned so much about different views, or I learned deeper about certain things that I'm passionate about. And that influences me because the person speaking to me is someone who looks like me, it's someone who talks like me, it's someone that resonates with me and I get it. So yeah, to some extent there's an information, educational aspect of it that does work. When it comes to something like voting, I don't think that online AI generated material can influence something like taking half an hour out of your day to leave work, go to the voting station. Voting only takes a few minutes. It's not even about that. But to actually physically go and do something, I don't think that AI is going to be as influential as people, and the reason is because there's no trust. Or not [that] there's no trust, but it's certainly not the same level [of] trust as if your sister was like, "Hey, let's grab a coffee and vote on our way to work today."

Elizabeth Dubois: [00:28:14] Yeah. And I can even imagine an information environment as we learn how these tools are going to be integrated and how we negotiate what we think is okay and what we think isn't as a society. I can even imagine a situation where we become even more dependent on our friends and family and colleagues because, as you described earlier, that information environment can just get flooded with so much information. Some of it is disinfo, some of it isn't, but not all of it is relevant. And who do we turn to? What do we trust? If people are losing trust in news media already, people are losing trust in politicians and political parties, in governments. I can imagine a world where we start looking to our friends and family even more, to try and get a read on whether or not we are going to take on board some message that we see online.

Seher Shafiq: [00:29:07] Yeah. And even when you think about political campaigns, there's a reason that there's so much effort put into door to door knocking. It's because that is the thing that we know will actually influence somebody to physically go and vote. Campaigns have the ability to reach all of these people online and in a lot of other ways, even not online, even in tangible ways, with fliers and stuff. But it's proven that when you go door to door, it's more likely that the person will actually go and vote. And so, yeah, there's something about human connection that will never, ever be replaced. And I think influence is one of those things.

Elizabeth Dubois: [00:29:40] Yeah. And you know, what you're saying really resonates with what Hamish Marshall, who was on the episode on the one step flow [of communication], was saying. He's an experienced conservative campaign manager and digital strategist and that's exactly in line with what he was saying. We could talk about this all for so much longer, but we're running out of time. And so before we end, I want to make sure we have a little bit of time to talk about what we do next. Right? So we have this information environment where we know AI tools are being used in elections. Who's involved? How do we deal with it? What do we do about this lack of trust in information in elections? How do we ensure that those who are most at risk in a situation like this are not further marginalized and put into even more risky positions? Do you have any solutions for us?

Seher Shafiq: [00:30:32] I have some thoughts. I think, first, the digital literacy piece is huge and, like I said, it impacts those that are already marginalized the most. And so we should fund community based organizations to do digital literacy and civic engagement work, especially during election cycles. Community based organizations know their demographics the best. Community leaders know how to talk to their communities the best. We should support them in bridging that gap. I think that political campaign rules, like we talked about earlier, need to be updated so that producing doctored content that is intended to deceive voters is not allowed. Because the potential for harm is too high for us not to adjust political campaign rules to address what's now available to campaigns. And then lastly, I think social media platforms have a level of responsibility to prevent the dissemination of misinformation and disinformation, which becomes harder when they've all axed their trust and safety teams [Consult: Tech layoffs ravage the teams that fight online misinformation and hate speech]. So there's also that. We could do a whole podcast episode on that. And then as part of that, there's a government regulation piece as well, of governments pressuring social media companies to make sure that they are behaving responsibly and not propagating the spread of misinformation and disinformation. So those are three things to start.

Elizabeth Dubois: [00:31:52] Those are great, and I really love that those three things to start, which are all fairly big things, really paint the picture of how interdependent all of this is. We need a lot of different people to be involved in solving this kind of problem, and generating our new norms and rules and regulations around how we're going to deal with this technology being embedded in our campaigns.

Seher Shafiq: [00:32:15] Yeah, it's very complex.

Elizabeth Dubois: [00:32:17] It is. All right, so final question. It's the typical pop quiz that we have on the podcast. So we've spent all of this time talking about AI and elections. Can you give me your kind of one liner on how you would define artificial intelligence?

Seher Shafiq: [00:32:37] I think artificial intelligence is anything that generates new content based on the data that's been inputted into a machine or an algorithm.

Elizabeth Dubois: [00:32:47] Yeah, I think that's great. I think that's a definition that works really well for generative AI. I would just say that we could expand it a little bit. It doesn't necessarily need to generate new content. It might generate new insights or other things. If it's like augmented analytics tools for example.

Elizabeth Dubois: [00:33:05] Wonderful. Well, thank you so much for chatting with me. This was a great conversation. All right. That was our episode on AI and elections. I hope you enjoyed it. As always, we've got links to a bunch of resources in the show notes and in the annotated transcripts that we've got available on our website. It's available in English and French, so head on over. I also want to acknowledge that I am recording from the traditional and unceded territory of the Algonquin people, and I want to pay respect to the Algonquin people, acknowledging their long-standing relationship with this unceded territory. Thanks for listening.


