Adam Watson (00:00)
Welcome back to “Simplifying the State,” the podcast where we break down politics so you don’t have to try to figure out why the French government collapsed in less than 24 hours after it was created. As always, I’m Adam Watson.
Nicholas Perrin (00:12)
I’m Nicholas Perrin.
Drew (00:13)
And I’m Drew Garfinkel.
Adam Watson (00:15)
All right, now before we start, if you would be so kind as to rate us and follow the podcast wherever you’re listening, as well as share it with anyone you think would enjoy it, such as friends, family, or just a random person you see at Kaldi’s. Also, follow us at “Simplifying the State” on Instagram. We always post when we drop a new episode, as well as clips from episodes we think you’ll like.
Okay, so today’s topic we’re going to be talking about is AI and how it can shape and influence politics — and how it will impact politics in the United States. So, I think a lot of people have heard of Sora, right?
Nicholas Perrin (00:54)
Yeah.
Drew (00:54)
Yeah, I’ll show that to you.
Adam Watson (00:55)
Yeah. So, how do you guys think stuff like Sora is going to impact politics in the United States or government in the United States?
Nicholas Perrin (01:05)
Well, deepfakes have already been created that have been able to trick people. Even in the last presidential election, there were AI-generated images. One I saw was Harris giving a speech with a lot of people in an audience who were all wearing red, and there was a Soviet flag visible.
Those could only trick, you know, moms or whatever — like “Facebook mom” types. But with Sora too, it’s so difficult to tell what’s real and what’s fake. Deepfakes could become even scarier than they already are.
Drew (01:31)
Facebook moms, yeah.
Yeah, to answer your question, I’d say not as much as we think will happen because of this — in politics specifically. Obviously, people are going to lose jobs because of this, but politically, I don’t think it’ll be as much as we think.
Now, I mean, in the future, we don’t know what this technology will look like, but right now, you can still tell. And if you’re going to fall for it, like, you’re a Facebook mom and you’re already falling for these AI images. You guys have seen those AI images, right? It’s like, clearly fake. You’re going to fall for it anyway. So, these people who are very susceptible to it are going to be even more so.
Adam Watson (02:29)
Yeah, I mean, before Sora, I would have agreed, but I think Sora has really kind of changed how I feel about that. It’s still in its early testing phase, only available in an invite-only system, and not available to the general public, so I’m not even sure it’s reached its full capabilities yet.
But it’s going to just be constantly evolving. And I mean, with one of them — I saw the watermark and then I realized — but without the watermark, for a little bit, I genuinely thought it was real.
If the premise of what it was and what it was saying wasn’t so ridiculously impossible — like JFK scrolling Instagram Reels or whatever — then I would’ve fallen for it, just because of how real it looked.
Now, this could have just been a uniquely realistic video, but I think it could have an impact on how we trust the government and politicians. If deepfakes like this can be made, then it could erode public trust in whether something is legitimate or not.
Drew (03:43)
I wanted to clarify my point that, like, I don’t know who it was — we talked about this last episode, Adam — where this guy got a sombrero photoshopped on his head.
Adam Watson (03:58)
Yeah, Hakeem Jeffries.
Drew (04:02)
That already happens. I see it happening more — like, you can get people to say things they didn’t say, like politicians. But for my previous statement, I kind of take that back, because I’m seeing ways it could go bad.
Nicholas Perrin (04:20)
Yeah, and kind of relating to that, we know that Trump and the White House have been using AI-generated videos as public statements or, you know, “jokes,” as Vance would say. But in the future, that could be very dangerous if the executive branch of the United States government is making deepfakes like that and using them maliciously.
Adam Watson (04:51)
Yeah. I also think this could be used by foreign actors like Russia, Iran or China to influence our elections. If they were to create a really realistic deepfake — say, it’s 2028 and the two candidates are J.D. Vance and Gavin Newsom — and one of them creates a deepfake of Newsom saying something really bad, like a “hot mic” moment or whatever, and then puts that on social media and lets it spread naturally, that could possibly work.
Not a large number of people in the United States are constantly tuned in to politics or the news. Some get their news entirely from social media. We talked about this a couple of episodes ago — the danger that getting all your news from social media can have. But if someone only gets their news from social media and they see something like this, that could impact the way they see one candidate or another. That could allow foreign actors to influence our elections and undermine our democratic process.
Nicholas Perrin (06:13)
There was an image of an F-22 on the ground surrounded by people, made to look like an anti-American propaganda piece about our technology. It was obviously fake.
Adam Watson (06:33)
Yeah, it was like a giant, scaled-up F-22 or F-35 or something like that.
Nicholas Perrin (06:37)
Yeah, it was the size of a building. But if they could make that and time it well for some mission over the Middle East, that could definitely affect trust in the military — and the American government in general — which would obviously not be good for the United States.
Drew (07:04)
Yeah, obviously, there’s still propaganda going on now — we talked about it last week, and you mentioned it, Nicholas. Those AI videos Trump’s been posting — it was like he was playing with a cowbell or something. You all saw that?
Adam Watson (07:21)
Yeah, like him dressed as the Grim Reaper, that thing, yeah.
Drew (07:24)
Yeah, don’t know what the point of that was, but I’m guessing we’re going to see more of that.
Adam Watson (07:31)
Yeah, I mean, this administration definitely hasn’t shied away from using AI and stuff like that — they’ve actively embraced it. I don’t know if I could even call it a communication strategy because a lot of what’s being posted doesn’t seem to have any relevance or attempt to deliver a message.
It just seems like they’re posting things the president finds cool — like a photo of him as a Jedi for May the 4th. I don’t know what the communication benefit of that was. Same with the Grim Reaper AI video. It just seems like a 17-year-old posting random stuff on Instagram — very random and no clear message behind it.
Nicholas Perrin (08:24)
A lot of what we’ve seen from the Republican communication strategy has been like that — not really a clear message, just random stuff. But as long as it captures media attention and shifts it away from the opposition, that’s all they really need.
I was wondering if AI would become a partisan thing — would Republicans be pro-AI? I know the Trump administration has been pretty pro-AI, trying to loosen regulations on AI companies.
But some Republicans have pushed back, and there’s been a bipartisan movement toward more restrictions. There are also people on both sides who support AI for the race against China. So do you think it could become a partisan issue — or would it cut across party lines, with bipartisan coalitions both for and against it?
Adam Watson (09:37)
Do you mean in opposition to it or in support of it, or like used as a political tool?
Nicholas Perrin (09:41)
Both — in opposition and in support, not necessarily as a political tool.
Adam Watson (09:47)
Right. It’s hard to say; it’ll depend on a lot of things. But I think AI in general has the potential to be a bipartisan thing — possibly in opposition to it.
If you look at the pros and cons of AI, the pros are mostly that it makes life easier. It helps with search tools, writing and things like that.
But the cons — increased energy bills wherever data centers are made, water supply issues, noise pollution, high costs, environmental concerns, job losses — those are big ones. I think job losses will matter more nationally, while environmental stuff will matter locally.
I think it could be a unifying thing for both the left and right to come out against.
Drew (10:39)
Yeah, people are going to lose jobs. That’s the biggest one for me.
Adam Watson (11:07)
AI and the extreme development of AI.
Drew (11:12)
Yeah, you’d hope so. You’d hope we could get some unity on this issue and get some regulation passed, too, because the technology is moving fast and we have to keep up with it.
Adam Watson (11:16)
Right.
And it’s not just about stopping these massive data centers from being built, which are often bad for the communities they’re built in. It’s also about the regulation of the tech itself.
Like you talked about, Drew — it’s advancing at such a rapid pace. Not just chatbots and stuff advancing, which could cost jobs, but also the AI-generated videos and images. That needs to be regulated, too.
Historically, Congress has done a poor job of regulating digital things like social media and the internet. But I think this could be something that motivates Congress to act.
Drew (12:18)
Yeah, I just want to note that AI isn’t always a bad thing. Like if I’m on my phone, on TikTok, “AI” — I’m doing air quotes — is giving me the videos I see. It’s not just ChatGPT or Sora; it’s used fundamentally in a lot of different things.
Adam Watson (12:24)
Yeah, no, I think AI in general isn’t a bad thing. I mean, if you struggle with spelling — I think we all know I do — things like Grammarly can be useful for improving your writing.
But I don’t think AI should be writing all your stuff for you.
Nicholas Perrin (12:39)
And for medical applications as well.
Drew (12:42)
Mm-hmm.
Adam Watson (13:05)
Writing essays and communicating are pretty fundamental human things that AI shouldn’t take over. Same for journalism — I don’t think AI should be making all the news stories. There’s a time and place for AI, but there are parts of life and society it shouldn’t be involved in, and it’s starting to creep into those.
Nicholas Perrin (13:32)
Yeah, and going back to the bipartisan push against AI — Josh Hawley and Dick Durbin, a Republican and a Democrat respectively, introduced a bill in Congress to hold AI companies liable when harm comes to a person because of AI. There have been two recent cases where children took their own lives after interacting with AI.
There’s bipartisan pushback on that. Hawley also introduced another bill to actually regulate AI. I don’t know if that’ll make it through Congress, but we’ll see.
Adam Watson (14:24)
Yeah, I also know there was something in the Big Beautiful Bill that Marjorie Taylor Greene later came out against. It prohibited states from making their own policies about regulating AI for 10 years, which she said she wouldn’t have supported had she known that.
So, you’re already seeing members of Congress on both sides coming out against AI and in support of regulation. You’re also seeing this at the local level — here in St. Louis County, there was a data center that was going to be built, but the local community came out against it. They actually put a moratorium for a year on new data centers.
So, I think public opinion isn’t on the side of AI and massive data centers. They pressured their local government to stop it — and it worked.
Drew (15:46)
Yeah, I can’t blame people who don’t like AI. It’s kind of scary, right? It used to be, “Look at this photo I generated,” now it’s like, “Is this video AI or not?”
Drew (16:02)
Back in 2022 or 2023, I used all these tools and looked at all these AI-generated photos. I had a couple hundred on my phone. It was fun then — but now it’s just scary.
Adam Watson (16:25)
Yeah, before it was wacky — like, “That’s so obviously fake but funny.” Now it’s like, “Okay, is this real? Is this fake?”
I was watching this Instagram video of a guy looking at an AI-created clip of a deer rescuing a person. The AI was told to make it as realistic as possible. He’s a professional video forensic analyst and had to really look for details normal people wouldn’t notice to tell it was fake. That’s one of the scary parts — how good AI is getting.
Nicholas Perrin (17:12)
And not just visual stuff, but auditory stuff too. Like in that video we talked about with Hakeem Jeffries — when Chuck Schumer was speaking, you could kind of tell it was fake, but not really.
I listen to some music that uses AI-generated voices, and some of it is so human I couldn’t tell at all if it wasn’t labeled as AI.
Drew (17:49)
What do you guys think about how AI affects people outside of politics?
Nicholas Perrin (17:55)
We talked about data centers — there are people living within a football field of one who barely have running water. They have to collect water overnight to use for basic necessities, and their electricity is unreliable. They didn’t have any say in whether the data center was built.
Adam Watson (18:26)
Yeah, I saw that.
In the last year, our electricity rates have gone up by about 12%. Data centers use massive amounts of energy — there’s one planned in Missouri, and elsewhere they’ve even talked about restarting the Three Mile Island nuclear plant in Pennsylvania just to power new AI data centers.
So not only is it draining water from communities, it’s also causing power rates to go up for people across the country.
Drew (19:31)
There are chatbots and image generators taking jobs away from authors and photographers. I’m sure we’ll see AI-generated animation in the future, too.
Adam Watson (19:42)
Yeah. There are industries — art, journalism, writing — that AI could technically replace, but writing a novel or painting an oil painting is such a uniquely human thing to do. I don’t think AI should be in that field at all.
Nicholas Perrin (20:27)
For artists, the only way to really prove your work might be to do it entirely physically, in a way AI couldn’t. Even digital artists could have their work mimicked or faked.
Drew (21:00)
Yeah, like when people post digital art and everyone’s just like, “It’s AI, it’s AI.” I see so many comments like that.
Nicholas Perrin (21:16)
Yeah, I remember this one music video cover — it didn’t cite any artist, and there was a massive debate in the comments about whether it was AI. I couldn’t see anything AI-esque about it, but every comment was about that and not the actual song.
Adam Watson (21:39)
All right, thank you for listening to “Simplifying the State.” We’ll be back next week with our next episode. In the meantime, like we said at the beginning, make sure to rate the podcast, follow us, and check out “Simplifying the State” on Instagram. We’ll talk to you guys next time.