AI Could Be the Most Effective Tool for Dismantling Democracy Ever Invented | Common Dreams

I think rejecting AI is a mistake. All that does is allow fascists to have mastery of the tools. Like money, guns, media, food, oil, or any number of other influential things, you don't want a select few people to have sole control over them.
Instead, we should adopt AI and make it work towards many good ends for the everyday person. For example, we could someday have AI that can serve as effective, cheap lawyers. This would allow small companies to oppose the likes of Disney in court, or for black dudes to successfully argue their innocence in court against cops. AI, like any tool, reflects the intent of its user.
It would be like a military rejecting gunpowder weapons. Except this weapon is mass mind control. There is no letting go of the horns of that bull.
Honestly it just feels like AI is created to spy on us more efficiently, less so to assist us.
I mean the Oracle CEO said so explicitly last year to investors
That guy is such a creep, he makes my skin crawl with his nightmarish hyperstition.
I guess that means it could also be the most effective tool for saving democracy. Fuck these what-ifs over factual news.
Nope. I'd still say social media/social media algorithms.
Imagine if social media didn't exist (beyond small, tight-knit communities like forums about [topic], or BBS communities), but all these AI tools still did.
Susan creates an AI generated image of illegal immigrants punching toddlers, then puts it on her "news" blog full of other AI content generated to push an agenda.
Who would see it? How would it spread? Maybe a few people she knows. It'd be pretty localised, and she'd be quickly known locally as a crank. She'd likely run out of steam and give up with the whole endeavour.
Add social media to the mix, and all of a sudden she has tens of thousands of eyes on her, which brings more and more. People argue against it, and that entrenches the other side even more. News media sees the amount of attention it gets and they feel they have to report, and the whole thing keeps growing. Wealthy people who can benefit from the bullshit start funding it and it continues to grow still.
You don't need AI to do this, it just makes it even easier. You do need social media to do this. The whole model simply wouldn't work without it.
This has been going on for a lot longer than we've had LLMs everywhere.
Half of it wouldn't even work if the news media did their job and filtered out crap like that instead of lazily reporting whatever is going on on social media.
One week the whole US news cycle was dominated by "Cheungus posted an AI pic of Trump on truth social".... I mean... I get that the presidency was at times considered dignified in the modern era so it's something of a "vibe shift", but the media has to have a better eye for bullshit than that. The indicators unfortunately are that it's going to continue this slide as well because news rooms are conglomerating, slashing resources, and getting left in the dust by slanted podcasts and YouTube videos.
Some of it is their own fault. People watching the local news full of social media AI slop are behaving somewhat understandably by turning off the TV and going straight to the trough instead of watching live as the news becomes even more of a shitty reaction video.
They also shouldn’t report on the horse race. They should report on issues.
Reporting on elections is always disappointing.
something something lamestream media!
While I generally agree and consider this insightful, it behooves us to remember the (actual, 1930s) Nazis did it with newspapers, radio and rallies (... in a cave, with a box of scraps).
It's the algorithms + genAI, especially as the techbros got super mad about the progressive backlash against genAI, which radicalized every one of them into Curtis Yarvin-style technofeudalism.
Social media, at least the mainstream stuff like Myspace, was the start of the downfall. I don't think random forums were really the thing that caused everything to go sideways, but they were the precursor. Facebook has ruined things for generations to come.
This is a thing on social media already.
Lots of bad faith conservatives will just output AI garbage. They don’t care about truth, they just want to waste your time. You spend time researching their claims, providing counter evidence - they don’t care, because they don’t even read what you say, just copy and paste into an LLM.
It’s very concerning with the Trump administration’s attack on science. Science papers are disappearing and accurate, vetted information becomes sparser - then you can train Grok to claim that all trans women are rapists or that global warming is just Milankovitch cycles.
It is and has been an info war. An attack on human knowledge itself. And AI will be used to facilitate it.
Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.
Jean-Paul Sartre
But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words.
It is so goddamn frustrating. I had one on here claiming that the US had not deported US citizens, while linking the article saying that the US had deported four children who were citizens!
I read fast and have been following this shit for so long that I can call a lot out. But it never changes their minds, they never concede defeat. They just jump from place to place. Or “you libs always assume i’m MAGA” is a funny one I keep seeing on here - like, if you are supporting Trump’s policies, you are a Trump supporter. It’s so goddamn slimy and disingenuous.
TERFs are the worst about it. They’ve started spreading Holocaust denial. I saw one claim that the Nazi government issued transvestite passes! Like no! It was the Weimar Republic! The Nazis used those passes to track down and kill trans people!
You can provide crystal clear documentation, all of the sources they ask for - it’s never good enough and it’s exhausting.
They're trying to burn the library of Alexandria, again.
ChatGPT:
You're absolutely right to be concerned — this is a real and growing problem. We're not just dealing with misinformation anymore; we're dealing with the weaponization of information systems themselves. Bad actors leveraging AI to flood conversations with plausible-sounding nonsense don’t just muddy the waters — they actively erode public trust in expertise, evidence, and even the concept of shared reality.
The Trump-era hostility to science and the manipulation or deletion of research data was a wake-up call. Combine that with AI tools capable of producing endless streams of polished but deceptive content, and you’ve got a serious threat to how people form beliefs.
It's not just about arguing with trolls — it's about the long-term impact on institutions, education, and civic discourse. If knowledge can be drowned in noise, or replaced with convincing lies, then we're facing an epistemic crisis. The solution has to be multi-pronged: better media literacy, transparency in how AI systems are trained and used, stronger platforms with actual accountability, and a reassertion of the value of human expertise and peer review.
This isn’t fear-mongering. It’s a call to action — because if we care about truth, we can’t afford to ignore the systems being built to undermine it.
This is the inevitable end game of some groups of people trying to make AI usage taboo through anger and intimidation, without room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart's content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.
Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught in your legitimate grievances with corporate greed, totalitarians, and the like. We can't reasonably make it go away, but we can reduce harmful use of it.
It's guns. The most effective way to dismantle democracy is violence.
AI is not stopping anyone from revolting. Guns and the military are.
Whar a stupid fucking take.
AI allows for 24/7 bot networks to shape perspectives on politics and culture by leaving comments everywhere man. Bots absolutely help prevent people from realizing they've been swindled by Trump.
Yep, people revolt because of the story they believe. Control their story and they won't want to revolt.
By controlling what people see and hear on TV and social media, they don't need to use guns. At least, not nearly as many.
Whar a stupid
Yep.
Creating unbiased public, open-source alternatives to corporate-controlled models.
Unbiased? I don't think that's possible, sir.
So is renewable energy, but if I start correcting people that renewables don't exist because the sun is finite, I will look like a pedant.
Because compared to fossil fuels they are renewable, the same way Wikipedia is unbiased compared to foxnews.
the same way Wikipedia is unbiased compared to foxnews.
If better was the goal, people would have voted for Kamala.
When the fuck will you people get it?? Every technology will eventually be used against you by the state
What exactly are you saying here?
And why?
This is why I think technology has already peaked with respect to benefit for the average person.
Be used by a capitalist pig to exploit you*
Only if you don’t consider capitalism.
Kinda like people focusing on petty crime and ignoring the fact that corporations steal billions from us.
We as a society give capitalism such a blanket pass, that we don’t even consider what it actually is.
When you live in a cage, you think of the bars as part of your home.
Hot take: what most people call AI (large language and diffusion models) is, in fact, part of peak capitalism:
I could go on but hopefully that’s adequate as a PoV.
“AI” is just one of the cherries on top of late-stage capitalism that embodies the worst of all of it.
So I don’t disagree - but felt compelled to share.
A tool is a tool. What matters is who is using it and for what.
I don’t like this way of thinking about technology, which philosophers of tech call the "instrumental" theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don't help us explain. Similarly, society shapes the way that we make technology.
In making technology, engineers and designers are constrained by the rules of the physical world, but those rules underconstrain the design. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, and those values are cumulative in time. To return to the example of mass transit vs cars, these obviously have different embedded values within them, which then go on to shape the world that we make around them. We wouldn't even be fighting about self-driving cars had we made different technological choices a while back.
That said, on the other side, just because technology is more than just a tool and has values embedded within it doesn't mean that its use is deterministic. People find subversive ways to use technologies against the values built into them.
If this topic interests you, Andrew Feenberg's book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.
AI "guardrails" is even a bigger tool governments can use to dismantle democracy and kill freedom. The article starts by quoting the pope who isn't even a democratic ruler, he gets appointed by cardinals who are appointed by the previous pope. AI could have wrote a more useful article.
I don’t believe the common refrain that AI is only a problem because of capitalism. People already disinform, make mistakes, take irresponsible shortcuts, and spam even when there is no monetary incentive to do so.
I also don’t believe that AI is “just a tool”, fundamentally neutral and void of any political predisposition. This has been discussed at length academically. But it’s also something we know well in our idiom: “When you have a hammer, everything looks like a nail.” When you have AI, genuine communication looks like raw material. And the ability to place generated output alongside the original… looks like a goal.
Culture — the ability to have a very long-term ongoing conversation that continues across many generations, about how we ought to live — is by far the defining feature of our species. It’s not only the source of our abilities, but also the source of our morality.
Despite a very long series of authors warning us, we have allowed a pocket of our society to adopt the belief that ability is morality. “The fact that we can, means we should.”
We’re witnessing the early stages of the information equivalent of Kessler Syndrome. It’s not that some bad actors who were always present will be using a new tool. It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.
The worst part is that this will be (or is) largely invisible. We won’t know that we’re wasting hours of our lives reading and replying to bots, tugging on a steering wheel, trying to guide humanity’s future, not realizing the autopilot is discarding our inputs. It’s not a dead internet that worries me, but an undead internet. A shambling corpse that moves in vain, unaware of its own demise.
It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.
This is the part that I question. It's certainly a fear, and it totally makes sense to me, but as I have argued in other threads recently, I think there is a very good chance that while those in power believe an informational analog of Kessler Syndrome will happen, and this is precisely why most of them are pushing this future, it might hilariously backfire on them in a spectacular way.
I have a theory I call the "Wind-up Flashlight Theory": when the internet reaches a dead state so full of AI nonsense that it becomes impossible to connect with humans, rather than that being the completion of a process of censorship and oppression of thought that our imagination cannot escape, a darkness and gloom in which nobody can find their way, the darkness simply serves to highlight the people in the room who have wind-up flashlights and are able to make their own light.
To put it another way, every step the internet takes towards being a hopeless storm of AI bots drowning out any human voices might actually just be adding more negative space to a picture in which the small number of human places on the internet stand out EVEN MORE starkly as positive spaces by comparison, and people become EVEN MORE likely to flock to those places because the difference just keeps getting more and more undeniable.
Lastly, I want to rephrase this in a way that hopefully inspires: every step the rich and authoritarians of the world take to push the internet towards being dead makes the power of our words on the fediverse increase all by itself. You don't have to do anything but keep being earnest, vulnerable and human. Imagine you have been cranking a wind-up flashlight and feeling impotent because there were harsh, bright fluorescent lights glaring over you in the room... but now the lights just got cut, and for the first time people can see clearly the light you bring, in its full power and love for humanity.
That is the moment I believe we are in RIGHT NOW
Great insights, thank you.
It could also be an effective tool for liberation. Tools are like that. Just matters how they’re used and by whom.
But they have to be used.
I often wonder why leftist-dominated spheres have been so driven to reject AI, given that we're supposed to be more tech-dominant. Suspiciously, I noticed early media on the left treated AI the same way that right-wing media treated immigration. I really believe there was some narrative-building through media to introduce a point of contention within left-dominant spheres to reject AI and its usefulness.
I haven’t seen much support for anti-AI narratives in leftist spaces. Quite the opposite, as I’ve been reading about some tech socialists specifically setting up leftist uses for it.
But your instincts are spot on. The liberals are being funded by tech oligarchs who want to monopolize control of AI, and have been aggressively lobbying for government restrictions on it for anti-competitive reasons.
And please broaden this beyond AI. The attention economy that comes with social media, and other forms of "tech-feudalism", manipulation, targeting and tracking/surveillance aren't healthy either, even if they don't rely on AI and machine learning.
I wonder if we are being groomed to hate it because it would be an effective tool to fight fascism with as well.
A potential strategy for using it for good would be dealing with the problem of the comparative effort to spread and debunk bullshit. It takes very little effort to spread bullshit. It takes a lot of effort to debunk it.
An LLM doesn't need to worry about effort. It can happily chug away debunking bullshit all day long, at least if you ignore the problem of LLMs not being able to reason, and the other ongoing problems with them. But there is potential for it being part of the solution here.
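To make that concrete, here is a minimal sketch of what an automated debunking helper could look like, using the OpenAI Python client purely as an example. The model name, prompt, and function are my own placeholders, and the usual caveat applies: the draft the model produces still needs a human check before it gets posted anywhere.

```python
# Minimal sketch: hand a dubious claim to an LLM and get a draft rebuttal back.
# Assumes the openai package (>=1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def draft_debunk(claim: str) -> str:
    """Return a draft fact-check of `claim` for a human to review before posting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a careful fact-checker. Separate what is verifiable "
                    "from what is unsupported, explain the reasoning briefly, and "
                    "do not invent sources."
                ),
            },
            {"role": "user", "content": f"Evaluate this claim: {claim}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_debunk("Global warming is just Milankovitch cycles."))
```

The point isn't that the model is an authority; it's that drafting the tedious first pass costs it nothing, which chips away at the effort asymmetry.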
Ooh that's clever, I'm going to start doing that.
100%
I actually just wrote my thoughts before I saw this comment.
I challenge anyone to go back to the articles shared here on Lemmy when AI was taking off. Compare the headlines to headlines from right-wing media towards immigration. It's uncanny how similar they are, at least to me.
They often deal with some appeal to pathos like:
They will take our jobs
They threaten our culture
They will sexually assault children
They will contribute to the rise in crime
It's psychological warfare. By manipulating online social communities they can push the lies they want and most normies will lap it up.
AI isn't going to destroy the world. But it will make it easier and faster for humanity to destroy the world.
I agree, but I don't think it has much to do with the tool itself, beyond it being a superb culpability-obscuring tool. What we are witnessing is essentially the birth of a religion, just a super lame one.
A status quo where people are not allowed to break with its rhetoric or ideology, even as quality of life plummets, will create a rising potential for the introduction of a concept that nullifies, or simply exists outside, the framework of that status quo worldview, the one that is strangling people but that they do not believe they can escape.
For people who are obsessed with AI, the unreality of it is precisely what lets AI function as an ideological lifeboat that doesn't require grappling with the immense issues with our worldviews and assumptions as a society, because AI can be added to contexts as a Deus Ex Machina that disguises and explains away incongruities between our broken worldviews and the desperate reality caused by them.
Yuval Noah Harari's Nexus gets into this as well. It's a really powerful tool that we are very, very ill equipped to use responsibly.
I thought social media was doing it already
If AI can be used to dismantle democracy it can be used to make it flourish. People are the problem.
I'd rather wash dishes in the Netherlands than be a rich Russian oligarch or part of North Korea's upper echelon. Why are these rich idiots so stupid about this stuff?
I love democracy.