How Artificial Intelligence Is Rewriting Voter Perception

H. M. Nazmul Alam:

In the run-up to Bangladesh’s next national election, a new battlefront has emerged long before voters queue at ballot boxes: the digital sphere.

Early evidence suggests that generative artificial intelligence, once hailed as a marvel for productivity and creativity, is rapidly becoming a potent influence on political discourse.

Its fingerprints are already visible across social media, and unless urgently addressed, AI is poised to become one of the most consequential and troubling forces shaping voter perceptions in the coming polls.

A recent investigation by a daily newspaper exposes how AI-generated content is flooding platforms like Facebook with politically charged videos and reels designed to sway public opinion.

Between mid-December and mid-January, researchers identified 97 distinct pieces of AI-generated political content shared by a network of pages, profiles, and public groups, amassing over 1.6 million engagements within 24 hours of posting.

Much of this content appears to manipulate narratives rather than inform voters, blurring the line between fact and fabrication on a scale that traditional misinformation efforts never achieved.

Unlike conventional propaganda or simple “fake news,” synthetic AI content is engineered to mimic reality. It can produce voices, faces, and environments that look and sound eerily authentic.

In Bangladesh, bespoke clips show people who do not exist pledging their support for particular parties or denouncing rivals, all scripted and rendered by generative algorithms.

Some portray supposed members of minority communities endorsing parties they may never have heard of.

Others show self-assured voters parroting talking points that serve specific political agendas.

The effect is not mere deception; it is a crafted illusion of consensus and grassroots enthusiasm where none may exist.

The sophistication and reach of this trend reflect broader global patterns. Studies of elections in other countries reveal how generative AI is being used to create content that resonates with deep-seated cognitive biases, reinforcing pre-existing beliefs and polarising audiences.

This is not about proving one argument right or wrong; it is about drowning out nuance and factual debate with AI-generated noise that feels real but carries no accountability.

In Bangladesh’s context, where political loyalties already run deep and digital literacy is uneven, the stakes are especially high.

Anecdotal evidence from comment sections beneath these AI-generated posts shows that many users take such content at face value, often reacting with support or approval as if engaging with authentic political messaging.

Some are even unaware that the “people” they see onscreen are entirely synthetic.

The implications for electoral integrity are profound. When voters cannot distinguish between real discourse and AI fabrication, the very basis of democratic choice, informed consent, is undermined.

Instead of engaging with verifiable claims about policies, performance, or leadership qualities, voters may be swayed by emotional resonance crafted in a digital workshop. This is not speculation; it is happening here and now.

One of the most insidious features of AI-generated political messaging is its volume and scalability.

Traditional misinformation campaigns were constrained by the need for human creation and dissemination.

AI transcends that limitation, enabling an endless stream of tailor-made content that can adapt to different audiences and contexts.

Whether it is mimicking the voice of a working-class voter, portraying clergy lending religious authority to a candidate, or fabricating endorsements from minority communities, AI disinformation can be personalised to exploit social fissures and cultural fault lines.

This capability poses a particular danger in a highly competitive election environment. Political actors, motivated by victory rather than truth, may find AI tools too tempting to resist.

In Bangladesh, early signs already show AI-generated narratives aligning with partisan interests.

The investigatory dataset included content aligned with a range of political camps, revealing attempts by various actors to weaponise AI in pursuit of strategic advantage.

The question is not just one of volume but of psychological impact. AI content often taps into confirmation bias — the tendency of individuals to favour information that confirms existing beliefs.

When synthetic content reinforces what a person already thinks or fears, it can solidify opinions without any grounding in verified reality.

Layering this effect across millions of users can shift perceptions en masse, changing not just what people think they know, but how they feel about their choices, their rivals, and the legitimacy of the entire political process.

The danger extends beyond individual misperceptions to systemic erosion of trust. When voters encounter conflicting narratives, some factual, others fabricated, they may begin to doubt all information sources.

Legitimate news outlets, fact-checking organisations, and even electoral institutions risk losing authority as AI-generated noise drowns out credible voices.

We have seen in other contexts how repeated exposure to misinformation diminishes trust in reliable media, making it harder for truth to reclaim its place.

This has broader implications for democratic participation. Disillusionment with the political process can lead to apathy or disengagement, particularly among young voters who are most enmeshed in social media ecosystems.

If citizens feel that the information environment is rigged or fabricated, they may question the very value of voting.

In extreme scenarios, such dynamics could depress turnout or even foment social unrest based on false premises planted by AI-generated campaigns.

Addressing these challenges requires a multifaceted response. Enhancing digital literacy among voters is paramount.

Citizens need the tools to critically evaluate what they see online, to recognise synthetic manipulation, and to demand accountability from those who disseminate such content. At the same time, social media platforms must play a proactive role.

They have the technical capacity to detect patterns of coordinated inauthentic behaviour, yet the will to enforce these measures consistently often lags behind the pace of abuse. Transparency in content moderation and clear labelling of AI-generated material are essential steps.

Regulators and electoral authorities also have a part to play. Current legal frameworks and codes of conduct must be updated to reflect the realities of an AI-infused information landscape.

It is no longer sufficient to punish outright lies after the fact; there must be preventive mechanisms that deter the creation and spread of misleading synthetic content before it goes viral.

Enforcement, however, must be balanced to protect freedom of expression while curbing manipulation.

Yet perhaps the most fundamental safeguard lies in a well-informed electorate. Voters who are aware of the risks posed by AI manipulation are less likely to be swayed by synthetic narratives.

Public awareness campaigns, media literacy initiatives, and civic education can inoculate citizens against deception.

When voters understand that not everything they see, no matter how polished, is authentic, the power of AI to distort reality diminishes.

(The writer is an academic, journalist, and political analyst based in Dhaka, Bangladesh. He currently teaches at IUBAT and can be reached at [email protected].)