There is something deeply ironic about the age we are living in. We call it social media, yet much of what happens within it is anything but social. We scroll, we react, we consume, and increasingly we argue. But the human connection that once defined conversation is often missing.
Billions of people now participate in this digital public square every day. Entire identities, communities and political viewpoints are formed through screens rather than through shared spaces. And just as societies are beginning to understand the consequences of algorithm-driven media, another force is arriving that will amplify those consequences dramatically: artificial intelligence.
From my perspective, this is not simply a technological shift. It is a civilisational one.
In my own life, I have tried to approach this carefully. My children are not allowed on social media, and I have been quite strict about that. Occasionally, they are allowed to watch something harmless on YouTube, usually funny cat videos or perhaps a cooking recipe. That is about as far as it goes.
This is not because I am anti-technology. Quite the opposite. My work sits directly at the intersection of media, politics and global dialogue. But the digital environment we have built carries psychological consequences that we are only beginning to understand, and I believe the next generation deserves a little more protection until we fully grasp those consequences.
The irony is that while I am cautious at home, my professional life has placed me directly inside these debates. Through the Global Indian Network and our podcast conversations we regularly engage with people who shape nations. Heads of state, ministers, business leaders and global thinkers join us to explore geopolitics, economics and the evolving identity of societies. These discussions often move quickly across borders and ideas.
Yet one observation continues to trouble me. Even among highly capable leaders, many do not fully appreciate the scale of the transformation that artificial intelligence is about to unleash. This is not merely another technological innovation. It represents a generational shift in how human beings understand truth, authority and trust.
For centuries, societies relied on identifiable sources of information. Teachers transmitted knowledge. Journalists reported events. Religious leaders shaped moral frameworks. Institutions acted as filters that helped societies determine what information could be trusted.
Artificial intelligence disrupts that entire model.

We are now entering an era in which persuasive narratives, historical explanations, and ideological arguments can be generated instantly by machines. A simple prompt can produce essays, speeches and commentary that appear thoughtful, informed and authoritative.
The system generating them does not need to understand truth. It simply needs to generate something that sounds plausible. That distinction matters more than many people realise.
Human beings are wired to trust other human beings. It is almost certainly a survival instinct developed over thousands of years of social cooperation. Our brains evolved to read subtle cues in tone, expression, and body language to determine whether someone is credible.
Artificial intelligence bypasses those cues entirely. Increasingly, we trust what arrives through a screen. A video. An audio clip. A written argument. Yet we often have no clear understanding of whether the content was produced by a person, a machine or some combination of both.
As AI systems become more sophisticated, that distinction becomes even harder to recognise.
This is not simply a shift in how information is produced. It is a transformation in how reality itself is interpreted.
Over the years, I have travelled to more than seventy countries. One of the most remarkable aspects of that journey has been the extraordinary reach of the global Indian diaspora. In many places, it is genuinely difficult to travel far without encountering someone from our community.
But those travels have also revealed something else: the extraordinary power of digital narratives.
I remember the riots in Leicester not long ago. Two communities clashed in what many described as a sudden eruption of tension between Hindu and Muslim groups. But when we examined the situation more closely through conversations on our podcast with people close to the events, a striking detail emerged. A significant portion of the inflammatory online commentary that fuelled the tensions had originated outside the United Kingdom.
Digital narratives had travelled across borders and landed inside a city thousands of miles away.
The same dynamics appear elsewhere.
In Suriname, I once posted a piece online that generated more than 180,000 impressions. For a small country, that number was astonishing. It showed how a single digital message can shape the direction of a national conversation.
In countries such as Guyana and Trinidad, one can often watch the process unfold in real time. Comment sections become arenas where identity politics ignite. Communities are framed against each other. Historical grievances resurface. Political tensions intensify.
The platforms reward it. Outrage spreads faster than understanding.
There was another moment during my travels that has stayed with me. I remember sitting in a hotel in Kampala speaking with a woman who had lived in Uganda for many years. During our conversation, she handed me a book titled Ugandan Indian Colony.
At first, the idea sounded absurd.
But when I looked at the publication date and began reading the book's arguments, a cold chill ran down my spine. These were not new ideas. They had existed for decades, quietly circulating beneath the surface.

The more people I spoke to about it, the more I realised those narratives still held resonance in parts of the national conversation.
What struck me most was how unaware many in our community seemed to be of this undercurrent. It was almost as if people were living in a hotel beside a rising tsunami, believing that the thin pane of glass between them and the wave would somehow protect them.
This is the dissonance that increasingly concerns me. Part of it may come from a lack of travel. When people move through the world physically, they encounter nuance. They see the complexity of societies and the humanity of those who live within them.
Digital spaces rarely offer that nuance. They reward simplification, emotional reaction and tribal loyalty. Artificial intelligence will accelerate these dynamics dramatically.
AI systems can generate persuasive content instantly. They can produce political arguments, reinterpret historical events and reinforce identity-based narratives at a scale that human beings alone could never achieve.
What once required coordinated effort can now be generated by a machine in seconds.
This is why the debate around artificial intelligence must extend beyond questions of economic productivity or technological innovation.
It is fundamentally about trust. Our societies are already experiencing a trust deficit. Institutions are questioned. The media is questioned. Governments are questioned.
At the same time, our vulnerabilities are increasing. Communities that exist across multiple nations and identities are particularly exposed. Narratives that originate in one part of the world can rapidly influence perceptions somewhere else. Artificial intelligence will accelerate that process.
And yet the deeper challenge may lie in the limits of the human mind itself. Human beings are not designed to constantly question the authenticity of every message they encounter. Our cognitive systems evolved for face-to-face interaction and repeated social contact.
Yet modern life has already reduced many of those interactions. We travel less in meaningful ways. We interact less frequently with strangers. Increasingly, we rely on mediated communication through video, audio and text. Now we are adding artificial intelligence to that environment. The words we read, the voices we hear, and the narratives we encounter may no longer originate from human experience at all. They may be generated by systems optimised for engagement rather than understanding.
The cognitive strain created by this environment should not be underestimated. Trust, the very foundation of every functioning society, becomes fragile. Artificial intelligence will undoubtedly bring extraordinary benefits. It will accelerate scientific discovery, transform healthcare, enhance education and unlock new forms of creativity.
But powerful technologies always demand thoughtful stewardship, because we stand at a moment in history where the architecture of information itself is being rebuilt. For most of human history, we trusted what we saw with our own eyes and heard from people we knew. In the world now emerging, that instinct will no longer be enough.
The next generation will grow up in a world where narratives can be generated, identities can be influenced, and truths can be manufactured at scale. The real question is not whether artificial intelligence will shape our societies. It already is.
The real question is whether we will develop the wisdom to guide it before it begins shaping us in ways we no longer recognise. And perhaps the harder question is this. In an age where machines can produce knowledge instantly, the rarest human quality may become something far older than technology itself.
Judgement.
The ability to pause, to question, and to understand before reacting. Civilisations have always depended on that instinct. The question now is whether we will preserve it.

Let us know your thoughts. If you have burning thoughts or opinions to express, please feel free to reach out to us at larra@globalindiannetwork.com.

