When we communicate with one another, we are not simply constructing logic; we are constructing emotion. Emotion and meaning-making are among the defining features of our existence. As AI increasingly takes on the role of storyteller in our society, it is crucial to examine how meaning is created and how it can be manipulated.
The process of meaning-making extends beyond technology, as it encompasses cultural factors that shape politics, identity, creativity and trust. The question is no longer whether AI can produce convincing messages; instead, we must consider how well we are equipped to interpret them.
When OpenAI’s Sora can conjure a bustling Mumbai street in seconds or Midjourney can reproduce a Rembrandt-like portrait with eerie precision, it’s difficult not to marvel at the brilliance of machines. Yet beneath this spectacle lies a deeper question: if meaning can now be statistically manufactured, what becomes of the human act of meaning-making?
Semiotics—the study of signs, symbols and the ways in which we create meaning—has always been grounded in human experience. A red rose symbolises love because, as a culture, we have recognised and agreed upon its significance. A meme goes viral because it captures a shared moment of humour, irony or recognition. Meaning is a collective product of memory, emotion and social context.
Artificial intelligence disrupts that foundation. Large models trained on billions of data points can now generate text, images and sound that mirror familiar cultural patterns without ever sharing the human experience behind them. They reproduce the form of meaning—the symbols, styles and structures—without possessing its substance.
This shift from experience to imitation changes how communication functions. When a model paints a nostalgic village scene, it does so not from memory but from probability. When a machine writes a poem about loss, it does so without having experienced loss itself. Yet to us, the output feels recognisable and emotionally charged. The illusion of meaning is powerful—and deeply disorienting.
On the surface, AI has democratised the creative process. Anyone can now compose music, design visuals or write stories without formal skill or access to creative tools. At a deeper level, however, this automation risks erasing meaning itself. Culture becomes a database of reusable fragments—replicated, remixed and reassembled without memory, identity or emotion.
For semiotics, this raises new challenges. Traditional communication presumes a sender, a message and a receiver—three points connected by intention. AI erases one of these entirely. The message exists and the receiver interprets it, but the sender is not conscious. Meaning thus shifts from being an act of expression to an act of simulation.
The paradox becomes sharper in a culturally layered country like India. Can an AI-generated image of Lord Krishna hold the same devotional weight as one painted by a believer? When a machine writes a Bollywood-style love song, does it carry forward the sentiment of a cinematic tradition, or simply recycle patterns from data?
Across advertising, news media and entertainment, these tensions are already visible. Generative models create flawless faces, perfect smiles and emotionally precise gestures—all designed to appeal, yet detached from real human experience. These are signs without origin, expressions without emotion, symbols without history.
The outcome is a strange kind of authenticity crisis. AI-generated creations appear emotional but are not experienced emotionally. They seem authored but have no author. They replicate the language of feeling while hollowing out the feeling itself.
And yet, this moment also forces an evolution in our understanding of meaning. As audiences become more visually literate and critically aware, they may learn to decode AI-generated content differently—to read it not as communication between people, but as a reflection of the data-driven systems that produced it.
Semiotics, too, must evolve. It must move beyond studying human intention to interrogating algorithmic logic—to understand how meaning emerges when no mind stands behind it. The study of signs now demands the study of systems.
The larger question, however, is one of recognition. Can we still distinguish between meaning that is felt and meaning that is fabricated? Can we, as audiences, preserve the emotional and cultural depth that makes meaning human, even as machines grow fluent in our symbolic language?
Because in the end, meaning has never belonged solely to the message—it has always lived in the mind that interprets it. Perhaps that, for now, remains our last truly human frontier.
Ishayu Gupta is from School of Media and Communication, University of Leeds, UK; Neelatphal Chanda is from the Department of Media Studies, Christ University, India.