
With little effort, practically at the touch of a button, we can now turn data into detailed and sophisticated data stories, with visualisations, narratives, even videos. And, by gosh, we can also have human-like conversations with the stories themselves, all thanks to the newfound power of Generative AI and the rise of AI agents.
This incredible leap in automated data storytelling, however, comes with a large hidden risk. The risk stems not from technical flaws in the technology itself, but from the diminishing role of the human as the core competitive differentiator. This calls for a reimagining of the human–technology relationship, where the human differentiator arises from a renewed purpose.
This journey began in the mid-20th century. At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, a group of early AI pioneers, including John McCarthy and Marvin Minsky, gathered to establish AI as a distinct field of scientific research. Before Dartmouth, ideas about intelligent machines did exist, but they were scattered across disciplines such as mathematics, logic, and psychology; there was no unified field or shared agenda. McCarthy wrote the proposal for the workshop in 1955, coining the term Artificial Intelligence and articulating a profound vision for the new field, simulating human intelligence in its entirety, on the basis of the conjecture:
“that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Storytelling is a key feature of human intelligence, and its simulation is central to any attempt to achieve this vision of AI.
We can think of stories as our brain’s preferred data format for organising information and extracting meaning from it. As Mark Turner, a prominent cognitive scientist, argues in The Literary Mind (1996):
“Narrative imagining — story — is the fundamental instrument of thought. Rational capacities depend upon it. It is our chief means of looking into the future, of predicting, of planning, and of explaining. It is the basis of selfhood, of empathy, of moral judgment. We are not only users of stories, we are shaped by them. Story is not a special topic, but the foundation of thought.”
Likewise, Steven Pinker, another prominent cognitive scientist, underscores the same point in How the Mind Works (1997):
“Cognitive psychology has shown that the mind best understands facts when they are woven into a conceptual fabric, such as a narrative, mental map, or intuitive theory.”
Together, Turner and Pinker remind us of our profound cognitive reliance on stories.
Furthermore, this reliance is more than just a curious human habit. It is, as Patrick Winston argued, the defining feature of intelligence itself. Winston, a former student of Minsky and former Director of the MIT AI Lab, which Minsky and McCarthy had previously co-founded, posited the Strong Story Hypothesis, which states:
“The mechanisms that enable humans to tell, understand, and recombine stories separate human intelligence from that of other primates.”
Winston reinforces the point that storytelling has always been at the heart of what makes us human. If AI is to simulate human intelligence, a key metric of its success is its ability to master the art of storytelling.
Since Dartmouth, progress in AI has been marked by several pivotal breakthroughs: from the rule-based logic of expert systems in the 1970s, to the re-emergence of neural networks in the 1980s, through to the invention of the Transformer architecture in 2017. And it’s really the latter invention that paved the way for Large Language Models (LLMs). These models have significantly advanced the capability for AI to simulate the human talent of storytelling. Though this simulation remains a work in progress, it is sufficiently mature that we can put it into production.
Stories can come in many forms: fact, fiction, or data-driven. Data stories are simply the modern incarnation of an ancient art form: narratives built on data and insights, crafted to convey meaning and drive action. As Harvard Business School explains:
“Data storytelling is the ability to effectively communicate insights from a dataset using narratives and visualizations. It can be used to put data insights into context for and inspire action from your audience.”
Today, AI is advanced enough to create insights from data, as well as the corresponding visuals and narratives. Test this out yourself. Head over to ChatGPT and ask it to create a dashboard and narrative from the Northwind dataset. If you’ve ever built dashboards, you’ll know how remarkable this is. What once took a developer days or weeks can now be done in minutes. Data storytelling is a true realisation of some of that initial AI vision from Dartmouth.
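To appreciate what is being automated here, it helps to see the kind of work a data story compresses: aggregating the data, finding the headline insight, and phrasing it as a narrative. The sketch below illustrates that pipeline in miniature; the order rows are illustrative stand-ins, not the real Northwind data.

```python
from collections import defaultdict

# Illustrative stand-ins for Northwind-style order rows: (category, revenue).
orders = [
    ("Beverages", 1200.0), ("Beverages", 800.0),
    ("Condiments", 450.0), ("Seafood", 975.0),
]

# Step 1: aggregate revenue by category.
revenue = defaultdict(float)
for category, amount in orders:
    revenue[category] += amount

# Step 2: extract the headline insight (top category and its share).
top, top_rev = max(revenue.items(), key=lambda kv: kv[1])
share = top_rev / sum(revenue.values())

# Step 3: phrase the insight as a narrative sentence.
narrative = (f"{top} is the leading category, contributing "
             f"{share:.0%} of total revenue.")
print(narrative)
```

A generative model performs each of these steps, and the visual design around them, from a single natural-language request.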
However, as the technology matures to realise more of the Dartmouth vision, another challenge emerges. This is not about AI itself, but about the role of human intelligence. This is why, despite the obvious benefits, it would be a critical oversight for any data strategist to embrace automated storytelling without also addressing a large hidden risk, one which stems from an over‑reliance on the machine and an under‑utilisation of the human potential.
The risks do not stem from technical flaws in AI, such as hallucinations. These issues can be managed and their impact reduced with methods like retrieval‑augmented generation (RAG). Nor do the risks lie in AI’s current shortcomings in simulating human data storytelling. In the coming years, we can expect these limitations to diminish as innovations close the gap between human and machine capabilities. As Demis Hassabis, Nobel Laureate and CEO of Google DeepMind, noted in an interview with The Guardian this year:
“we’ll have something that we could sort of reasonably call AGI [Artificial General Intelligence], that exhibits all the cognitive capabilities humans have, maybe in the next five to 10 years, possibly the lower end of that.”
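The retrieval-augmented generation mitigation mentioned above reduces hallucination by grounding the model's answer in retrieved source text rather than its parametric memory alone. Below is a minimal sketch of the retrieve-then-prompt step, where toy keyword overlap stands in for a real vector search and the resulting prompt would, in a real pipeline, be sent to an LLM.

```python
import re

# Toy document store standing in for an organisation's source data.
documents = [
    "Q3 revenue grew 12 percent year on year, driven by the EMEA region.",
    "Customer churn fell to 4.1 percent after the loyalty programme launch.",
]

def tokens(text):
    """Lowercased word set, used for the toy relevance metric."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=1):
    # Rank documents by shared-word count with the query
    # (a real system would use vector similarity instead).
    q = tokens(query)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(query, docs):
    # Constrain the model to the retrieved context, curbing hallucination.
    context = "\n".join(retrieve(query, docs))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

prompt = build_prompt("What happened to revenue in Q3?", documents)
print(prompt)
```

Because the prompt carries only the retrieved passage, the generated story stays anchored to the underlying data.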
Ironically, the risks of automated data storytelling emerge from the very progress we make towards AGI. As the gap between human and AI narrows, and as AI becomes increasingly democratised and commoditised, data storytelling will become a standard productivity tool — cheap, universally accessible, and as commonplace as email or cloud storage. As that happens, the edge that companies enjoy from human-crafted stories will diminish significantly, because everyone will have access to the same automated capability.
Warren Buffett describes competitive advantage as a moat, like the ditch surrounding a castle that protects it from attack. Until now, human intelligence has been that moat. As AI displaces it, the moat, and the human differentiator it represents, shrinks. Companies must therefore build a new moat, one defined by a reimagined relationship between humans and technology, where the human advantage is given a renewed purpose.
Building a new moat has been made more difficult due to the anthropomorphisation of AI. This is a problem explored by a number of academics and commentators, including renowned MIT Professor Sherry Turkle, whose research covers human-technology relationships. As she wrote in her book, Alone Together (2011):
“Technology is seductive when what it offers meets our human vulnerabilities. And as it turns out, we are very vulnerable indeed.”
To redefine the human-technology relationship, we must first dehumanise AI, see it strictly as a tool, and view data storytelling as a product of that tool. The human differentiator will be new ways of interacting with these tools.
So, what does that human differentiator look like? I would suggest that organisations need to create a new set of human capabilities for interacting with AI-generated stories. Borrowing from Andrej Karpathy, who introduced the term vibe coding, let’s call these new capabilities vibe data storytelling.
Much like the shift towards vibe coding, where AI democratises and commoditises code creation, the human role in data storytelling is to provide the vibe. In this new relationship, the vibe data storyteller should be able, using natural language, to direct and steer automated data stories while interrogating the output for unspoken assumptions, cultural context, and potential biases. This relationship can elevate automated data stories from bland products into compelling narratives that inspire, persuade, and resonate with people.
Andrew Ng once said of vibe coding:
“As coding becomes easier, more people should code, not fewer!”
The same applies to storytelling. As AI makes it easier to generate stories, more people, not fewer, should be empowered to shape them with vibe.
In fact, by cultivating more vibe storytellers, organisations can rebuild the moat to be wider and deeper than before. Those that see AI only as a means of replacing human labour may gain efficiency, but little more. They may well be overtaken by those with a broader vision of AI, who treat it as a catalyst for new human capabilities and renewed competitive advantage.