R&D stories

Let Me Tell You: Towards live, participative storytelling. Powered by AI

Alessandro Zangirolami

Senior Media Specialist

Brand experiences


A fresh approach to reshaping brand narratives through the seamless integration of AI’s creative power and human interaction

“The force of […] images comes from their being material realities in their own right […], potent means for turning the tables on reality — for turning it into a shadow.”

Susan Sontag
For the last hundreds, possibly thousands, of years, we humans have strengthened our rightful position as storytellers and creative animals, extracting narratives from almost anything in sight, and we have become very, very good at it.
So good that brands and entire industries have now understood that their strength lies in being story-driven to their core, and in going all-in on that. The idea is: if every aspect of what brands are and do CAN be a story (their vision, people, heritage, and products), then they aren't just part of the conversation — they ARE the conversation. They are storytelling ecosystems, relentlessly reshaping and growing the artistry.
Into all this steps AI, at times challenging us on our own storytelling turf, at others empowering us in ways literally unimaginable only three years ago. Suddenly, here is a competing entity that not only surprises us with a great, albeit partial, gift of creation, but can at the same time expand our own ability to narrate, create, and imagine like never before: a sparring partner, a megaphone, a competitor, and a cheerleader. All within emergent models that are giving shape to our wildest dreams (hallucinations?) and that, in the near future, are destined to become the engine translating what we do into stories and new spaces of meaning.
Furthermore, we are increasingly observing signals of a movement towards a more open-source approach to brand storytelling. The strongest brands we see now are those with an open approach to their narratives, one that allows for a cultural share of voice. In this landscape, the user becomes not only a herald of a brand's voice, but a literal, canonical part of what it says.
We at MAIZE tend to get very excited by these things, so our crew tried out an idea and envisioned something very simple and very complex at the same time: "Let Me Tell You", an AI-powered system that transforms human-machine dialogue into text, voice, and images in real time, all converging into a short story: an ever-changing hallucination, open between the possible and the fantastic.

It is our newest exploration of technological and behavioral interstices, and it aims at creating tangible, participative moments of live, impromptu storytelling. In its current version, it takes the shape of an experiential room where the participant is offered one of the most fundamental moments of our human journey: the chance to be told a tiny little story. A brief yet immersive story, uniquely generated here and now from the participant's initial dialogue with their surroundings, that is, with the machine.


During development, we put some of the emergent possibilities of machine-language and content-generation models to the test in different fields. Beneath the visual experience of Let Me Tell You is a text-to-image (and audio) diffusion system: a series of intertwined generative models trained to synthesize images, live, from various kinds of inputs and text descriptions.


We ask the participant something simple, something rooted in feeling, brand, and product, and their response triggers a generative pipeline that blooms in several directions at once. A GPT model fleshes out a story around the initial responses; a text-to-image (diffusion) model starts a visual translation of the story's content, synthesizing everything into a sequence of representative images or animations; in parallel, a voice model synthesizes the entire story into warm, spoken-word audio narration. All in real time.
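In code terms, this flow is a fan-out pipeline: one text-generation step, then image and audio generation running in parallel. Below is a minimal Python sketch of that shape; `generate_story`, `render_images`, and `synthesize_voice` are hypothetical stand-ins for the actual models, not MAIZE's real APIs.

```python
from concurrent.futures import ThreadPoolExecutor


def generate_story(prompt: str) -> str:
    # Hypothetical stand-in for a GPT-style text model.
    return f"Once upon a time, {prompt.lower()}. The end."


def render_images(story: str) -> list[str]:
    # Hypothetical stand-in for a diffusion model: one "frame" per sentence.
    sentences = [s for s in story.split(".") if s.strip()]
    return [f"frame_{i}.png" for i, _ in enumerate(sentences)]


def synthesize_voice(story: str) -> bytes:
    # Hypothetical stand-in for a text-to-speech model.
    return story.encode("utf-8")


def tell_me(participant_answer: str) -> dict:
    """Flesh out the story first, then fan out image and audio
    generation in parallel, mirroring the pipeline described above."""
    story = generate_story(participant_answer)
    with ThreadPoolExecutor(max_workers=2) as pool:
        images = pool.submit(render_images, story)
        audio = pool.submit(synthesize_voice, story)
        return {"story": story,
                "images": images.result(),
                "audio": audio.result()}
```

In a real deployment the stubs would be replaced by network calls to the respective models, and the results would stream to the screen and speakers as they arrive rather than being collected in a single dictionary.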


Once the models are all up and running, the output for the participant is almost immediate: from a dark, suspended environment, the images narrating each participant's story begin to fill the curved immersive screen, like floating illustrations of what each person is being told. The machine speaks to them through a voice and eyes conjured out of thin air, telling them their unique story in a somewhat soothing, implausibly intimate experience.


The ‘toolkit’ unlocked by this project resonates with a lot of what we do here at MAIZE: in particular, it lets us find interesting ways to make technology and emerging-language research converge towards a new stepping stone for our clients’ brand-experience possibilities.


“Let Me Tell You” can power up the quality of physical activations across different scales: in-store and retail, pop-up events, big industry conventions, or more artistic experiences. What if you could enter a more intimate relationship with your favorite new bag at your favorite high-street store, touch it, move it, talk to it, and in return have it tell you a story uniquely generated for you? What if you could create an endless, public, branded story-stream by adding onto what those before you created, an epic user-brand narrative that simply never stops? These are just a few quick-and-dirty examples of what is technically possible through this systematic dialogue between live interactive technologies and generative AI models. And we are only beginning to scratch the surface.


From a technical standpoint, the system works: it is scalable, adaptable to different outputs and briefs, and constantly upgradable to incorporate foreseeable hardware and technological breakthroughs in the AI and interaction fields. And, after testing it out, it almost feels like (machine) magic unfolding before your eyes.


A project like Let Me Tell You can greatly expand the possibility for designing meaningful experiences for our clients, through activations that are truly participative and distributed. At the same time, it responds to their needs for relevant, spirit-of-the-time solutions to their big questions:

  • It’s an opportunity to empower and encourage users to co-create and own a ‘cultural share’ of the brand’s voice;
  • It’s inherently playful, revisiting practical concepts like ‘product discovery’ and ‘brand heritage’ through the uniqueness, surprise, and awe-inspiring moments that generative-AI models bring to the table;
  • It allows for the creation of a larger brand universe, through unique, ever-growing narratives that can only add to the brand’s voice.


Decentralizing brand narratives towards participants and communities is the way forward. The level of empowerment that current and foreseeable AI models offer is about to reshape the way we think about and create user experiences, especially (but not only) physical, immersive ones. In fact, it has already changed how we interact with machines in our creative work, in ways that surpass even the impact interactive design systems had a few years back, taking that knowledge capital to the stratosphere.


We can all agree that the first, original tryout of user empowerment in the early 2000s wasn’t exactly flattering, nor truly effective (we remember what user-generated content used to look like, right?), but the voltage of creativity and responsiveness that these systems bring to the real-time, live-experience game is something else, and it has already started to reframe the landscape towards us, the users. Ahem, participants.


Do you want to know more about Let Me Tell You?


Pick a channel and start a conversation.