The World in Their Hands: Building Our Location Feature (and Getting It Wrong First)
Tandem's new location feature lets families co-read stories set anywhere in the world. Here's how we built it, what we got wrong, and what we did about it.
Author: Erica Layer, COO @ Tandem and mum of two
Last week, my son and I went fishing in Elk River, Minnesota, my hometown. Then we rode a dala dala through Kariakoo Market in Dar es Salaam. Then we followed an ice cream ball rolling through the streets of Old Town, Bern, Switzerland. Each story was set in a place that means something to us, sparking real conversations about culture, language, and travel.
This is what our new location picker does. You spin a globe, drop a pin on any place in the world, and Tandem generates a story set in that location. You read about local landmarks, written in the local dialect, with local details. It's the feature I've been most excited about since we launched.
But before I tell you how to use it, I want to tell you what we got wrong, and what we did about it.
Our idea
We built the location picker because we believe stories are one of the most powerful ways children learn about the world, not through facts or maps. A child who has met Sam navigating the subway in New York City, or Baraka selling bananas in Dar es Salaam, brings something different than a child who has simply read that those places exist.
We wanted families to be able to bring their world into those stories, the places that matter to them, and to discover places that are completely new to them too.
What we found in testing
When our team started testing the feature with our own kids, we loved it. After a few rounds of prompt engineering, the localisation was remarkable: specific street names, local foods, specific phrases, references that made you smile if you knew the place.
But we also noticed something uncomfortable.
When I generated stories set in Mozambique and Dar es Salaam, both followed a similar pattern unprompted: a young boy living in poverty, with a sick grandmother, wearing torn clothes, going out to grow food or sell bananas in the market. And we noticed other patterns too: stories set in places like the Isle of Man reaching for the same handful of dialect quirks and coastal clichés, rather than reflecting the actual diversity of life there.
Both issues point to the same underlying problem: AI models learn from the world's data, and that data is almost always biased, especially for people and regions that are underrepresented.
Why it matters
There is no single story for any place on earth. Walk through London or Lagos, Mumbai or New York, and you'll meet people who look different, dress differently, speak differently, work in every imaginable job, and live in every imaginable circumstance. A city like Dar es Salaam contains chemical engineers and street vendors, grandmothers who farm and grandmothers who teach, children in school uniforms and children at the beach. The same is true of a small island, a mountain town, or a suburb of Manchester. Every place is full of people living completely different lives.
We believe every child should be able to meet themselves in a story, and to be surprised by the characters they meet in places they have never been. That means the stories we generate cannot default to the most familiar version of a place, or the idea that is most represented in available data. They need to reflect the real range of human life that exists there.
This kind of representational bias is one of the risks of AI-generated content, precisely because it can look, on the surface, like authenticity.
What we did about it
As soon as we identified this pattern, we went back to the prompts that guide how our AI generates location-based stories.
It had started as a single line: just an instruction to include references to the place if it was real. We built it up from there: capturing sensory detail and local atmosphere, finding the small specific observations that make a resident feel seen rather than described, and explicitly telling the AI to avoid stereotypes.
But we were still missing something. A prompt can do a good job of evoking the smell of a market or the sound of a street, and still default, entirely unchallenged, to a narrow version of who might be standing in it. So we added one more layer: an explicit instruction to look past the most common portrayal of a place and find the diversity that exists within it.
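To make that layering concrete, here is a rough sketch of how a prompt might be assembled in stages. The function name, the instruction wording, and the layer ordering are all illustrative assumptions, not Tandem's actual production prompts.

```python
# Illustrative sketch only: the instruction text below is hypothetical,
# written to mirror the layers described in this post.

def build_location_prompt(location: str) -> str:
    layers = [
        # Layer 1: the original single-line instruction.
        f"If {location} is a real place, include references to it.",
        # Layer 2: sensory detail and local atmosphere.
        "Capture sensory detail: sounds, smells, weather, street life.",
        # Layer 3: small, specific observations that make a resident
        # feel seen rather than described.
        "Prefer small, specific observations a resident would recognise "
        "over generic descriptions.",
        # Layer 4: the explicit anti-stereotype instruction.
        "Avoid stereotypes about the place or its people.",
        # Layer 5: actively look past the most common portrayal.
        "Look past the most common portrayal of this place: characters "
        "should reflect the full range of jobs, ages, and circumstances "
        "found there.",
    ]
    return "\n".join(layers)

print(build_location_prompt("Dar es Salaam"))
```

The point of keeping the layers as separate strings is that each one can be tested, reordered, or removed independently when a new pattern shows up in generated stories.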
AI will always reflect what it has learned, and what it has learned skews towards the majority. But with careful prompting, you can push it to search further, to surface the range of people and lives and perspectives that are there in the data, just not always at the top.
Before launching, we generated dozens of stories across multiple locations and reviewed them using our story rater tool, which assesses each story for quality, representation, safety, and risk. It confirmed what we'd suspected, and caught a few things we'd missed: repeated character names, the same cultural shorthand appearing regardless of context, and details that carried assumptions about what life in certain places looks like without any plot reason to be there.
We worked through each of those findings, updated the prompts, and tested again. We also discussed the location picker at length with our independent ethics committee, who help us think through the risks of new features before we ship them. We're committed to publishing our bias audits openly and will keep running them regularly. If you want to see how we assess our stories, you can rate one here: https://rater.playtandem.com/
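A review pass like the one above can be approximated with a simple rubric structure. The dimension names below follow the four mentioned (quality, representation, safety, risk), but the scoring scale, thresholds, and flagging logic are purely illustrative assumptions; the real rater tool's schema is not described in this post.

```python
from dataclasses import dataclass

# Hypothetical sketch of a story-rating record; scale and thresholds
# are assumptions for illustration, not the real rater's rules.

@dataclass
class StoryRating:
    quality: int          # 1-5, higher is better
    representation: int   # 1-5: does the cast reflect real diversity?
    safety: int           # 1-5, higher is better
    risk: int             # 1-5: higher means riskier

    def needs_review(self, floor: int = 3, risk_ceiling: int = 3) -> bool:
        """Flag for human review if any positive dimension falls
        below `floor`, or if risk exceeds `risk_ceiling`."""
        low = min(self.quality, self.representation, self.safety) < floor
        return low or self.risk > risk_ceiling

ratings = [
    StoryRating(quality=5, representation=2, safety=5, risk=1),  # stereotyped cast
    StoryRating(quality=4, representation=4, safety=5, risk=1),  # fine
]
flagged = [r for r in ratings if r.needs_review()]
print(len(flagged))  # → 1
```

Separating representation out as its own scored dimension, rather than folding it into overall quality, is what lets a beautifully written but stereotyped story still get caught.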
What we're doing going forward
We're not claiming we've fixed it. Bias in AI is not a problem you solve once and move on from. It's something you watch, test, listen for, and keep working on. A single sentence in a prompt is not enough on its own. We'll keep testing across a wide range of locations, particularly places that are underrepresented in the data AI systems are trained on.
Every story in the app has a flag button. We read those reports and act on them. We also ask for feedback after every story, and that feedback shapes what we build and how we prompt. You are part of the process.
We chose to write this post because we think the honest story of building technology is: build the feature, test it, find the problem, fix it, test again, and publish with humility. We're proud of what the location picker does. We're also honest that we didn't get it right the first time, and that we'll probably need to keep refining it.
The goal, always, is that a child anywhere in the world, wherever you drop that pin, might meet a character who surprises them, delights them, or simply shows them that people everywhere are more than one story.
Erica
(COO @ Tandem)