An Interview with Mina Azimov, Founder of RapidEye
Life in Motion: Charting the World’s Pulse is a citizen science project that invites anyone, anywhere, to contribute short videos of daily life, cultural moments, local events, or emerging trends – helping train AI to better understand the world as it truly is: authentic, dynamic, and shaped by real human experiences. Your perspective helps build a smarter, more inclusive AI platform for the future.
What inspired you to create RapidEye?
Growing up, I was always drawn to the way stories on screen connect us to the world – and to each other. As I got older, my friends and I would often point out amazing places, clothes, or moments we saw in shows or films, wondering how to find them in real life. But tracking those things down online was difficult – and often impossible.
Later, while leading the product and design team at NBCUniversal, I was approached with a unique project: The Golf Channel was collaborating with Tory Burch and wanted to explore how to bring that partnership to life digitally. I saw it as the perfect opportunity to build something I had long envisioned – an interactive video system where viewers could engage directly with what they saw on screen. We began experimenting with computer vision, which in 2016 was still in its early stages. The timing wasn’t quite right, but the vision stayed with me.
Not long after, we partnered with Amazon Rekognition to integrate computer vision into NBCUniversal’s central post-production workflows, giving us a firsthand view of how AI could transform media operations.
As AI tools evolved, I realized the technology had finally caught up to the idea. That’s when I built RapidEye – a platform where you don’t just watch a video; you converse with it, ask questions, get answers, and even act on what you see.
That idea – of making video interactive – was only the beginning. What truly excites me is that people from around the world can now help shape this platform – and in doing so, shape how AI understands and responds to the world around us. That’s the heart of Charting the World’s Pulse: an open invitation for Citizen Scientists to help build a living, learning map of human experience.
When you think of “charting the world’s pulse,” what does that mean to you personally?
Growing up, television was always on in the background – the hum of stories, music, and news made people feel connected to something larger. Today, we’re more connected than ever through streaming, TikTok, Instagram, Netflix, Spotify, and endless feeds. But despite that constant connection, it’s easy to feel disconnected from what’s truly happening in the world.
I used to think social media made us globally aware, but I’ve come to realize we still live in algorithmic bubbles. During the pandemic, I believed we were united through a shared digital experience. But recent global protests and humanitarian crises have made me realize how fragmented our perspectives are. For example, during the early days of the protests in Iran or the women-led demonstrations in Afghanistan, access to firsthand, real-time content was sparse unless you followed specific individuals. Broader public awareness often came late, if at all.
I imagined a platform where real people, in real time, could share what they’re witnessing – where everyday life, cultural moments, and emerging movements are mapped globally. That’s why we call it Charting the World’s Pulse. It’s not just about watching videos. It’s about building a living system that learns from us.
And because this is an AI-native video platform, every contribution shapes how our models evolve. We’re building multimodal systems – integrating computer vision, large language models, and real-time recognition – that respond to what we see and say. This platform trains through direct engagement with reality.
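To make that concrete, here is a minimal sketch of what an “ask the video” flow could look like. It is purely illustrative: the function names, labels, and stubbed calls below are hypothetical placeholders for a real vision model and language model, not RapidEye’s actual pipeline.

```python
# Hypothetical sketch, not RapidEye's code: a vision pass labels sampled
# frames, and a language step answers questions grounded in those labels.
# Both model calls are stubbed so the example runs standalone.

from dataclasses import dataclass, field

@dataclass
class Frame:
    timestamp: float   # seconds into the video
    labels: list[str]  # what a vision model detected in this frame

@dataclass
class Video:
    title: str
    frames: list[Frame] = field(default_factory=list)

def detect_labels(video_path: str) -> Video:
    """Stand-in for a computer-vision pass over sampled frames."""
    return Video(
        title=video_path,
        frames=[
            Frame(1.0, ["street market", "textiles"]),
            Frame(4.5, ["street market", "musician", "crowd"]),
        ],
    )

def answer_question(video: Video, question: str) -> str:
    """Stand-in for a language-model call grounded in detected labels."""
    seen = sorted({label for f in video.frames for label in f.labels})
    return f"Asked: {question!r}. Detected so far: {', '.join(seen)}."

video = detect_labels("community_market.mp4")
print(answer_question(video, "What is happening in this scene?"))
```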
How do you see everyday people shaping the future of AI through this project? Why do their voices and videos matter to you?
I read a compelling book called The Alignment Problem by Brian Christian, which highlights how most AI systems today are trained on internet-scraped datasets – datasets with historical biases baked in. Collections like COCO, ImageNet, and LAION, while massive, often present a skewed picture of the world. If left unchecked, these biases can influence how AI makes decisions in education, healthcare, and government.
A 2021 Stanford study on foundation models emphasized how dataset choices can “amplify existing inequalities” and shape downstream behaviors in unintended ways. That’s why citizen-contributed data is so critical. It provides a corrective lens and helps AI reflect the world as it is – not just as it’s been scraped and summarized.
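As a toy illustration of that amplification effect (my example, not the study’s), consider a “model” that does nothing but learn label frequencies from a skewed dataset: a 90/10 imbalance in the data becomes a 100/0 imbalance in behavior.

```python
# Toy illustration: frequency learning turns imbalance into erasure.
from collections import Counter

# Imagine a web-scraped dataset where one depiction dominates.
scraped_labels = ["office worker"] * 90 + ["street vendor"] * 10
counts = Counter(scraped_labels)

# A frequency-based "model" always predicts the majority label...
prediction = counts.most_common(1)[0][0]
print(prediction)  # -> "office worker", for every input

# ...so the 10% minority depiction vanishes from outputs entirely.
# Citizen-contributed clips rebalance the counts the model sees
# before that downstream behavior is locked in.
```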
RapidEye’s AI system is designed to improve through this kind of real-world input. We’re building a new dataset composed of videos contributed by people from around the globe. These aren’t just uploads – they’re contextual, labeled, and paired with user engagement to help our AI models learn from actual behavior and cultural context.
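As an illustration only, a contribution record along these lines might bundle the clip with its context and engagement signals. Every field name below is a hypothetical placeholder, not RapidEye’s actual schema.

```python
# Hypothetical contribution record; field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Contribution:
    video_uri: str            # where the clip is stored
    contributor_id: str       # pseudonymous contributor handle
    recorded_at: datetime     # when the clip was captured
    location_hint: str        # coarse, contributor-supplied place
    tags: list[str]           # contributor labels: objects, actions, events
    engagement: dict[str, int] = field(default_factory=dict)  # e.g. views

example = Contribution(
    video_uri="s3://example-bucket/clips/market-day.mp4",
    contributor_id="citizen-4821",
    recorded_at=datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc),
    location_hint="Accra, Ghana",
    tags=["open-air market", "kente cloth", "drumming"],
    engagement={"views": 120, "questions_asked": 14},
)
```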
Contributors’ voices aren’t just important. They’re essential. This isn’t just about training better models – it’s about making sure the AI that shapes our lives sees the world more clearly, equitably, and authentically.
What do you hope people feel when they contribute to Life in Motion? Empowered? Connected? Seen?
All of the above. Like most viewers today, I watch content on Instagram, TikTok, and YouTube. These platforms started by empowering creators to share their voices. But many creators we’ve spoken with say their content is no longer surfaced, even to their followers, unless they pay for exposure. Others worry their content is being deprioritized or algorithmically suppressed.
We’re building RapidEye to be a platform where your voice isn’t just heard – it’s extended. Through our AI assistant, your content becomes interactive. Your world becomes searchable, explorable, and meaningful to others. For researchers, it’s a new lens into global change. For audiences, it’s a direct line to human experience.
At the heart of it, we want contributors to feel like they’re not just uploading a video – they’re co-creating a living, learning system that understands and reflects our world in motion.
Get Involved
If you’ve ever wanted to help shape how AI sees and understands the world, this is your chance. You don’t need a lab or a degree – just your perspective.
🌍 Whether you’re filming nature in your neighborhood, documenting life in your community, or capturing cultural moments around you, your video can help AI learn in a way that’s human-first and globally inclusive.
🎥 All you need is a phone and a few minutes. Contribute a short video and tag what you see – objects, actions, or events. Your data will help train RapidEye’s AI models to better understand context, diversity, and real-world environments.
🤝 This is about more than algorithms. It’s about building a global community that helps AI evolve responsibly and inclusively. Join us on SciStarter at Life in Motion to get started.