Intro
As a co-founder of this self-funded startup, I wore every hat: researcher, product manager, UX designer, animation specialist, AI engineer, frontend developer, and marketing specialist.
The product was a cross-platform AI-powered storytelling and educational app that generated personalized stories for young children. Kids (or their parents) could pick from a library of characters, settings, and genres, or build entirely custom narratives from their own ideas. Every story was designed to be genuinely educational: not academic, but meaningful, age-appropriate, and enriching rather than disposable AI-generated filler. This was enforced through the AI workflow itself.
Stories came in two formats: illustrated picture books and audiobooks. Picture books could be read independently by the child, read aloud by a parent, or narrated by the app with pages turning along with the voice. Both formats offered a selection of preset voices or the option to use a custom one, like a parent's, so a child could hear a familiar voice even when that parent wasn't in the room.
The idea originated with two of the co-founders, both UX designers and mothers, who wanted a safe and genuinely age-appropriate way to keep their young kids engaged during car rides or the occasional moment when a parent's hands were full at home.
I came on board to contribute generative AI and LLM technical expertise, along with UX design. A developer joined after me, making four co-founders total. We built the product entirely on personal funds, working evenings and weekends over roughly a year and a half.
Research
I led competitive analysis across the emerging children's AI storytelling space, evaluating apps on content quality, safety filtering, UX patterns, and how they handled AI integration.
Alongside that, I did deep technical research: benchmarking LLMs, image generation models, text-to-speech models, and their APIs against each other on output quality, per-request cost, response latency, and the robustness of their safety controls. This research informed both our product positioning and our technical architecture, revealing where the market was underserving parents, especially around content safety and the overall quality of AI-generated material.
Formal user research (interviews, usability testing, surveys) was not conducted during my involvement. Three of the co-founders were parents with children in the exact target age range, providing continuous firsthand insight into what parents and kids actually needed. Some foundational research had also been completed before I joined.
Product Management
Product decisions were made collectively across the founding team. We defined the product's positioning, mapped out core use cases and user stories, built the roadmap, and scoped what the MVP would include for an initial iOS release. This was an ongoing, iterative process as technical constraints reshaped our plans regularly. AI processing times influenced interaction design, and the cost per generated story forced hard conversations about what we could realistically sustain without outside investment.
Design System
One of the other co-founders selected the initial color schemes, which were then collectively refined by the team.
I built the design system in Figma on top of that foundation: defining design tokens, establishing a components library following atomic design methodology, and structuring everything to scale across mobile, tablet, and web (even though the first release was iOS-only). Having that system in place early paid off throughout the project. It kept the UI consistent as we iterated and made it straightforward to spin up new screens without reinventing visual decisions each time.
UX and Visual Design
Information architecture, user flows, and visual design were collaborative work shared with the other designers. This project was a valuable experience in designing UX around generative AI and LLM integration.
At the time, very few consumer products were doing this well, and most competing apps treated AI as a backend black box with little thought given to how it shaped the user experience. Working directly on both the AI pipeline and the design gave me a practical understanding of how generative AI's capabilities and limitations should inform UX decisions, something that has only become more relevant as AI-powered products have become the norm.
Development
I designed, built, and stress-tested the generative AI pipeline: developing prompts for reliable output, writing Python scripts for automated asset generation, and implementing multi-layered safety mechanisms to filter and validate every piece of AI-generated content before it could reach a child.
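The layered filtering approach can be sketched as a chain of independent gates, where a story is rejected if any layer fails. This is a minimal illustration, not the production pipeline: the term list, thresholds, and function names here are hypothetical stand-ins, and the real system also relied on model-level safety settings and prompt-level constraints.

```python
import re

# Placeholder blocklist and threshold for illustration only.
BLOCKED_TERMS = {"weapon", "violence"}

def layer_moderation(text: str) -> bool:
    """Layer 1: reject stories containing any disallowed term."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words.isdisjoint(BLOCKED_TERMS)

def layer_structure(text: str) -> bool:
    """Layer 2: require a minimally complete story
    (non-empty, ends with sentence-final punctuation)."""
    stripped = text.strip()
    return bool(stripped) and stripped[-1] in ".!?"

def layer_readability(text: str) -> bool:
    """Layer 3: crude age-appropriateness proxy via average word length."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return False
    avg_len = sum(len(w) for w in words) / len(words)
    return avg_len <= 6.0

def validate_story(text: str) -> bool:
    """A story reaches the child only if every layer passes."""
    checks = (layer_moderation, layer_structure, layer_readability)
    return all(check(text) for check in checks)
```

Keeping each layer as a separate pure function made the checks easy to test in isolation and to port later into the server-side workflow.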
The developer and I jointly tested APIs using Postman before integration, and my working Python scripts became the foundation for the Firebase Cloud Functions that the developer built to run the AI workflows on the server-side.
Firebase setup, data architecture, storage, and management were a shared effort between us. I brought some prior experience with Firebase from personal projects, while the developer handled the deeper infrastructure work, including security hardening to protect the database, Cloud Functions, and user accounts. I also helped populate the databases with required content and assets.
On the frontend, I built the initial key screens in Swift for iOS, establishing the visual and structural foundation. The developer then took those screens, implemented the background logic, and extended the build while I shifted my focus back to AI development and UX design.
Marketing and Animations
I designed some of the brand identity, including the logo, and created illustrations used as the app's mascot, in-app visuals, and other marketing materials.
The animations were entirely my work: produced in Adobe After Effects and as Lottie animations in Figma, and used within the app. I also built the marketing website end to end on Squarespace, selecting and customizing a template, then writing custom HTML and CSS to fill the gaps where the platform's native tools weren't enough.
Final Design
The product reached a complete, functional MVP. A working build was live on Apple TestFlight with real user accounts and Firebase powering the backend. All primary flows were implemented: browsing and selecting characters, settings, and genres; creating custom stories; viewing illustrated picture books with multiple reading modes (self-read, parent-read, app-narrated); listening to audiobooks; and choosing between preset or custom voices. The brand assets, App Store screenshots, and marketing site were all complete and production-ready.
Final Designs and Developer Specs
Results
The MVP was finished but never launched publicly. The product delivered genuinely high-quality output: illustrations that looked hand-drawn, lively voice narration with real intonation, and a robust safety pipeline built specifically for children's content. That quality came at a cost per story that required outside investment to sustain, and the team chose not to compromise on output quality to cut expenses since that quality was the entire value proposition.
The broader AI landscape also moved extraordinarily fast during the development period. Apple announced free on-device AI capabilities, OpenAI made ChatGPT with voice and image generation freely available to all users, and the App Store was flooded with competing apps. A small, self-funded team with no dedicated resources simply could not keep pace with the speed at which major companies were commoditizing AI-generated content. The decision to freeze the project was a recognition that the market had shifted beyond what was within our control.
This was my first experience building a product from 0-to-1 as a co-founder of a startup, and it remains one of the most formative. It gave me hands-on experience and depth in every discipline a product touches: AI prompt engineering and safety architecture, design systems, data architecture, product management, research, UX design, frontend development, brand identity, and marketing. The biggest takeaway was learning, through direct experience, how AI's technical realities (processing latency, generation costs, safety filtering complexity) aren't just engineering problems, they're product and UX problems that shape every decision from roadmap to interaction design.
© 2024 Aleksei Mikhailov. All rights reserved by the respective copyright owners. No part of this portfolio may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright owners.