AI may be rewriting the rules of product development, but some of its most interesting builders aren’t chasing hype. They’re learning, often the hard way, how to ship something people actually want.
Millette Gillow learned that lesson early. While working with founders inside The Tech Bros — a deliberately ironic name for a collective and accelerator that supports early-stage builders — she kept seeing the same pattern repeat: slick demos, enthusiastic feedback, and then… nothing. Products that impressed in pitch meetings quietly failed once they reached real users. “It’s super easy to end up building something that demos well yet no one wants,” she said. “That’s why I highly recommend mom-testing anything you build before you build it.”
By “mom-testing,” Gillow means stripping away insider language and imagined use cases, and asking a brutally simple question: would a non-technical person actually use this, understand it, or pay for it? Inside The Tech Bros’ programs, she’s seen founders abandon features that sounded impressive but failed that test – AI copilots that required too much setup, automation tools that saved seconds rather than solving real pain, or products that users praised in theory but never returned to in practice.
What users say vs. what they do
That kind of blunt pragmatism has become a hallmark of a new wave of AI product makers: technically fluent, design-literate, and increasingly skeptical of labels like “AI-native.” Gillow argues those definitions miss the point. “They’re inconsistent and can be quite woolly,” she said. “If your core product is AI, you can probably call yourself AI-native. Otherwise, you’re just a company — and that’s absolutely fine.”
The focus, she argues, should be on whether something works in the messy reality of people’s lives. “Great businesses are great businesses, whether they’re ‘AI-native’ or not, as long as they’re solving a real problem.”
That tension between what users say they want and how they actually behave is something Swedish founder Amalia Berglöf has encountered firsthand. Her company, Crewcial, is a community-tech startup aimed at helping professionals build meaningful, durable networks — not just collect contacts. Early on, she believed she knew what that should look like. Her users, she thought, wanted structured networking tools and clear prompts for engagement.
In calls and demos, they told her exactly that. Then they used the product — and did something else entirely.
“What people say in demos and in calls isn’t even close to what they do as soon as they try it out,” Berglöf said. Users described an idealized version of themselves: highly social, proactive, and intentional. Their behavior told a different story. Engagement dropped when interactions felt forced. Features designed to encourage participation often created friction instead.
“I realized I was running into a catch-22,” she said. “I needed to build a better basis before even building the actual networking experience.” Instead of doubling down on her original idea, Berglöf paused development and re-examined the problem. She began observing what users gravitated toward organically — passive signals, lightweight interactions, and moments that felt useful rather than performative.
Listening to behavior rather than feedback reshaped Crewcial’s direction. “I listened to my users instead of sticking to my idea,” she said. “I’m so thankful for that.”
The speed trap
Both founders agree that AI has made experimentation easier than ever — and that this is both a gift and a trap. Tools like Cursor and Claude, which Berglöf calls “life-changing,” allow solo founders to move from idea to minimum viable product (MVP) in minutes. But that speed can mask deeper issues. “Scaling responsibly is harder than building fast,” she said, especially for non-technical founders who may not see where systems will break under real-world use.
That pressure to move quickly is also forcing builders to rethink what “good product sense” means in the AI era. Berglöf worries that some founders are outsourcing judgment too readily, particularly when it comes to user research. “I’m worried about people using LLMs for user research without understanding their biases,” she said. In areas like health, she noted, models trained on skewed data can quietly reinforce gaps — including poor coverage of women’s issues — leading to products that simply don’t work for large parts of their intended audience.
In her own work, that awareness has shaped how she evaluates AI-driven insights, treating them as inputs rather than answers. “Those who understand the problem and understand distribution will win,” she said, not those who assume the model knows best.
The invisible assistant
For Gillow, the end goal is technology that disappears rather than dominates. “I’m a big believer in tech that gets out of our way,” she said. The most successful AI products, in her view, will feel less like tools and more like quiet infrastructure: invisible personal assistants doing useful work in the background rather than demanding attention.
Design plays a critical role in making that possible, and Gillow believes it’s an area where startup culture is shifting. “There’s a certain snobbishness among ‘tech bros’ when it comes to good design,” she said. “But technical ability and an eye for aesthetics are a dream combo.” She argues that women builders, often combining engineering, design, and user empathy, are helping to reset those priorities.
Berglöf sees the same trend in Sweden’s startup scene, supported by structural factors such as a law that allows employees to take six-month sabbaticals to start a company. “We’re seeing more women who can both code and design, and that’s incredibly powerful,” she said. “Understanding your client’s job-to-be-done is more important than technical skills today. Slapping an AI label on your product doesn’t work anymore.”
Both founders see the next phase of AI innovation as quieter, more grounded, and more consequential. Berglöf hopes it will be defined by companies tackling problems like climate resilience, education, and democracy – less obsessed with the mechanics of AI, and more focused on why it’s being used at all.
Gillow puts it more simply. “Build stuff that helps people,” she said. “And make sure it actually ships.”