AI Writes Code at Machine Speed. Holmes Raised €1.1 Million to Test Whether It Actually Works.

The conversation about AI in software development in 2026 has two chapters that rarely appear in the same article. The first is about speed: tools like Cursor, GitHub Copilot, Claude Code, and Codex have dramatically accelerated how fast code gets written. The second is about what happens after that code is written, and whether it actually works.
Holmes, a Belgian startup that raised €1.1 million in a pre‑seed round announced on May 12, 2026, is betting that the second chapter is the bigger commercial opportunity.
The company's thesis is specific and technically grounded. Robin Praet, one of Holmes's three co‑founders, described it directly: "Code today is written faster than ever, often with the help of AI. But whether the product actually works the way users expect is a different story. Holmes covers that exact gap: not whether the code is right, but whether the product holds up in real use."
The pre‑seed round was led by Syndicate One, with participation from Roeland Delrue and Willem Delbare, the co‑founders of Aikido, the developer‑focused cybersecurity startup; Louis Jonckheere, the co‑founder of Showpad, the sales enablement platform; and serial entrepreneur Thomas Van Overbeke. The funds NewSchool, RDY, and 100IN also participated.
The Founding Team's Exit Track Record
The Holmes story starts with who the founders are, because their backgrounds explain both why they are credible to investors and why they understand the specific problem they are solving.
Robin Praet and Robbrecht Delrue co‑founded Smartendr, an AI ordering solution for the hospitality industry, which was acquired by OrderBilly in September 2025. The acquisition gave both founders direct experience of taking a software company from product‑market fit to exit, specifically in a context where the end product needed to work reliably for non‑technical end users in high‑pressure service environments.
Sofie Buyse brings a different and arguably more directly relevant background. She was the first employee at Henchman, an AI tool for legal professionals that provides contract and knowledge base intelligence. Henchman was acquired by LexisNexis for €136.1 million, approximately $160 million, in June 2024. Buyse was the product manager during Henchman's growth and acquisition, and she described her personal experience of the QA problem with unusual specificity: "At Henchman, I ran into it myself: QA is work everybody knows is critical, but nobody wants to own. Skilled QA testers are hard to find and expensive to hire, so testing usually falls on product managers and developers, on top of everything else they're already doing. And when they lose focus for even a moment, those bugs end up in front of users. That's why we built Holmes: it takes testing off their plate and runs it continuously in the background, so the team can keep building without worry."
Three co‑founders with two successful exits, investor backing from the founders of a well‑capitalized cybersecurity company and a successful SaaS platform, and personal experience of the problem they are solving. That founding profile explains why Syndicate One led a pre‑seed round rather than waiting for more commercial evidence.
What Holmes Actually Builds
Holmes is an autonomous quality assurance platform with a specific design philosophy: it does not require engineers to write and maintain test scripts. Instead, it observes how people actually use the product, learns the flows that real users follow, and then continuously tests those flows to verify that they still work after code changes.
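The general approach can be sketched in miniature. The following is a hypothetical illustration of flow‑based testing in general, not Holmes's actual implementation: a recorded sequence of user actions is replayed against the application after each change, and the test fails when the outcome users originally observed no longer holds. All names here (`App`, `replay`, the cart scenario) are invented for illustration.

```python
# Hypothetical sketch of flow-based testing; not Holmes's API.
# A recorded "flow" is the sequence of actions a real user performed.
Flow = list[tuple[str, dict]]

class App:
    """Toy application under test."""
    def __init__(self) -> None:
        self.cart: list[str] = []

    def add_to_cart(self, item: str) -> None:
        self.cart.append(item)

    def checkout(self) -> str:
        return "confirmed" if self.cart else "empty-cart-error"

def replay(flow: Flow, expected_outcome: str) -> bool:
    """Replay a recorded flow against a fresh app instance and check
    that the outcome users originally observed still holds."""
    app = App()
    outcome = None
    for action, kwargs in flow:
        outcome = getattr(app, action)(**kwargs)
    return outcome == expected_outcome

# Flow learned from real usage: add an item, then check out.
purchase_flow: Flow = [("add_to_cart", {"item": "sku-123"}), ("checkout", {})]
print(replay(purchase_flow, "confirmed"))  # True while the flow still works
```

The point of the pattern is that the test asset is the recorded flow itself, not a hand‑written script, so it survives refactors that would break selector‑based test code.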
The QA process has become a bottleneck at exactly the moment AI coding tools have made it most critical. Engineers manually write and maintain tests, while QA testers click through products to check that everything still works. AI tools like Claude and Codex already help developers write code faster, but what looks correct in the code can still cause problems in a real‑world environment.
The specific failure mode Holmes is designed to catch is the gap between code correctness and product behavior. A codebase where every function operates exactly as specified can still produce a broken user experience if the interaction between components produces unexpected state combinations that the individual tests did not cover. AI‑generated code is particularly prone to this category of failure because the AI optimizes for local correctness at the function level without full awareness of the systemic effects across the application.
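This failure mode is easy to reproduce in miniature. The checkout scenario and function names below are invented for illustration, not taken from Holmes: two functions that each pass their own unit tests can still combine into a state no user should ever see.

```python
# Illustrative example of locally correct code producing a broken
# product state; the scenario is hypothetical.
from dataclasses import dataclass

@dataclass
class Order:
    subtotal: float
    discount: float = 0.0

def apply_coupon(order: Order, percent: float) -> None:
    """Correct per its own spec: applies a percentage discount."""
    order.discount += order.subtotal * percent / 100

def apply_loyalty_credit(order: Order, credit: float) -> None:
    """Also correct per its own spec: applies a flat credit."""
    order.discount += credit

def total(order: Order) -> float:
    return order.subtotal - order.discount

# Each function is locally correct, but stacking them drives the
# total below zero -- a state neither function's unit tests cover.
order = Order(subtotal=20.0)
apply_coupon(order, 50)          # discount = 10.0
apply_loyalty_credit(order, 15)  # discount = 25.0
print(total(order))              # -5.0: a negative total shown to the user
```

An end‑to‑end check of the purchase flow catches this immediately; unit tests for either function in isolation never will, which is exactly the gap between code correctness and product behavior described above.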
Holmes integrates directly into the tools development teams already use, running inside the CI/CD pipeline and surfacing issues in the developer's existing workflow rather than requiring a separate testing environment. The platform is built for teams shipping at AI speed, and the company claims it catches problems before they reach users.
Robbrecht Delrue's description of the timing captures the product's commercial moment precisely: "No software company builds a large QA team from day one. But every company eventually reaches the moment when manual testing starts holding back their growth. That's the moment we built Holmes for."
The company is currently working with 30 design partners who are shaping the product, with broader access rolling out in the coming months. Its advisory network includes senior engineering leaders from Collibra, Luzmo, Upsellplus, and Lighthouse, providing both technical guidance and market intelligence from companies at different stages of growth.
The €1.1 million in pre‑seed capital will fund product development through the public launch milestone and establish the initial commercial pipeline. The founding team's exit track record, combined with the participation of the Aikido and Showpad co‑founders, creates a credibility profile that could position Holmes for a Series A conversation earlier than is typical for a pre‑seed company without similar pedigree.
The broader market for autonomous QA tools is being created by the same AI coding boom that Holmes is designed to address. As the volume of AI‑assisted code commits grows, the proportion of that code that undergoes meaningful quality assurance testing is declining. A company that makes rigorous QA automatic, continuous, and developer‑transparent is addressing a gap that will only grow as teams ship more code faster.