Does the World Need Another Data Tool?
I ask myself this question more often than I'd like to admit.
There are over 400 companies in the "modern data stack" landscape. BI tools, ETL platforms, data warehouses, analytics engines, visualization layers, orchestration frameworks — you name it, someone's funded it. The data tooling market is worth tens of billions of dollars and growing. The last thing it seems to need is another entrant.
And yet, here I am, building one. So either I'm delusional, or the question itself is wrong.
I think the question is wrong.

The Saturated Market Illusion
From the outside, the analytics market looks saturated. Hundreds of tools. Billions in funding. Every conceivable problem seemingly addressed by multiple well-funded companies.
But here's what that landscape chart doesn't tell you: most of these tools serve the same customers. They're built for companies with dedicated data teams, six-figure analytics budgets, and the engineering bandwidth to stitch together a stack of specialized tools.
If you're a Fortune 500 company with a 20-person data engineering team, you're spoiled for choice. Snowflake or Databricks for your warehouse. dbt for transformations. Looker or Tableau for dashboards. Fivetran for ingestion. Airflow for orchestration. Monte Carlo for observability. The stack is deep, the options are plentiful, and the budget exists to make it all work.
But what if you're not that company? If you're a startup founder trying to add analytics to your product, a small business trying to understand your own data, or a team of two engineers who need insights but can't hire a data specialist or afford those specialized tools, then that landscape chart is useless to you. Those 400 tools might as well not exist.
The market isn't saturated. It's saturated for a specific customer. Everyone else is still underserved.
Why I Know This Personally
I didn't set out to build a data tool. I set out to build a customer success product.
The goal was specific: detect patterns to figure out whether a customer is likely to churn, or whether there's a way to expand revenue from them. To do that, we needed to track metrics across various data points and determine the health of each customer. Usage patterns, support tickets, billing data, product engagement. Stitch it all together and you'd have a picture of who's happy, who's at risk, and where the opportunities are.
This sounded easy. It wasn't.
The problem hit us almost immediately: every single business tracks their metrics in different ways. They use different tool stacks. Some have everything in Salesforce. Others have a patchwork of spreadsheets, databases, and SaaS products that don't talk to each other. There's no unified way of analysing data across these businesses without either paying for incredibly expensive tools or losing a significant number of nights of sleep trying to stitch it all together yourself.
We tried. We really tried. But building a product that fundamentally depends on data analytics is impossibly hard when the analytics infrastructure underneath assumes you're already a data expert with a big budget. We weren't. We were a small team trying to build a customer success product, not a data platform.
That product didn't fail because the idea was bad. Plenty of companies need better customer health tracking. It failed because the gap between "we have data" and "we can analyse data" was wider than we ever imagined, and every tool that claimed to bridge that gap either cost a fortune or required expertise we didn't have.
The LLM Bet
After the customer success tool, I stepped back and looked at the problem differently. Maybe the issue wasn't which analytics tools to pick. Maybe the issue was that configuring all of these tools was the core problem. Too many knobs, too many integrations, too much assumed knowledge.
So I did what most founders in this space have done recently. I thought: if I just add an LLM and let users ask what they want in plain English, they'll finally get the answers they need. No configuration headaches. No SQL expertise required. Just ask a question, get an answer.
I built Doorbeen.
I carried the lessons from the first product with me. The biggest one was that users were scared of connecting tools to their production databases. They didn't trust a black box with their data. So I made Doorbeen open source. Full transparency. You can read every line of code that touches your data.
It was a better product than the first one. The open source decision was right. The natural language interface felt magical in demos.
But here's the worst part: outside of demos, the agent usually failed to give users answers they could actually act on. It could generate SQL. It could return results. But the results were often wrong in subtle ways: wrong joins, wrong filters, wrong assumptions about what a metric actually means in that specific business. Text2SQL is impressive in a controlled setting with a clean schema. In the real world, with messy data, inconsistent definitions, and business logic the model has never seen, it falls apart.
And when it falls apart, users don't know. They get a confident-looking answer that's quietly wrong. That's worse than no answer at all.
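A toy sketch of the kind of silent failure described above. The schema, table names, and queries here are all invented for illustration; the point is only that a generated query which joins in an extra one-to-many table before aggregating looks reasonable, runs without error, and returns a confidently inflated number.

```python
import sqlite3

# Invented toy schema: one customer, two orders, three support tickets.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
CREATE TABLE support_tickets (id INTEGER PRIMARY KEY, customer_id INTEGER);

INSERT INTO customers VALUES (1, 'Acme');
INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0);
INSERT INTO support_tickets VALUES (100, 1), (101, 1), (102, 1);
""")

# A plausible generated query: joins tickets "for context" before summing.
# Each order row is repeated once per ticket, so revenue is tripled.
naive = cur.execute("""
    SELECT SUM(o.amount)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    JOIN support_tickets t ON t.customer_id = c.id
""").fetchone()[0]

# The correct figure sums orders on their own.
correct = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(naive, correct)  # prints 450.0 150.0
```

No error, no warning, a tidy result set. Unless the user already knows the right answer is 150, the 450 looks perfectly trustworthy, which is exactly the failure mode that makes a confident wrong answer worse than no answer.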
The Epiphany
That's when it hit me. Text2SQL isn't just "not enough." The whole approach we take to data analysis might be wrong.
We keep trying to make it easier for people to interact with the existing infrastructure by building better UIs on top of warehouses, natural language on top of SQL, drag-and-drop on top of complex pipelines. But the infrastructure itself was never built for these people. It was built for data engineers and analysts. Adding a friendlier interface doesn't fix a foundation that was designed for someone else.
Maybe what we need isn't another layer on top. Maybe we need to rethink the foundation itself.
Data analysis needs to work for a world where it isn't built only for a select group of people who already have the experience and the right tools. It needs to work for everyone, and that includes the AI agents that are starting to take real weight off our shoulders. If an agent can't reliably analyse data because the infrastructure doesn't give it enough context, that's not the agent's fault. That's the infrastructure's fault.
We need to build a world where data analysis is fairly accessible to every human and every agent. Where nobody feels dumb because they can't figure out how to get answers from their own data. Where nobody feels left out because they can't afford the tools or don't have the technical background.
This isn't a feature problem. You can't solve it with a better chart library or a smarter chatbot. It's a foundation problem.
The Real Question
So "does the world need another data tool?" is the wrong question. The right question is: does the world need a data tool for the people that existing tools ignore?
And the answer to that is obviously yes.
Look at who's been left behind by the current analytics landscape:
Startup founders who need analytics in their products but can't afford a six-figure stack and a data engineer to run it. I was this person. Twice.
Small engineering teams who understand their data but don't have time to become experts in dbt, Airflow, Looker, and the rest of the modern data stack alphabet soup.
Companies building AI-powered features that need their data to be accessible to agents and models, not locked behind dashboard UIs designed for human eyeballs.
Non-technical teams who've been promised "self-serve analytics" for a decade and still can't get a straight answer to a simple business question without filing a ticket with engineering.
These aren't edge cases. This is most of the market. The analytics industry has been so focused on serving the top 5% of companies that it's forgotten about the other 95%.
What I'm Betting On
I'm betting that the next wave of analytics isn't about more powerful tools for data teams. It's about accessible tools for everyone else.
Maybe I'm wrong. Maybe the market really is saturated and there's no room for a different approach. Maybe the 95% of companies currently underserved by analytics tools will stay underserved forever.
But I don't think so. I think the analytics industry built an incredible ecosystem for a narrow audience and mistook that audience for the whole market. I think I'm not the only founder who's pivoted away from a good idea because the data infrastructure underneath was too hard. And I think the rise of AI agents is about to make the foundation problem impossible to ignore.
The question that keeps me going isn't "does the world need this?" It's "why didn't this exist when I needed it?"
I still don't have a good answer to that one.