Somewhere in your stack right now is a platform you paid real money for. Your team went through onboarding. Tickets are flowing through it. Utilization looks fine on paper.
And yet, if someone asked you what that platform is actually doing for your business, you'd struggle to answer.
It's not just your MSP; it's the industry default. MSPs invest in service desk software, measure success by whether the team is using it, and then wonder why margins haven't moved. The problem isn't execution. It isn't the vendor. It's the metric.
Most MSPs are measuring the wrong thing entirely.
Only 1 in 4 MSPs hits best-in-class service gross margins, according to Service Leadership benchmarking data. For most, margins are tighter than they should be, and the reason isn't the market or the pricing model. It's what they're measuring.
EBITDA gets thrown around a lot in MSP circles. It matters. But for service leaders, the number worth obsessing over is more specific:
Service gross margin — what it costs you to deliver service versus what you're earning from it.
The math is simple. The reality is uncomfortable. For most MSPs, service margins are tighter than they should be because the primary cost driver is people. Salaries. Benefits. Hours spent triaging, routing, following up, and touching the same ticket three times before it's resolved.
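The arithmetic itself is a one-liner. As a back-of-the-envelope illustration (all figures here are hypothetical, not benchmarks):

```python
# Hypothetical monthly numbers for a mid-sized MSP (illustrative only).
service_revenue = 250_000        # monthly service revenue
delivery_labor_cost = 175_000    # fully loaded cost of the engineers delivering it

gross_margin_dollars = service_revenue - delivery_labor_cost
gross_margin_pct = gross_margin_dollars / service_revenue

print(f"Service gross margin: ${gross_margin_dollars:,} ({gross_margin_pct:.0%})")
# → Service gross margin: $75,000 (30%)
```

The formula is trivial; the hard part is that the cost side is almost entirely labor, which is exactly why the rest of this piece focuses on labor hours.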
Software doesn't automatically change that equation. A new platform doesn't reduce headcount or labor hours on its own. Adoption of a feature doesn't free up a single engineer. The only thing that moves service gross margin is changing how work gets done: specifically, how much of that work requires a human and how much of it doesn't.
Until you're measuring your service desk investment through that lens, you're flying blind.
The industry default for measuring software success is feature adoption. Is the team using it? What percentage of tickets are going through the platform? How many users are active?
These are proxy metrics. And they're weak ones.
A triage feature with 90% adoption means nothing if it isn't reducing the labor cost of delivering service. An AI tool that your team uses every day is productivity theater if it isn't changing the underlying economics of how work gets done.
The problem is that feature adoption is easy to measure. Service gross margin improvement is harder. So MSPs default to the easier number, and then wonder why a year of solid adoption hasn't moved the business forward.
Measuring adoption instead of value is like tracking how often your team opens their email client instead of whether deals are closing. Activity and outcomes are not the same thing.
There's a cost center in most MSP service desks that doesn't show up cleanly on a P&L. It's not a line item. It's the accumulated labor of moving work around: routing tickets, assigning engineers, following up on stalled requests, re-triaging what got missed the first time.
Dispatch work. And it is eating your margin on every single ticket.
The reason it's invisible is that it's distributed. A minute here to re-route a ticket. Three minutes there to chase down a response. Five minutes to figure out who should own an escalation. Multiply that across your full ticket volume and you're looking at a material chunk of your labor cost, none of which is billable, none of which is visible, and almost none of which requires a human.
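Those scattered minutes compound fast. A rough sketch of the math, with every figure hypothetical (your ticket volume and loaded rate will differ):

```python
# Hypothetical dispatch touches per ticket, in minutes (illustrative only).
dispatch_minutes_per_ticket = 1 + 3 + 5   # re-route + follow-up + escalation routing
monthly_tickets = 2_000                    # assumed monthly ticket volume
loaded_cost_per_hour = 75                  # assumed fully loaded engineer cost

monthly_dispatch_hours = dispatch_minutes_per_ticket * monthly_tickets / 60
monthly_dispatch_cost = monthly_dispatch_hours * loaded_cost_per_hour

print(f"{monthly_dispatch_hours:.0f} hours/month of dispatch work, "
      f"${monthly_dispatch_cost:,.0f} of non-billable labor")
# → 300 hours/month of dispatch work, $22,500 of non-billable labor
```

Nine minutes per ticket sounds like nothing; at volume it's nearly two full-time engineers doing work no client ever sees.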
This is where automation has the highest return, not in making your engineers faster at resolution, but in removing the coordination overhead that sits between an incoming ticket and the right person acting on it. Automating dispatch work doesn't just save time. It directly expands service gross margin by reducing the labor cost per ticket.
When triage accuracy is measured in real dollars rather than feature metrics, the numbers are sharper than most MSPs expect.
Thread's automated ticket triage is built specifically around this problem — not making ticketing faster, but removing the human coordination layer that makes ticketing expensive.
If feature adoption is the wrong scorecard, what's the right one?
Cost per ticket. How much does it cost you in labor to fully resolve a ticket from open to close? If you can't answer this, you don't know your service economics.
Labor hours per resolution. Not time-to-close; labor hours. A ticket that gets resolved in four hours because it sat in a queue for three of them is not a well-served ticket.
Service gross margin by client segment. Are your margins consistent across clients? Or are certain clients (or certain ticket types) quietly destroying profitability while others carry the load?
These aren't exotic metrics. They're the numbers that tell you whether your service operation is healthy. And they're the numbers your service desk software should be directly moving.
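If your PSA can export ticket-level labor and revenue data, all three numbers fall out of a few lines of arithmetic. A minimal sketch with hypothetical ticket records and an assumed loaded labor rate (field names and figures are illustrative, not from any real platform):

```python
# Hypothetical ticket records: (client, labor_hours, revenue_attributed).
tickets = [
    ("acme",   1.5, 180),
    ("acme",   0.5, 120),
    ("globex", 4.0, 150),   # heavy labor, thin revenue: a quiet margin killer
]
LOADED_RATE = 75  # assumed fully loaded cost per labor hour

# Metric 1: cost per ticket (labor only, open to close).
cost_per_ticket = sum(h * LOADED_RATE for _, h, _ in tickets) / len(tickets)

# Metric 2: labor hours per resolution (not elapsed time-to-close).
labor_hours_per_resolution = sum(h for _, h, _ in tickets) / len(tickets)

# Metric 3: service gross margin by client segment.
margin_by_client = {}
for client, hours, revenue in tickets:
    rev, cost = margin_by_client.get(client, (0.0, 0.0))
    margin_by_client[client] = (rev + revenue, cost + hours * LOADED_RATE)

for client, (rev, cost) in margin_by_client.items():
    print(f"{client}: {(rev - cost) / rev:.0%} service gross margin")
# → acme: 50% service gross margin
# → globex: -100% service gross margin
```

The segment view is the one that surprises people: blended margin can look acceptable while one client or ticket type runs deeply negative.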
This is what Intelligent Service Delivery (ISD) actually means. A service operation where every layer (conversations, intelligence, and agents) is connected and working toward the same financial outcome. If you want to understand the full framework, the ISD Manifesto lays it out directly.
Here's a simple test for your current service desk investment:
Can you tell me what it's doing to your service gross margin?
Not what features you're using. Not how many tickets it processes. What is it doing to the cost of delivering service versus the revenue you're generating from it?
If you can't answer that, you don't have enough information to know if your investment is working. You have activity data dressed up as performance data.
The MSPs that are pulling ahead aren't the ones with the most features turned on. They're the ones that connected service delivery to financial outcomes, and built a service desk that they can hold accountable to the margin.
That's the standard Thread is built to meet.
See how Thread measures value for strategic MSPs — book a demo.