The question of “fair” pricing has become urgent. Tool spend has climbed, yet results are uneven and budgets are tight. Fixed monthly fees look safe on paper, but when demand dips you pay for capacity you don’t use. Usage‑based models promise alignment to value, but without visibility they can feel risky. Finance needs predictability; operations needs flexibility; leadership wants proof that spend tracks outcomes, not activity. This article sets out clear definitions, a fairness lens you can apply to either model, practical forecasting methods and guidance on when each approach fits, especially if your stack centres on HubSpot.
Quick definitions
Usage‑based pricing, often called per‑run or per‑outcome, charges for consumption that meets a predefined standard. You and your provider agree acceptance criteria in plain English, map them to data signals and verify each run objectively. Billing is tied to what succeeded, not what was attempted. Monthly rental pricing charges a flat amount for access, capacity or a defined scope of work over a period. It is simple to plan, but it typically measures activity and availability rather than verified outcomes. The crucial difference is the unit of value: in a usage model the unit is a verified outcome; in a monthly model the unit is time, capacity or scope.
What makes pricing “fair”?
Fairness starts with alignment to value. If spend only rises when value rises, the model feels equitable even when totals fluctuate. Predictability matters next: finance must be able to forecast with confidence and avoid end‑of‑month surprises. Control is the third ingredient: the buyer should be able to modulate spend as demand changes without renegotiating core terms. Transparency underpins all of this. It should be obvious what you are paying for, what counted, what didn’t and why. Finally, incentives must align so both parties benefit from improving outcomes, not just doing more work.
Head‑to‑head through the fairness lens
On value alignment, usage‑based pricing has the edge. When each chargeable unit is a verified outcome, spend and value move together. Monthly rental can still be fair if outcomes are stable and scope is well defined, but by default it risks charging the same regardless of result. On predictability, monthly fees are easy to budget; you know the number in advance. Usage‑based can be equally forecastable once you treat outcomes as units, estimate volumes, apply a unit price and model low/most‑likely/high ranges. The difference is cultural: one masks variability; the other quantifies it.
Control is where usage‑based shines for variable environments. If you run a campaign, launch a new market or hit seasonality, spend scales with activity without carrying idle cost through quiet periods. Monthly rental offers a different kind of control: capacity reservation. You are paying to have a team, toolset or scope available regardless of demand, which can be valuable if continuity is paramount. Transparency favours usage‑based when it is built on acceptance criteria and verification. Because each run has an audit trail, you can see throughput, yield and latency clearly and improve the right step. Monthly models depend on crisp scope and change control to avoid ambiguity.
Incentives align cleanly under usage‑based pricing. Providers win by increasing yield and reducing latency, not by padding hours. Monthly models can be aligned to outcomes, but this requires discipline: embed definitions, agree the evidence and review results against those standards, not just activities completed.
Forecasting and controls that make each model work
Usage‑based forecasting is straightforward once you adopt unit thinking. Start with a baseline derived from recent volumes and conversion flows. Estimate how many triggers will lead to attempted runs, and what proportion will meet the acceptance criteria. Multiply verified outcomes by the unit price, then model ranges to reflect campaigns and seasonality. Put simple controls in place: usage dashboards visible to both teams, internal thresholds that trigger a check‑in, planned surge windows for big pushes and pause or defer options for non‑critical runs if priorities change. Review variance weekly so adjustments happen early.
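To make that arithmetic concrete, here is a minimal sketch in Python. Every figure, the trigger volumes, the attempt and acceptance rates and the unit price, is a hypothetical placeholder for your own recent data, not a benchmark.

```python
# Minimal usage-based forecast sketch. All figures are hypothetical
# placeholders; substitute your own recent volumes and agreed unit price.

UNIT_PRICE = 12.50  # price per verified outcome (assumed)

scenarios = {
    # name: (expected triggers, share attempted, share passing acceptance)
    "low":         (800,  0.90, 0.70),
    "most likely": (1000, 0.92, 0.78),
    "high":        (1400, 0.95, 0.82),  # campaign or seasonal peak
}

for name, (triggers, attempt_rate, acceptance_rate) in scenarios.items():
    attempted = triggers * attempt_rate
    verified = attempted * acceptance_rate      # only verified runs are billed
    spend = verified * UNIT_PRICE
    print(f"{name:>11}: ~{verified:,.0f} verified outcomes -> £{spend:,.2f}")
```

Reviewing actual volumes against these three lines each week turns the variance review into a factual conversation rather than an end‑of‑month surprise.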
Monthly forecasting relies on defining scope and utilisation. Be explicit about what is in scope, what requires a change request, and the assumptions behind expected throughput. Watch for scope creep; small additions erode value if they displace higher‑impact work. Put periodic value reviews in the calendar and hold both sides to them. If outcomes are not being met, either adjust scope or consider shifting the work to a usage‑based footing where verification can tighten execution.
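A simple way to keep those value reviews honest is to track effective cost per verified outcome under the flat fee. The sketch below uses invented numbers purely to illustrate the calculation:

```python
# Effective cost per verified outcome under a flat monthly fee.
# All numbers are invented for illustration; a rising figure is an
# early signal of scope creep or displaced higher-impact work.

MONTHLY_FEE = 4_000.00

outcomes_by_month = {"Jan": 420, "Feb": 390, "Mar": 310}

for month, outcomes in outcomes_by_month.items():
    print(f"{month}: £{MONTHLY_FEE / outcomes:,.2f} per verified outcome")
```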
When usage‑based is fairer
Usage‑based pricing is fairer when demand is variable and you value elasticity. Launches, events and seasonal peaks create lumpy volumes that are expensive to cover with flat fees. It excels when outcomes are clearly defined and your systems can expose reliable verification signals; you avoid paying for attempts that do not meet the standard. It is also a good fit if you want a tight feedback loop. Because each run is measured, you can see exactly where yield drops or latency creeps in and fix the right step. Finance teams that prefer elastic spend, paying more when value is created and less when it is not, tend to favour this approach once visibility is in place.
When monthly rental is fairer
Monthly rental is fairer when volumes are stable and the work is truly business‑as‑usual. If continuity and capacity reservation matter more than elasticity, a flat fee makes sense. It is also a practical choice during early discovery and stabilisation phases when processes are still evolving and outcomes are not yet verifiable. In procurement environments with fixed budgets and a premium on simplicity, monthly models can reduce friction, provided you maintain discipline around scope and value reviews.
Hybrid approaches that balance fairness and predictability
You do not have to choose one model for everything. Many teams adopt a hybrid deliberately. A commit‑to‑consume model sets a small monthly base for continuity and layers usage pricing on top for bursts, preserving predictability while aligning spend to value during peaks. Tiered usage bands reward scale efficiency by lowering unit cost at higher verified volumes without losing transparency. Packs with limited rollover can smooth seasonality: you pre‑purchase a block of outcomes with the ability to carry a portion into the next period, avoiding the “use it or lose it” scramble that distorts priorities. Whatever the blend, agree simple guardrails: forecast ranges, thresholds and planned review points, so both sides stay in lockstep.
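The billing logic behind such a blend is straightforward to model. The sketch below combines a base commit, tiered overage bands and capped rollover; every rate, threshold and band size is a hypothetical example, not a recommended rate card.

```python
# Hybrid "commit-to-consume" billing sketch. All rates, thresholds and
# band sizes are hypothetical assumptions for illustration only.

BASE_FEE = 1_500.00   # flat monthly base for continuity
INCLUDED = 100        # verified outcomes covered by the base fee
ROLLOVER_CAP = 25     # maximum unused units carried into next month

# Tiered overage bands: (units in band, unit price); cheaper at scale
TIERS = [(100, 12.00), (200, 10.00), (float("inf"), 8.50)]

def monthly_bill(verified: int, rolled_in: int = 0) -> tuple[float, int]:
    """Return (total charge, unused units rolled into next month)."""
    covered = INCLUDED + rolled_in
    unused = max(covered - verified, 0)
    overage = max(verified - covered, 0)
    charge = BASE_FEE
    for band_size, price in TIERS:
        in_band = min(overage, band_size)
        charge += in_band * price
        overage -= in_band
        if overage <= 0:
            break
    return charge, min(unused, ROLLOVER_CAP)

bill, rollover = monthly_bill(verified=180)
print(f"Charge: £{bill:,.2f}; units rolled over: {rollover}")
```

The commit gives finance its floor; the tiers and rollover give operations its elasticity.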
HubSpot‑centred examples
HubSpot makes usage‑based models tangible because outcomes map neatly to CRM events and property changes. Paying per verified enriched and routed market‑qualified lead is natural when the acceptance criteria are tied to Contact and Company properties, enrichment status and timestamps. Paying per verified meeting booked and accepted with the required context maps to Activity logs and Deal creation rules. Paying per ticket resolved to a defined standard draws on Ticket status, time to resolution and follow‑up signals. In each case, verification uses the data you already trust.
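To illustrate the first of those examples, verification can be a simple predicate over CRM properties. In the sketch below, lifecyclestage and hubspot_owner_id are standard HubSpot contact properties, while enrichment_status and mql_timestamp are assumed custom properties standing in for whatever your portal actually uses.

```python
# Sketch: check an "enriched and routed MQL" against acceptance criteria.
# lifecyclestage and hubspot_owner_id are standard HubSpot contact
# properties; enrichment_status and mql_timestamp are assumed custom
# properties, shown only to illustrate the mapping.

def is_verified_mql(contact: dict) -> bool:
    """True only when every acceptance criterion is met by the data."""
    props = contact.get("properties", {})
    return (
        props.get("lifecyclestage") == "marketingqualifiedlead"
        and props.get("enrichment_status") == "complete"
        and bool(props.get("hubspot_owner_id"))   # routed to a named owner
        and bool(props.get("mql_timestamp"))      # qualification timestamp
    )

# Record shaped like a HubSpot CRM API v3 contact response
contact = {
    "id": "1234",
    "properties": {
        "lifecyclestage": "marketingqualifiedlead",
        "enrichment_status": "complete",
        "hubspot_owner_id": "987",
        "mql_timestamp": "2025-05-01T09:30:00Z",
    },
}
print(is_verified_mql(contact))  # True -> this run is billable
```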
Monthly rental also has a place in a HubSpot‑centred world. Steady background work such as data hygiene routines, asset updates and scheduled reporting can sit comfortably on a flat fee if volumes are consistent and scope is clear. The key is to keep outcome‑critical flows, where timing and quality drive commercial impact, in a model where verification and elasticity matter, and reserve flat fees for predictable, lower‑variance tasks.
How to implement without surprises
Start by writing acceptance criteria in business language. Describe what must happen for an outcome to count, the inputs required to start and any exclusions or edge cases. Map those criteria to specific properties, events and timestamps in HubSpot and adjacent systems so verification becomes a simple check rather than a judgement call. Choose the unit of outcome and define what constitutes an attempt versus a success. Build a basic forecast by taking recent volumes, layering on campaign plans and seasonality, and producing a range rather than a single number. Set up shared visibility with dashboards and alerts, and agree an operating cadence for reviews and changes. With this groundwork, either pricing model becomes easier to run and defend.
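One lightweight way to do this is to express the criteria as declarative data, so each one reads in business language but checks a concrete signal, and every run is classified as an attempt or a success. The property names are again illustrative assumptions rather than a fixed schema.

```python
# Sketch: acceptance criteria as declarative data. Each entry pairs a
# business-language label with the signal it checks; property names are
# illustrative assumptions, not a fixed schema.

CRITERIA = [
    # (label, property to check, expected value; None means "any non-empty")
    ("Lead is marketing qualified", "lifecyclestage", "marketingqualifiedlead"),
    ("Enrichment completed",        "enrichment_status", "complete"),
    ("Routed to a named owner",     "hubspot_owner_id",  None),
]

def classify_run(props: dict) -> str:
    """An attempt counts as a billable success only if every criterion passes."""
    for label, prop, expected in CRITERIA:
        value = props.get(prop)
        passed = bool(value) if expected is None else value == expected
        if not passed:
            return f"attempt (failed: {label})"
    return "success (billable)"

print(classify_run({
    "lifecyclestage": "marketingqualifiedlead",
    "enrichment_status": "pending",
    "hubspot_owner_id": "987",
}))  # -> attempt (failed: Enrichment completed)
```

Keeping the criteria in data rather than buried in code keeps the audit trail legible to both finance and operations.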
Answers to common questions
Usage‑based pricing is not universally cheaper; its fairness lies in alignment. In a strong month you will spend more because you created more value. Monthly pricing can include outcome reporting, but unless definitions and evidence are explicit it tends to drift back to activity measures. Avoiding surprises with usage is about preparation, not luck: forecast ranges, set thresholds, and review weekly so decisions happen early. If your data is not ready for verification, stabilise the process first, often on a monthly footing, while you improve signals, then switch that outcome to a usage basis once verification is reliable.
Summary and next steps
Choose usage‑based pricing when demand is variable, outcomes are clear, and you want spend to track verified results with a tight feedback loop for improvement. Choose monthly rental when volumes are stable, capacity reservation matters and simplicity outweighs elasticity, especially during early discovery. Many teams will blend both: flat fees for background continuity; per‑run pricing for outcome‑critical flows. If you’re unsure, list three outcomes that matter, write one‑line acceptance criteria for each, estimate volumes over a typical and a peak month, and test both models against that reality. The fair option will reveal itself in the numbers and the incentives.
Contact us to review pricing options for your outcomes, book a call to build a usage forecast on your HubSpot data, or get in touch to discuss a hybrid model that balances predictability with value alignment.