Usage-based Pricing is Tricky, Indeed

May 12, 2023

By James D. Wilton, Managing Partner

Indeed is backtracking after a failed attempt at usage-based pricing. But our Managing Partner, James Wilton, argues that, unlike Indeed’s pricing model, the collapse was entirely predictable.

The Wall Street Journal published an interesting article about the fallout around Indeed’s new pricing model. It’s behind a paywall, so to give a sense of it without providing too much detail (far be it from me, of all people, to inhibit a business’s ability to monetize its own special sauce!), I would say the main takeaways are:


  • Indeed started charging businesses “per application” (which, for the uninitiated, is a usage-based pricing model)
  • This led to several small businesses getting “hammered” with large fees for unexpectedly large numbers of applications (though note that Indeed does allow you 72 hours to reject an application and not pay for it).
  • Also (a lesser point), Indeed uses dynamic pricing to adjust the price per applicant based on the number of applications expected and other factors. While (at least IMO) this isn’t a bad thing in itself, it does add a layer of pricing opacity – which customers rarely like – to an already muddy situation.


This has led to a large number of customer complaints, and Indeed is left trying to minimize the fallout.

I sympathize with Indeed. I don’t disagree with what they are trying to do: presumably, to get more revenue growth from their pricing model through improved price differentiation, using a usage-based metric they believe reflects the value in what they do – driving qualified applications.

Unfortunately, it is a case of “right idea, imperfect execution.” I’d suggest that anyone with a strong pricing strategy background could have predicted this result.


What went wrong?

Indeed fell victim to the two greatest pitfalls in usage-based pricing:


1. Unpredictability. Unsurprisingly, customers like to know what they are going to pay for what they’re buying. A lot of usage-based metrics are inherently unpredictable. As a customer, it’s very difficult for me to know exactly how many (say) API calls, user sessions, or content downloads my team is going to need in a month or a year. And so it is with applications for a job. I know how many I want, but I don’t know how many I’m going to get. And that’s a problem.


2. Tenuous Value Alignment. I would suggest that “# applications received” is somewhat aligned to value, but not fully. Few job posters would claim they don’t want more applications; getting more is definitely valuable, up to a point. As a poster, I need enough applications to be reasonably sure I’ll get enough good ones to fill the job with a strong candidate. But once I’ve hit that point, getting more doesn’t really add value. This is frequently the case with usage-based metrics: I might agree that if the metric goes up, my value increases, but I likely won’t agree that if my usage increases by a factor of 10, my value increases at the same rate.

These two things together explain the reaction Indeed met. Customers couldn’t predict the number of applications they would get. And when applications exceeded the number they wanted or needed, customers got no value from the extras, and so they weren’t willing to pay for them. Hence surprise. Hence outrage.


What could Indeed have done – and still do – better?

If I worked with the team at Indeed, I would encourage them not to panic and immediately revert to what they were doing before. Clearly, they can’t keep the new pricing strategy as-is, but I’d suggest there are modifications that would keep the intent of what they’re trying to achieve – more value-based pricing, with higher revenue growth potential – while removing the challenges they’ve encountered. (Some of these are combinable, but some aren’t. Read this as a list of separate ideas rather than components of a single pricing strategy.)


  • Allow a cap: Simplest first. Just allow customers to cap the number of applications they will accept, or the total spend, for a job posting. This gives customers confidence that spend will never exceed $X. This is hardly an innovative approach – LinkedIn already does this.
  • Scale the “per app” price: As I mentioned earlier, willingness-to-pay for extra applications before you hit your target is a lot higher than it is for extra applications once that threshold has been crossed. So why not align the price per application with the willingness-to-pay? Taper the per-application rate as volume climbs past the target. That means Indeed generates a bit extra if it overdelivers on applications, but (hopefully) not so much that customers would complain.
  • (My favorite) Use usage-based tiering: For SaaS aficionados, this is the job-application equivalent of “active users.” It would involve going back to charging per job posting, but tiering the price of the post by how many applications are received. For fewer than X applications (fewer than wanted/needed), it’s a low price. Between X and Y applications (the expected range), it’s a medium price. For more than Y applications (the aspirational target), it’s a high price. This lets Indeed monetize application volume, but in a much more predictable and value-aligned way. (A sketch of all three mechanics follows this list.)
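
To make these mechanics concrete, here is a minimal sketch in Python. Every function name, threshold, rate, and dollar amount is hypothetical, chosen purely for illustration – this is not Indeed’s actual pricing.

```python
# A minimal sketch of the three mechanics above (all numbers hypothetical).

def capped_price(applications: int, rate: float, cap: float) -> float:
    """Pay per application, but never more than the customer's cap."""
    return min(applications * rate, cap)

def scaled_price(applications: int, target: int,
                 rate_below: float, rate_above: float) -> float:
    """Charge a higher per-application rate up to the customer's target,
    and a lower rate for every application beyond it."""
    below = min(applications, target) * rate_below
    above = max(applications - target, 0) * rate_above
    return below + above

def tiered_posting_price(applications: int, x: int, y: int) -> float:
    """Charge per job posting, tiered by applications received:
    low price below X, medium between X and Y, high above Y."""
    if applications < x:
        return 50.0   # fewer applications than wanted/needed
    if applications <= y:
        return 150.0  # the expected range
    return 300.0      # beyond the aspirational target

# A posting that draws 120 applications against a target of 80:
print(capped_price(120, rate=2.0, cap=200.0))                        # 200.0
print(scaled_price(120, target=80, rate_below=2.0, rate_above=0.5))  # 180.0
print(tiered_posting_price(120, x=50, y=100))                        # 300.0
```

Note how, in the scaled model, the marginal price drops once the target is hit – mirroring the drop in willingness-to-pay described above.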


There are also options they could explore that move away from “# applications” as a price metric entirely, towards something more output- and/or value-based:


  • Charge by “# interviews”: I suspect that “# interviews” is more value-based to customers than “# applications.” It’s certainly more output-based, since good applications result in interviews. Hiring managers typically aim to conduct a minimum number of interviews after receiving applications for a role, but if it’s too hard to whittle the candidates down to that minimum (i.e., several look great on paper, and we want to speak to all of them), they may run extra interviews. In this way, if Indeed truly is delivering great candidates, we would expect that to be reflected in a higher number of interviews. Better still, the customer has complete control over how many interviews they run, so it’s a significantly more predictable metric. Indeed would have to make sure it could audit the number of interviews; I suspect this would be achievable by withholding candidate contact details until the customer requests an interview.
  • Charge per hire: The ultimate output-based model here would be to charge only when the customer makes a hire. This is risky for Indeed – there is a good chance the customer decides not to hire an Indeed applicant – but it’s potentially lucrative, since the willingness-to-pay for a hire significantly exceeds that for an applicant. Notably, this is the model many recruitment agencies use, and it is not unusual to see fees at 20-25% of the new hire’s first-year salary. Indeed could charge significantly less than that and still be highly profitable (a rough comparison follows this list).
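
To see why, here is a back-of-envelope comparison in the same vein – every number is invented for illustration, not drawn from Indeed’s or any agency’s actual rates:

```python
# Hypothetical back-of-envelope economics (all numbers invented).
first_year_salary = 80_000

agency_fee = 0.22 * first_year_salary     # typical 20-25% agency fee
per_hire_fee = 0.05 * first_year_salary   # a far lower, hypothetical rate

applications_per_hire = 100               # assumed funnel for one role
per_application_rate = 15                 # assumed per-application price
per_app_revenue = applications_per_hire * per_application_rate

print(f"Agency fee:              ${agency_fee:,.0f}")        # $17,600
print(f"Per-hire fee (5%):       ${per_hire_fee:,.0f}")      # $4,000
print(f"Per-application revenue: ${per_app_revenue:,.0f}")   # $1,500
```

Under these (invented) assumptions, a per-hire fee at roughly a quarter of the agency rate still collects more than double what the per-application model would on the same role.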


I’ve no doubt there are more. What pricing models would you propose? Drop me a note at james.wilton@monevate.com.
