Methodology

We publish numbers across this site — median response times, conversion lifts, missed-lead recovery counts — and a number without methodology is just marketing copy. This page documents exactly how those numbers are measured, what cohorts they come from, and where the boundaries of our claims are.

Start your 7-day free trial if you want the same dashboards we use to compute these metrics, applied to your own lead flow.

What we measure

Three core metrics carry most of the data on our pages:

  1. Median response time: how quickly the first AI reply goes out after a lead reaches our system.
  2. Conversion lift: the change in lead-to-booked-appointment rate after deployment.
  3. Missed-lead recovery: additional booked jobs per month attributable to the auto-responder, measured pre- versus post-deployment.

We do not measure or claim "leads generated." That is a function of the originating platform (Thumbtack, Yelp, Facebook, etc.). Our scope begins the moment a lead enters our system and ends when the contractor either books the job or marks the conversation as not-a-fit.

Defining response time

Two timestamps:

  1. T0: Inbound lead webhook reaches Auto-Respond. For Thumbtack and Yelp, the official webhook timestamp. For Facebook Lead Ads, the moment the lead form payload arrives. For Google LSA, the moment the lead notification reaches our integration.
  2. T1: First outbound AI reply dispatched on the corresponding channel.

Response time = T1 − T0, measured to the second. Time the lead spent inside the originating platform before being delivered is not included — we cannot observe that interval, so we exclude it rather than estimate it.
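As a sketch, the per-lead computation is a plain timestamp subtraction. The field names (`webhook_received_at`, `first_reply_sent_at`) are illustrative, not Auto-Respond's actual schema:

```python
from datetime import datetime, timezone

def response_time_seconds(webhook_received_at: datetime,
                          first_reply_sent_at: datetime) -> int:
    """Wall-clock interval T1 - T0, in whole seconds.

    T0: the inbound lead webhook reached us. T1: the first outbound AI
    reply was dispatched. Time the lead spent inside the originating
    platform before delivery is unobservable and therefore excluded.
    """
    if first_reply_sent_at < webhook_received_at:
        raise ValueError("reply timestamp precedes webhook timestamp")
    return int((first_reply_sent_at - webhook_received_at).total_seconds())

# A lead delivered at 09:15:00 UTC, answered at 09:15:42 UTC
t0 = datetime(2026, 3, 1, 9, 15, 0, tzinfo=timezone.utc)
t1 = datetime(2026, 3, 1, 9, 15, 42, tzinfo=timezone.utc)
print(response_time_seconds(t0, t1))  # 42
```

Timestamps are kept timezone-aware so leads arriving near daylight-saving transitions still subtract cleanly.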

Defining a booked appointment

A booked appointment satisfies all four conditions:

  1. A calendar event exists on the contractor's connected Google Calendar or Microsoft 365.
  2. The event is linked to a customer record from a tracked inbound lead.
  3. The event is scheduled within seven days of the lead arriving.
  4. The event is not flagged as a misfire.

Soft conversions ("interested," "warm," "callback requested without confirmed time") are not counted. We chose this stricter definition deliberately even though looser definitions would inflate our headline numbers.

Defining missed-lead recovery

The recovery numbers we cite (e.g., "8-14 additional booked jobs per month") are computed as the per-contractor difference in booked appointments between matched pre-deployment and post-deployment windows, expressed as a monthly rate.

We require lead volume to remain within roughly 20% between pre and post windows. If lead volume materially shifted (e.g., a heat wave doubled HVAC volume mid-deploy), that contractor is excluded from the recovery cohort to avoid attributing seasonal noise to the auto-responder.

Cohort definitions

When a page on Auto-Respond cites "the Q1 2026 contractor cohort" or "our March 2026 Thumbtack data," the cohort is the set of contractor accounts and their tracked inbound leads in the named time window, restricted to the named platform or trade when the claim is platform- or trade-specific.

Sample size policy

We will not publish a percentage from a sample too small to support it. The minimum bar for a cited cohort is several hundred inbound leads across dozens of contractor accounts. When we make a per-trade claim, the per-trade subset must itself meet these minimums.

Where the underlying sample is genuinely small (e.g., a feature still in pilot), we either decline to cite a number publicly or we state the sample size next to the figure.

Platform coverage

Auto-Respond integrates natively with Thumbtack, Yelp Request a Quote, Facebook Messenger, Facebook Lead Ads, Instagram DMs, Google Local Services Ads, Angi, Nextdoor, Porch, and Houzz. We only publish numbers from channels where data flows end-to-end through our system. We do not extrapolate to channels we have not built integrations for.

Exclusions

Excluded from every cited cohort:

Why ranges, not point estimates

Two contractors with the same Thumbtack volume can see meaningfully different lifts from the same auto-responder configuration. Trade, geography, ticket size, and competitive density all shift the answer. A single percentage would imply a precision the underlying distribution does not have.

Ranges like "roughly 2-3x" or "32-42%" reflect the inter-quartile spread across the cohort. The top and bottom deciles are not used to define the bounds. If your own results sit outside the published range, that is normal — the range describes the typical contractor, not every contractor.
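Under the assumption that per-contractor lifts are collected into a simple list, the published bounds are just the 25th and 75th percentiles:

```python
from statistics import quantiles

def cohort_range(lifts: list[float]) -> tuple[float, float]:
    """Inter-quartile bounds of per-contractor conversion lift.

    The 25th and 75th percentiles define the published range, so the
    top and bottom deciles (however extreme) never stretch it.
    """
    q1, _median, q3 = quantiles(lifts, n=4)
    return (q1, q3)

# Eight contractors' lifts with one outlier at 6.0x: the outlier does
# not drag the upper bound to 6.0, because the bounds are quartiles.
print(cohort_range([1.2, 2.0, 2.2, 2.5, 2.8, 3.0, 3.1, 6.0]))
```

This is why a contractor whose results sit outside the range is not a contradiction: by construction, roughly half the cohort falls outside it.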

Update cadence

Headline numbers are reviewed every quarter. Per-page citations include the cohort window inline (e.g., "in our Q1 2026 contractor cohort") so the data vintage is always visible to readers. When we refresh a figure, the surrounding cohort window updates with it.

If you find a number on Auto-Respond without a clear cohort window, or older than the current quarter without justification — that is a content bug. Tell us.


Frequently asked questions

Why ranges instead of single percentages?

Per-contractor variance is large. Same auto-responder, same Thumbtack lead volume, different conversion lift depending on trade, market density, and average ticket size. A single point estimate would suggest a precision the data does not have. Ranges reflect the actual spread we see across our customer cohort.

How is "response time" defined in Auto-Respond data?

It is the wall-clock interval between an inbound lead webhook arriving at Auto-Respond and the first AI-generated reply being sent on the corresponding channel. Time spent inside the originating platform before the lead is delivered to us is not included — we cannot observe it, so we do not estimate it.

What counts as a "booked appointment"?

A calendar event on the contractor's connected Google Calendar or Microsoft 365, linked to a customer record from a tracked inbound lead, scheduled within seven days of the lead arriving, and not flagged as a misfire. Warm replies, "interested but not yet committed," and callback requests without a confirmed time are not counted as bookings.

How big are the cohorts behind your numbers?

Cohorts named on our marketing pages include thousands of inbound leads across hundreds of contractor accounts, in the relevant trade and time window. Smaller cohorts are noted explicitly when cited. We decline to publish percentages from samples too small to be meaningful.

How often do you refresh the data?

Quarterly review of headline numbers across the site. Per-page citations carry the cohort window inline (e.g., "Q1 2026") so the data vintage is always visible. When a number changes, the prose around it changes too.

Run the same measurement on your own leads

Start your 7-day free trial and watch the response-time and conversion dashboard populate with your own data.

Start Your 7-Day Free Trial