Methodology
We publish numbers across this site — median response times, conversion lifts, missed-lead recovery counts — and a number without methodology is just marketing copy. This page documents exactly how those numbers are measured, what cohorts they come from, and where the boundaries of our claims are.
Start your 7-day free trial if you want the same dashboards we use to compute these metrics, applied to your own lead flow.
What we measure
Three core metrics carry most of the data on our pages:
- Response time. The interval between an inbound lead webhook reaching Auto-Respond and the first AI-generated reply being dispatched on the right channel (Thumbtack, Yelp, Facebook, SMS, voice, etc.).
- Lead-to-booking conversion. The percentage of leads in a cohort that become a confirmed calendar event accepted as a real job within seven days.
- Missed-lead recovery. The count of additional booked jobs per month captured after deploying the auto-responder, compared to a pre-deploy baseline at roughly equivalent lead volume.
We do not measure or claim "leads generated." That is a function of the originating platform (Thumbtack, Yelp, Facebook, etc.). Our scope begins the moment a lead enters our system and ends when the contractor either books the job or marks the conversation as not-a-fit.
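The conversion metric above reduces to simple arithmetic. A minimal sketch (the function name and inputs are illustrative, not our actual schema):

```python
def conversion_rate(booked_within_7d: int, leads: int) -> float:
    """Lead-to-booking conversion: percent of a cohort's leads that became
    a confirmed booking within seven days of arrival."""
    if leads == 0:
        raise ValueError("empty cohort")
    return 100.0 * booked_within_7d / leads

# Example: 38 confirmed bookings out of 100 tracked leads.
print(conversion_rate(38, 100))  # 38.0
```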
Defining response time
Two timestamps:
- T0: Inbound lead webhook reaches Auto-Respond. For Thumbtack and Yelp, the official webhook timestamp. For Facebook Lead Ads, the moment the lead form payload arrives. For Google LSA, the moment the lead notification reaches our integration.
- T1: First outbound AI reply dispatched on the corresponding channel.
Response time = T1 − T0, measured to the second. Time the lead spent inside the originating platform before being delivered is not included — we cannot observe that interval, so we exclude it rather than estimate it.
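As a sketch, the computation looks like this, assuming both timestamps are already normalized to UTC (the function name is ours for illustration, not a documented API):

```python
from datetime import datetime, timezone

def response_time_seconds(t0_webhook: datetime, t1_dispatch: datetime) -> int:
    """Response time = T1 - T0, measured to the second.

    t0_webhook: when the inbound lead webhook reached the system (T0).
    t1_dispatch: when the first AI reply was dispatched on the same channel (T1).
    Time the lead spent inside the originating platform is unobservable,
    so it is excluded rather than estimated.
    """
    return int((t1_dispatch - t0_webhook).total_seconds())

t0 = datetime(2026, 3, 1, 9, 15, 0, tzinfo=timezone.utc)
t1 = datetime(2026, 3, 1, 9, 15, 42, tzinfo=timezone.utc)
print(response_time_seconds(t0, t1))  # 42
```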
Defining a booked appointment
A booked appointment satisfies all four conditions:
- Calendar event on the contractor's connected Google Calendar or Microsoft 365.
- Linked to a customer record from a tracked inbound lead.
- Confirmed by the contractor (not auto-cancelled, not flagged as misfire).
- Scheduled within seven days of lead arrival.
Soft conversions ("interested," "warm," "callback requested without confirmed time") are not counted. We chose this stricter definition deliberately even though looser definitions would inflate our headline numbers.
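The four conditions amount to a single predicate. A sketch, with hypothetical field names standing in for our internal schema:

```python
from datetime import datetime, timedelta

def is_booked_appointment(event: dict) -> bool:
    """True only when all four conditions hold; soft conversions never qualify."""
    return (
        event["calendar"] in {"google", "microsoft365"}   # connected calendar event
        and event["lead_id"] is not None                  # linked to a tracked inbound lead
        and event["confirmed"] and not event["misfire"]   # contractor-confirmed, not a misfire
        and event["scheduled_at"] - event["lead_arrived_at"] <= timedelta(days=7)
    )

booking = {
    "calendar": "google",
    "lead_id": "lead-123",
    "confirmed": True,
    "misfire": False,
    "lead_arrived_at": datetime(2026, 3, 1, 9, 0),
    "scheduled_at": datetime(2026, 3, 5, 14, 0),
}
print(is_booked_appointment(booking))  # True
```

A "callback requested without confirmed time" would fail the `confirmed` check and be dropped, which is exactly the stricter behavior described above.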
Defining missed-lead recovery
The recovery numbers we cite (e.g., "8-14 additional booked jobs per month") are computed as:
- Pre-deploy baseline: Booked jobs from inbound leads on the relevant channel, over a 14-30 day window immediately before auto-responder activation.
- Post-deploy window: Booked jobs from inbound leads on the same channel, over a matching 14-30 day window after activation.
- Recovery: Post-deploy booked jobs minus pre-deploy booked jobs, normalized to a 30-day window.
We require lead volume to remain within roughly 20% between pre and post windows. If lead volume materially shifted (e.g., a heat wave doubled HVAC volume mid-deploy), that contractor is excluded from the recovery cohort to avoid attributing seasonal noise to the auto-responder.
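The recovery arithmetic under these rules can be sketched as follows (window lengths and lead counts below are made up for illustration):

```python
def monthly_recovery(pre_booked: int, pre_days: int, pre_leads: int,
                     post_booked: int, post_days: int, post_leads: int):
    """Post minus pre booked jobs, each normalized to a 30-day window.

    Returns None when daily lead volume shifted more than 20% between
    windows, which excludes the contractor from the recovery cohort.
    """
    pre_rate = pre_leads / pre_days
    post_rate = post_leads / post_days
    if abs(post_rate - pre_rate) / pre_rate > 0.20:
        return None  # seasonal/volume shift; excluded, not attributed
    pre_30 = pre_booked / pre_days * 30
    post_30 = post_booked / post_days * 30
    return round(post_30 - pre_30)

# 10 jobs booked over a 30-day baseline (60 leads) vs
# 12 jobs booked over a 21-day post window (44 leads).
print(monthly_recovery(10, 30, 60, 12, 21, 44))  # 7
```

In the example, the post window's 12 bookings over 21 days normalize to about 17 per 30 days, so the recovery is roughly 7 additional booked jobs per month.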
Cohort definitions
When a page on Auto-Respond cites "the Q1 2026 contractor cohort" or "our March 2026 Thumbtack data":
- Time window: The named calendar period in US Pacific time.
- Channel scope: Only the named channel — Thumbtack-cohort claims do not include Yelp, Facebook, etc.
- Customer scope: Active paid customers for the full window. Trial-only accounts and accounts that churned mid-window are excluded.
- Vertical scope: Service trades — HVAC, plumbing, roofing, electrical, landscaping, cleaning, and adjacent. B2B and non-service verticals are not in service-business cohorts.
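These scopes amount to one membership test per account. A sketch with illustrative field names and ISO date strings (our internal representation differs):

```python
SERVICE_TRADES = {"hvac", "plumbing", "roofing", "electrical",
                  "landscaping", "cleaning"}

def in_cohort(acct: dict, channel: str, start: str, end: str) -> bool:
    """Active paid customer for the entire window, on the named channel,
    in a service trade. Dates are ISO strings (US Pacific window)."""
    return (
        acct["channel"] == channel                   # channel scope
        and acct["plan"] == "paid"                   # trial-only accounts excluded
        and acct["paid_since"] <= start              # paid before the window opened...
        and (acct["churned_on"] is None
             or acct["churned_on"] > end)            # ...and through its close
        and acct["trade"] in SERVICE_TRADES          # vertical scope
    )

acct = {"channel": "thumbtack", "plan": "paid", "paid_since": "2025-12-15",
        "churned_on": None, "trade": "hvac"}
print(in_cohort(acct, "thumbtack", "2026-01-01", "2026-03-31"))  # True
```

An account that churned mid-window fails the `churned_on` check and drops out of the cohort, matching the customer-scope rule above.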
Sample size policy
We will not publish a percentage from a sample too small to support it. The minimum bar for a cited cohort is several hundred inbound leads and dozens of contractor accounts. When we make a per-trade claim, the per-trade subset itself must meet these minimums.
Where the underlying sample is genuinely small (e.g., a feature still in pilot), we either decline to cite a number publicly or we state the sample size next to the figure.
Platform coverage
Auto-Respond integrates natively with Thumbtack, Yelp Request a Quote, Facebook Messenger, Facebook Lead Ads, Instagram DMs, Google Local Services Ads, Angi, Nextdoor, Porch, and Houzz. We only publish numbers from channels where data flows end-to-end through our system. We do not extrapolate to channels we have not built integrations for.
Exclusions
Excluded from every cited cohort:
- Internal test accounts created by Auto-Respond staff.
- Leads identified as staff, related parties, or other internal traffic.
- Subscriptions cancelled or refunded during the cohort window.
- Leads from channels we do not natively integrate with.
- Contractor accounts where lead volume shifted more than 20% between pre and post windows (recovery analyses only).
Why ranges, not point estimates
Two contractors with the same Thumbtack volume can see meaningfully different lifts from the same auto-responder configuration. Trade, geography, ticket size, and competitive density all shift the answer. A single percentage would imply a precision the underlying distribution does not have.
Ranges like "roughly 2-3x" or "32-42%" reflect the interquartile spread across the cohort; the top and bottom deciles do not define the bounds. If your own results sit outside the published range, that is normal — the range describes the typical contractor, not every contractor.
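A sketch of how interquartile bounds fall out of a cohort's lift distribution, using the stdlib `statistics` module (the lift numbers are hypothetical):

```python
import statistics

def published_range(lifts: list) -> tuple:
    """Bounds are the 25th and 75th percentiles of per-contractor lifts.
    Top and bottom deciles deliberately do not define the range."""
    q1, _median, q3 = statistics.quantiles(lifts, n=4)
    return (q1, q3)

# Eleven hypothetical per-contractor conversion lifts, in percent.
lifts = [22, 26, 29, 32, 34, 36, 38, 40, 42, 47, 55]
lo, hi = published_range(lifts)
print(f"{lo:.0f}-{hi:.0f}%")  # 29-42%
```

Note how the 55% outlier at the top of the distribution does not widen the published range.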
Update cadence
Headline numbers are reviewed every quarter. Per-page citations include the cohort window inline (e.g., "in our Q1 2026 contractor cohort") so the data vintage is always visible to readers. When we refresh a figure, the surrounding cohort window updates with it.
If you find a number on Auto-Respond without a clear cohort window, or one older than the current quarter with no stated reason, that is a content bug. Tell us.
Where this methodology is referenced
- Comparison table footnotes on the Thumbtack auto-responder page.
- FAQ answers on the AI receptionist pillar.
- Hero stat citations on the lead response automation pillar.
Frequently asked questions
Why ranges instead of single percentages?
Per-contractor variance is large. Same auto-responder, same Thumbtack lead volume, different conversion lift depending on trade, market density, and average ticket size. A single point estimate would suggest a precision the data does not have. Ranges reflect the actual spread we see across our customer cohort.
How is "response time" defined in Auto-Respond data?
It is the wall-clock interval between an inbound lead webhook arriving at Auto-Respond and the first AI-generated reply being sent on the corresponding channel. Time spent inside the originating platform before the lead is delivered to us is not included — we cannot observe it, so we do not estimate it.
What counts as a "booked appointment"?
A calendar event on the contractor's connected Google Calendar or Microsoft 365, linked to a customer record from a tracked inbound lead, scheduled within seven days of the lead arriving, and not flagged as a misfire. Warm replies, "interested but not yet committed," and callback requests without a confirmed time are not counted as bookings.
How big are the cohorts behind your numbers?
Every cohort cited on our marketing pages clears our minimum bar of several hundred inbound leads across dozens of contractor accounts, in the relevant trade and time window; larger cohorts run to thousands of leads across hundreds of accounts. Smaller cohorts are noted explicitly when cited. We decline to publish percentages from samples too small to be meaningful.
How often do you refresh the data?
Quarterly review of headline numbers across the site. Per-page citations carry the cohort window inline (e.g., "Q1 2026") so the data vintage is always visible. When a number changes, the prose around it changes too.
Run the same measurement on your own leads
Start your 7-day free trial and watch the response-time and conversion dashboard populate with your own data.