
Review velocity beats review volume.
The 2026 local signal.

By Vince Schwellenbach · 10-minute read

A practice with 200 Google reviews accumulated over ten years, and nothing new for the last eighteen months, now regularly loses the Map-pack to a practice with 120 reviews earned over the last year. That observation, repeated across our client book and consistent with what BrightLocal documented in its 2024 Local Consumer Review Survey, reflects the quiet evolution of Google’s local algorithm over the past three years.1

The agencies still chasing review volume are fighting the wrong battle. Google’s signal has tilted toward recency and cadence: how many reviews you’ve earned recently, on what rhythm, with what consistency. Velocity, not volume.

Fig. 1 · Why velocity outperforms volume
[Chart: review counts (0-200) over time, Year 1 through now. Practice A: 200 reviews, slow recent velocity. Practice B: 120 reviews, steady cadence. Practice B outranks Practice A in the Map-pack; recency weighs more than accumulated total.]

What changed in the algorithm.

Google has never published its local ranking formula, and it never will. What it does publish are high-level guidance and, occasionally, researcher interviews that hint at directional changes.2 Combined with observed behavior across hundreds of client accounts, three shifts are clear since 2022:

Recency weight has increased. A review from last month counts for more than a review from 2019. The half-life of a review’s ranking contribution is now roughly 12-18 months in our observation. This is consistent with how Google has publicly described its approach to “fresh signals” in search more broadly.3
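
One way to see why recency weighting changes the ranking picture is to model each review’s contribution as exponential decay. This is purely an illustrative sketch built on the 12-18 month half-life observed above (we assume the midpoint, 15 months); Google publishes no such constant or formula.

```python
def review_weight(age_months: float, half_life_months: float = 15.0) -> float:
    """Exponential-decay weight for one review. The 15-month half-life is
    the midpoint of the 12-18 month range observed in this article; it is
    an assumption, not a published Google parameter."""
    return 0.5 ** (age_months / half_life_months)

def weighted_review_score(review_ages_months: list[float]) -> float:
    """Sum of decayed weights: a rough 'effective review count'."""
    return sum(review_weight(a) for a in review_ages_months)

# Practice A: 200 reviews, none newer than 18 months (ages 18-138 months).
practice_a = [18 + i * 0.6 for i in range(200)]
# Practice B: 120 reviews spread evenly over the last 12 months.
practice_b = [i * 0.1 for i in range(120)]

print(weighted_review_score(practice_a))
print(weighted_review_score(practice_b))
```

Under this toy model, Practice B’s 120 recent reviews outscore Practice A’s 200 stale ones by a wide margin, which matches the Map-pack behavior described above.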

Cadence consistency matters. A practice that earns 2 reviews per month, every month, outperforms a practice that earned 24 reviews in a single promotional push twelve months ago. Google appears to read bursty review profiles as either (a) promotional incentives or (b) manipulation; neither earns algorithmic trust.
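
A simple way to quantify “steady versus bursty” is the coefficient of variation of monthly review counts. This metric is our own illustration for auditing a review profile, not anything Google documents:

```python
from statistics import mean, pstdev

def cadence_cv(monthly_counts: list[int]) -> float:
    """Coefficient of variation of monthly review counts.
    Lower = steadier cadence. Illustrative audit metric only."""
    m = mean(monthly_counts)
    if m == 0:
        return float("inf")
    return pstdev(monthly_counts) / m

steady = [2] * 12            # 2 reviews every month for a year
bursty = [24] + [0] * 11     # one 24-review promotional push

print(cadence_cv(steady))    # 0.0 — perfectly steady
print(cadence_cv(bursty))    # much higher — a bursty, promotional-looking profile
```

Both practices earned 24 reviews in the year, but the profiles look nothing alike; the metric makes the difference legible on a dashboard.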

Response rate is itself a signal. Practices that respond to every review within 48 hours consistently outrank practices that don’t, controlling for review count and rating. This is observable in split tests across paired locations and is documented in BrightLocal’s 2024 Local Consumer Review Survey as a behavior that Google explicitly rewards.1

What the right cadence looks like.

There is no single target; the right velocity depends on patient volume and vertical. A concierge internal-medicine practice seeing 8-12 patients a day cannot reasonably produce the same review velocity as a medspa seeing 40. The useful benchmark is review velocity relative to practice throughput.

Vertical               Reviews / month (typical)   Reviews / month (top-10%)
Concierge medicine     3-5                         8-14
Specialty medicine     4-8                         10-18
Dental specialty       6-10                        14-22
General dentistry      8-14                        18-30
Medspa                 10-18                       24-40
Medical weight loss    6-12                        15-24

The “top-10%” column is what practices actively running a systematic review workflow produce. It’s not an outlier number; it’s the realistic ceiling for a practice that operationalizes requests. Most practices we audit are in the “typical” column or below, usually because they’re asking ad hoc, in person, at checkout, which converts poorly.
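
The benchmark table above can be encoded as a simple lookup for a dashboard check. The ranges are the table’s; the classification labels are ours:

```python
# Monthly-review benchmarks from the table above: (typical range, top-10% range).
BENCHMARKS = {
    "concierge medicine":  ((3, 5),   (8, 14)),
    "specialty medicine":  ((4, 8),   (10, 18)),
    "dental specialty":    ((6, 10),  (14, 22)),
    "general dentistry":   ((8, 14),  (18, 30)),
    "medspa":              ((10, 18), (24, 40)),
    "medical weight loss": ((6, 12),  (15, 24)),
}

def classify_velocity(vertical: str, reviews_per_month: float) -> str:
    """Place a practice's monthly review velocity against its vertical benchmark."""
    (typical_lo, _), (top_lo, _) = BENCHMARKS[vertical]
    if reviews_per_month >= top_lo:
        return "top-10%"
    if reviews_per_month >= typical_lo:
        return "typical"
    return "below typical"

print(classify_velocity("general dentistry", 20))  # top-10%
```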

The ten-step review workflow we run for every MapsPRO client.

None of this is exotic. What makes it work is that it runs every day, is measured every month, and is never allowed to drift.

01
Trigger at 24-48 hours

Request is sent after visit completion, not at checkout. Patient needs enough time to form an impression and not so much that the visit has faded.

02
SMS-first, email-second

SMS open rates are 5-8x those of email. The first request is a text; the second (if no response by day 7) is an email with a different subject line.

03
One click to Google

The link opens directly to the Google review composer, pre-targeted to the correct location. No intermediate page, no form, no filter.

04
Provider-specific attribution

The review link carries a provider parameter so the review can be associated with the individual clinician. This matters for multi-provider practices.
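
Google documents a direct review-composer link of the form search.google.com/local/writereview?placeid=…, but the composer ignores extra query parameters, so provider attribution generally has to happen on the practice’s side, e.g. a short redirect URL that logs the provider before forwarding. A sketch, where `tracked_link` and its domain/slug scheme are entirely hypothetical:

```python
from urllib.parse import urlencode

GOOGLE_REVIEW_URL = "https://search.google.com/local/writereview"

def google_review_link(place_id: str) -> str:
    """Direct link to the Google review composer for one location,
    using the documented writereview?placeid= form."""
    return f"{GOOGLE_REVIEW_URL}?{urlencode({'placeid': place_id})}"

def tracked_link(short_domain: str, provider_slug: str, place_id: str) -> str:
    """Hypothetical short redirect: the server logs the provider_slug for
    attribution, then forwards to google_review_link(place_id). Attribution
    happens before the redirect, not inside the Google URL itself."""
    return f"https://{short_domain}/r/{provider_slug}/{place_id}"

print(google_review_link("ChIJExamplePlaceId"))
print(tracked_link("go.example-practice.com", "dr-smith", "ChIJExamplePlaceId"))
```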

05
No review gating

No ‘were you satisfied?’ filter directing happy patients to Google and unhappy ones to a feedback form. That’s a Google TOS violation and it is detected.

06
Respond within 48 hours

Every review, positive or negative. A thoughtful 2-3 sentence reply. Not a template. Response rate is itself a ranking signal.

07
Flag negative reviews for the physician

Clinical leadership sees negative reviews before the office manager responds. Tone and substance must be consistent with the practice’s standard of care.

08
Never ask twice

A patient who didn’t leave a review after the second request doesn’t want to. Escalation reads as desperation and damages the long-term relationship.

09
Measure velocity, not total

The dashboard tracks new reviews per month per location. That is the number that moves rankings. Accumulated total is a vanity metric.
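
Once review timestamps are exported, velocity is a one-line aggregation per location; a minimal sketch:

```python
from collections import Counter
from datetime import date

def monthly_velocity(review_dates: list[date]) -> dict[str, int]:
    """New reviews per calendar month, keyed 'YYYY-MM' — the number the
    dashboard tracks, rather than the accumulated total."""
    return dict(Counter(d.strftime("%Y-%m") for d in review_dates))

dates = [date(2026, 1, 5), date(2026, 1, 19), date(2026, 2, 3)]
print(monthly_velocity(dates))  # {'2026-01': 2, '2026-02': 1}
```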

10
Audit for compliance monthly

Google’s review policies and their interpretation change. What was compliant in 2023 may not be compliant today. A monthly review of workflow, language, and tooling keeps the practice clean.

The review-gating trap.

“Review gating” is the practice of asking patients “were you satisfied?” and routing the ones who say yes to Google while routing the ones who say no to a private feedback form. It is extremely common. It is also a direct violation of Google’s review content policies.4

Practices sometimes ask why, if the positive reviews are all legitimate, this is a problem. The answer is that it’s not about the authenticity of the individual reviews; it’s about the distribution that reaches Google. An ecosystem where only happy patients ever leave reviews produces an artificially inflated view of every practice, which undermines the signal for consumers. Google’s stated position is that review collection must not be conditional on the sentiment of the review. Enforcement has become substantially more sophisticated since 2023.

The right posture is to ask every patient, with no filter, and to respond thoughtfully to the ones who leave unflattering reviews. A 4.7-star practice with occasional 3-star reviews answered gracefully reads as more trustworthy than a 5.0-star practice with no variation at all, and Google’s algorithm agrees.

Rating versus volume versus velocity.

When practices ask what they should optimize for, the ranked priority in 2026 is:

  1. Rating ≥ 4.5. Below this threshold, patients filter the practice out before proximity or prominence even matter; in our observation, most patient filtering happens at roughly this line.
  2. Velocity consistency. A steady monthly pace. Not bursts. Not promotional campaigns. A daily, weekly, monthly habit.
  3. Response rate. Ideally 100%, acceptably > 90% within 48 hours.
  4. Volume. A useful tiebreaker between otherwise-equivalent practices, but the least weighted of the four in 2026.

A practice that hits those four produces a review profile that both Google’s algorithm and a prospective patient will find convincing. Both audiences are reading similar signals, in similar order, for similar reasons; that’s the quiet alignment that makes local SEO for healthcare so much more tractable than it looks from the outside.

References.

  1. BrightLocal. Local Consumer Review Survey 2024. brightlocal.com.
  2. Google Search Central. How Local Ranking Works. support.google.com.
  3. Google Search Central Blog. Content Freshness Signals.
  4. Google Business Profile. Review Posting Policy (“review gating” explicitly prohibited). support.google.com.
Vince Schwellenbach
Founder · Macbach · Tampa Bay · Healthcare-exclusive since 2007
Check your review velocity

Where does your practice rank against the vertical benchmark?

The Practice Audit compares your review profile against the vertical benchmarks in this piece, plus the four other dimensions that decide Map-pack placement.