The healthcare schema graph.
Linked, not scattered.
Most healthcare sites have schema. Almost none have linked schema: a graph of entities connected via @id references, so that Google reads the practice as a single coherent medical entity rather than a set of unrelated pages that happen to sit on the same domain. That distinction is the difference between showing up in AI Overviews and being invisible to them.
In 2026, the AI Overview block sits above the organic results for an expanding share of healthcare queries, north of 40% in most of the verticals we track.1 The block is assembled from structured sources the LLM behind it trusts. Schema is one of the strongest trust signals available. But unlinked schema, one fragment per page, each in isolation, reads to a generative system the same way a stack of unsigned index cards reads to a researcher: technically present, practically unusable.
What “graph” means here.
Schema.org supports two modes of serialization.2 One is a flat JSON-LD document where each page carries one or more independent schema blocks. The other is a JSON-LD @graph document that contains multiple schema entities, each assigned a stable @id, cross-referencing each other. Google’s Structured Data Guidelines explicitly recommend the graph pattern for complex sites.3
For a healthcare practice, the difference is load-bearing. A dermatologist with six providers, twenty procedures, four locations, and a hundred condition pages does not have “a piece of schema.” She has an entity graph: the practice, its locations, its providers, their specialties, their procedures, the relationships between them, and the content that describes them. Flat schema serializes each page in isolation. Graph schema reveals the connections.
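For contrast, this is what the flat pattern looks like on a provider bio page: a self-contained block with no @id and no link to the practice entity (the names are illustrative, matching the example used later in this piece):

```json
{
  "@context": "https://schema.org",
  "@type": "Physician",
  "name": "Ryon Parker, MD",
  "medicalSpecialty": "InternalMedicine"
}
```

This is valid markup and will pass a validator, but nothing in it ties the physician to the organization; the relationship lives only in the site’s navigation, which structured-data consumers do not read.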
Every node references the central MedicalBusiness via @id. The graph is what Google reads, not the individual pages.
The core types for healthcare.
Schema.org has a substantial medical vocabulary that most sites underuse.4 The core types to have across a healthcare site:
- MedicalBusiness: the practice itself, the root node every other entity references.
- Physician: one per provider, linked to the practice via worksFor.
- MedicalProcedure: one per service or procedure page, linked back to the practice.
- MedicalCondition: one per condition page.
- MedicalWebPage: the page entity that describes a condition or procedure.
- FAQPage: question-and-answer content, attached to the page it lives on.
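As a sketch of how a condition page plugs into the same graph (the /conditions/hypertension URL and names are illustrative; the #physician @id matches the provider bio entity used elsewhere on this hypothetical example.com site):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalCondition",
      "@id": "https://example.com/conditions/hypertension#condition",
      "name": "Hypertension (High Blood Pressure)"
    },
    {
      "@type": "MedicalWebPage",
      "@id": "https://example.com/conditions/hypertension",
      "url": "https://example.com/conditions/hypertension",
      "about": { "@id": "https://example.com/conditions/hypertension#condition" },
      "reviewedBy": { "@id": "https://example.com/providers/parker#physician" }
    }
  ]
}
```

The page entity and the condition entity are separate nodes with separate @ids, so other pages can reference the condition without re-declaring the page.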
The @id linking pattern.
The syntactic move is simple: every entity gets a URL-based @id that is stable across pages, and every reference to that entity uses { "@id": "..." } rather than re-declaring the whole entity. For a practice at example.com:
```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalBusiness",
      "@id": "https://example.com/#organization",
      "name": "Parker Medical",
      "url": "https://example.com/",
      "medicalSpecialty": "InternalMedicine"
    },
    {
      "@type": "Physician",
      "@id": "https://example.com/providers/parker#physician",
      "name": "Ryon Parker, MD",
      "worksFor": { "@id": "https://example.com/#organization" },
      "medicalSpecialty": "InternalMedicine"
    },
    {
      "@type": "MedicalProcedure",
      "@id": "https://example.com/services/concierge-medicine#service",
      "name": "Concierge Internal Medicine",
      "provider": { "@id": "https://example.com/#organization" }
    }
  ]
}
```

On every page, the root MedicalBusiness entity is referenced by @id, not re-declared. The Physician on the bio page references the organization. The MedicalProcedure on the service page references the organization and, where appropriate, the specific physician. The FAQPage on the condition page references the condition and the organization.
Google’s crawler assembles these fragments across the site into a single graph, because the @ids match. Without @ids, Google sees three independent entities, none explicitly related. The crawler may infer the relationship from context; it often doesn’t. The AI Overview pipeline, which depends heavily on explicit structured relationships, almost always doesn’t.
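The condition-page FAQ fragment can be sketched like this; the question text and /conditions/hypertension URL are illustrative, and the organization @id matches the root entity from the example above:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "@id": "https://example.com/conditions/hypertension#faq",
  "about": { "@id": "https://example.com/#organization" },
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does Parker Medical treat hypertension?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Hypertension management is part of our concierge internal medicine practice."
      }
    }
  ]
}
```

Because the about reference resolves to the same @id the crawler has already seen on other pages, the questions attach to the practice entity rather than floating free.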
The six common schema mistakes we see.
Inherited schema on healthcare sites tends to collect the same failure modes, and they are the ones we fix first on every engagement.
How to verify.
Two tools. Google’s Rich Results Test5 validates individual pages against rich-result eligibility; use it to confirm that specific types (FAQPage, Physician, Service) render correctly. Schema.org’s Schema Markup Validator6 checks the full graph structure and @id resolution. Run both on every page after a schema change.
Beyond validation, the practical test is the AI Overview itself: search the practice’s name and primary specialty in Google. If the AI Overview shows the practice accurately (correct specialty, correct providers, correct procedures) the graph is reading. If it shows generic specialty information without naming the practice, the graph is either absent or fragmented.
What the payoff looks like.
Practices that move from fragmented to graph-linked schema typically see three effects within 60-90 days. First, rich-result eligibility expands. FAQ, AggregateRating, and Sitelinks treatments start appearing where they didn’t before. Second, the practice begins appearing in AI Overview citations for relevant queries where it previously didn’t. Third, the “People also ask” and “Related searches” surfaces begin pulling questions from the site rather than competitors.
None of this is a growth-hack. It’s engineering correctness applied to a surface most agencies treat as cosmetic. The results are boring and durable, which is the point.
References.
- 1. Observed across Macbach’s active client book, April 2026. Query volume shift measured against comparable 2024 Q4 baselines.
- 2. Schema.org. JSON-LD Serialization (including @graph). schema.org.
- 3. Google Search Central. Structured Data Guidelines. developers.google.com.
- 4. Schema.org. Health and Medical Types. schema.org/docs/meddocs.
- 5. Google Rich Results Test. search.google.com.
- 6. Schema.org Validator. validator.schema.org.
Is your graph linked or scattered?
The Practice Audit surfaces schema presence, type coverage, and the specific gaps that keep a site out of AI Overview eligibility.