Vibe coding by doctors: great for prototypes, dangerous for production.
AI coding agents (Claude Code, Cursor, GitHub Copilot, Lovable, v0, Bolt) have made it possible for a non-engineer physician to ship a working website in a weekend. The tooling is real and the productivity gain is real. The HIPAA exposure, the schema gap, the accessibility regressions, and the silent bugs that pass type-check but break under live patient load are real too. A practical read on where vibe coding helps a healthcare practice and where it ends in an OCR fine or a vanished search ranking.
A physician with no engineering background can now ship a working website in a weekend.
That sentence was science fiction in 2022. By the second quarter of 2026 it is a description of an actual workflow used by an unknown but rising share of solo and small-group medical practices. AI coding agents (Claude Code, Cursor, GitHub Copilot, Lovable, v0, Bolt, plus whatever next-generation toolchain ships before this article’s next refresh) make it realistic for a clinically trained physician to produce a working, deployed, public-facing website in days, with no engineering team. We have seen the workflow up close. Doctors describe it variously as “vibe coding,” “AI coding,” or simply “Claude coding,” though the tools and the names will keep shifting.
The productivity gain is real. The compounding question is where the work stops. There is a meaningful difference between the prototype that explores a concept and the production website that handles patient appointment requests under HIPAA scrutiny. AI coding agents make the first one easy. The second one is exactly where the tooling stops being a productivity gain and starts being a compliance and credibility liability.
This piece is not anti-AI. Macbach uses the same tooling on every active client build. The argument is about where the line sits between conceptual building (where vibe coding excels) and production-grade healthcare infrastructure (where the discipline is the deliverable and the tools are a means, not an end).
Why this is happening now.
Three vectors converged in the last eighteen months. AI coding agents crossed the usability threshold for non-engineers around mid-2024 and improved sharply through 2025; a doctor who can read English well enough to describe what a page should do can now produce code that actually does it. The agency cost layer for a custom healthcare website (mid five figures for a schema-complete, fast, HIPAA-aware build) reads as expensive against a free or low-cost AI subscription that produces something that “looks similar.” And template-shop platforms (Squarespace, Wix, default WordPress kits) have not closed the YMYL gap that medical practices need; they ship templates, not compliance stacks.
The doctor logic is rational. The tools work. Agency quotes are higher than the perceived deliverable. Template shops do not solve the schema and HIPAA layer. So the practice owner spends a weekend with Claude Code or Cursor and produces something that, on the surface, looks like a modern medical practice site. The site loads, the pages exist, the contact form submits. From the clinical operator’s perspective, the engagement is done.
The trouble is that “loads, pages exist, form submits” is the surface layer. The substance lives in five other layers, none of which AI coding agents handle by default.
Where vibe coding genuinely helps.
Three categories of work are excellent fits for AI coding agents in a doctor’s workflow.
Prototyping a new page or feature before committing. A practice owner who wants to test a new landing-page concept, a service-line layout, or a calculator widget can ship a working prototype in hours. That prototype is useful for internal team review, for showing a marketing partner what was envisioned, and for resolving the ambiguity in “something like this.” This is exactly what the tooling is best at, and the speed advantage over a stakeholder-deck-and-wireframe cycle is real.
Internal tools that do not touch patient data. A treatment-plan presentation generator, a financial option calculator, a recall scheduling helper, a referring-dentist case-summary template. If the tool runs inside the practice and never handles PHI, the compliance surface drops to near-zero and AI coding agents are an unalloyed productivity win.
Learning the stack. A doctor who spends a weekend building a Next.js practice site with Claude Code ends the weekend able to read a real engineering team’s output, ask better questions, and make informed decisions about agency engagements or in-house hires. The skill-acquisition value of the workflow is its most underrated benefit. Doctors who code, even at the vibe level, hire better marketing partners.
Where vibe coding produces a compliance liability.
The exposures cluster on the production side of a patient-facing healthcare website. The five categories we see most often in audits of doctor-built sites:
1. The form pipeline is not BAA-covered. The default workflow most AI coding agents suggest for a contact form involves Resend, SendGrid, Mailgun, Formspree, or a similar email/forms service. Each of those services offers HIPAA-compliant tiers, but only with a signed Business Associate Agreement. A doctor following the default tutorial signs up at the consumer tier, deploys, and is now transmitting patient appointment requests through a non-BAA pipeline. Per HHS OCR enforcement bulletins, that is a reportable breach if any submission contains PHI. Every appointment-request form submission contains PHI by definition.
2. Analytics and tracking pixels capture PHI. Default GA4 setups capture URL paths and form-field values into event payloads. A doctor-built site that names a confirmation page /booking-confirmed?for=mohs or that sends form-field contents into generate_lead events is leaking PHI into Google Analytics, which is not BAA-covered. Add Meta pixel, TikTok pixel, or LinkedIn Insight tag (all common in AI-generated marketing-site templates), and the leak multiplies. HHS OCR has issued multiple bulletins on exactly this configuration since 2022; enforcement actions are no longer rare.
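The leak is mechanical, which means the mitigation can be too. A minimal sketch, assuming a hypothetical `scrubAnalyticsUrl` helper run over every URL before it reaches an analytics payload; the parameter denylist here is illustrative, not exhaustive, and does not replace a proper BAA review:

```typescript
// Hypothetical helper: strip query parameters that can carry PHI from a URL
// before it is attached to any analytics event payload.
// The denylist is illustrative, not exhaustive.
const PHI_PARAMS = new Set(["for", "name", "email", "phone", "dob", "condition"]);

function scrubAnalyticsUrl(rawUrl: string): string {
  // Base origin is a placeholder; only path and query are returned.
  const url = new URL(rawUrl, "https://example-practice.com");
  for (const key of [...url.searchParams.keys()]) {
    if (PHI_PARAMS.has(key.toLowerCase())) url.searchParams.delete(key);
  }
  const query = url.searchParams.toString();
  return url.pathname + (query ? `?${query}` : "");
}
```

Benign campaign parameters survive; anything on the denylist is dropped before the event fires.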
3. The schema layer is missing or fragmentary. AI coding agents generate JSON-LD when prompted, and sometimes when not, but the output is rarely a connected graph. Organization without an @id; Physician without medicalSpecialty or NPI in identifier; per-page WebPage without a breadcrumb reference; FAQPage stuck on a page where the questions are not actually visible. The page indexes. The page does not get cited in AI Overviews because the entity graph is not legible to the citation model. The doctor sees Search Console traffic and assumes the SEO works. It works for the wrong queries and not for the high-intent ones.
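What “connected” means in practice fits in a few lines. A sketch of a minimal `@id`-linked graph, with placeholder URLs, names, and NPI; `parentOrganization` and `Service.provider` are one valid linking pattern, and real page types will need more nodes than this:

```typescript
// Sketch of a connected JSON-LD graph: every node carries an @id, and
// relationships reference those @ids, so the entities resolve into one graph
// instead of disconnected fragments. All values below are placeholders.
const BASE = "https://example-practice.com";

const graph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalBusiness",
      "@id": `${BASE}/#organization`,
      name: "Example Dermatology",
      url: BASE,
    },
    {
      "@type": "Physician",
      "@id": `${BASE}/team/dr-example#physician`,
      name: "Dr. Jane Example, MD",
      medicalSpecialty: "Dermatology",
      identifier: { "@type": "PropertyValue", propertyID: "NPI", value: "0000000000" },
      parentOrganization: { "@id": `${BASE}/#organization` },
    },
    {
      "@type": "Service",
      "@id": `${BASE}/services/mohs-surgery#service`,
      name: "Mohs Surgery",
      provider: { "@id": `${BASE}/#organization` },
    },
  ],
};
```

The test of a graph like this is whether every relationship resolves to a node that actually exists in the same `@graph`.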
4. Accessibility regresses silently. AI-generated React components are typically functional and visually credible but rarely WCAG 2.2 AA compliant without explicit instruction. Color contrast on accent-colored buttons drifts under 4.5:1, focus rings are removed for visual style, form labels are sometimes missing or duplicated, ARIA attributes are copy-pasted from training data without verification. Lighthouse Accessibility scores in the 70s and 80s are normal for vibe-coded sites; healthcare YMYL standards require 100. The legal exposure is the ADA Title III pattern that has produced thousands of healthcare-site lawsuits since 2020.
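The 4.5:1 threshold is checkable at build time. A sketch of the WCAG 2.x contrast math (relative luminance over sRGB) that a CI step could run against a brand palette; the helper names are ours, the formula is the standard’s:

```typescript
// WCAG 2.x contrast ratio: relative luminance over linearized sRGB channels.
// Colors are [r, g, b] in 0-255.
type RGB = [number, number, number];

function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function relativeLuminance([r, g, b]: RGB): number {
  return 0.2126 * channelToLinear(r) + 0.7152 * channelToLinear(g) + 0.0722 * channelToLinear(b);
}

function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Normal-text AA threshold; large text uses 3:1 instead.
function passesNormalTextAA(fg: RGB, bg: RGB): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}
```

A check like this catches the drifted accent button before Lighthouse does.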
5. Security headers are absent or wrong. Content Security Policy, HSTS, X-Content-Type-Options, Referrer-Policy, Permissions-Policy. AI coding agents rarely emit any of these. A healthcare site that does not enforce HTTPS-only via HSTS preload, that allows arbitrary inline scripts via a missing CSP, or that leaks referrer to third-party origins without the right policy is an audit-bait pattern. None of these are catastrophic on their own; together they make the practice an easier target for both compliance and security findings.
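For reference, a sketch of that header set in the key/value shape Next.js’s `headers()` config expects; the CSP source list is a placeholder and must be scoped to the third-party origins your site actually loads:

```typescript
// Site-wide security headers in Next.js headers() shape.
// The CSP origins below are placeholders.
const securityHeaders = [
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains; preload" },
  { key: "Content-Security-Policy", value: "default-src 'self'; script-src 'self' https://trusted-cdn.example" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Permissions-Policy", value: "camera=(), microphone=(), geolocation=()" },
];

// In next.config.js this would be applied to every route:
// async headers() { return [{ source: "/(.*)", headers: securityHeaders }]; }
```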
The silent-bug class.
Beyond compliance, AI-generated code carries a specific bug class that compounds in healthcare contexts. The code passes the TypeScript type-check, builds clean, deploys, and looks correct. Then it fails in production under conditions the prototype never tested.
RLS misconfiguration. A site backed by Supabase or any Postgres-with-row-level-security service that ships a default-allow policy looks fine when one user (the doctor running the prototype) tests it. Once live, every user sees every row. We have audited live doctor-built sites where the contact-form table was world-readable.
CORS misconfiguration. A wildcard Access-Control-Allow-Origin paired with credentialed requests is an exfiltration vector that AI coding agents emit confidently because the pattern shows up in Stack Overflow training data. The fix is one line; the vulnerability is real.
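The shape of the fix is an explicit allowlist. A sketch, with placeholder origins, that echoes the request origin back only when it is known, and omits the header otherwise:

```typescript
// Echo the origin only when it is on an explicit allowlist; never pair "*"
// with Access-Control-Allow-Credentials: true. Origins are placeholders.
const ALLOWED_ORIGINS = new Set([
  "https://example-practice.com",
  "https://www.example-practice.com",
]);

function corsOriginFor(requestOrigin: string | undefined): string | null {
  if (requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)) return requestOrigin;
  return null; // omit the Access-Control-Allow-Origin header entirely
}
```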
Form spam unmitigated. A working contact form without rate limiting, honeypot fields, or CAPTCHA-equivalent screening will receive several thousand spam submissions per week within a month of going live. Doctor-built sites we have audited often have inboxes choked with form spam to the point that real appointment requests get lost.
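Two of those screening layers fit in a few lines. A minimal in-memory sketch, with an illustrative honeypot field name and limits; a real deployment would persist the counters outside the process:

```typescript
// Honeypot field plus per-IP sliding-window rate limit, kept in memory.
// Field name, window, and limit are illustrative.
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 3;
const recent = new Map<string, number[]>();

function shouldAccept(ip: string, fields: Record<string, string>, now = Date.now()): boolean {
  // Honeypot: a visually hidden field that humans leave empty and bots fill.
  if (fields["company_website"]) return false;
  const hits = (recent.get(ip) ?? []).filter(t => now - t < WINDOW_MS);
  if (hits.length >= MAX_PER_WINDOW) return false;
  hits.push(now);
  recent.set(ip, hits);
  return true;
}
```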
Migration breakage on legacy URLs. A doctor migrating from a WordPress site to a Next.js or similar SPA-friendly stack via vibe coding rarely preserves the legacy URL structure or implements 301 redirects from old slugs. The new site goes live; six months of accumulated SEO authority on the old slugs 404s; rankings drop sharply; the doctor blames the new site without realizing the old equity was the issue.
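The missing piece is usually just an explicit redirect map. A sketch in the shape Next.js’s `redirects()` config returns, with placeholder legacy slugs; the point is that every indexed legacy URL gets a permanent redirect before launch:

```typescript
// Legacy WordPress slugs mapped to their new homes; `permanent: true`
// emits a permanent (301/308) redirect. Slugs are placeholders.
const legacyRedirects = [
  { source: "/our-services", destination: "/services", permanent: true },
  { source: "/meet-the-doctor", destination: "/about", permanent: true },
  { source: "/blog/:slug", destination: "/articles/:slug", permanent: true },
];

// In next.config.js: async redirects() { return legacyRedirects; }
```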
Mixed-content errors and CSP failures. A page that loads HTTPS-secure but pulls fonts, images, or scripts from HTTP origins (or from origins blocked by a strict CSP) breaks silently in production. Doctor-built sites we have audited often have console errors on the home page that the operator never sees because the browser does not show them by default.
The professional layer.
Macbach uses Claude Code, Cursor, and similar AI agents on every active client build. The argument is not that doctors should not touch the tools. It is that the tools are necessary but not sufficient for production healthcare websites, and the gap between necessary and sufficient is exactly where the agency-vs-DIY conversation matters.
The professional layer is five things, in order:
One. A documented BAA chain: every vendor in the form pipeline (forms, email, CRM, replay, analytics where used) has a signed BAA on file or is replaced with a vendor that does. The chain is audited quarterly.
Two. A connected schema graph, page-by-page, audited via Schema.org validator and Google Rich Results Test on every release. Organization, MedicalBusiness or specialty subtype, Physician with medicalSpecialty and NPI, Service or MedicalProcedure, FAQPage where Q and A are visible, BreadcrumbList. Linked by @id so the entity graph is legible.
Three. Accessibility verified to WCAG 2.2 AA with Lighthouse Accessibility score of 100 and axe-clean output. Re-verified on every templated page type per release.
Four. Security headers enforced site-wide: HSTS with preload, CSP scoped to required third-party origins, X-Content-Type-Options, Referrer-Policy, Permissions-Policy. HTTPS-only with no mixed content, verified.
Five. Analytics configured to exclude PHI by default with documented event payloads. GA4 events whitelisted to non-PHI parameters; pixels deferred to non-PHI pages; replay tools (where used) configured to mask form fields and confirmation paths.
None of these are exotic. All of them are routinely missed in vibe-coded production sites because the tooling does not emit them by default. The professional discipline is the audit pattern that catches them every time, not the tooling that almost never does.
A decision framework.
The choice is not binary between “DIY everything” and “agency everything.” A working framework for a doctor running a practice in 2026:
- 01. Vibe-code the prototype. Use Claude Code, Cursor, or whatever tool you prefer to build the concept site, the new landing page, or the internal tool. The productivity gain is real; capture it.
- 02. Draw the BAA line. Anything that touches a patient's contact information, scheduling intent, or clinical context belongs behind a BAA-covered pipeline. If your AI-generated code routes form submissions to a non-BAA service, that piece is not production-ready, period.
- 03. Audit the schema graph. Run Schema.org validator and Google Rich Results Test on every page. If you cannot recognize the JSON-LD as a connected graph (Organization linked to Physician linked to Service via @id), the AI did not ship a graph; it shipped fragments. Replace before going live.
- 04. Verify accessibility. Run Lighthouse and axe on every templated page. If the Accessibility score is below 100, the legal exposure is real. Fix or replace before going live.
- 05. Set the security headers. CSP, HSTS, X-Content-Type-Options, Referrer-Policy, Permissions-Policy. These are not optional in healthcare. AI-generated sites rarely include them by default; you must add them.
- 06. Decide where to draw the agency line. The fastest path for most solo and small-group practices is to vibe-code the conceptual layer (you keep the IP, you stay in control of the brand) and to engage a healthcare-specific agency for the production layer (compliance, schema, accessibility, security, analytics). The two are complementary, not adversarial.
The bottom line.
Vibe coding is real, useful, and here to stay. Healthcare practices that learn to use it well will out-iterate competitors that ignore it. The same practices, if they ship vibe-coded production sites without the compliance-and-credibility audit pattern that healthcare YMYL standards require, will lose more in OCR exposure, accessibility lawsuits, and lost search rankings than they ever saved on agency fees.
The line is not between AI and human. The line is between conceptual exploration (where AI excels and the doctor stays in control) and production healthcare infrastructure (where the operating discipline is the deliverable, the tools are a means, and the audit pattern is what separates a working site from a working site that compounds).
Build the prototype yourself. Bring a professional in for the production version. Both halves of that sentence are true at the same time, and both halves are necessary in 2026.
The compliance and ranking layers.
Ship the prototype you built. We handle the production.
SitePRO is the production-grade healthcare website service for practices that want the compliance, schema, accessibility, security, and analytics audited end-to-end. You keep the design vision and the IP from your prototype; we ship the version that will not get an OCR letter or lose a Map-pack ranking on launch day.