Every AI vendor selling to healthcare has "HIPAA compliant" somewhere in their marketing. Few of them make it easy to understand what that actually means for your practice — what you're allowed to do, and what creates real exposure.
What HIPAA governs
HIPAA applies when you create, receive, store, or transmit Protected Health Information (PHI). PHI is individually identifiable health information: a patient name paired with a diagnosis, an appointment record, a prescription. If you're sending that data to an AI tool, you need a Business Associate Agreement (BAA) with that vendor.
Where most practices get this wrong
Consumer ChatGPT doesn't offer BAAs. Neither does the standard consumer version of Claude or Gemini. If you paste a patient note into ChatGPT to help summarize it, you've sent PHI to a vendor who hasn't agreed to protect it under HIPAA. That's a violation, regardless of what the AI does with it.
This doesn't mean AI is off-limits for clinical work. It means you need platforms that have signed a BAA with you. Nuance, Suki, and Abridge are examples of clinical AI vendors that operate under these agreements, and many EHR vendors now cover their AI features under the BAA you already have with them.
For administrative tasks that don't touch patient-specific information — writing staff training materials, drafting marketing copy, general practice operations — you're not dealing with PHI and BAAs don't apply.
The practical checklist
Before using any AI tool in a healthcare context, ask three questions:

- Does the vendor offer a BAA?
- Have you signed it?
- Are you sending only the minimum necessary PHI for the task?
If a vendor doesn't offer a BAA, or if getting one requires a large enterprise contract, that tells you something about how seriously they've considered healthcare compliance.
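For practices that route AI requests through their own scripts or middleware, the same checklist can be enforced in software before anything leaves the building. Here is a minimal sketch in Python, assuming a hypothetical internal registry of vendors with signed BAAs (VENDOR_REGISTRY) and a hypothetical send_to_ai wrapper; it illustrates the gate, not any vendor's actual API.

```python
# Minimal sketch: enforce the BAA checklist before any text is sent to an AI vendor.
# VENDOR_REGISTRY, check_before_sending, and send_to_ai are hypothetical names
# used for illustration, not a real library or vendor API.

# Hypothetical registry: which vendors offer a BAA, and whether one is signed.
VENDOR_REGISTRY = {
    "clinical-scribe": {"baa_offered": True, "baa_signed": True},
    "consumer-chatbot": {"baa_offered": False, "baa_signed": False},
}


def check_before_sending(vendor: str, contains_phi: bool) -> None:
    """Block the request unless the checklist is satisfied for this vendor and payload."""
    entry = VENDOR_REGISTRY.get(vendor)
    if entry is None:
        raise PermissionError(f"Unknown vendor '{vendor}': vet it before use.")
    # The contains_phi flag stands in for a real minimum-necessary review of the payload.
    if contains_phi and not (entry["baa_offered"] and entry["baa_signed"]):
        raise PermissionError(f"PHI blocked: no signed BAA on file with '{vendor}'.")


def send_to_ai(vendor: str, text: str, contains_phi: bool) -> None:
    check_before_sending(vendor, contains_phi)
    # The actual API call to the vendor would go here.
    print(f"OK to send to {vendor} ({len(text)} characters).")


if __name__ == "__main__":
    send_to_ai("clinical-scribe", "Visit note: ...", contains_phi=True)            # allowed
    send_to_ai("consumer-chatbot", "Draft a staff memo on parking.", contains_phi=False)  # allowed
    # send_to_ai("consumer-chatbot", "Summarize this patient note: ...", contains_phi=True)  # raises
```

The same logic works as a policy on paper: if there is no signed BAA on file, PHI doesn't go to that vendor.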
The paperwork side of this is not complicated. Getting the agreements in place before you use the tools is much easier than dealing with a breach notification afterward.
