Keeping PHI Out of the Model: Practical Patterns for Privacy Preserving LLMs in Healthcare
Snahil Singh, Anoop N.
Large Language Models (LLMs) are racing into clinics and back offices, but a single prompt, log, or misstep can leak Protected Health Information (PHI) and erode trust. This fast-paced, vendor-agnostic talk shows how to ship useful LLM features in healthcare without violating privacy or slowing delivery. Instead of theory, we’ll focus on what can go wrong across the LLM lifecycle (e.g., training data, prompts, logs, embeddings), how to think like an attacker, and how to translate all of it into a pragmatic, privacy-by-design workflow you can adopt immediately. You’ll leave with a concise blueprint, a threat-to-control matrix you can copy into your program, and a simple decision rubric for on-premises versus cloud deployments. If you own security, ML, or compliance and need practical patterns, this session is for you!