Snahil
Snahil is a cybersecurity professional with over a decade of experience in the software security industry. Her work spans biometric authentication and access control systems, IoT security, AI and ML infrastructure protection, applied cryptography, and software supply chain security. She champions inclusive growth in security through mentoring, organizing technical events, and speaking across industry and academia, opening doors for new voices and turning ideas into practice.
Session
Large Language Models (LLMs) are racing into clinics and back offices, but a single prompt, log, or misstep can leak Protected Health Information (PHI) and erode trust. This fast-paced, vendor-agnostic talk shows how to ship useful LLM features in healthcare without violating privacy or slowing delivery. Instead of theory, we’ll focus on what can go wrong across the LLM lifecycle (training, prompts, logs, embeddings, and more) and how to think like an attacker, then translate all of it into a pragmatic, privacy-by-design workflow you can adopt immediately. You’ll leave with a concise blueprint, a threat-to-control matrix you can copy into your program, and a simple decision rubric for on-premises versus cloud deployments. If you own security, ML, or compliance and need practical patterns, this session is for you!