How do you get clinicians to trust an AI recommendation enough to act on it?
Led the 0-to-1 launch of a population health analytics module for enterprise hospital networks. The module surfaced AI-generated patient risk scores, but in early pilots clinicians saw the alerts and ignored them.
Interviews with care coordinators revealed that the issue was not model accuracy but explainability. Clinicians could not trust a score they could not interrogate, and the UI offered no answer to the question they kept asking: why is this patient high risk?
Prioritized an explainability layer over additional alert types: each risk score now surfaced its top contributing factors, ranked by weight, so clinicians could see why a patient was flagged. Drove cross-functional delivery across engineering, clinical SMEs, and GTM while maintaining HIPAA compliance and enterprise scalability.
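For illustration, a minimal sketch of how a "contributing factors" panel could be populated: per-factor contributions (assumed here to be simple weight-times-value terms; SHAP-style attributions would slot in the same way) are ranked by absolute magnitude before display. The feature names, weights, and helper function are hypothetical, not the shipped implementation.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    name: str            # human-readable clinical factor
    contribution: float  # signed contribution to the risk score

def rank_contributing_factors(feature_values, feature_weights, top_n=5):
    """Rank the factors behind one patient's risk score (hypothetical helper).

    Assumes a linear-style model where each factor contributes weight * value;
    attribution methods such as SHAP would produce the same shape of output.
    """
    factors = [
        Factor(name, feature_weights[name] * value)
        for name, value in feature_values.items()
        if name in feature_weights
    ]
    # Strongest drivers first, so the panel leads with what matters most
    factors.sort(key=lambda f: abs(f.contribution), reverse=True)
    return factors[:top_n]

if __name__ == "__main__":
    # Hypothetical weights and patient values, for illustration only
    weights = {"hba1c": 0.8, "er_visits_90d": 1.2, "missed_appointments": 0.6, "age_over_75": 0.4}
    patient = {"hba1c": 9.1, "er_visits_90d": 2, "missed_appointments": 3, "age_over_75": 1}
    for f in rank_contributing_factors(patient, weights, top_n=3):
        print(f"{f.name}: {f.contribution:+.2f}")
```

The key design choice this reflects: clinicians interrogate a score through a short, ranked list of drivers rather than a raw probability.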
Improved care-coordination efficiency by 13%. Accelerated compliant feature releases by 9%. The module became the anchor product in Innovaccer's enterprise GTM motion, deployed across 12+ hospital networks.
