Deepti Pandita, MD, CMIO & VP of Clinical Informatics at UCI Health, believes the fastest path to value with AI is to stop treating it as a side project and instead weave it directly into existing operational priorities. “AI should be embedded into whatever your strategic projects are or whatever problems are being solved at the health system level,” she said in a recent interview, outlining a pragmatic playbook for health systems that want measurable outcomes.
This interview was conducted as part of our recently published Special Report on AI.
Rather than launching standalone initiatives, the approach starts with the institution’s strategic problems—length of stay, ED flow, revenue-cycle leakage—and evaluates where AI can augment or replace current tactics. That framing keeps governance and measurement grounded in familiar management disciplines: define outcomes, assign owners, set KPIs, and report against them.
She emphasized that governance must be multi-stakeholder by design. Operational leaders and IT cannot go it alone; end users, data scientists, and ethics expertise need seats at the table from intake through implementation. The goal is not to debate AI in the abstract but to ensure those who will live with a tool help shape it—and can advocate for it—before deployment.
A second pillar is role clarity. Many organizations struggle to decide whether AI requires separate governance or folds into existing structures. The answer, according to Pandita, lies in recognizing that the intake process should mirror that of other technologies, while ongoing oversight will diverge. “A tech lifecycle management is different from an AI solution lifecycle management.” That difference—models that drift, data that changes, and guardrails that evolve—justifies tailored lifecycle checkpoints without fragmenting operational ownership.
Lifecycle and Oversight Without the Hype
The discipline extends beyond committee charters. Metrics must be tied to the business problem, not to AI itself. That means defining success up front, documenting data refresh assumptions, and planning for recalibration. When pilots stumble, the culprit is often not the algorithm but overlooked workflow realities or misaligned data timing. She recommends time-boxed experimentation with explicit go/no-go gates, resisting endless tweaks that consume resources without moving KPIs.
On culture, the view is evolutionary, not revolutionary. Health systems have successfully digested earlier technology shifts—from on-prem to cloud, from licensed software to SaaS—and AI will pass through a similar maturation arc. Education and repetition, she noted, turn anxieties into routine practice, just as governance disciplines transform one-off pilots into standards.
When Vendors “Add AI”: Contracting, Transparency and Control
A growing operational risk is AI arriving through existing vendors via upgrades or “companions.” That can change a product’s data, privacy, and risk profile overnight. Procurement, in her telling, becomes a critical line of defense. “Our procurement has a standard form for any vendor that mentions the word AI in it.” The safeguard helps surface where models live, what data they use, how outputs are audited, and who bears responsibility for failure.
The trickier cases are unannounced feature injections. Those require vigilance—by IT, security, or contracting staff—to spot new capabilities and route them through governance before use. Vendor understanding of local context matters as well: academic environments, employed faculty models, and state-specific rules (e.g., California’s AI posture) can make a solution “proven” elsewhere unworkable without adaptation. Shared goals, she advised, only work if the supplier first understands the system it is trying to serve.