
When AI Moves at Machine Speed, Healthcare Governance Cannot Afford to Walk

Shashank Singh understands how artificial intelligence (AI) is reshaping the healthcare industry. He has built AI and infrastructure systems that interoperate within highly regulated healthcare environments. In this interview, he discusses the governance frameworks AI deployments require and the consequences of getting those frameworks wrong.

Most of the dialogue surrounding artificial intelligence in healthcare focuses on what the technology can do. As deployment accelerates, far less attention goes to what happens when systems begin to move faster than the governance structures meant to oversee them. That gap, AI adopted too broadly, too fast, and without the right safeguards, is Singh's focus. He works as a technical leader at the intersection of governance, healthcare modernization, and AI technology, primarily in regulated healthcare environments, where the absence of effective controls carries the most serious consequences when an AI system fails.

Few practitioners have operated at this intersection from within production healthcare environments. Most governance frameworks for AI in healthcare are written by regulators, consultants, or researchers observing from outside institutional systems. Singh’s vantage point is different: he has built and overseen AI-integrated workflows within operational healthcare environments, which means his understanding of where governance breaks down is grounded in direct experience rather than policy inference. That makes his perspective on the current moment, a period of rapid AI adoption across clinical and administrative functions, worth paying close attention to. 

The scale of that adoption is no longer in question. AI-driven systems are now embedded in clinical documentation, claims processing, patient engagement, and operational management across healthcare organizations of every size. What is less settled, Singh argues, is whether the governance infrastructure surrounding those systems has kept pace. 

“Healthcare organizations are now operating at machine speed, while many governance models, oversight protocols, escalation pathways, and accountability mechanisms continue to function at a slower, human-dependent pace,” Singh says. “That structural gap is not a technology problem. It is a governance problem.” 

The Breach That Changed the Conversation 

The urgency behind Singh’s argument was brought into sharp relief by the 2024 ransomware attack on Change Healthcare, which disrupted payment and claims processing across the United States and exposed systemic dependencies on third-party infrastructure at a scale the industry had not previously confronted publicly. The breach affected data tied to tens of millions of individuals. It was not an anomaly. 

In 2024 alone, hundreds of healthcare cyberattacks were reported, with ransomware incidents affecting millions of patient records and costing the sector billions in disruption and recovery.

Singh is direct about what these incidents reveal. “Healthcare systems are no longer just digitized; they are dynamically interconnected and increasingly autonomous,” he says. “Governance models must reflect that shift, or risk becoming reactive rather than preventive.”

Credential-based compromise remains one of the most common entry points for attackers. In many major breaches, stolen credentials and insufficient authentication controls have enabled unauthorized access to critical systems, a pattern Singh has studied closely in the context of designing more resilient access governance frameworks. The trend reflects a broader shift in which attackers rely less on brute-force disruption and more on exploiting identity layers and internal trust relationships within networks.

The Autonomy Dilemma 

Detection capabilities have improved significantly with the integration of AI-driven monitoring tools. Yet response mechanisms often still depend on layered approvals and human validation, creating a timing gap between detection and containment that can extend attacker dwell time and increase the scale of impact.

Healthcare organizations are increasingly adopting autonomous cybersecurity platforms capable of executing real-time responses: isolating endpoints, suspending compromised sessions, restricting access within minutes. But their deployment, Singh argues, introduces governance questions that the industry has not yet resolved. 

“In healthcare, a false positive is not just a technical error,” Singh explains. “It can interrupt clinical documentation, delay billing, or affect patient care continuity. That is why autonomy must be paired with clearly defined authority boundaries, not as an afterthought but built into system design from the start.”
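A minimal sketch of what such authority boundaries might look like in practice: a small policy layer that allows a security platform to act autonomously only for pre-approved action types, above a confidence threshold, and outside clinical workflows, escalating everything else to a human. All names (`Action`, `Detection`, `decide`) and thresholds are illustrative assumptions, not drawn from any specific platform Singh describes.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    # Response actions mentioned in the article, plus one high-impact example
    ISOLATE_ENDPOINT = "isolate_endpoint"
    SUSPEND_SESSION = "suspend_session"
    RESTRICT_ACCESS = "restrict_access"
    SHUT_DOWN_SERVICE = "shut_down_service"

@dataclass
class Detection:
    action: Action
    confidence: float       # detector's confidence in the alert, 0.0-1.0
    touches_clinical: bool  # would the action interrupt a clinical workflow?

# Authority boundaries: actions the platform may take on its own,
# and the conditions under which it may take them. (Assumed values.)
AUTONOMOUS_ACTIONS = {Action.SUSPEND_SESSION, Action.RESTRICT_ACCESS}
MIN_CONFIDENCE = 0.9

def decide(d: Detection) -> str:
    """Return 'auto' if the platform may act autonomously, else 'escalate'."""
    if (d.action in AUTONOMOUS_ACTIONS
            and d.confidence >= MIN_CONFIDENCE
            and not d.touches_clinical):
        return "auto"
    return "escalate"
```

The point of a layer like this is that a false positive touching clinical documentation can never be acted on at machine speed; it is routed to human review by design, not by operator discretion.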

Governance as Infrastructure, Not Afterthought 

Singh’s prescription is structural. Frameworks such as the NIST AI Risk Management Framework are being examined as potential foundations for integrating accountability, auditability, and lifecycle oversight into AI-enabled systems. Healthcare organizations operating under existing compliance standards are exploring how to embed these controls directly into system design rather than applying them after deployment, when the cost of remediation is highest. 
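One way accountability and auditability can be embedded at design time rather than applied after deployment is a wrapper that records every AI-assisted decision, along with its model version, an input hash, and a timestamp, to an audit trail. This is a hedged sketch under assumed names (`audited`, `AUDIT_LOG`, `triage_claim`); the NIST AI Risk Management Framework describes outcomes, not code, so the mapping here is purely illustrative.

```python
import hashlib
import json
import time
from functools import wraps

# Append-only audit trail; in production this would be durable storage.
AUDIT_LOG: list[dict] = []

def audited(model_id: str, model_version: str):
    """Decorator that logs each decision with enough context to audit later."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(payload: dict):
            result = fn(payload)
            AUDIT_LOG.append({
                "model_id": model_id,
                "model_version": model_version,
                # Hash the input so the record is traceable without storing PHI
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()).hexdigest(),
                "output": result,
                "timestamp": time.time(),
            })
            return result
        return wrapper
    return decorator

@audited(model_id="claims-triage", model_version="1.4.2")
def triage_claim(claim: dict) -> str:
    # Placeholder decision logic standing in for a real model call
    return "review" if claim.get("amount", 0) > 10_000 else "auto_approve"
```

Because the logging lives in the decorator rather than in each model's code, lifecycle oversight comes for free: every deployed decision function carries its version and leaves an auditable record by construction.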

The urgency is reinforced by an expanding attack surface. Emerging research indicates that healthcare AI systems may be vulnerable to risks including data poisoning, prompt injection, and supply chain compromise, some of which can remain undetected for extended periods. Traditional continuity approaches, such as backup systems and disaster recovery planning, remain essential but are no longer sufficient on their own.

“Resilience in the AI era means controlled autonomy,” Singh says. “Systems must be able to act at machine speed, but within boundaries that ensure safety, compliance, and traceability.” 

As AI becomes embedded in workflows across the healthcare sector, the real challenge, Singh emphasizes, is no longer adoption; it is control. Organizations that fail to align governance with machine-speed systems risk not just operational disruption, but a loss of public trust in how healthcare systems function. Managing that balance, he argues, is the defining infrastructure challenge of this decade.

Shashank Singh is Technology Manager, AI Governance & Healthcare Infrastructure, at Axxess. He specializes in AI governance frameworks, cybersecurity resilience, and infrastructure modernization within regulated healthcare environments.
