
Managing AI utilization to ensure optimal business results

CIOs can partner with vendors via a Service Level Outcome Agreement that defines infrastructure performance and enables AI governance and scalability.
By admin
Apr 30, 2024, 8:45 AM

This is the last of three articles, powered by CHIME Digital Health Insights and sponsored by Spectrum Enterprise, examining how healthcare providers can modernize their enterprise infrastructures to meet increased demands, including care anywhere and emerging technologies like AI, and upgrade their governance and service partnerships to support this “new normal.”

Any enterprise that sees a pilot program improve efficiency or cut costs will seek to scale the initiative to any business unit that may benefit. This is especially true in the tough financial times that health systems now face, with rising expenses and reduced revenues putting pressure on margins, according to a recent McKinsey report.

Healthcare executives must be careful about scaling effectively, though. If an initiative fails to increase efficiency, reduce costs, improve care quality, or boost patient and employee satisfaction as anticipated, it will only be harder to get buy-in for the next innovation.

For many organizations, artificial intelligence (AI) is the latest technology to pose scalability questions – not just about where and when to deploy AI but also about how to govern its use. AI relies heavily on real-time data access and processing, making robust network infrastructure and consistent connectivity critical factors in generating accurate insights. As leaders address these questions and challenges, they should consider whether their technology provider’s Service-Level Agreements (SLAs) explicitly reflect these factors by ensuring uptime, bandwidth, and latency meet the specific needs of AI applications. Without a robust foundation, even the most sophisticated AI can be challenged by inconsistent data flow and processing delays.

AI comes with potential as well as cautions

There’s no question AI has transformative potential in healthcare. Robotic Process Automation (RPA) takes tedious tasks out of workers’ hands, from coding and processing medical claims to reminding patients about preventive screenings. Ambient assistants document clinical notes so physicians have more face time with patients. Generative AI has augmented everything from marketing communication to search queries to decision support.

However, this potential comes with cautions. There are many reasons why organizations fail to scale AI implementation and AI-driven insights. Here are several top considerations:

  • It seems so easy. Needs, use cases, computing resources, and capabilities vary significantly among business units. The “lift and shift” approach rarely succeeds.
  • It’s not driven from the top. Business unit leaders may champion the use of process automation, decision support, or generative AI, but today few non-IT senior executives do.
  • Data complexity is vastly underestimated. Data from disparate sources, much of it unstructured, must be harmonized before it can be analyzed. That’s a serious undertaking.
  • There aren’t enough resources. Small teams can handle a pilot, but few health systems have the IT professionals they need to manage enterprise-wide deployment.
  • Governance is often lacking. Many organizations have not fully thought through how AI should be applied.
  • Insufficient infrastructure and bandwidth. Data bottlenecks, processing delays, security risks, integration challenges and limited scalability can stifle AI projects and insights.

The role of AI governance

The first four problems, though not necessarily easy to solve, can be addressed with prudent planning, strategic resource allocation, and well-considered foresight. However, governance is a more challenging area. The Digital Health Most Wired (DHMW) 2023 survey found only 40% of respondents (representing a cross-section of U.S. healthcare providers) had AI-specific governance in place.

Governance isn’t new to healthcare – after all, dictating who has the authority to perform tasks or access resources is fundamental to maintaining care quality and patient safety – but AI governance will take some work.

Organizations should start by determining who governs the deployment of AI for tasks such as billing, scheduling, and supply chain management. This ensures employees understand the role AI should play and where the “human in the loop” interaction should take place.
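
To make the “human in the loop” boundary concrete, a governance policy can be expressed as a simple decision gate. The sketch below is a minimal, hypothetical Python example: the task names, confidence threshold, and routing labels are illustrative assumptions, not features of any particular product.

    from dataclasses import dataclass

    # Hypothetical governance policy: which operational tasks AI may complete on
    # its own, and which must always be routed to a human reviewer.
    AUTONOMOUS_TASKS = {"appointment_reminder", "claim_status_lookup"}
    REVIEW_REQUIRED_TASKS = {"claim_denial", "prior_authorization", "clinical_note_summary"}

    @dataclass
    class AIResult:
        task: str          # e.g., "claim_denial"
        confidence: float  # model confidence score, 0.0 to 1.0
        payload: dict      # the AI-generated output

    def route(result: AIResult, confidence_floor: float = 0.90) -> str:
        """Return 'auto' when policy allows the AI to act alone, else 'human_review'."""
        if result.task in REVIEW_REQUIRED_TASKS:
            return "human_review"
        if result.task in AUTONOMOUS_TASKS and result.confidence >= confidence_floor:
            return "auto"
        # Unknown tasks and low-confidence results default to a person.
        return "human_review"

    # A low-confidence reminder still goes to a human.
    print(route(AIResult("appointment_reminder", 0.62, {})))  # prints: human_review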

In some specialties, such as radiology, AI is not new at all, while in others it is only now arriving, so familiarity with the technology is uneven across the enterprise.

The next and most challenging step is governance that fully defines AI’s role in the relationship between the patient and the clinician. This is critical: staff need to know which data is available for decision support and when, and they need to be aware of the complicated privacy and ethical issues surrounding AI insights. Without this guidance, clinical staff are unlikely to consistently understand which insights are actionable and which are not, severely undermining AI’s utility to them.

Scalable infrastructure and connectivity

As bandwidth-intensive technologies such as AI emerge across the healthcare market, upgrading bandwidth and compute capacity in the underlying networks helps ensure consistent performance and scalable headroom for future growth.

Reliable, high-capacity network infrastructure ensures consistent data flow, which is critical for tasks like feeding “training data” into AI models or facilitating quick communication between associated applications, devices, and software. Healthcare organizations that leverage AI need high-performance throughput to enable data delivery, content connections, and application performance nationwide, across any fiber internet or Ethernet connection, on a network that reliably delivers on capacity demands.

Ultra-High Speed Data connectivity enables differentiated performance across bandwidth-intensive and cloud-based applications by accelerating data flow, reducing latency, and powering data transfer with speeds that scale up to 100 Gbps nationwide.
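
As a rough illustration of why those speeds matter, the back-of-the-envelope calculation below estimates how long moving a large imaging or training dataset would take at different link speeds. It is a sketch only: the 20 TB dataset size and the 70% usable-throughput figure are illustrative assumptions, and real-world numbers will vary.

    # Rough transfer-time estimate: dataset size in terabytes, link speed in Gbps.
    def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
        bits = dataset_tb * 8 * 1000**4            # terabytes -> bits (decimal units)
        usable_bps = link_gbps * 1e9 * efficiency  # assume ~70% of line rate is usable
        return bits / usable_bps / 3600            # seconds -> hours

    for gbps in (1, 10, 100):
        print(f"{gbps:>3} Gbps: about {transfer_hours(20, gbps):.1f} hours for a 20 TB dataset")

Under these assumptions, the same 20 TB dataset moves in roughly 63 hours at 1 Gbps, about 6 hours at 10 Gbps, and well under an hour at 100 Gbps.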

Infrastructure diversity is vital to the resilience, security, uptime, and flexibility of healthcare networks and operations. Organizations need to ensure resiliency and disaster recovery between major sites and the cloud, with up to seven levels of diversity. DHMW data showed that 40% to 50% of organizations cannot restore critical operations (e.g., network, communications, administrative, and clinical information systems) in under four hours, with some needing upwards of 24 hours to bring these systems back online.

Improving infrastructure diversity requires a multi-layered approach, including the layers below (a brief sketch after the list shows one way to track them):

  • Multi-site architecture – Geographically dispersed data centers across different regions provide redundancy in case of localized outages like natural disasters or power failures.
  • Multi-cloud adoption – Leveraging multiple cloud providers or different regions within a single cloud provider (multi-region) creates geographically separate instances of critical applications and data.
  • Data backups – Regular backups of data across various locations — including on-prem storage, secondary data centers, and the cloud — provide data diversity.
  • Monitoring – Continuous monitoring of infrastructure, application, and network-connectivity performance helps proactively identify and address potential issues.
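
One lightweight way to make these layers auditable is to record them as a declarative resilience profile and check it programmatically. The following Python sketch is hypothetical: the region names, cloud identifiers, and the four-hour recovery target are illustrative assumptions based only on the figures cited above.

    from dataclasses import dataclass, field

    @dataclass
    class ResilienceProfile:
        data_center_regions: list = field(default_factory=list)  # multi-site architecture
        cloud_regions: list = field(default_factory=list)        # multi-cloud / multi-region
        backup_locations: list = field(default_factory=list)     # on-prem, secondary DC, cloud
        monitoring_enabled: bool = False
        recovery_time_objective_hours: float = 24.0

    def diversity_gaps(profile: ResilienceProfile) -> list:
        """Flag missing diversity layers against a simple internal policy."""
        issues = []
        if len(profile.data_center_regions) < 2:
            issues.append("fewer than two geographically dispersed data centers")
        if len(profile.cloud_regions) < 2:
            issues.append("no multi-cloud or multi-region redundancy")
        if len(profile.backup_locations) < 2:
            issues.append("backups not spread across multiple locations")
        if not profile.monitoring_enabled:
            issues.append("no continuous monitoring in place")
        if profile.recovery_time_objective_hours > 4:
            issues.append("recovery time objective exceeds the four-hour target")
        return issues

    profile = ResilienceProfile(
        data_center_regions=["region-east", "region-central"],
        cloud_regions=["cloud-a/us-east"],
        backup_locations=["on-prem", "cloud-a"],
        monitoring_enabled=True,
        recovery_time_objective_hours=6,
    )
    for issue in diversity_gaps(profile):
        print("GAP:", issue)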

Taking SLAs to the next level

Fortunately, CIOs don’t have to go it alone. AI systems are designed to guide decision-making that’s tied to operational and clinical goals, from better efficiency through billing automation to increased patient engagement through personalized outreach.

CIOs should therefore expect vendors to accept contractual responsibility for delivering on process improvement promises. A Service Level Outcome Agreement would detail how an AI system supports efforts to achieve stated business goals, along with how the health system and vendor will partner on achievement.

The SLA should clearly define infrastructure performance metrics such as network uptime, bandwidth, and latency. This ensures the infrastructure meets the performance thresholds that AI applications require to avoid disruption.
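
To show how such metrics could be checked against the contract, here is a minimal, hypothetical sketch. The threshold values (99.9% uptime, 20 ms latency, 10 Gbps bandwidth) are placeholders for illustration, not figures from any actual SLA.

    # Hypothetical SLA thresholds for an AI workload; real values come from the contract.
    SLA = {"uptime_pct": 99.9, "max_latency_ms": 20.0, "min_bandwidth_gbps": 10.0}

    def sla_breaches(measured: dict, sla: dict = SLA) -> list:
        """Compare measured network metrics against contractual thresholds."""
        breaches = []
        if measured["uptime_pct"] < sla["uptime_pct"]:
            breaches.append(f"uptime {measured['uptime_pct']}% is below {sla['uptime_pct']}%")
        if measured["latency_ms"] > sla["max_latency_ms"]:
            breaches.append(f"latency {measured['latency_ms']} ms exceeds {sla['max_latency_ms']} ms")
        if measured["bandwidth_gbps"] < sla["min_bandwidth_gbps"]:
            breaches.append(f"bandwidth {measured['bandwidth_gbps']} Gbps is below {sla['min_bandwidth_gbps']} Gbps")
        return breaches

    # One month of measurements pulled from network monitoring (illustrative).
    print(sla_breaches({"uptime_pct": 99.95, "latency_ms": 34.0, "bandwidth_gbps": 12.0}))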

Using this approach, CIOs gain the context necessary to flesh out governance policies for individual AI systems, a framework for an enterprise-wide AI governance strategy, and a resilient, scalable foundation for network performance and data flow, both today and in the future. When leaders understand AI’s applications and how it is enabled and delivered, they improve the likelihood that AI will scale effectively and appropriately throughout their enterprise and live up to its potential.


Check out the prior two articles:

  1. From normalization to “swarm-ilization”: Preparing for the future of infrastructure demands in healthcare
  2. Infrastructure reimagined – connecting the patient bed to the data center

About Spectrum Enterprise

Spectrum Enterprise, a part of Charter Communications, Inc., is a national provider of scalable, fiber technology solutions serving many of America’s largest businesses and communications service providers. The broad Spectrum Enterprise portfolio includes networking and managed services solutions, Internet access, Ethernet access and networks, and Voice and TV solutions. The Spectrum Enterprise team of experts works closely with clients to achieve greater business success by providing solutions designed to meet their evolving needs. For more information, visit enterprise.spectrum.com.

