Healthcare organizations are moving fast to adopt AI, but speed without structure can create serious compliance risk. In this episode of the HIPAA Insider podcast, Adam Z. speaks with Franck Leveneur, founder and CEO of Data-Sleek and a former UCLA professor, about what it really takes to build AI-ready infrastructure in healthcare. Their conversation breaks down why governed data, clear lineage, strong architecture, and human oversight matter far more than hype when protected health information is involved. HIPAA’s Security Rule is built around administrative, physical, and technical safeguards, not just one-off tools or a signed BAA.

→    Building AI on top of healthcare data? Request a free consultation or get a quick quote to see what secure, compliant infrastructure should look like.

The real problem: having data is not the same as having AI-ready data

One of the strongest takeaways from the conversation is that many healthcare organizations mistake data volume for data readiness. They may have EHR data, claims data, device data, and operational records, but that does not mean those datasets are organized, governed, or safe to use in AI workflows. Franck’s point is that disconnected systems, undocumented transformations, and weak metadata make it difficult to trust outputs and even harder to defend those outputs during an audit. HHS guidance on the HIPAA Security Rule and risk analysis reinforces that organizations need to understand where ePHI lives, how it moves, and what risks apply to it.

“They confuse having data and having AI-ready data.” — Franck Leveneur

That line gets to the heart of the issue. AI-ready healthcare infrastructure starts with disciplined data governance, not model selection.

→    For organizations still building their technical foundation, this is where services like HIPAA Vault’s hosting solutions and secure cloud provider services become strategic, not optional.

Accelerate Innovation with Managed Google Cloud AI

Build custom models using TensorFlow and Document AI. We handle the security and BAA, giving you total control over your results.

Learn More

What AI-ready healthcare infrastructure actually requires

Franck points to a practical readiness checklist: a unified data catalog, documented metadata, data lineage, quality controls, and clear ownership. That aligns with the broader direction of federal guidance. NIST’s AI Risk Management Framework emphasizes governance, documentation, accountability, and human oversight across the AI lifecycle, while HHS continues to center HIPAA Security Rule compliance on risk management and defensible safeguards.

In plain English, healthcare leaders should be able to answer questions like:

  • Where did this data point come from?
  • Who changed it?
  • What system generated it?
  • Was it transformed before an AI model touched it?
  • Can we explain that process to an auditor?
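The questions above translate naturally into lineage metadata. The sketch below is a minimal illustration, not HIPAA Vault's or any vendor's actual tooling; the system names, field names, and record shape are all hypothetical, but it shows the kind of per-field audit trail that lets a team answer "where did this come from, who changed it, and what happened before a model touched it":

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Hypothetical lineage entry for one dataset field."""
    source_system: str                 # what system generated it?
    field_name: str
    changed_by: str                    # who changed it last?
    transformations: list = field(default_factory=list)  # what happened before a model saw it?

    def apply(self, step: str, actor: str):
        # Record each transformation with an actor and a UTC timestamp.
        self.transformations.append({
            "step": step,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.changed_by = actor

    def audit_trail(self) -> str:
        """Plain-language summary an auditor could read."""
        steps = " -> ".join(t["step"] for t in self.transformations) or "none"
        return (f"{self.field_name}: originated in {self.source_system}; "
                f"transformations: {steps}; last changed by {self.changed_by}")

rec = LineageRecord(source_system="EHR", field_name="hba1c", changed_by="ingest-job")
rec.apply("unit normalization", actor="etl-pipeline-v2")
print(rec.audit_trail())
```

Real lineage tracking usually lives in a data catalog rather than application code, but the principle is the same: every transformation is attributed, timestamped, and explainable after the fact.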

“Could you explain to a regulator where the data point came from?” — Adam Z.

That is not just a technical question. It is a compliance question, a governance question, and in many cases a liability question.

If your organization cannot confidently answer those questions, it is worth reviewing whether your environment is actually HIPAA-ready, whether your current stack meets HIPAA web hosting requirements, and whether your security posture is being validated often enough.

Don't wait until it's too late. Download our free HIPAA Compliance Checklist and make sure your organization is protected.

Why siloed systems turn into compliance debt

The episode does a good job illustrating why siloed systems create hidden AI risk. Your EHR, billing platform, analytics stack, and cloud storage may all work fine individually. The problem starts when those systems are stitched together without consistent naming, tagging, lineage, and access controls. At that point, organizations often assume they are secure because they encrypt some traffic or have a vendor contract in place. But weak architecture can still undermine monitoring, authorization, auditability, and data minimization. HHS makes clear that HIPAA compliance is not achieved by one control alone; it depends on a broader risk-based safeguards program.

This is also why healthcare teams should think beyond storage and into validation. A formal risk assessment and, where appropriate, HIPAA penetration testing and vulnerability assessments can expose weak assumptions before an AI initiative magnifies them.

→   Not sure whether your architecture is ready for AI or an audit? 

Schedule a HIPAA risk assessment. It is a faster way to find security and governance gaps before they become reportable problems.

Why AI pilots look good in testing but fail in production

Another strong point from the discussion is that healthcare AI pilots often succeed on curated data, not real data. Engineers select a narrow dataset, clean it up, align the fields, and test the model or workflow in a controlled setting. Then production happens, and suddenly there are undocumented pipelines, schema mismatches, missing fields, poor tagging, unclear permissions, and systems nobody fully mapped.

That is why pilot success can be misleading. It may prove that a model works under ideal conditions, but not that the organization is ready to operationalize AI safely in a HIPAA-regulated environment. NIST’s AI RMF explicitly pushes organizations to manage risk throughout the lifecycle, not just at the proof-of-concept stage.

The operational response is to treat data as a product: define ownership, document expectations, validate quality, and establish contracts between the source systems and the AI or analytics layer. For teams exploring this path, related HIPAA Vault resources on HIPAA-compliant app development and HIPAA-compliant AI platforms are useful next reads.
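A "contract" between a source system and the AI layer can be as simple as a set of field-level rules enforced before any record crosses the boundary. The sketch below is illustrative only, assuming hypothetical field names and rules rather than any real healthcare schema:

```python
# A minimal "data contract" check between a source system and an
# AI/analytics layer. Fields and rules are hypothetical examples.
CONTRACT = {
    "patient_id": lambda v: isinstance(v, str) and len(v) > 0,
    "encounter_date": lambda v: isinstance(v, str) and len(v) == 10,  # YYYY-MM-DD
    "systolic_bp": lambda v: isinstance(v, int) and 40 <= v <= 300,
}

def validate(record: dict) -> list:
    """Return a list of contract violations; an empty list means the record passes."""
    problems = []
    for name, rule in CONTRACT.items():
        if name not in record:
            problems.append(f"missing field: {name}")
        elif not rule(record[name]):
            problems.append(f"failed rule: {name}={record[name]!r}")
    return problems

good = {"patient_id": "p-001", "encounter_date": "2024-05-01", "systolic_bp": 128}
bad = {"patient_id": "p-002", "systolic_bp": 999}

print(validate(good))  # []
print(validate(bad))
```

Records that violate the contract get quarantined and routed back to the owning team instead of silently flowing into a model, which is the operational meaning of "treat data as a product."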

“Vibe coding” is fast, but it can make data lineage opaque

One of the most timely parts of this episode is the conversation around vibe coding: prompting LLMs to generate code, transformations, and workflows quickly enough that human teams can end up shipping architecture they do not fully understand. Franck’s concern is not that AI-assisted development is inherently bad. It is that code produced this way can lose architectural context, making data lineage harder to explain and security decisions harder to defend. That risk is real enough that security teams are now warning about “vibe coding” as a pathway to subtle vulnerabilities and opaque system behavior.

“AI should be an accelerator.” — Franck Leveneur

That is the right framing. AI can speed up work, but it should not replace architectural review, quality controls, and human accountability. In healthcare, where sensitive data is involved, generated code should be treated as draft material until it has been reviewed in context.

Who is liable when AI-generated code creates a HIPAA issue?

Adam raises the obvious follow-up: what happens if someone uses an LLM to write code that moves patient information from a secure system into an insecure cache or poorly protected workflow? Franck’s answer is straightforward: the human organization remains liable. Current HIPAA enforcement does not shift legal or compliance responsibility to an AI model. Covered entities and business associates are still accountable for safeguarding ePHI and for implementing appropriate controls, documentation, and risk management.

That has practical consequences. If your AI-generated code introduces an insecure transformation, breaks access controls, or exposes protected health information, the regulator is going to look at your organization’s choices, not the model’s intent. This is why logging, access reviews, segmentation, and secure hosting matter so much before AI enters production.

→    If that risk feels uncomfortably familiar, it may be time to strengthen the infrastructure layer with HIPAA cloud services, HIPAA hosting, or business continuity and disaster recovery hosting.

The minimal viable data strategy before your first AI deployment

For organizations without mega-enterprise budgets, Franck recommends starting with three things: inventory, quality, and ownership. That framework is useful because it forces a team to stop chasing AI buzzwords and start asking more basic questions:

  • What data do we actually have?
  • Is it usable and trustworthy?
  • Who is responsible for maintaining it?
  • Which business problem are we trying to solve first?
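Even the inventory step can start as something this simple. The sketch below is a hypothetical minimal data inventory, with made-up dataset and team names, showing how quickly unowned or unreviewed datasets surface once you write the list down:

```python
# Hypothetical minimal data inventory: one row per dataset, with an
# accountable owner and a coarse quality-review flag.
inventory = [
    {"dataset": "ehr_visits", "system": "EHR", "owner": "clinical-data-team", "reviewed": True},
    {"dataset": "claims_2023", "system": "billing", "owner": None, "reviewed": False},
]

# Gaps fall out immediately: anything without an owner or a review.
unowned = [d["dataset"] for d in inventory if d["owner"] is None]
unreviewed = [d["dataset"] for d in inventory if not d["reviewed"]]

print("needs an owner:", unowned)
print("needs a quality review:", unreviewed)
```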

That final question matters. Not every healthcare company needs generative AI on day one. Some may get more immediate value from analytics, reporting maturity, operational dashboards, or workflow automation. Good strategy starts with business alignment, not AI hype. HHS risk-analysis guidance supports the same logic: understand your environment, identify where ePHI resides, assess vulnerabilities, and manage risk proportionately.

→   Want to know whether your current environment is truly AI-ready? Talk to HIPAA Vault about secure hosting, cloud architecture, and compliance-minded infrastructure for healthcare teams.

The bottom line: data governance is the real AI roadmap

The biggest lesson from this episode is simple: AI readiness is a data governance problem before it is a model problem. Healthcare organizations that move too fast without lineage, metadata, ownership, and defensible safeguards risk building fragile systems on top of regulated data. The result is not just technical debt. It is compliance debt.

Before asking which model to deploy, healthcare leaders should ask whether they can explain how their data is collected, classified, secured, transformed, and audited. If they cannot, the next investment should not be a bigger AI budget. It should be a stronger operational foundation.

HIPAA Vault helps healthcare organizations build that foundation with secure cloud infrastructure, compliant hosting, risk assessments, and security-focused support designed for regulated data environments.

→    Ready to build AI-ready healthcare infrastructure without guessing your way into compliance trouble? 

Request a free consultation or explore HIPAA Vault’s secure enterprise solutions.

Frequently Asked Questions About AI-Ready Healthcare Infrastructure