AI is making it easier than ever to build software fast. In healthcare, that sounds like a win: faster MVPs, lower development costs, and shorter time to market. But when an app touches protected health information, speed is not the same as safety.

That is the tension healthcare leaders need to understand right now.

In a recent HIPAA Insider Show interview, Pulse Security AI CEO Mike Armistead explained that AI is fundamentally changing how startups build products, but he also warned that the same acceleration can introduce serious security weaknesses if teams do not build the right controls around it.

For healthcare founders, provider groups, and IT leaders, that creates a high-stakes question: Are AI-built healthcare apps genuinely innovative, or are they creating new security and compliance problems faster than teams can manage them?

The answer is both.

Under the HIPAA Security Rule, covered entities and business associates must protect electronic protected health information with administrative, physical, and technical safeguards. That obligation does not disappear because an app was built with AI.

Planning an app launch or evaluating an MVP? Start with a security-first review!


AI is changing healthcare software development fast

One of the strongest insights from the interview is that AI is not just making developers a little faster. It is changing the way experienced teams work altogether.

Armistead put it this way: “Our best coders, the most productive developers I’ve been around in my whole career, they don’t code anymore… but they set things up.”

That quote captures the shift perfectly. The highest-value technical people are increasingly moving away from line-by-line coding and toward architecture, instruction quality, validation, review, and system design. In other words, AI may generate code, but experienced humans still decide whether that code belongs in a real product.

That matters even more in healthcare, where apps often handle patient data, workflow automation, messaging, documentation, scheduling, or care coordination. An app can be functional long before it is secure, compliant, or operationally safe.

HHS also makes clear that health apps do not get a free pass just because they are innovative. Whether HIPAA applies depends on how the app is used, who is using it, and whether the developer or vendor is functioning as a business associate.



Faster code does not mean safer code

This is where the hype starts to break down.

AI coding tools can dramatically reduce development time, but they do not automatically produce secure software. Armistead was direct about that risk. Explaining how these systems are trained, he said developers have historically been rewarded for shipping features quickly, not for writing hardened code. Then he added: “That’s what it’s been trained on. So is it a surprise that it has 30% more, 40% more vulnerabilities that it puts into that code? No, not very surprising.”

That concern lines up with NIST’s Secure Software Development Framework, which recommends integrating secure software practices into the development lifecycle rather than treating security as something to bolt on after release. NIST says secure development practices help reduce vulnerabilities in released software and lessen the impact of those that remain.

For healthcare organizations, that means an AI-built app should never be judged only by whether it “works.” It should also be judged by whether it:

  • handles ePHI appropriately,
  • enforces access controls correctly,
  • protects data in transit and at rest,
  • logs meaningful events,
  • limits exposure across APIs and integrations,
  • and has been tested for vulnerabilities before launch.
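Several of the checks above can be enforced in code rather than left to convention. Here is a minimal sketch of two of them, role-based access control and audit logging, around a patient-record lookup. The role names, permission map, and `fetch_record` function are all hypothetical, not part of any framework described in the article:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical role-to-permission map; a real app would load this from policy.
PERMISSIONS = {
    "clinician": {"read_record", "write_note"},
    "billing": {"read_invoice"},
}

def fetch_record(user_role: str, patient_id: str) -> dict:
    """Return a patient record only if the role allows it, and log every attempt."""
    allowed = "read_record" in PERMISSIONS.get(user_role, set())
    audit_log.info(
        "access=%s role=%s patient=%s time=%s",
        "granted" if allowed else "denied",
        user_role,
        patient_id,
        datetime.now(timezone.utc).isoformat(),
    )
    if not allowed:
        raise PermissionError(f"role {user_role!r} may not read records")
    return {"patient_id": patient_id}  # stand-in for a real datastore lookup

record = fetch_record("clinician", "p-1001")  # granted and logged
```

The point is not the specific code but the pattern: denied attempts are logged just as loudly as granted ones, which is exactly the kind of "meaningful event" a reviewer should look for.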

A working demo is not the same thing as a production-ready healthcare application.

Already have a proof of concept? Validate it before launch with penetration testing.


Don't wait until it's too late. Download our free HIPAA Compliance Checklist and make sure your organization is protected.

Why healthcare apps are a special case

The healthcare market is especially vulnerable to rushed AI development because many apps begin with a legitimate operational problem.

A physician group may want a better patient portal. A multi-location clinic may need workflow automation. A founder may spot a real gap in care coordination, intake, billing, or communication. AI lowers the barrier to building something useful fast.

But healthcare is not like building a generic consumer app.

In the interview, the discussion drew a useful distinction between internal healthcare tools and products meant for the broader market. That distinction matters because once sensitive workflows, patient data, or third-party service relationships enter the picture, the risk profile changes fast.

Armistead also emphasized that domain expertise still matters: “You’re not going to get off the street… someone who’s going to do a healthcare app if they didn’t have knowledge of that.”

That is exactly right. Healthcare software requires more than development velocity. It requires knowledge of the environment, the workflows, the data sensitivity, and the compliance obligations surrounding the system.


HIPAA still requires risk analysis, safeguards, and ongoing review

A common mistake in AI product conversations is assuming the compliance work happens after the product is built.

In healthcare, that is backwards.

HHS says the HIPAA Security Rule requires appropriate safeguards to ensure the confidentiality, integrity, and availability of electronic protected health information. HHS also treats risk analysis as a foundational requirement, not an optional best practice.

That requirement fits closely with one of Armistead’s best points from the interview: security and compliance policies should not remain static. Discussing policy management, he said organizations should make them “active living breathing kind of things.”

That idea is especially important in AI environments. If the model changes, the workflow changes, the hosting environment changes, or the app starts processing a new category of data, your policies and controls should change too.

Healthcare teams using AI-built applications should review at least five areas before launch:

1. Risk analysis

Can you clearly identify where ePHI is created, transmitted, processed, stored, and exposed? HHS guidance makes risk analysis central to HIPAA Security Rule compliance.

2. Vendor relationships

If a cloud provider, AI provider, or third-party tool touches regulated data, business associate obligations and contractual controls may apply.

3. Infrastructure and hosting

Even strong code can fail in a weak environment. Hosting architecture, backups, segmentation, monitoring, and access management all matter to the app’s real risk posture.

4. Policies and procedures

Documentation should reflect how the system actually works now, not how the team intended it to work during prototyping.

5. Testing and remediation

Security testing needs to happen before production and after major changes, especially when AI accelerates release cycles.
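A risk analysis starts with knowing where ePHI actually lives and moves. One lightweight way to keep that inventory reviewable is to record each flow as structured data the team can diff as the system changes. The systems and fields below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    system: str      # component that touches the data
    data: str        # category of data handled
    at_rest: bool    # is the data stored in this system?
    encrypted: bool  # is it protected where this system holds or sends it?

# Hypothetical inventory for a patient-intake app.
FLOWS = [
    DataFlow("intake-form", "patient demographics", at_rest=False, encrypted=True),
    DataFlow("app-db", "patient demographics", at_rest=True, encrypted=True),
    DataFlow("analytics-export", "visit notes", at_rest=True, encrypted=False),
]

def unencrypted_ephi(flows):
    """Flag systems where ePHI is held or transmitted without encryption."""
    return [f.system for f in flows if not f.encrypted]

print(unencrypted_ephi(FLOWS))  # the analytics export needs remediation
```

An inventory like this does not satisfy the HIPAA risk-analysis requirement by itself, but it makes the first question of that analysis, "where is the ePHI?", answerable in one place.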


Detection matters, but prevention matters more

Another standout insight from the interview was Armistead’s warning that cybersecurity has become too detection-heavy.

He said the industry has “started to swing more and more towards just detectors,” and that many organizations have “lost the art of prevention.”

That is a powerful framing for healthcare teams.

Detection tools are important. Alerts matter. Monitoring matters. Incident response matters. But none of those things are substitutes for good architecture, secure defaults, role-based access, hardened infrastructure, disciplined change control, or proactive risk reduction.

NIST’s SSDF and HHS security guidance both reinforce that effective security comes from integrating protective practices into how systems are designed, developed, and maintained, not simply from catching problems later.

In practical terms, prevention means:

  • choosing secure infrastructure early,
  • minimizing data exposure,
  • reviewing AI-generated code,
  • validating integrations,
  • limiting privileges,
  • and fixing issues before attackers or auditors find them.
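"Minimizing data exposure" can be made concrete: strip identifiers before data ever leaves your boundary, rather than trusting a downstream service to ignore them. A sketch, assuming a hypothetical payload shape and a hypothetical allow-list of fields a third-party AI service is permitted to see:

```python
# Fields the hypothetical downstream service is allowed to receive.
ALLOWED_FIELDS = {"visit_reason", "appointment_date"}

def minimize(payload: dict) -> dict:
    """Drop every field that is not explicitly allow-listed."""
    return {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "visit_reason": "follow-up",
    "appointment_date": "2025-03-01",
}
safe = minimize(raw)  # only visit_reason and appointment_date survive
```

The design choice worth noting is the allow-list: new fields added during rapid AI-assisted development are excluded by default until someone deliberately approves them, which is prevention rather than detection.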

That is the difference between launching quickly and launching responsibly.


The next major issue: AI agents

If today’s concern is AI-assisted coding, tomorrow’s concern may be autonomous AI action.

Armistead said, “we’re going to have a lot of agents running around,” and warned that organizations will need ways to control those systems and ensure they do not expose data or behave in unintended ways.

He explained the distinction clearly: traditional chat models often respond to prompts one exchange at a time, while agentic systems are given a goal and then autonomously take steps to accomplish it. That creates a very different security problem.

NIST’s AI Risk Management Framework and its Generative AI Profile are designed to help organizations identify and manage the risks associated with generative AI systems.

OWASP also identifies prompt injection as a leading risk in generative AI applications, warning that attackers can manipulate model behavior through crafted inputs that cause unintended actions or disclosures.

For healthcare, that means AI agents should not be trusted with sensitive workflows unless there are clear guardrails around:

  • what data they can access,
  • what actions they can take,
  • what systems they can call,
  • what outputs are reviewed by humans,
  • and how their activity is logged and governed.
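Those guardrails can live in the layer that dispatches an agent's requested actions. A minimal sketch, with an invented allow-list of tools and an in-memory audit trail standing in for a real append-only log:

```python
ALLOWED_TOOLS = {"lookup_schedule", "draft_message"}  # actions the agent may take
AUDIT_TRAIL = []  # a real system would use an append-only, tamper-evident log

def dispatch(tool: str, args: dict) -> str:
    """Run an agent-requested tool only if allow-listed; log every request."""
    permitted = tool in ALLOWED_TOOLS
    AUDIT_TRAIL.append({"tool": tool, "args": args, "permitted": permitted})
    if not permitted:
        raise PermissionError(f"agent may not call {tool!r}")
    return f"ran {tool}"  # stand-in for invoking the real tool

dispatch("lookup_schedule", {"provider": "dr-smith"})
try:
    dispatch("delete_records", {})  # blocked, but still recorded
except PermissionError:
    pass
```

Because the agent never calls tools directly, every action it attempts, allowed or not, passes through one chokepoint where policy and logging apply. That is the "control those systems" layer Armistead is pointing at.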

What healthcare founders should do before shipping an AI app

Healthcare leaders do not need to avoid AI. But they do need to stop treating speed as proof of readiness.

A better approach is simple:

Build with domain experts

Healthcare workflows are too sensitive for generic app assumptions. As Armistead made clear, deep subject-matter knowledge still matters.

Treat compliance as a design requirement

Do not wait until launch to think about HIPAA. HHS guidance makes clear that safeguards and risk management are integral, not optional.

Review AI-generated code aggressively

Do not assume code is safe because it compiles or appears to function.

Invest in prevention

Security should reduce risk upstream, not just flag trouble after deployment.

Keep governance current

Policies, access decisions, system boundaries, and review processes should evolve as the product evolves.

Test before scaling

A healthcare MVP is not market-ready until its risks, infrastructure, and controls have been validated.

Are you launching a healthcare app, portal, or internal platform?
Request a Free Consultation and get a deployment review.


Final takeaway

AI is not the problem. Uncontrolled adoption is.

AI can absolutely help healthcare teams build faster, operate leaner, and explore new product ideas. But it can also accelerate weak coding habits, blur governance boundaries, and push insecure apps toward production before they are ready.

That is why this interview matters.

Mike Armistead does not argue against AI. He argues for using it with discipline. His warning is not that innovation is dangerous. It is that innovation without prevention, domain expertise, and ongoing governance is dangerous.

For healthcare organizations, that is the real line between innovation and risk.


FAQ