Health AI Is a 'Pandora's Box' of Risk, Lawyer Says. Here's His Advice

Health care is a highly regulated industry, for the most part. But there's one area so new that the line between "right" and "wrong" is still coming into focus: AI.

AI can bring significant value to health systems, but leaders should be aware of the legal risks these tools present, according to Dr. Danny Tobey, a dual MD-JD and global co-chair of DLA Piper's AI and data analytics practice.

DLA Piper—a global law firm with offices in more than 40 countries—was the first major law firm to launch a formal AI practice. It has defended some of the first cases involving generative AI hallucinations and successfully defended one of the first big "black box" algorithmic discrimination class action lawsuits in the insurance space. The firm has also done compliance and internal governance work for large health systems and biopharmaceutical and medical device companies adopting AI.

"When people ask me about health care [AI] risks, I say yes, it's imperative to do best practices," Tobey told Newsweek, "but at the same time, there is a risk in not adopting, and to throw out the baby with the bathwater is a big mistake."

I connected with Tobey to learn how health systems can protect their patients, providers and brands when adopting (or creating) new AI tools. Find our discussion on AI blind spots and bright spots below. This interview has been edited and condensed for length and clarity.

(Also worth mentioning: Newsweek is hosting its first AI Impact Awards this year to recognize the best and most effective uses of AI to solve practical problems in a range of areas, including health care. Entries are open until April 25, and finalists and winners will be announced in late May ahead of the AI Impact Summit in June.)

AI Healthcare Legal Issues
AI can raise legal issues for health systems, but that doesn't mean they should shy away from it, according to Dr. Danny Tobey, global co-chair of the AI and data analytics practice at law firm DLA Piper. Photo Illustration by Newsweek

How strict are the rules around ethical AI use in health care?

We're in a gray zone right now because there's an overabundance of voluntary and mandatory regulations, legislation and policies coming out globally. It's a patchwork of different federal agencies like the FDA and HHS, of state regulatory bodies like attorneys general, who are very focused on AI and health care, and then a fair amount of litigation starting to crop up.

No one wants to be in a Wild West situation, but we do find our clients in a position where they have both too much and too little guidance.

What are the main issues that your clients are running into that are leading to litigation?

Generative AI is powerful in a way that AI has never been before, but it is also a Pandora's box of potential risk. The thing that makes generative AI so powerful—its ability to operate probabilistically and solve creative problems—also means that it's much harder to regulate and test for safety and accuracy and consistency. Our clients have to figure out how to harness this new technology, but then test it sufficiently so they have some confidence that it's going to perform in the field.

Is it riskier for health systems to develop AI tools internally or to adopt technology from a vendor?

It's not a one-size-fits-all solution. You can have just as much or more risk from an off-the-shelf solution that is not trained and tuned for your environment as you can from trying to build your own tool and not having the internal resources to do it right. The main thing is having good governance in place, so that whether you build AI or buy it, you're making sure that it's working appropriately in your health care setting.

What are the most common issues that you see arising when health systems create AI internally?

Not all AI is created equal, and one thing generative AI is good at is answering questions whether or not it truly knows the factual answer. That's the brilliance of generative AI: It can create lifelike, confident answers for any problem that it's faced with.

So having the right guardrails to make sure that it's, one, drawing from reliable sources, two, getting the most up-to-date information, three, declining to answer when something veers into a protected category—like actual clinical decision-making—and, four, educating the people using the AI on its limitations and appropriate uses…those are all things that can really mitigate litigation risk on the front end. And then if we end up defending the litigation—which we have—it's great to be able to point to all those steps you took to make this technology as good as it possibly could be.

The other thing I'd say, though, is it's important not to over-index on risk. Medicine and health care as a system are fraught with human error and risk, and that's everything from information getting siloed in a very fragmented system to doctors having different skill levels and knowledge bases. AI can play a role in really democratizing access to health care. When people ask me about health care risks, I say yes, it's imperative to do best practices—but at the same time, there is a risk in not adopting, and to throw out the baby with the bathwater is a big mistake.

Is there anything that you bring up when you're having these conversations about safe, ethical AI use that tends to surprise health care leaders?

People are often surprised at the need for early budgeting for responsible AI. These tools are so easy to adopt and use that it can be surprising how much governance they need.

And let me explain that. There are open-source models for generative AI out there right now that a hospital system or a data science department in a pharmaceutical company can use at relatively low cost and spin out thousands of really valuable use cases. When the cost of creation and implementation is so low, it can be jarring to [learn you] need to invest in governance and liability protection—sometimes at a higher level than it takes to roll out the tool.

We have to remind people that just because a tool is easy to adopt and roll out doesn't mean there isn't an investment on the safety, accuracy and liability side of the coin. In fact, the ease with which these things can be spun out makes it all the more important to have that sort of good governance in place.

What would you consider the pillars of a strong AI governance framework for a hospital or health system?

First, a mandate from above, meaning the board and senior leadership, to take AI governance, safety and security seriously. Second, a budgetary mandate to treat AI safety and security seriously. Third, a multi-stakeholder approach that brings together people with the appropriate skill sets and authority to have meaningful, rather than performative, governance in place. Fourth, a real dedication to testing AI early and often, because AI can make so many decisions for so many people so quickly, oftentimes in an unexplainable, or so-called "black box," way, that you can be impacting thousands, or millions, or even hundreds of millions of people before you know that something has drifted off course.

Being dedicated to treating AI safety, accuracy and security as a part of the design and development process, and not just something you tack on at the end, is a difference-maker.

About the writer

Alexis Kayser is Newsweek's Healthcare Editor based in Chicago. Her focus is reporting on the operations and priorities of U.S. hospitals and health systems. She has extensively covered value-based care models, artificial intelligence, clinician burnout and Americans' trust in the health care industry. Alexis joined Newsweek in 2024 from Becker's Hospital Review. She is a graduate of Saint Louis University. You can get in touch with Alexis by emailing a.kayser@newsweek.com or by connecting with her on LinkedIn.

