How Penn Medicine Data Scientists are Improving Care by Learning How to Learn

Recently, we brought you a story about a new program called Palliative Connect that uses predictive analytics to facilitate coordination between palliative care specialists and a patient’s primary clinical care team, with a goal of offering palliative care consultations to the seriously ill patients who most stand to benefit from them. That program, which recently expanded from a small pilot into clinical practice at two of Penn Medicine’s hospitals, has shown notable success—nearly doubling the rate of patients being discharged to hospice, while decreasing inpatient mortality, 30-day readmission, and ICU transfers. But, perhaps more notably, it is not a one-hit wonder. Palliative Connect is one of a handful of such projects run each year by a data science team, in partnership with Penn Medicine clinicians, that use data-driven predictions to improve patient care.

In these innovative multidisciplinary teams, data scientists, human factors specialists (trained scientists who focus on how human behavior intersects with the design of systems), and clinical experts have partnered to learn not just how to improve health care through the use of data, but also how to run such data-driven improvement projects better.

“We’re creating the playbook for building these systems,” said Michael Draugelis, chief data scientist at Penn Medicine. These are questions that a lot of other health systems are still figuring out, he noted, from how to select a problem to how to build a team.

By taking a look at Palliative Connect, a project the team highlights as a particularly well-designed one, we can peek into the principles that are filling the pages of that playbook—principles that can help practitioners across the health care landscape who will increasingly have the chance to use machine learning and other sophisticated digital tools to improve care.

Principle 1: Find the Right Problem

“In almost every project we start, there is the sense that there is a clear problem to solve; that notion is often wrong,” Draugelis said during a talk at the first annual Informatics Day hosted by the Penn Institute for Bioinformatics on May 31.

An example is the Early Warning System for severe sepsis, a major cause of preventable death among hospitalized patients; the project was highlighted in a story last year in System News. The team that set out to work on this problem saw it as a knowledge problem: Clinicians need to know about the onset of severe sepsis sooner.

The predictive algorithm and early warning system the team developed succeeded at solving that knowledge problem. Alerts came in up to 30 hours before symptoms appeared. But when the team expanded the risk score and alert system into clinical settings for six months, clinician practice and patient outcomes were no better than in the preceding six-month period.

“It's one thing to predict severe sepsis and it's another thing to be able to predict sepsis that otherwise would go unrecognized where there's an opportunity for improving care,” Craig Umscheid, MD, MSCE, vice chair of quality and safety in the department of Medicine, and one of the team members, noted in an article about the project in ACP Hospitalist News.

There are certainly still problems that clinicians can tackle to improve patient outcomes from severe sepsis. But the problem at hand was “not primarily a knowledge gap,” Draugelis said—and therefore predictive analytics were not the best tool for the job.

The team knew they had a chance to improve their process from step one when Palliative Care Chief Nina O’Connor, MD, expressed interest in the idea that became Palliative Connect.

“We realized an effort needed to be made up front by both data scientists and clinical collaborators in really understanding what the problem is that we’re trying to solve,” said Corey Chivers, PhD, the lead data scientist for both the sepsis early warning system and Palliative Connect projects.

When working on Palliative Connect, the clinical team knew there were many different ways to get at the challenge of encouraging more physicians and patients to have and document conversations about end-of-life goals—and some, like the Serious Illness Conversation Guide and training for outpatient providers, don’t involve predictive algorithms at all. But in partnering with data scientists, they determined that an area where prediction could help would be in stratifying risk to identify the patients with the most severe illness. Solving this identification problem would ensure that the limited number of palliative care clinicians in the hospital could spend their time consulting with the highest-need patients, those most likely to benefit from their care.
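As a rough illustration of that identification step, here is a minimal sketch, assuming a hypothetical risk model output and a hypothetical consult-team capacity; the class, fields, and numbers below are invented for illustration and are not Penn Medicine’s actual implementation.

```python
# Illustrative sketch: rank inpatients by a predicted prognosis risk score and
# keep only as many as a small consult team can review. All names and values
# here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Patient:
    patient_id: str
    risk_score: float  # e.g., a predicted probability of a poor six-month prognosis

def select_for_triage(patients: List[Patient], daily_capacity: int = 10) -> List[Patient]:
    """Rank patients by predicted risk and return only as many as capacity allows."""
    ranked = sorted(patients, key=lambda p: p.risk_score, reverse=True)
    return ranked[:daily_capacity]

census = [Patient("A", 0.82), Patient("B", 0.35), Patient("C", 0.91)]
print(select_for_triage(census, daily_capacity=2))  # the two highest-risk patients
```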

Principle 2: Predict Something Your System Isn’t Meant to Change

Another key idea the team applied in building Palliative Connect was making it a testable system.

Often, clinicians want to use predictive analytics to help them identify a clinical problem—which sounds great, until you start trying to intervene on that same problem after a prediction has been made.

That makes it hard to test why the system does or doesn’t work as intended. If patients’ outcomes on the measure of interest improved, “you can’t disentangle whether you’re making a good prediction, a good intervention, or potentially neither,” Chivers said.

They now stop to think about the type of predictive solution and intervention they are going to deploy. Instead of trying to detect the very outcome they want to change, they aim to predict or identify a related signal: one that points to where an intervention could make a difference, but that the intervention itself is not designed to alter.

“With Palliative Connect, the thing you’re trying to change could be readmissions or delays in getting a patient into hospice, or a patient having poor quality of life at the end of their life,” said Susan Harkness Regli, PhD, Penn Medicine’s human factors scientist who works with the data science team. But, crucially, those outcomes weren’t the thing being predicted; rather, it was the patient’s poor clinical prognosis. While front-line clinicians continue to help improve the patient’s prognosis, the intervention offered in Palliative Connect—a referral to a palliative care consultation—is focused on other sorts of outcomes.

Principle 3: Design Measurement into the Protocol

The principle of not predicting the outcome you want to change is important in part because it makes it possible to measure throughout the process. In a system like Palliative Connect, each of these things can be measured separately: the accuracy of the prediction, how often the intervention is deployed when the need is predicted, and how well the intervention affects the specified outcomes. Protocols have to be designed that way intentionally; it doesn’t just happen. Sometimes in using predictive analytics, data scientists don’t understand the intervention well enough to be sure whether it has been deployed correctly or not. “We don’t know if the intervention doesn’t work, the prediction is wrong, or if nobody is actually doing the intervention the way we expected,” Regli said.
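To make “measured separately” concrete, here is a hedged sketch, assuming a hypothetical evaluation log in which each row records whether a patient was flagged, whether the predicted event actually occurred, whether the consult was offered, and the downstream outcome; all field names and data are invented for illustration.

```python
# Hypothetical evaluation log; every field name and value is invented for illustration.
records = [
    {"flagged": True,  "poor_prognosis": True,  "consult_offered": True,  "outcome_met": True},
    {"flagged": True,  "poor_prognosis": False, "consult_offered": False, "outcome_met": False},
    {"flagged": True,  "poor_prognosis": True,  "consult_offered": True,  "outcome_met": False},
    {"flagged": False, "poor_prognosis": False, "consult_offered": False, "outcome_met": False},
]

flagged = [r for r in records if r["flagged"]]

# 1. Accuracy of the prediction (here, simple precision among flagged patients).
precision = sum(r["poor_prognosis"] for r in flagged) / len(flagged)

# 2. How often the intervention is deployed when the need is predicted.
deployment_rate = sum(r["consult_offered"] for r in flagged) / len(flagged)

# 3. How the intervention relates to the outcome of interest (a crude comparison;
#    a real evaluation would need an appropriate study design).
with_consult = [r for r in records if r["consult_offered"]]
without_consult = [r for r in records if not r["consult_offered"]]
outcome_with = sum(r["outcome_met"] for r in with_consult) / max(len(with_consult), 1)
outcome_without = sum(r["outcome_met"] for r in without_consult) / max(len(without_consult), 1)

print(precision, deployment_rate, outcome_with, outcome_without)
```

Because each quantity is computed on its own, a shortfall in any one of them points to a different fix: retrain or recalibrate the model, repair the workflow, or rethink the intervention itself.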

In the case of the sepsis early warning system, measuring what actions clinicians took after being warned about a patient’s high risk of severe sepsis was never built into the protocol. “In order for us to understand what interventions are being taken, we had to resort to whatever data exhaust was being generated by recipients of the alert,” Chivers said. The data science team had to sift through these patients’ electronic health records, comparing medication and other treatment orders at time points before and after the alert, to determine whether anything had changed. The data existed, but the team hadn’t planned ahead for its intentional capture.
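A simplified sketch of that kind of retrospective check might look like the following, assuming hypothetical order timestamps pulled from the record and an arbitrary comparison window; none of these names or values come from the actual sepsis project.

```python
# Hypothetical sketch: did a patient's orders change around the time of an alert?
# Timestamps, window length, and order data are invented for illustration.
from datetime import datetime, timedelta

def orders_changed_around_alert(order_times, alert_time, window_hours=6):
    """Count new orders in equal windows before and after an alert fired."""
    window = timedelta(hours=window_hours)
    before = sum(1 for t in order_times if alert_time - window <= t < alert_time)
    after = sum(1 for t in order_times if alert_time <= t < alert_time + window)
    return {"orders_before": before, "orders_after": after, "changed": after != before}

alert = datetime(2018, 6, 1, 14, 0)
orders = [datetime(2018, 6, 1, 9, 30), datetime(2018, 6, 1, 15, 10), datetime(2018, 6, 1, 16, 45)]
print(orders_changed_around_alert(orders, alert))
```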

“We’ve gotten better at understanding the workflow, mapping out the workflow, and instrumenting the workflow in a way that the actions that are important will be captured,” Regli said. For Palliative Connect, that meant building a simple software interface to document the actions the triage palliative care nurse took when contacting front-line providers. The nurse could record whether the palliative care consultation was ordered, or not, and why not. Eventually, as the program is refined and if it expands further, this measurement and workflow can be built into the electronic health record itself.
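The kind of structured record such an interface might capture could look something like this minimal sketch; the field names and reason text are hypothetical, not the actual Palliative Connect software.

```python
# Hypothetical sketch of the structured record a triage interface might capture.
# Field names and the example reason are invented for illustration.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class TriageContact:
    """One documented contact between the triage nurse and a front-line provider."""
    patient_id: str
    contacted_at: datetime
    consult_ordered: bool
    reason_not_ordered: Optional[str] = None  # recorded only when no consult is ordered

log: List[TriageContact] = []
log.append(TriageContact("A123", datetime.now(), consult_ordered=False,
                         reason_not_ordered="goals-of-care conversation already documented"))
```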

Principles 4+: Build a Great Team, Keep Learning

The penultimate lesson from the teams behind both Palliative Connect and the sepsis early warning system was the value of bringing together expertise from multiple disciplines.

“They brought the tool, and then we as clinical experts helped to figure out how to match the tool with the right kind of intervention,” O’Connor said. “Those two groups alone can’t figure out how to solve the problem.”

Bringing together not just clinical and data expertise, but also expertise in how people interact with systems, was another vital area that helped Palliative Connect to succeed; it was the first project from the Penn Medicine data science team that involved a human factors specialist, Regli, at a deep level from early in the planning alongside data scientists with expertise in machine learning and predictive analysis.

“I think the melding of these two fields is something that people are starting to realize needs to happen,” Regli said, “and I think Penn is pushing the envelope on this. You’ll see a lot of data science articles publishing algorithms that have really good predictive capability, but they are not to the point of putting them into practice.” And these other algorithms have not necessarily yet translated to measurably better patient care, she added.

Learning is a crucial aspect of this work on multiple levels, team members noted.

“I think we’ve taught them a lot about palliative care in the process, and I’ve learned a bit about data science, too,” O’Connor noted. “It works both ways.”

“Everything we’re doing is toward the end of facilitating Penn’s leadership as a learning health system,” Chivers said. “We both learn as we do these interventions about the programs themselves, but then there is a level of metacognition: The things we’ve learned from previous programs have helped us be in a better position to learn for future projects.”

Regli agreed: “By developing this framework to do this kind of work,” she said, “Penn is pushing the envelope on learning how to learn.”

 
