R1 is the leader in healthcare revenue management, helping providers achieve new levels of performance through smart orchestration. With more than 20 years of experience, R1 partners with 1,000 providers, including 95 of the top 100 US health systems, and handles over 270 million payer transactions annually. If you want to learn more about how you can transform your revenue cycle operations, visit us at www.r1rcm.com. Hello, and welcome to the Becker's Healthcare podcast. My name is Will Riley from R1. Joining me today is Rob Purinton. Rob is chief AI officer at AdventHealth. Welcome to the podcast, Rob. Thank you. Thank you for having me. You bet. Rob, to start off, tell us a little bit about your role and a bit about AdventHealth. Sure. I'll start with AdventHealth. AdventHealth is a health system primarily operating hospitals, but also a total of 2,000 care sites, including urgent care clinics, physician offices, specialty clinics, and outpatient surgery. We operate in nine states and see over eight million patients a year. My role as chief AI officer is a new role in 2025, one I'm honored to serve in at the organization where I've worked for 20 years. Originally, I was a software developer out of college, but I caught the lean bug early on, and I've worked in health care to streamline processes, eliminate waste, improve efficiency, and help improve clinical outcomes for all of those 20 years. So I view the chief AI officer role as a continuation of that mission: to streamline, to standardize, and to help our organization become a better value proposition for our communities, our patients, and our purchasers. Fantastic. I'm really looking forward to talking to you. Sure. To set the scene, tell me a little bit about some of the strategic priorities AdventHealth has going into 2026. Yeah.
So first, AdventHealth has what's called Vision 2030, a strategic framework, a set of goals where we expect to arrive by 2030. It includes a consumer-focused experience, a robust set of clinical outcomes, financial stewardship, management of population and risk, and there are specific, measurable goals. We're in 2025, so we're halfway into that ten-year football game, as it were. We're at halftime. That's a good point to evaluate our approach, accelerate where we need to, and stabilize and lock in performance where we have it on some of those strategic initiatives. The strategic goals of AdventHealth haven't changed just because AI is now part of the equation. AI is an accelerant to a number of our strategic goals. For example, in clinical outcomes, we know that AI can help some of our clinicians make better diagnostic decisions, augmenting their clinical intelligence with predictive algorithms that can help them make the right choice for the next step of a patient's care. We know it can give time back to our outpatient providers, so they spend less time writing notes, less time sitting with fingers on keyboards documenting in the chart, and more time assessing patients, dialoguing with them, planning care, and helping them feel comfortable with their diagnosis. So that's what I would say about AI: while generative AI is a new technology and the new implementations of AI are exciting, the goals for health care value have not really changed just because AI is here. It's given us a new lens to look through and a new set of tools to apply. It does seem, though, that it's changing the way health care thinks about innovation, and perhaps the pace of innovation and use of technology. Health care providers haven't necessarily been known for being on the cutting edge of technology.
They've always had a relatively conservative approach to technology, but it does feel like that's different with AI. Do you agree with that? Oh, yeah. Absolutely. Why is that? Everything has sped up with AI. In part, you can look at where AI, not just in health care, is making the biggest difference. One of the biggest is that AI is being used to write computer code. Think about how long it takes to develop a new piece of software for health care. Ten years ago, you were not using AI to write code at all. Fast forward to 2025, and you wouldn't dream of writing any sizable piece of software without using AI for a big part of it. That means the cycles between this generation of software and the next and the next are getting shorter. Everything's speeding up. So our learning cycles have to be faster. Our pilot experiments have to be more robust around what we're measuring and how we're determining whether they were successful or we learned something different from them. And then we have to operate those cycles much more quickly than we have in the past. The typical life cycle for an application today is much shorter than it was for an application released ten years ago. By the time you get it implemented and adopted by, say, all of your clinical workforce, you already have to be looking for the next new thing. Right. So those innovation cycles have really sped up. The other thing that's changed is that the tools have become more accessible. AI is not something that's only there when companies introduce it to team members; we all get to use it all the time. Every person at this conference, every person in our health systems, knows how to go access ChatGPT or their favorite AI tool. So they're seeing firsthand how quickly they get value from it in their everyday lives.
Their expectations have been raised about getting that value from AI back in their work in the health system. Yeah. Okay. Shrinking development cycles, and this stuff works. Essentially. The expectations are going up; you see it with your own two eyes. Yeah. When you think about innovation in health care, I think it's possible to look at it through the lens of a couple of archetypes. Right? You've got incumbents in the health care system: large providers, large payers, large technology companies who have the data, the infrastructure, the systems, and the institutional knowledge. And then another group you could call insurgents: new entrants, people coming into health care for the first time, maybe from a technology perspective, AI-native companies, applications, platforms, and so on. When you look at how you're advancing your AI agenda at AdventHealth, how are you managing between those two things? Are you an incumbent that's welcoming insurgents, or how are you doing it? Yeah, it's a tricky line to walk. We want to learn as much as we can from AI-native or AI-first companies. We love, for example, that startups in the healthcare space are getting to their first hundred million of ARR very quickly, and we want to learn what works in that accelerated cycle so that we can adopt some of it. But at the same time, there's a stability that's expected of a health system our size. We don't want big swings in performance. We don't want big risks in our business cycles and our finances. Right. So we have to be very careful about what we bring in and what we make part of our system. I will say, though, and I like your dynamic around the incumbents and the insurgents and the disruptors.
It's really hard for an incumbent to completely transform into an AI-native company. That's just not realistic. I think for many incumbents, the goal ought to be not to be the slowest gazelle in the herd. Okay. Yeah. So you don't need to outrun the fastest one; you just need to not get beat by the slowest one. That's right. And that's part of what we're learning. It's not that we just want to stay with the pack; it's that we don't want to take any excessive risks. Our communities trust us. Our patients trust us. Our caregivers trust us not to just be experimenting with their work, and so we take that sacred trust very seriously. They also expect us to innovate at the speed they see in the rest of the world. So it's a very tricky balance. Yeah. What I will say is that in how we're evaluating AI, we're applying all of the responsible AI best practices, from NIST, the National Institute of Standards and Technology, and the new standards from the Joint Commission and CHAI, the Coalition for Health AI. We're applying those to how we evaluate AI. We're very focused on a fast cycle time for those evaluations, and then on creating feedback loops. Our job in assessing risk in our governance process is to determine whether a particular implementation of AI is low, medium, or high risk. We may not always get that risk assessment perfectly right, but the way we hedge is we may say, well, that's medium risk, but it needs to come back and be reevaluated every month. So even if we were wrong, it comes back every month, and we can determine it's actually low risk and dial down some of the follow-up, or determine it's actually high risk and really ramp up the observability and the monitoring. So I think it's really important, in this age of innovation where everything has sped up. That's not your imagination. It's real.
We very much focus on speed, so that we're keeping up with expectations and with the pace of innovation around us. And when you make a determination around risk, how do you factor risk? Great question. We'll look at what the AI is being used for. Is this a purely administrative task that's just going to save some minutes of desk work? Is this something that's actually leading to clinical diagnosis or treatment? We also look at whether there's a human in the loop: does a licensed clinician get to check the output before it actually reaches a patient? We look at whether there are any existing reports on the accuracy, bias, or performance of that particular implementation of AI. We look at whether they've implemented it at other places first and whether there's anything we can learn from those. Based on criteria like these (human in the loop, clinical application, use of PHI, standardized security measures like SOC 2 compliance), we'll make a determination about the level of risk. We have a rubric that helps sort into low, medium, or high. Yeah. And even though we make that determination, we can still always say there needs to be some mitigation on the back end, like a report on this every week, every month, or what have you. Right. Yeah. And you're saying "we," so there's clearly a group involved. There's a group. Can you tell us a bit about that broader governance framework you've established? Yeah. There are two levels. There's an AI governance committee, which is populated by executives. They help us determine where the bright red lines are, what we're going to do or not do, as policy for the health system. That's helpful. It meets four times a year and evaluates big-picture thinking and questions.
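The triage Rob describes, scoring a use case on criteria like clinical impact, human-in-the-loop review, PHI use, and vendor security posture, then attaching a re-review cadence to the resulting tier, could be sketched roughly as follows. This is a hypothetical illustration only: the criteria names, weights, thresholds, and cadences are assumptions for the sketch, not AdventHealth's actual rubric.

```python
# Hypothetical sketch of an AI risk-triage rubric: score a proposed AI
# use case on a few criteria, bucket it into low/medium/high risk, and
# attach a re-review cadence so the assessment gets revisited even if
# the initial call turns out to be wrong.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    clinical_impact: bool      # influences diagnosis or treatment?
    human_in_loop: bool        # licensed clinician reviews output first?
    uses_phi: bool             # touches protected health information?
    vendor_soc2: bool          # vendor holds a SOC 2 attestation?
    prior_deployments: bool    # accuracy/bias results exist elsewhere?

def assess(use_case: AIUseCase) -> tuple[str, str]:
    """Return (risk_tier, re-review cadence) for a proposed use case."""
    score = 0
    if use_case.clinical_impact:
        score += 3                 # clinical reach dominates the score
    if use_case.uses_phi:
        score += 2
    if not use_case.human_in_loop:
        score += 2                 # no clinician check raises risk
    if not use_case.vendor_soc2:
        score += 1
    if not use_case.prior_deployments:
        score += 1
    if score >= 6:
        return "high", "re-review weekly, with active monitoring"
    if score >= 3:
        return "medium", "re-review monthly"
    return "low", "re-review quarterly"

# Example: an ambient-documentation tool with a clinician in the loop.
scribe = AIUseCase("ambient scribe", clinical_impact=False,
                   human_in_loop=True, uses_phi=True,
                   vendor_soc2=True, prior_deployments=True)
print(assess(scribe))  # ('low', 're-review quarterly')
```

The key design point, matching the feedback loop described above, is that the output is not just a tier but a cadence: a mis-rated use case comes back on schedule and can be re-tiered up or down.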
There's a technical group that meets every week for between one and two hours, depending on our agenda. It goes through all of the reviews completed by technical researchers over the previous week so that we can ultimately land on a determination: low, medium, or high. In that forum, we have lots of opportunity to ask questions, to say, well, we need to dig a little deeper on this one. Have we connected with the clinical department that's looking at this? Have they considered that we already have a solution that does this inside the system? Right. So we get to ask all those questions, get answers, and then make our determination. And where does your role fit into this as chief AI officer? Because it's an unusual role at the moment for a health system to have. How did your role come about? Who set it up? Where did the idea come from? And how do you sit in this leadership framework? Yeah. So our previous CEO, Terry Shaw, and his executive committee created the chief AI officer role at AdventHealth. I'd been doing some work in my previous role in analytics and performance improvement, getting our doctors up to speed on what's coming with AI, how we should be thinking about it, and what use cases would be really impactful. And, you know, maybe I just drew the short straw. No, I'm kidding. I'm very blessed. I was asked to serve as our first chief AI officer, and I'd like to think it's a combination of a focus on performance, speed, and value, as well as a robust technical underpinning and knowledge about how AI works and what risks it can pose in our clinical environment. So yeah, I was invited to take the reins on this one, and I wouldn't trade it for anything. Do you sit in the IT team? I do sit in the IT department. Okay. Yeah. But I will tell you, I work with the entire organization. Of course.
Yeah. I work with our finance leaders. I work with our clinical leaders. There's very low friction to them bringing up a concern about existing AI or a new idea about AI. I will say, among the critical few AI strategic priorities that probably all AI officers are thinking about right now, one is what's going to be our platform to let agents start to transform our work. AI agents are more autonomous bits of AI that you can allow to do everything from writing a finance report to researching and transacting some things in your clinical system. You can let them do those things; the question is whether you should, and how you actually monitor and orchestrate those agents as they do their work. That's a big one for us right now. Another is really looking at our digital front door. How is it that patients engage with our system? Have we made it frictionless? Have we made it frictionless for us to call when there's a referral and get them engaged with the right kind of care team, specialist, or what have you? And a third area of priority is more around hospital-based innovation: things in the surgical area, surgical robotics, computer vision, how we're helping to enable the next generation of surgeons and more minimally invasive, faster-recovery approaches in some of our surgical specialties. Excellent. Which areas of hospital operations do you think are most ripe for full agentification? Well, I think revenue cycle is an easy answer to throw out there: everything from authorization, to helping with documentation in the midst of your revenue cycle, to denials management on the back end. I think all of that is really ripe for AI agents. I'll also say there's a lot of clinical workflow where so far we've been hesitant, not just AdventHealth, but health care generally, to introduce a lot of AI.
But those AI assistants and copilots can really help, whether it's helping documentation become more complete (we've seen that with ambient scribes; everybody's doing ambient scribes, and it's a big help) or, even more, along clinical decision support lines. How do you make sure you're not prematurely ruling out a differential diagnosis that, even though it's low probability, could pose a very great risk to a patient? AI is very good at keeping some of those things in front of you as a low-probability risk. Yeah. So we could benefit, and our patients could benefit, from having those sorts of decision-support copilots more present in our environments. Yeah. Last question before we move to our conclusion. Sure. You mentioned revenue cycle. That's an area that historically has been very labor intensive, very labor first, and technology has been used as an aid, a support, to people. It does feel like it's an area where that paradigm can flip completely, where it can be largely autonomous, with people assisting. It sounds like you agree with that premise. What are some of the implications of that, maybe from a workforce perspective? How do you talk to the teams about that as they contemplate it? Yeah. One thing from a workforce perspective: we face a challenge in health care in that the number of patients who will need care will vastly outnumber the resources we have organized to provide it today in the US. That pinch point is coming, and outside of some really aggressive reworking of our workflows and application of AI, we don't have another great solution to address it. The clinicians are not going to come out of nowhere to ride in and rescue us from bad ratios. We just have to go after every bit of non-value-added work we can.
And that may mean we need to repurpose roles into more patient-facing work, more medical aide and patient care tech type roles, and fewer back-office roles. We've watched the trend line in health care: comparing the number of administrative staff to the number of clinical staff over the last 20 years, clinical staff have not really budged, while administrative staff have skyrocketed. That's a trend we've got to reverse, getting more resources pointed back at patient care, and a way to do that is agentification and the application of AI. Yeah. Last question: you mentioned your digital front door. Sure. Can you end with a couple of thoughts on how all of this new technology can really improve the patient experience? Yeah. I think overall, the word we're looking for is frictionless. What we would hope is that the way you experience care is seamless and obvious. Say you go to your primary care physician and they suggest you see a cardiologist; within hours, a call or a message in the right format, in the right modality, arrives and makes it easy to suggest some appointments. Maybe it's already integrated with the calendar on your smartphone and can suggest times that already work for you to go connect with a cardiologist. The other is when consumers want to engage with our system: how do those inbound calls get routed in the smartest, most frictionless way possible, so that it's really easy to get to the provider you need at the time you need, with as few barriers as possible, like office schedules or repeated questions, making their way in? We have an initiative one of our CEOs coined "burn all the clipboards." You go to the physician office and inevitably you're handed a clipboard and asked to fill it out, which of course you've done triple-digit times before. Yep.
I refuse to do it now. Oh, you refuse? That's great. I remember that one. That's good. And no one says anything? They just let you in? Yeah. That's wonderful. Brilliant. I like that. So yeah, part of this is: we know we have your information. This should be frictionless, easy, and seamless, and you should not be writing down the same things again and again and again. So taking those experiences and letting AI help transfer your information from one encounter to the next, or make that whole thing more seamless, that's what we expect to really happen with a digital front door. Awesome. Rob, thank you so much. It's been a real pleasure talking to you. Thanks so much for sharing your insights. Thanks for having me. Really appreciate it. Thank you all. Thanks.