CHEST 2023 On Demand Pass
AABIP: Spotlight on Nodules: Unraveling the Future of Detection and Tracking with Radiomics and Beyond
Video Transcription
Hi, good afternoon, everybody. Thank you so much for taking your time to be with us today. We are going to try to rapid-fire solve all of your lung nodule needs in one hour or less, so hold on tight. As you said, I'm Susan Garwood. I serve as an advanced bronchoscopist at a hospital in Nashville, Tennessee, about a 700-bed hospital. These are my disclosures. As you can see here, I do various consulting work for those that are on here. I think I'm supposed to say them out loud: Intuitive, Biodesix, Ferricide, AstraZeneca, Roche, J&J, none of which have anything to do with what I'm talking about today. So for the learning objectives, I'm supposed to give kind of the 30,000-foot view. We're going to review the landscape of nodules and really look at the opportunity, and I think the most important thing you can take away is to go back to your healthcare system and talk about not just screening but incidentals, and then really impress upon not only your institutions but industry how we solve this problem better. How do we use the holy grail of natural language processing, AI, and radiomics to solve not just whether we can find a nodule, but how we identify who should be intervened upon? So I'm going to try to touch on some of that and how we've approached it. I'm going to review the workflow and management that we have at HCA. As you said, I am the physician director for the entire pulmonary service line. I got that hat right before COVID, so we got very distracted by that, but a lot of work has been done since then, and we'll look at what we consider a novel screening pattern for us. And so when I came to HCA, what I came to was a mess, like many of you. So the landscape of nodules has lots of ripe opportunity for intervention. Obviously there are multiple entry points; most of us are pulmonary or pulmonary critical care. We know that care is very fragmented. It comes into pulmonary. It comes into primary care. It comes in through the ER. A little bit comes in from screening. The biggest thing that I saw was missed opportunity, that we have lung cancer come into the walls of our hospital and walk out undiagnosed, and I thought that was my greatest opportunity. What I knew is that I could not do this alone, that we needed care coordination, and back when I started this, care coordinators really were only attached to cancer navigation. So I knew I had a problem, but I had a healthcare system that believed in navigators, though nothing system-wide. And when you look at a big healthcare system, we're the largest IDN in the U.S., so who better to try to take this on? So I really used that competitive advantage to spur them to action, and really, outcome data at that point was lacking. No one at that point had shown that looking at incidentals was better than screening, but I told them, let's try. So what we began with was a patient-centric, physician-led objective to try to digest this problem of pulmonary nodules. Obviously what we all want to do is find early-stage disease, and what's very appealing to a hospital system is reducing liability. Reducing liability means you had a cancer that was potentially identified and then let it walk out of your health facility. Later, when that patient comes back diagnosed with late-stage disease, there now are lawsuits that have been successful where that missed opportunity has been brought up, and obviously we want to prevent out-migration.
We don't want to build a healthcare system that's going to diagnose cancer just to hand it to somebody else because you're disorganized. So the first thing I had to do was build a funnel, right? So how do I get a funnel built, then how do I help navigate that funnel with care coordinators, then how do I decide what software to use in order to manage these patients, and ultimately what platform am I going to use to diagnose cancer, hopefully at its very earliest stage, stage one? And I believed at that time that platform was navigational bronchoscopy. We have changed this now to robotic navigation, which is what we're using currently. So we had an execution plan. This was not just myself, so those of you who are in the audience know that your stakeholders are most important. So we have lots of stakeholders at each and every meeting, including administrators, VPs of our cancer service line, radiologists, ER physicians, pathologists, the OR, you name it; the list is exhaustive. I'm happy to share my list if you need one. And for outcomes, we started with the end in mind. We wanted to really set our foundation. What is our current stage at diagnosis? What is our current timeliness of care? How long does it take us to get a radiographic abnormality to a biopsy, and then from a cancer diagnosis on to treatment? What do the downstream outcomes look like? How am I going to get my healthcare system to invest in this? And so that was an important piece for us. And then overall, what did we do to survival? So at HCA, we have lots of different service lines. And if you look to the right, among the service lines that are involved in this project, we have both a pulmonary and a thoracic service line now. It's separate from our oncology service line. So when I came to HCA, the only thing we really had was an oncology service line. I worked on this project from 2015 until 2019, until the oncology service line made me break up with it. They said that pre-cancerous diagnoses are really not in the cancer space; you've got to find a different way to attack this. And many of you may feel this pressure, that the pulmonary and pre-diagnostic space may not be as well supported by your oncology cohorts. We have a clinical operations group that is over most of our employed physicians, and so that was another ripe opportunity for us to engage. And we already had two different platforms. In addition to Sarah Cannon, our cancer colleagues, we had something called Care Assure. They deal with triggers within our healthcare system, basic triggers that you probably are very aware of: no primary care physician, smoker, cardiac risk factors, aneurysmal size, thoracic vertebral height. So they already had triggers they were working on. BMI was another big one. And so we said, well, gosh, why don't we just throw our name in the hat there and see if we can't utilize this. And so first we had to say why. And you can see to the left of the low-dose CT screens, we know that only 6% of eligible patients in the U.S. are screened. Within HCA, we're a little worse than that; we're about 3.5%. We know that thousands of CT scans went through our healthcare system annually, and we really had no way to identify which ones had a pulmonary nodule. And so we decided at this point we needed a program. We decided to start the program by building, instead of buying, an incidental nodule platform. We tried several on for size. This was in 2016, 2017, and there was not a lot on the market at that time. And so we decided to build our own.
So we built our own nodule navigator based on natural language processing. We purchased a care management platform, which we'll hear a little bit more about today, in order to navigate these patients. We began intentionally in our emergency rooms. We felt like this was our highest-risk population. Most of these patients did not have primary care physicians. Most of them had a tendency not to follow up. And so we felt like they were highest risk. We then moved to our inpatient, and then we moved to our outpatient setting. Outpatient, as you can imagine, is a little bit stickier. You have a physician who's ordering the scan, who's engaged, who may not want your help or understand why you're even doing this. So we automated this nodule program so that it would identify nodules that were greater than 6 millimeters with certain keyword characteristics, again, all natural language processing. It would go to a navigator. That navigator would then scrub the medical record to see if that patient was eligible. Again, if a patient already had a cancer diagnosis, ineligible. If they were followed by a pulmonologist, ineligible. If they had a prior CT scan that showed stability, ineligible. If the nodule was calcified, ineligible. If there was a very clear secondary diagnosis, and COVID happened in the middle of this, ineligible. They would then scrub that and give it to a physician, in my case it was me, who reviewed all of these charts on a weekly basis to figure out what the next most appropriate step was. It's handed to a care provider and they go into an assessment that we've agreed upon. Our screening population was a little bit different. We decided that we would only intervene with the navigator on those that were highest risk. We did include Lung-RADS 3s in this. We know that those are likely benign, but we know that adherence to screening is extraordinarily low. Having that conversation was very important. Having a smoking cessation conversation with those patients, even more important. So we also put those through navigation. This is a very busy slide, but just to say you have to get organized before you do this. Before you turn on a funnel to find incidental nodules, you have to be very clear about the rules of engagement. What do you perceive as the standard of care, the gold standard, for following up these patients? So we identified the patients like we just talked about on the left. If they had a primary care provider, we contacted that primary care provider. If they didn't, we contacted the patient directly. If they were an inpatient find and the inpatient physician agreed to let us navigate, we would take that patient on just as they left the hospital. And then the rules of engagement were, if you were handed these incidental nodules or screening nodules, you had to follow the HCA way, which was to make sure that there was a clinic visit and that they were followed with all the appropriate evaluations, including a PET or a PFT when appropriate, or a dedicated CT chest, for instance. In all of our clinic visits we insist upon risk stratification, at least in the form of a calculator. If you used any other proteomic or genomic testing, again, we wanted that in the medical record for us to follow. And then obviously they went on to tissue sampling. For us, tissue sampling included not just biopsy of the nodule itself but also EBUS staging in all of these patients. So you really almost had to sign on to the rules of engagement before we handed these over. And we looked at each of these in a stepwise fashion.
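To make the flow above a little more concrete, here is a minimal, purely illustrative sketch of that kind of rule set: a keyword-and-size flag standing in for the natural language processing step, plus the exclusion checks a navigator would apply before physician review. This is not HCA's actual engine; the field names, patterns, and threshold are hypothetical.

```python
import re
from dataclasses import dataclass

# Toy keyword/size flag standing in for the NLP step described above.
NODULE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(mm|millimeter)", re.IGNORECASE)
KEYWORDS = ("nodule", "lung mass")

def report_flags_nodule(report_text: str, min_size_mm: float = 6.0) -> bool:
    """Flag a report if it mentions a nodule keyword and a size >= threshold."""
    text = report_text.lower()
    if not any(k in text for k in KEYWORDS):
        return False
    sizes = [float(m.group(1)) for m in NODULE_PATTERN.finditer(text)]
    return any(s >= min_size_mm for s in sizes)

@dataclass
class ChartReview:  # hypothetical fields a navigator would scrub for
    known_cancer: bool
    followed_by_pulmonologist: bool
    prior_ct_stable: bool
    calcified: bool
    clear_secondary_diagnosis: bool  # e.g., an acute infection such as COVID

def is_eligible(chart: ChartReview) -> bool:
    """Apply the exclusion rules described in the talk; anything left goes to physician review."""
    exclusions = (chart.known_cancer, chart.followed_by_pulmonologist,
                  chart.prior_ct_stable, chart.calcified,
                  chart.clear_secondary_diagnosis)
    return not any(exclusions)

# Example: a flagged report plus a clean chart would be queued for weekly physician review.
if report_flags_nodule("Right upper lobe pulmonary nodule measuring 8 mm."):
    print(is_eligible(ChartReview(False, False, False, False, False)))  # True
```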
And this tells you how old I am and how long we've been working on this, because we started in 2017 once we finally got up and going. Remember, it took us about two years to try on a couple of different sites. We launched eight in that first year. And you can see now we've launched this at over 123 of our 180 hospitals, which means I had to do a lot of talking in those last few years. But these are the fruits of our labor, just so you can go back and show that if you spend the time to do this the right way, what you will get is a tremendous amount of lung cancer on your incidental side. This looks at our 2021 data. And remember, this is just a snapshot; this is what happened to that patient in 2021 and 2021 only. We are now going back retrospectively and looking at our last three years. But if you look at the incidental side, 1.4 million scans had some portion of the lung in the field of view. 20,000 of those were eligible, meaning of the 1.4 million, 20,000 had a nodule of at least 6 millimeters with those characteristics. Navigation is a key point. So when the navigator contacted them, either the physician said no or the patient said no, but 63 percent of the time we got a yes. Twenty-five percent of the time they got a biopsy, and we had 1,400 cancers out of that 20,000. It was around 9 to 12 percent depending on what part of the country you were in. If you look at our screening side, you can see we go through the same iteration. Most of those patients agreed to be navigated, a lot more trust in our healthcare system, but only 3 to 5 percent were cancers, so 62 patients. And if you do the math, that's about 95 percent incidental, 5 percent screening. So how did we do the next year? Again, we expanded. Remember we talked about expanding. We had 2.9 million scans. Again, about 2 percent of all of our scans had nodules that were in the field of view. And if you look to the end, around 8 to 10 percent of those had cancers. Now the interesting story to this is we got a little bit smarter on navigation, so we had more yeses and our navigators had more success with this. If you look at our lung cancer screening, this is still a very depressing number, okay? 20,000 people. I will confess that we know our market share area. If we look at our market share demographics, HCA owns 750,000 lives that should be screened. We screened 20,000. And don't tell anybody I told you. We need to do better, okay? So my next project is how do we do better, right? So we have a push mentality, waiting for somebody to push that screen to us. We're going to switch to a pull mentality. So we went to our physician services group and said, of all your employed physicians, can we approach them and interact with their medical records, pull those patients out of the medical record, and explain to the physicians: we love you, but you're doing a very poor job. And even if you get one screen, you rarely get the second screen. So we scrubbed our medical record. We started with six physicians and we have now tripled that in the past three months. We scrubbed the medical record for age 50 and over and any history of smoking. If any of you have tried to do pack-years, or time from smoking cessation, unfortunately we need smarter AI in order to get that out of our medical record. But more interestingly, we need better record keeping. So we have a coordinator who verifies that first, to see if they've ever had a screen. They also check to see if they've had a CT in the past year to make sure we're not double dipping. And then we coordinate. Right now we don't have to get permission from the PCP. We have this list that we're working from.
And so in the first three months that we did this, we had 57% yeses when we called them the first time. Now we're at 70% yeses. The APP does our shared decision-making visit. And when that screen comes back, if it's a Lung-RADS 1 or 2, our APPs are smoking cessation trained, so they focus on that and on talking about the importance of compliance. And if it's a Lung-RADS 3 or 4, they see myself or one of my partners to get risk stratified and decide what the next appropriate step is. And finally, again, we are getting smarter and better as we go. This is the importance of care coordination, having a human being who knows how to interact with that patient. If you look at the bottom, the left side is eligible patients. The right side is navigated patients, meaning when we contacted them, we were able to explain that we wanted to help and come alongside them. If you look at the bottom, in 2021, about 50% of the time when we started, we got yeses; that got a little bit higher toward the end of the year, about 63%. If you look at how we're doing this year, it's about 75% yeses. So for those who are eligible, 75% of the time, adding a care coordinator made a tremendous difference. Our experience made a difference. We've talked to our C-suite and we definitely have a formula in order to get more people to the table to show how important care coordination is. So that's just a little bit to show you how important it is to find these nodules. Don't forget your incidental nodule base. If you don't have an incidental nodule program now, hopefully you're looking to start one. Happy to share with you more tricks of the trade. We have lots more to share today about how we get smarter with AI looking into this and about what the future holds. So thanks so much.

Good afternoon. It's an honor to be here. Just a little over 10 years ago, I got married, and on my way to the honeymoon, I stopped at CHEST in Chicago; my beautiful wife's in the back, and now we have my two daughters there. So I'm really just honored to be here. They're dying to hear about data and managing pulmonary nodules, so let's not keep them waiting any longer. In those 10 years since, I've been honored to help with implementation at 700 hospitals, instituting lung cancer screening and incidental nodule tracking systems. And we still have a long way to go to truly be able to scale and solve the problem of scale. So today, we're going to talk about metadata, which is really data about data, and dig in on that a little bit and why it's important. The current state: I think we all say, hey, healthcare data is siloed, but what does that really mean, and what does it mean for managing patients with pulmonary nodules? Then we'll look really at that data and go through the clinical pathway and what it means to the clinical pathway, dig into the data pathway as well, and then look at the current technology solutions, what their upfront risks and real downstream gains are, and then really look at a fully integrated ecosystem where there's sophisticated intelligence that's available today to allow you to get some of the results that Susan's got and some other health systems have gotten, and then look at that in the future for the power of scale. So let's start. And really, metadata, like I said, is data about data. And so if you start off thinking about all the different types of metadata, there's descriptive metadata, which is really like the title, the author, the subject matter of the data.
There's administrative metadata, which is privacy and security permissions, who can access the data and what they can do with it. And then there's structural metadata, which is really how the data relate to each other. That's a hierarchy of relationships, and that's really important in machine learning, and we can talk about that a little bit more in just a bit. And then other types of metadata include things like author, you know, who uploaded it, when it was viewed, how it was used, when it was archived. Now it's also starting to include chat logs in our EHR. We used to call it Nurse Twitter when we were covering the ICU, the nurses chatting you about a patient in a certain room. That's all metadata, along with user notes, comments, bookmarks. And so really, you know, managing metadata is the difference between a well-oiled data strategy versus really just a cluttered, you know, repository of data, which you see on the left. And the biggest question that you need to ask yourself really is, you know, is your data working for you, or are you working for your data? And really that's the power of metadata. And so I'm going to really dig into five different silos that really affect the management of pulmonary nodules. And there are more than, you know, five silos, and even within those silos, there are silos within the silo, right? We're going to dig into really EHRs, linguistics, image analysis, devices, and pathology and biomarkers. And so if we look at the EHR silo, one thing I'll say, you know, maybe just bring up smoking history. We looked at the fidelity of smoking history. Every single EHR probably has five separate places where smoking history could be found, probably with different, variable answers. And then within the same health system, if you look at a patient who's a current, a former, or a never smoker, however it's listed, that data is not translated or standardized. And when you try to leverage that data, it really becomes extremely painstaking and labor-intensive, which costs money and doesn't allow you to scale your programs. The volume of incidental nodules, the volume that she's talking about with lung cancer screening, and the data organization are extremely heavy when it comes to administrative burden. And so, you know, EHRs are just one silo that needs to be really organized, translated, curated, to be able to unlock the power of being able to manage pulmonary nodules. And then you have linguistics, which is normally used for the analysis of either non-structured data or semi-structured data. And there are all types of different techniques of linguistics when it comes to, you know, applying macros, whether it's regular expressions, negative expressions, computational linguistics, or other computer models to analyze that data and then extract information that's relevant to the management of pulmonary nodules, which, again, allows you to attack that problem of scale (a toy sketch of what that kind of extraction can look like follows this paragraph). I'm going to keep repeating the problem of scale, because that's what we're trying to solve when we talk about data and pulmonary nodules. So then there's also CAD-E and CAD-X, computer-assisted detection and computer-assisted diagnosis, and image segmentation, and there's important data there that is valuable in the management of pulmonary nodules, but how do we actually ingest that into a clinical workflow at a cost that makes sense when you start to add up the costs in terms of the number of transactions, the number of patients with pulmonary nodules? And then there's also, you know, pathology.
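Here is that toy sketch: a hypothetical illustration of the two ideas just described, normalizing smoking status pulled from several EHR fields and a simple regular-expression pass with a crude negation ("negative expression") check over report text. The field values, vocabularies, and patterns are invented for illustration; a production linguistics engine is far more sophisticated.

```python
import re
from typing import Optional

# Map the many ways smoking status shows up across EHR fields onto one vocabulary.
SMOKING_SYNONYMS = {
    "current": "current", "current every day smoker": "current", "smoker": "current",
    "former": "former", "quit": "former", "ex-smoker": "former",
    "never": "never", "never smoker": "never", "non-smoker": "never",
}

def normalize_smoking(values: list[str]) -> Optional[str]:
    """Collapse values pulled from several EHR fields into current/former/never."""
    mapped = {SMOKING_SYNONYMS.get(v.strip().lower()) for v in values} - {None}
    for status in ("current", "former", "never"):  # precedence if fields disagree
        if status in mapped:
            return status
    return None

# A crude "negative expression" check: a nodule mention only counts if it is not negated.
MENTION_RE = re.compile(r"\bpulmonary nodule\b", re.IGNORECASE)
NEGATION_RE = re.compile(r"\b(no|without|negative for)\s+(evidence of\s+)?pulmonary nodule",
                         re.IGNORECASE)

def report_documents_nodule(report: str) -> bool:
    return bool(MENTION_RE.search(report)) and not NEGATION_RE.search(report)

print(normalize_smoking(["Former", "quit", ""]))                        # former
print(report_documents_nodule("No pulmonary nodule is identified."))    # False
print(report_documents_nodule("An 8 mm pulmonary nodule is present."))  # True
```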
At the end of the day, we have to have outcomes data, and we talked about that hierarchy of relationships, but how can you actually understand the hierarchy of relationships if you don't understand what the result is, or the data from pathology and how that's organized, or a biomarker when you're trying to assess risk or whether or not a patient has a pathway for a specific treatment based on a biomarker result? And then there's also all of the data that's generated from devices, whether it's data from the actual machine or data from the reports. And so that's just a quick overview of the five silos that we're going to talk about that need to be organized in terms of solving the problem of scale when it comes to pulmonary nodules. Let's go through the clinical pathway real quick and how that data and these silos overlay. So obviously step one, let's find a daggone patient who has a pulmonary nodule, right? And then step two is assessing the risk of that patient and the finding. Step three is deciding on the recommended next step of care. And step four is, did the patient actually perform the follow-up? This is overly simplified (a simple sketch of this four-step pathway as a data structure follows at the end of this paragraph), and we're going to dig in a little deeper to show you some of the complexities, but this is just to show the concept of where in the clinical pathway these different data silos impact the automation of managing pulmonary nodules. And so here you can see, in the EHR, the PACS, and the RIS (the radiology information system), that technically you could identify a patient within an EHR by flagging with a macro. It's very manual. It's not automated. So really what we're saying is that the EHR, PACS, and RIS have data or metadata that allow you to help assess the risk of the patient and the finding. Why? Because are they a smoker? Do they have a family history of cancer? Are there comorbidities? That data is found in the EHR. If you just have that data, everything else needs to be handled manually, which is not something that's pragmatic when you talk about handling the volume of pulmonary nodules. So that's one silo. Then you have linguistics. Okay. Linguistics has the ability to extract from documented reports whether or not a patient does have a pulmonary nodule. It is an effective way, at scale, to identify patients with documented pulmonary nodules. It also has the ability to extract documented characteristics. And so linguistics is something that has been shown to be applicable at scale in the management of pulmonary nodules. However, creating a work list out of linguistics alone is not something that's pragmatic for managing pulmonary nodules at scale, and alone, it's not enough to solve all the problems that we see in terms of managing pulmonary nodules. So then there's CAD-E and CAD-X. Computer-assisted detection: non-documented pulmonary nodules happen, and assisted detection allows patients who aren't documented in the healthcare records to be extracted. It adds value to the clinical pathway of pulmonary nodules. Now, is that workflow something that's non-interruptive or pragmatic for implementation at scale? And then assisted diagnosis allows for assessing the malignancy risk. One of the things that we haven't gotten to is really step three. You need to be able to predict the next step outside of the silo of a provider or the subject matter expert, to be able to overlay the interventions that are required to get a patient and a provider to act. A provider needs to write the order for the next step, that's a desired behavior, and then the patient has to show up.
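Here is that sketch: a hypothetical, much-simplified data model of the four-step pathway, just to show how findings from different silos could hang together in one worklist record. The class names, enums, and fields are invented for illustration, not any vendor's schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Step(Enum):                  # the four steps described above
    IDENTIFIED = 1                 # a nodule finding exists
    RISK_ASSESSED = 2              # patient + finding risk estimated
    NEXT_STEP_DECIDED = 3          # recommended follow-up chosen
    FOLLOW_UP_COMPLETED = 4        # patient actually performed it

@dataclass
class NoduleCase:
    patient_id: str
    source: str                                 # e.g. "linguistics", "CAD-E", "manual flag"
    nodule_size_mm: Optional[float] = None
    malignancy_risk: Optional[float] = None     # from a risk model or CAD-X
    recommended_next_step: Optional[str] = None
    completed_steps: set = field(default_factory=set)

    def current_step(self) -> Step:
        """Return the furthest step already completed (default: just identified)."""
        for step in reversed(list(Step)):
            if step in self.completed_steps:
                return step
        return Step.IDENTIFIED

# A worklist is then just the cases whose follow-up has not yet happened.
def overdue(cases: list[NoduleCase]) -> list[NoduleCase]:
    return [c for c in cases if Step.FOLLOW_UP_COMPLETED not in c.completed_steps]

case = NoduleCase("pt-001", source="linguistics", nodule_size_mm=8.0)
case.completed_steps.add(Step.IDENTIFIED)
print(case.current_step(), len(overdue([case])))   # Step.IDENTIFIED 1
```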
And so if you don't know what the next step is, then you have no ability to affect the problem that 70% of patients who have a pulmonary nodule don't get the appropriate next step in their follow-up. And so then also pathology and biomarkers can be applied to a patient who's performing follow-up. There are also some biomarkers that potentially have the ability for risk stratification. And then obviously all of the devices and procedures and quality are going to be important in terms of seeing exactly what the approach was, what the diagnostic rates are, and then all of the data that comes out of those devices is also important for us to learn from and get better. And so now if we were to take those five silos and look at the data pathway, and this is a huge eye chart, but it gets even worse as you start to add complexity to the management of pulmonary nodules, what I'll show you here at the top left, and I'll just kind of describe what's going on, is that in the EHR and PACS and the RIS, patients undergo an imaging procedure. That imaging procedure data is then sent to the EHR. Now, if you were to overlay linguistics on top of this, and this is how you start to bring together the solution, right, that data is created in your RIS, linguistics can analyze the data that was sitting in the silo of the RIS and say, okay, yes, a positive finding is populated. And then imaging risk can come from characteristics that are documented, or it can come from CAD-E or CAD-X in terms of assessing what the malignancy risk is. And now what you can start to see is this complex data pathway in which, if you were able to integrate all of the technology that is organizing and extracting across all of these silos, eventually what you see is a full diagram that has no pitfalls in terms of managing a patient with pulmonary nodules at scale. So it's not about one technology. It's not about one piece of data. It's about putting all of the data together across all of your hospitals, all of the hospital systems, all of your patients, and integrating, and then organizing complex workflows that are truly automated and remove an extreme FTE burden from solving the problem. And today one of the mission-critical problems that almost every single healthcare system and every hospital is facing is staffing. They can't staff. So if you want to have an incidental program or a lung cancer screening program, you've got to figure out how to do that without requiring an immense FTE burden. This thing just grows. You start it, and you saw Susan's numbers, it just grows year after year. Somebody needs to be able to manage that system. And instead of it being a somebody, it could be, and should be, a combination of different technologies; that is the point that I'm really trying to show here and demonstrate. Now let me get back to cost, right, and whether or not it works. And I'll tell you that EHRs vary. There are different ones that range from very, very cheap to very, very expensive. Now, if you were to just use an EHR to manage your pulmonary nodules or lung cancer screening system, you know, the resource demand is going to be very high and manual, and there's going to be workflow disruption. Linguistics is, I think, very cheap and allows for, you know, very little workflow disruption and very little resource demand if the performance of the linguistics engine is high.
You know, CAD-E and CAD-X have tremendous value in theory, and not just in theory: image segmentation in lung cancer screening does show value, in that the assisted detection and segmentation of those nodules is superior to that of even thoracic radiologists. And so, you know, assisted diagnosis, which I know Avi's going to speak about more, has tremendous potential value if there is a target in an image that allows you to add something to a risk stratification metamodel, one that allows you to say, okay, look, this patient either has a high likelihood of malignancy or a low likelihood of malignancy, and I can predict the next step without a subject matter expert. Most of you here are subject matter experts who take a patient record and then have to really mine the data yourselves and make a decision on what the predicted or expected next step is. What if we could do that, right, without the subject matter expert review, which is extremely costly? And then, obviously, devices and procedures, you know, they do cost money, but the diagnostic rates are improving, and obtaining the data from them is important. Biomarkers range in their cost. But eventually, all these things are going to end up having a transactional cost that goes to the left, which means that the technology becomes cheaper over time. And so, as they become cheaper over time, as the resource demand becomes less over time, as the workflow disruption becomes less over time, you're going to see the eventual, you know, goal of the data flow and a fully integrated data ecosystem around managing pulmonary nodules, and not just pulmonary nodules, but all incidentals and even chronic disease. It's an intelligence layer that's overlaid on your EHR systems. That's what's coming. And so, this is just something from today, this is from Geisinger, actually, and this is a waiting room. This is just the EHR data flow with an overlay of different intelligence, whether it's linguistics, whether it's macros, whether it's tracker codes, whether it's CAD; you ingest it and then you overlay the intelligence onto a workflow for patients that have an important finding that needs a next step. And as the patient is awaiting the approval from the PCP to get entered into a truly centralized and centrally managed system, this flow is just showing you the configuration, or the intelligence, that sits behind the waiting room. So, if you look at the intelligence that gets overlaid on top of the end-to-end management of pulmonary nodules, just think about how complex the configurations are in terms of ingestion of data, metadata, desired behaviors, integration into your EHR system, and legacy software systems. And so, this can happen today. Managing pulmonary nodules is something that is pragmatic today at scale, and I think it's only gonna improve. And I know Avi's talking about the future of some of those technologies, too. And honestly, Susan, in her shop at HCA, is the first shop that's been able to really scale the management of incidental nodules successfully across over 100 hospitals. So, this can be done. If you're thinking about doing this, this is something that can be completed today and it's only gonna get better. So, just again, looking at this single patient: I'm a doctor, I would dig into their paper record back in the day. They definitely had paper records back when I was in my fellowship. And you would figure out what the care plan is for a single patient.
And then, maybe I'll hire a team. And a team could then organize managing a single cohort. But then maybe I get some of these technologies that I talked about, in terms of linguistics or patient management software processes, and now I can manage all incidental findings. Geisinger's doing that. Maybe I can manage all hospitals in a health system like Dr. Garwood's done. But really, if we're able to combine these technologies, integrate them, improve them, and make them cost-effective, now we're talking about managing all patients with all abnormalities that need follow-up, including chronic disease, with an intelligence layer that can be overlaid on our EHRs and other legacy software. To me, that's the future. That's what I'm busting my butt for. I appreciate the time today and look forward to discussing this more with you all. Thank you.

I really appreciate the opportunity to be a part of this session. I'm really jealous of Aki. My wife actually came here with me; your kids are leaving, but they were listening to every word of your talk, and I couldn't pay my kids to come to a meeting with me in Hawaii. I mean, they're 16 and 18, so coming with their dad to a meeting is not cool, but I'm very, very jealous. Enjoy it while it lasts. As they get older, that gets tougher, so well done. I actually thought Susan and Aki's talks were awesome because they really set the stage for what I'm gonna be talking to you about today, which is: what do you do after you find those nodules? How do you risk stratify? Who should get a biopsy, and who shouldn't? Before I tell you about that, let me get some important disclosures out of the way. I am an employee of Johnson & Johnson. I lead a group there that's focused on the early detection and treatment of lung cancer. I'm also a part-time employee and faculty member at Boston University, where I lead a group that's really focused on developing something called a pre-cancer atlas, which I'm gonna talk a little bit about today, with funding from the American Lung Association, LUNGevity, Stand Up To Cancer, and the NCI. And then finally, in a previous life, I founded a molecular diagnostic company called Allegro Diagnostics. Those of you thinking of founding a molecular diagnostic company, don't do it. Let me just tell you, it's a tough road. But this company was actually successful and ultimately developed a biomarker that was acquired by Veracyte about six, seven years ago. I'm gonna talk a little bit about that biomarker today, but I have no financial relationship with Veracyte at this point. So let me start with where we are today with risk-stratifying pulmonary nodules that we find on CT. And just for the purposes of my talk, when I talk about pulmonary nodules, I'm talking about those that are found both incidentally and screen-detected. Of course, more than a million are found incidentally, with a large number coming up with screening, and I think Susan did a really nice job of showcasing the relative proportions of the two that we see today, and it's primarily incidental. So what do we do as physicians when we have a nodule found on CT? Today, there are three things that we can use. We have our clinical subjective assessment of that patient and that image, and we can combine that with guidelines, both screen-detected and incidental nodule guidelines, alongside clinical risk models, like the Mayo model, where you put in the age, the size of the lesion, smoking history, and they spit out a probability (a rough sketch of what such a calculator computes follows below).
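As an aside, here is a rough sketch of what a clinical risk calculator of that kind computes. The coefficients below are those published for the Mayo Clinic solitary pulmonary nodule model (Swensen et al., 1997), reproduced from memory, so verify them against the original paper before any real use; the example values and the triage thresholds are purely illustrative.

```python
import math

def mayo_malignancy_probability(age_years: float, diameter_mm: float,
                                ever_smoker: bool, prior_extrathoracic_cancer: bool,
                                spiculation: bool, upper_lobe: bool) -> float:
    """Pretest probability of malignancy for a solitary pulmonary nodule (Mayo model)."""
    x = (-6.8272
         + 0.0391 * age_years
         + 0.7917 * ever_smoker
         + 1.3388 * prior_extrathoracic_cancer   # extrathoracic cancer history
         + 0.1274 * diameter_mm
         + 1.0407 * spiculation
         + 0.7838 * upper_lobe)
    return math.exp(x) / (1 + math.exp(x))

def triage(prob: float) -> str:
    """Illustrative buckets along the lines the speaker describes next; thresholds vary."""
    if prob < 0.10:
        return "low: CT surveillance"
    if prob > 0.65:
        return "high: biopsy or surgery"
    return "intermediate: further risk stratification"

# Example: a 68-year-old ever-smoker with a 14 mm spiculated upper-lobe nodule.
p = mayo_malignancy_probability(68, 14, True, False, True, True)
print(f"pretest probability of malignancy: {p:.0%} -> {triage(p)}")
```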
Based on those three tools in our toolbox today, we then stratify that nodule into one of three boxes. Hopefully, that's not coming at me. Just watch out for me, watch my back. So it could be a high probability of cancer, of course. That's somebody that we would wanna take to surgery or for a biopsy, generally more than 60 to 65% pretest risk. You could go in the low bucket, less than five or 10% risk of lung cancer. Those are patients that most of us would follow with CT surveillance. And there's a large bucket in the middle, in yellow, that are anywhere from five to 10, all the way up to 60% pretest risk of disease, where quite honestly, the guidelines are not that helpful, and there's a lot of heterogeneity in how we manage these patients today. So what I wanna focus my talk on is: what are we doing today? What are the new approaches that we are seeing in the clinic for helping risk stratify that indeterminate group? And maybe more importantly, to give you a vision for the future of where this field can go. So let me start with today. There is a rapidly evolving landscape of both imaging-based and molecular biomarkers for risk stratifying indeterminate pulmonary nodules, and there are new ones coming out every other day here. Very few of these have been implemented in clinical practice. We'll talk about why in a minute. Let me start with image-based markers, or radiomics. So imaging, of course, is used as part of the subjective clinical assessment today that I talked about on the previous slide, as well as part of a lot of the clinical risk models and the guidelines, of course. But when I say radiomics, what I'm referring to is objectively looking at an image through machine learning and extracting features that are predictive of the nodule status, being benign or malignant. These can be pre-specified features, or they can be a black box of artificial intelligence finding those features that are predictive. There's a lot of activity in this space, and I'll talk about one of the imaging markers in a minute. There is a laundry list of molecular markers below here, and I've only listed a selective number of these, ones that you probably have heard of: blood-based proteomics, blood-based circulating small RNAs or microRNAs, circulating tumor cells or circulating tumor DNA, airway transcriptomics, which is near and dear to me, and I'll talk more about that, and volatile organic compounds in breath. Now, one of the reasons so few of these are actually being used in the clinic today is this slide over here, the chain of evidence that one needs for a diagnostic test to be implemented in clinical practice. You first of all need analytical validity of the assay. This is particularly important for a molecular test. Is it reproducible? Is it accurate? Can it withstand what happens to a biosample when it's being shipped from one part of the country to another? You then need clinical validation. Does that biomarker correctly predict the cancer status of that nodule, benign or malignant? And then the third, and I would argue the highest bar, that very few biomarkers can achieve, is clinical utility. Does that biomarker change what a physician does? And does that impact in a positive way the outcome of that patient? And specifically, in the indeterminate pulmonary nodule setting, there are two potential clinical utilities that you would need to show. You need to show one of these two things.
Either that it reduces the time to diagnose lung cancer in those with disease without increasing unnecessary invasive procedures in those without cancer, or the reverse: it reduces the number of invasive procedures in those without cancer without delaying the time to diagnose cancer. That's what you basically would have to show in a clinical utility study. Very few of the biomarkers on the last slide are in the clinic today or are being adopted clinically today because, number one, the clinical validation, that middle bar in that chain, is not always achieved. A lot of the studies that you'll see in the literature are case-control studies. When I say clinical validation, I'm referring to a prospective study where you collect the sample pre-diagnosis in the intended use population for the test. That is what clinical validation truly is. And then, of course, clinical utility; few if any markers have that level of rigor to them. And we'll talk more about markers in a minute. Now, this is a table I borrowed from a really nice review paper in CHEST earlier this year that summarizes six commercially available biomarkers in the U.S. today for stratifying indeterminate pulmonary nodules. I can't go through all of them, so I'm gonna give you a quick glance through them and show you where their level of evidence is today. Let's start with the top three in this table, which are all blood-based proteomic biomarkers for lung cancer. The first one, called Nodify XL2, basically consists of two proteins that are measured in blood, and they're combined with five clinical risk factors. And in a nice clinical validation study published in CHEST a couple of years ago, the so-called PANOPTIC trial, it did show it could accurately distinguish cancer from no cancer. And I think what's exciting here is they're about to enter into a randomized clinical trial to demonstrate utility. That would be the gold standard in this space. So we're still, of course, waiting. They're just enrolling now. It's called Altitude. And that's the kind of data that one would really need to see for this to be adopted widely in the clinic. If you look at the second row, this is a test called Nodify CDT. Here it's not an antigen from the tumor that you're measuring in the blood, but rather the immune response to those antigens in the tumor, so-called autoantibodies. There are seven of them in the panel that's measured. And the clinical validation data is interesting when you go through it, a lot of different studies, but the bottom line here is it's a very specific test, but not a very sensitive test. You miss a lot of cancers. But the positive predictive value of that test is very high on clinical validation. We do not yet have, I would say, a lot of clinical utility data on this. I know that's coming, and there are some studies in progress there. The last row is something called Reveal. I know the least about this one, but it is a protein panel. And looking at the literature and a paper that Anil Vachani just published in CHEST a few months ago, it does seem to have some degree of clinical validation, in that they were able to do a better job than one of the clinical risk models, called the Mayo model, at distinguishing cancer from no cancer in the indeterminate pulmonary nodule population. So there is some data there on clinical validation, but again, clinical utility is still pending. Sorry, here we go. Let me talk about radiomics, the fourth biomarker on this list. And the one I wanna focus on is from a company called Optellum.
It's an interesting one. It's actually the only one that's been FDA cleared and has a CMS code for reimbursement. What's interesting about it is that it's an AI algorithm, a neural network, that second category I described earlier, that looks at features on the image of someone who has a nodule between five and 30 millimeters in diameter and predicts cancer status. It can work with any image from any CT, and it's pretty rapid; it gives you a score within minutes. That's a big advantage. But let's go to the clinical data, because I told you it's all about evidence here for adoption. This is their clinical validation. It's an interesting study published last year in Radiology with Dr. Anil Vachani as the PI. What they did here is interesting. It's not a prospective study, but it's a really interesting retrospective design. They took 300 scans from patients that had nodules in that size range, half of whom ended up having lung cancer and half of whom did not, and they gave those images in a blinded fashion to 12 pulmonologists and radiologists, who then read the scan and were asked to give a score between one and 100 as to whether they thought that was cancer, just based on their subjective assessment of the scan. Then they gave them the score from the radiomic classifier and asked the physicians to reassess their one-to-100 level of risk. What they found is shown on the right: when the physicians had the AI algorithm score, they did a much better job of predicting cancer status. The area under the curve on that ROC curve goes from 0.82 to 0.89. That was statistically significant. What's interesting here is they're not validating that their biomarker correctly predicts cancer status; they're validating that their biomarker can help a physician correctly predict cancer status with all the other information that they have. As I said, it was based on this data that they got their FDA clearance. Let me now talk about the bottom two here, which are near and dear to me since I was involved in their development. These are airway gene expression biomarkers. They're based on a very interesting paradigm called the field of injury. The idea here is that in folks who smoke or are exposed to inhaled carcinogens, all of the epithelium that lines the respiratory tract is altered. Even though the cancer develops all the way down here, usually deep in the lung, there are actually genomic changes all the way from your nose to your main bronchus that you can use as a surrogate to detect the presence of lung cancer. We started this journey a very long time ago. I'm seeing how young you are, Aki. This is when my kids might have come to a talk. They still didn't come back then, but they might have. In 2005, 2006, we started, and these are all bronchoscopic studies where we brush the cytologically normal airway in the main stem bronchus. It's a relatively pure population of epithelial cells. What we did is we initially looked at healthy smokers and non-smokers. We looked at what smoking does to airway gene expression. Then we went to smokers, current and former, who had lung cancer versus those who didn't. We developed an 80-gene biomarker that could distinguish the two groups. That was published in Nature Medicine in 2007. Then we refined that biomarker and validated it in the New England Journal study in 2015. This is the biomarker we developed. It consists of 23 genes.
There's some really interesting biology, which I won't have time to go through today, but just as an example, if you look down here, among the genes that are down-regulated in the airway of smokers who get lung cancer are so-called xenobiotic genes, genes that help detoxify the harmful effects of tobacco smoke. Smokers who are getting lung cancer aren't turning on that defense mechanism in their airway to the same level as smokers who don't get the disease. So there's really interesting biology here. But the important part is the clinical validation that was in that New England Journal study. It was in two independent prospective cohorts called AEGIS I and II. And you can see the numbers at the far left. The good news is very high sensitivity, but not great specificity. So the negative predictive value, which we believe will drive utility of the test, is 91% in the intermediate pretest group. Why that's important is on the next slide, because this shows you how that test would influence decision-making. And this would specifically work in the setting where someone has an indeterminate nodule and a non-diagnostic bronchoscopy for that nodule, which is becoming less and less common with robotic bronchoscopy. But this is still the population we were aiming at, where you then take the classifier to decide: if it's a negative test, that patient has a less than 10% post-test risk, and most physicians would be comfortable with CT surveillance, versus a positive test, which should not change your management, because the positive predictive value of our test is not very strong. We had a lot of the chain of evidence here, but I wanna highlight that the challenge for us was on the clinical utility front. We had a number of papers, but the one that I'll show you here, not a randomized study, but I'd say the next level of study, was one where we took a registry of users of our test and asked them to tell us what they would do with the nodule before we gave them the test result back, and then we gave them the test result back and saw how it changed their management. The bottom line here is that if you were in the low or intermediate pretest risk group and had a negative biomarker score, three quarters of the time there was a reduction in invasive procedures in this population. So that's, as I said, a very important outcome. But we're even more excited to move into the nasal epithelium as a less invasive surrogate for the bronchial epithelium. In 2017, we showed that if you collect nasal brushes from the inferior turbinate of the nose and compare them to bronchial brushes from the same patients, with and without lung cancer, the nose is a pretty good surrogate for the bronchial airway, and obviously less invasive. So Veracyte now has developed and validated a nasal gene expression test. They presented it at CHEST a couple of years ago in abstract form, and it's now under review for publication. But I'll just share with you what's unique about this test: it has both a lower threshold and a higher threshold to make a call. At the lower threshold, it has very high sensitivity in their score. At the higher threshold, it has very high specificity. So it's a unique test in that regard. I think one of the limits of this test is the intended use population in that validation. They had a prevalence of about 56%. If you look at the small print there, the intended use population for a nasal test would be a lower prevalence of disease, closer to 25%. So these NPVs and PPVs are assuming that prevalence, which was not the prevalence in the study that it was validated in.
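To make the prevalence point concrete, here is a small worked example of how PPV and NPV shift with disease prevalence for a test with fixed sensitivity and specificity. The sensitivity and specificity values below are invented round numbers for illustration, not the published performance of any of the tests discussed.

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    """Standard Bayes' rule calculation of predictive values from test characteristics."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Same hypothetical test (90% sensitive, 60% specific) evaluated at two prevalences.
for prev in (0.56, 0.25):
    ppv, npv = ppv_npv(0.90, 0.60, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

At the lower prevalence the NPV rises and the PPV falls, which is why predictive values quoted from a 56%-prevalence validation cohort cannot be carried directly into a roughly 25%-prevalence intended-use population.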
Having said that, what's really nice about the Percepta test is the validation that's ongoing in a clinical utility study. This is, I'd say, the gold standard, a randomized controlled trial called NIGHTINGALE, where they're taking 2,400 patients with nodules. All of them get their nose brushed, but only half of those patients get the result of the nasal test. And then we see: does that impact diagnosing lung cancer earlier and reducing unnecessary invasive procedures? Let me spend the last couple of minutes on the future. So that's where we are today, and it's evolving. I wanna talk about three areas that I'm excited about moving forward in this space. Number one is integrating multiple markers to improve risk stratification, then predicting future lung cancer that's not there today from the image, and then working with ground glass opacities to intercept the disease. Let's talk about integration. This is relatively straightforward, but a field that's still in its infancy, where we're taking multiple molecular and imaging biomarkers on the same patient and integrating them together to get the best value. I showed you some biomarkers are more sensitive, others are more specific. What if you can combine them together? The challenge with doing that is you need cohorts where you have multiple biospecimens and images collected prospectively in the indeterminate nodule setting. One such cohort I wanna highlight here that I think is very unique is the DECAMP cohort. This is a Department of Defense-funded group of military hospitals and VAs across the country that are collecting all the biospecimens shown here in a prospective indeterminate nodule population. This is being led by Dr. Ehab Billatos at Boston University, and Dr. Marc Lenburg at BU is the PI of an EDRN grant from the NCI focused on integrating various biomarkers to stratify these nodules. And just as proof of concept, I wanted to show this because it comes out of Pierre Massion's group at Vanderbilt. Pierre was a close friend and colleague and we miss him terribly; he passed away a few years ago. But he published a really nice paper in the Blue Journal showing that if you take clinical risk factors alone, you can actually improve upon them using those factors plus radiomics plus one of the serum proteins. You get an improvement in the area under the curve in DECAMP and in three other cohorts. So it's proof of concept that integration may actually help. The second area that I'm excited about moving into the future is predicting future lung cancer. The nodule today may be benign, but is that patient at risk for lung cancer in the coming years? So there was a publication earlier this year in the Journal of Clinical Oncology from Regina Barzilay's group at MIT, a really nice study, where they showed proof of concept that you can take a radiomic machine learning AI approach, look at structural features on a CT of someone who doesn't have lung cancer today, and predict their three-year risk of getting lung cancer. We have a similar one that's been developed within our company at J&J. I just wanna show you an example of the output of that. This is a cumulative lung cancer incidence curve over time, and you can see the group that we predict as higher risk at baseline based on their CT has about a 16% chance of having lung cancer over a three-year period. That's important because the holy grail here is to find patients to enroll into chemoprevention trials to prevent lung cancer from occurring. How can we find that group that's highest risk?
Again, I think that's something that imaging can allow. The last thing I'll tell you about is moving into ground glass opacities with a chance to intercept the disease. Everything I've shown you till now has been primarily solid and part-solid nodules. GGOs, as you all know, are a difficult clinical problem in terms of their risk stratification, but Optellum now has developed a radiomic classifier, again an AI classifier, that can distinguish solid, part-solid, and pure GGOs that are malignant from those that are benign, and that's the AUC curve over here that you see. That will be presented by Anil Vachani at the ESMO meeting in Madrid next week. Why is that important? Why do we care about predicting GGOs? GGOs are often pre-malignant or early invasive disease. That will give us a chance to intercept, and in parallel to having these radiomic classifiers, we're now making major advances in understanding the biology of pre-malignancy, and that's being done through something called a pre-cancer atlas. We're actually sampling pre-malignant lesions, of both adenocarcinoma and squamous cell carcinoma, over time and doing molecular profiling to understand what the drivers are that allow a pre-cancer to become invasive. This is an example of that right here from Steve Dubinett's group at UCLA, who's part of our atlas. He's taking patients who are undergoing resection of adenocarcinoma of the lung, this is one of the patients, but who have an adjacent GGO nearby, where the surgeon is collecting normal lung, the GGO that's a pre-cancer, and the cancer, and then doing single-cell RNA sequencing. What he's finding is that the immune microenvironment is being suppressed as normal lung becomes pre-cancer and becomes adenocarcinoma. That's important because that gives us an insight into how we might reverse the process. So now imagine you have a ground glass opacity on CT, you have a radiomic classifier that tells you that it might be malignant, and now you can go in with robot-assisted bronchoscopy, you can get all the way out to the lesion, and here, is this video gonna play? Maybe, here we go. You can go out and potentially inject a drug directly into that GGO that would reactivate the immune microenvironment of that lesion and prevent it from going on to become a full-blown invasive adenocarcinoma. This is really the holy grail. And the future is now. We're actually beginning to do this in a study at Roswell Park using the Monarch robotic platform, where we're going out and sampling ground glass opacities. This is one patient from the study, a 69-year-old gentleman who had a GGO in 2019. It grew on follow-up in 2021. Then with Monarch, we went in, and I don't know if the video will play, but he was able to go out and actually sample that lesion. It turned out to be a minimally invasive adenocarcinoma. But imagine if we were able to go in with the scope two years earlier, sample this, which is likely a pre-cancer at that stage, and then inject something into that lesion that would prevent it from ever becoming cancer in the first place. That's the holy grail, and that's, I think, the exciting future that we can achieve. So let me stop there and just thank a lot of folks: Boston University, a number of other academic sites, J&J and Veracyte, who are involved in the work, and all the patients. And thank you again. Sorry for the technical challenge at the start, but again, I look forward to it if we have time for a couple of Q&As. Thank you.
Video Summary
In this video transcript, three experts discuss the management and risk stratification of lung nodules. Dr. Susan Garwood describes the challenges of managing incidental lung nodules and shares her approach to solving this issue. Dr. Akrum Al-Zubaidi discusses the importance of metadata in managing pulmonary nodules and provides an overview of the different data silos that need to be integrated for effective management. Finally, Dr. Avrum Spira talks about the current state of risk stratification for pulmonary nodules, including the use of radiomics and molecular biomarkers. He also discusses the future of risk stratification, including the integration of multiple markers, prediction of future lung cancer, and interception of ground glass opacities. Overall, the experts emphasize the need for further research and validation of biomarkers to improve risk stratification and management of lung nodules.
Meta Tags
Category: Imaging
Session ID: 2159
Speakers: Akrum Al-Zubaidi, Susan Garwood, Christopher Manley, Russell Miller, Avrum Spira
Tracks: Imaging, Lung Cancer
Keywords: lung nodules, management, risk stratification, incidental lung nodules, metadata, data silos, radiomics, molecular biomarkers, ground glass opacities
© American College of Chest Physicians®