Beyond the Individual: Team Cognition in Critical Care: An APCCMPD and CHEST Clinicians Educator Forum
Video Transcription
Hello everybody, good morning. I just wanted to welcome everyone to the APCCMPD CHEST Clinician Educator Forum. Thank you all for joining us. Thank you, CHEST, for giving the space for the APCCMPD and our clinician educators to get together and, you know, learn. We have a really nice lineup of three different talks. The first one is going to be Beyond the Individual: Team Cognition in Critical Care. The second one will be Getting Beyond "Read More," a faculty development workshop on narrative assessment. And the third will be Use of Novel Technologies to Improve Clinical Training and Enhance Patient Safety. And so there'll be 15-minute breaks between each session and so, you know, please come back in between. But we'll get started with our first talk, Beyond the Individual: Team Cognition in Critical Care. All right, everyone. Good morning. Thank you all for joining us, especially at this early time of the day. My name is Naila Ahmed. I'm one of the Pulmonary Critical Care Fellows at Mayo Clinic. Today we'll be chatting about Team Cognition in the ICU: Beyond the Individual. So first I want to introduce all the members of the team that helped make this presentation. So Dr. Elias Mayil and Dr. Lee are here with me today presenting. But this is essentially the brainchild of the ETS Metacognition Pod from the Medical Education Pod. And so I'd like to acknowledge the contributions of Dr. Blackwell, Dr. Ciacero, Dr. Jagpal, and Dr. Lin, who weren't able to join us for the conference today. So as I said, my name is Naila Ahmed. I have nothing to disclose. Good morning, or should I say aloha? My name is Abdullah. I'm a respiratory therapist and I'm a program director at Loma Linda University, California. I'm happy to see you all here. Thank you again for showing up. And I have nothing to disclose. And I'm May. Yeah. I am the Pulmonary Critical Care Fellowship Program Director and Assistant Dean of CME at the University of Southern California. And I also have nothing to disclose. So today we'll go over the following learning objectives. We'll talk about the differentiation between microcognition, macrocognition, and team cognition. We'll discuss how these concepts impact decision making in critical care teamwork. And then we'll apply these concepts to a case that we'll go through with you in a breakout session. So to start off, let's have a look at this video of one day during ICU rounds, a day I'm sure you're all familiar with. And then we'd love to hear your thoughts on the video itself. Mr. Blue is on ventilator day three and recovering from community-acquired pneumonia. I think he's ready for extubation. Okay. I'll hold sedation. Great. I'll prepare for SBT and extubation. All right. We got a plan. How come Mr. Blue didn't get extubated? I held sedation according to protocol. And I placed him on SBT. Once he became agitated, though, I had to resume sedation. Okay. We'll try again tomorrow. Were you guys able to hear the audio okay? All right. In that case, any thoughts that come to mind after hearing or seeing that video? Or, show of hands, how many of you have experienced some of that? Probably on the unit? All right. So we all go through that. Yeah, for sure. So let's keep this scenario in mind as we go through the theories of cognition, and then we'll have a look at another video. So firstly, let's talk about microcognition. This was the first concept in cognitive science that was widely discussed. It was prevalent in cognitive psychology starting from the 1960s. 
It started off with simulations conducted in artificial settings to mimic what human behavior might be. And it was done using behavior predicted by models under highly controlled conditions. The goals of looking at microcognition were very simple: identify different elements of cognition, such as memory access and learning time. Once these elements were identified, they were then studied in isolation. The belief was that complex cognitive results could come from looking at the interactions amongst these elements. Examples of these settings in our current practice include didactics and case-based learning, where we focus on building the basic learning blocks. Classical decision theory is one example of this, which focuses on the building blocks of cognition. There is an assumption that decision making involves matching potential options to known values. Now this works really well in laboratory settings, in simulated settings, but as we all know, in real life, it's not quite enough. Non-analytic factors such as intuition, uncertainty management, heuristics, all of those play into all the decisions we make in real life. And so that's what led to the development of macrocognition. In 1995, Pietro Cacciabue and Erik Hollnagel introduced the term macrocognition. What they meant it to be was a level of description of the cognitive functions that are performed in natural settings rather than artificial settings, which more closely replicates what real-life decision making looks like. This is considered to be a much more whole-minded approach and incorporates experiences and variables from real-life scenarios. An example of this is naturalistic decision making theory. Observations are made in the real world about how decisions are made. As we all know, in the ICU setting these decisions are often made under time pressure. They often involve a lot of risk, and we're often doing that with incomplete information and then adapting as we have more information available. So some aspects of the decision tend to be unanticipated, and we have to evolve as we have more decision points and more options that become available. So this is more commonly used in complex situations such as the ones that we are used to. An example of this apart from the ICU would be surgeons in the OR, who have to make decisions on an ongoing basis based on what they see in the operating field and respond by shifting from automatic processing to deliberative processing. So the shift from microcognition to macrocognition makes sense, but individuals working on their own rarely achieve very sophisticated results. And that's where team cognition comes into play. Team cognition was first described by Fernandez et al. as the organized structures that support team members' abilities to acquire, distribute, store, and then retrieve critical knowledge. Fiore et al. further simplified it as the interaction between intra-individual and inter-individual cognitive processing. As we all know, cognitive processing occurs within our own minds, between individuals when we talk to each other, and as we interact with the environment. And so: within individuals, between individuals, and between individuals and the environment. The best way to exemplify this would be the shared mental model. How many of you have heard of this concept or this phrase? I see some hands raised. And so it's becoming more and more popular because we've all identified that none of us work in silos. 
None of us are able to function effectively individually. We all work better as a team. And each team member has unique knowledge structures. But as long as they're compatible and there's a shared understanding of each team member's roles, that allows members to coordinate their behavior better, coordinate their activities better, and coordinate their communication better to help meet the demands of the situation. So now that we've talked about the concepts of micro, macro, and team cognition, I'm going to hand this stage over to my colleague, Dr. Lee, who will go over a problem-solving approach to the issues of cognition in the ICU. Thanks, Naila. So sometimes it can be confusing to separate macrocognition from team cognition. So macrocognition emphasizes the development of new knowledge and performative processes in complex real-world situations. Team cognition emphasizes how team members utilize the individual knowledge gained from macrocognition in order to execute a specific action as a team. So the process of team cognition should result in a more informed decision than what would result from macrocognition alone. So to incorporate both of these concepts, individuals and teams will need to externalize their internalized individual cognitive processes and reconcile the differing viewpoints during their collaboration. So we have modified a figure from Fiore et al. that was published in Theoretical Issues in Ergonomics Science in 2010 to visually demonstrate the process of transforming internal knowledge into externalized team knowledge. So the individual processes are denoted in dark blue and the team processes are denoted in light blue. The squares represent knowledge and the circles signify knowledge-building processes. So effective team process requires tacit and explicit coordination and communication strategies from the team members. So these processes rely on the individual knowledge becoming externalized and combining into the team knowledge, and so the feedback loops help modify and strengthen the process until finally you get team knowledge that is externalized, and then that leads to the team being able to problem solve together. So when macrocognition is added to team cognition, the process of developing new knowledge and information from members evolves due to the collaborative approach of a shared mental model. Therefore the team is able to utilize the collective knowledge of the team in a maximized capacity. So this highlights how combining macrocognition and team cognition can help to optimize patient care. So how can this be used to solve problems in critical care? So a successful team requires optimizing the team's knowledge and skills, team dynamics, and team environment. So you need to recognize that all of these processes, the micro, the macro, and the team cognitive processes, are all tools for problem solving as a highly effective team. So how do you apply these to situations that occur in the ICU? So we developed this framework to show how the theories we discussed can be applied when working in teams. So first we should ask the question, do we have a shared mental model? So when we think about that, we want to ask: is there situational awareness, and did the team make the decision in a cohesive manner? If the answer is no, then we should move on to question two, which is how do we engage in creating that shared mental model? So what is a team member's professional background? It's important to understand where everyone's coming from. 
And then how did the team member come to that decision? And how did the information that the team member shared impact the team's performance? And then finally, once we have considered both of these questions, we should move on to question three: how can the shared mental model be used to solve the problem? So in this area, we're trying to see whether or not we have coordination amongst our team members, whether everyone has contributed, and the improved awareness and understanding of how each member's background enhances our team decision making. So keep these processes in mind. We're going to replay the video, and if you look at the bottom, it will highlight some of these concepts as the video plays. Mr. Blue is on ventilator day three and recovering from community-acquired pneumonia. I think he's ready for extubation. Okay, I'll hold the sedation, but maybe we can wait until the family comes around. Yesterday he became kind of agitated during the SBT, and maybe the family will be able to redirect him or something like that. Great, and I'll prepare for SBT once sedation is over. Okay, let me know when you're ready for SBT. Oh, Mr. Blue. Mr. Blue is still quite agitated despite the family being nearby. Are there maybe other causes of why he might be agitated? Let's suction him and check his secretions. Sure. Oh, much thicker than yesterday. Well, he didn't spike a fever, but his secretions are concerning. Well, you know, he's on standing Tylenol for back pain, so maybe that's masking some of the fevers. Let's put him back on sedation and change his antibiotics. Sure, and let's also do some chest PT. I can do a session once the sedation is back on board. All right, great plan. We want to thank all the actors. It took a lot to shoot that video in the ICU, as you all know. The practical application of the theory was shown in the example. If you saw the bottom, we kind of scrolled through some of the framework that we talked about. We need to see whether or not the team has a shared mental model, and then if there's a disconnect, we need to engage our team members to create that shared mental model. It's kind of a reflection on the model itself. If we see here, we have question one. We have to ask ourselves, do we actually have a shared mental model as a team? You're working together as a team. What does the team think? That's, for example, where the team leader or the intensivist starts asking the question, because we all kind of have that team member that doesn't like to speak or doesn't speak up, because they're internalizing all their thoughts. How do we engage that team member into that shared mental model? Is there another cause for the agitation? That's when they basically start to talk. How can the shared mental model be used to solve the problem? Instead of just internalizing all their thoughts. Now, what we're going to do is a fun exercise. We have a scenario here: physician X reports that patient Z is a 47-year-old. They were continuing to improve from their pneumonia and were weaning to regular nasal cannula, but the respiratory therapist and the nurse noted in their assessment that the patient required Q2 suctioning for copious secretions despite having an intact cough. Transfer orders were placed and the patient was sent to the floor. The next morning, the patient was transferred back to the ICU, where a head box was put in, a failure cause that might be misplaced. How many of you have experienced that? Thank you all for returning. 
We are very excited to introduce the next session, Getting Beyond "Read More," a faculty development panel on narrative assessment. I'll allow the panel to introduce themselves. Thanks, everyone, for joining us. My name is Ugo Carmona. I'm one of the associate program directors in our PCCM program there. We're going to be talking about trying to improve the narrative feedback you get for your fellows on their written evaluations. I'll ask the rest of my panel colleagues to introduce themselves, and then maybe or maybe not we'll look at some slides, but we'll at least talk through an introduction and then we'll turn it over to them. My name is Marilyn Marciniak. I'm one of the assistant professors at the University of Maryland in Baltimore. I'm the associate program director for the pulmonary critical care fellowship and one of our community hospital internal medicine residency programs. I'm already having a stroke. And I have to chair the CCC for both of those groups, and so have a lot of thoughts on this topic. Hello, everyone. I'm Kanta Velamuri. I'm an associate professor at Baylor College of Medicine. I'm a former program director, or recovering program director as I call it, from the pulmonary critical care and our critical care fellowships. I have about 20 years of experience in medical education. It's my passion, and I'm closely involved in aspects of medical education including faculty development on this topic. Hi, everyone. My name is Mark Warner. I work with the University of Texas Health Science Center in Houston. So I am a current program director for pulmonary critical care. So just excited to be here with these wonderful folks. So our goal today is to talk a little bit about what narrative feedback is, what the elements of great narrative feedback are, why it's important, and really making the case for why we should try to work this more into our programs, particularly maybe including it as part of faculty development, if that's not a program that you have, to help improve feedback for fellows. So we've all had evaluations come back to us for our fellows that have said something between blank box, blank box, five out of five. Does that sound familiar to anybody? OK, perfect. Or maybe, you know, one line that says something along the lines of "should read more" or "this person was lazy." Really great. Again, does that feel about right for some folks? OK. So, while not helpful at all, we would like to help improve that and get fellows to really have some more constructive feedback that they can use in their week-to-week practice. Now we're all familiar with milestones and the general assessments that we use, if you use MedHub or one of the other kinds of programs, that range over where the fellow is on the spectrum, from needing complete help from their attending all the way to an aspirational, you know, kind of quality. And that's helpful in the long term. That's helpful maybe at the six-month CCC kind of programmatic level. It's helpful overall, you know, year by year. But it's not as helpful week by week, month by month. How does the fellow think about where am I today and what do I want to work on? What can I use to work on next week, particularly as more and more rotations have moved to, you know, attendings only spending one or two weeks with fellows during their time? They don't necessarily have the same longitudinal experience. Show of hands: does anybody spend more than two weeks on service at a time? Yeah, it's pretty uncommon at this point. 
How many folks do spend two weeks on service in the ICU or home? OK. And then one week? Yeah, the majority. OK. Thank you for that. So, you know, this is I think particularly important in an era where now we do spend a lot less time. I'm going to keep using this slide as well. OK. So yeah, I know, I'm telling you, they're amazing. So, you know, that's part of the argument for why I think these are important. Right. And there's a general approach to assessment, right? So ideally you would sit down with your fellow, for whatever amount of time you're going to spend with them, at the beginning. Discuss your goals and expectations with them. Observe them during your time together. Give them verbal feedback, maybe midway through and then at the end. And then also write that down. And it's really those two at the end that we have heard provide challenges for faculty: that, you know, maybe they've given verbal feedback but they haven't actually written it down. And, you know, why is that an important feature? So we'll think a little bit about these comments themselves. What am I talking about when I'm saying we want to get fellows better narrative feedback in their evaluations, back to you as program leadership and back to the fellows themselves? Right. There are generally a couple of kind of key features for that. One is that they focus on behaviors. Right. So again, before, I made a mention that you might get these descriptions, but they're really a description, as interpreted by the person writing them, of a personality attribute, you know, this idea that maybe they are disengaged or disinterested or lazy or something like that, if it's a negative comment like that. But that's not really an actionable thing for somebody to receive that feedback and do something with it. It's not an actionable thing for you as program leadership to act on, to help that fellow come up with a learning plan or some other kind of program to improve that. With that, you know, you want them to be actionable. So this idea of "read more" is not as helpful as, you know, a specific topic-based plan, and they should be specific. So right now, on this slide, you see a couple of different examples of the things that I've talked about, but I've kind of read them out loud and described them for you. So, you know, another example of this is you might have received feedback for a fellow that says, you know, responds well to feedback. I mean, that's great in that it's positive. It's a great attribute to know about your fellow. But what would be more helpful is to say, you know, I observed them during a, you know, procedure, a central line placement. We talked about some feedback around setup. And then the next time around, I observed them implementing that, you know, that feedback, right? That's much more helpful for them to know that, you know, they take feedback well, and this is a version of how they engaged with that. So those are some of the features of strong narrative feedback, right? They're actionable. They're specific. Obviously, they're timely. And they focus on behaviors. So if that's all so great, there's often one challenge to this, which is they're not numerical, right? They're not, you know, they're not a scale. And shouldn't scales be more objective? And we'd like to argue that the answer to that is no. You know, scales are really generally fairly arbitrary. And again, we're thinking through week by week, month by month, how can fellows improve? 
And scales really do very little to help them with that. So with that, I'd like to start asking my panel colleagues some questions. And along the way, I'd love for this to be a conversation with you all. So please come up at any point to ask a question, to provide feedback, or if we're talking about a specific, you know, challenge that we've seen or a behavior, you know, a kind of programmatic input, and you have a version of that that you'd love to share with the group, just please come up and come to the microphone and share that with the group. All right, we're back. So thank you guys for sticking around. So this is the third of our talks in the APCCMPD CHEST Clinician Educator Forum. I'm happy to introduce Dr. Kishido. And she will be talking about the use of novel technologies to improve clinical training and enhance patient safety. All right, thanks so much. All right. All right, good morning, everyone. So I'm Stacey Kishido. I'm here from Philadelphia at the University of Pennsylvania. And I'm excited to be talking to you about novel technologies in medical education. So this is me, so I don't need to introduce myself again. So what are we going to cover today? So we're going to talk about different types of technologies. We'll define them. We'll talk a little bit about different example use cases of these technologies. We'll then go through some of the pros and cons of these different modalities. And talk a lot about practical things like barriers, potential risks. We'll talk a little bit about AI and ChatGPT, which I know is all the buzz. And then we'll talk about some practical considerations for all of us educators in the room who may be thinking about using these technologies but may be unsure of just how to proceed and move forward. So just to kind of gauge the room. So I wanted to get a sense of everyone's experience with these different emerging technologies. Virtual reality, augmented reality, and the like. And if anyone's willing to share their impressions of VR or AR, their experiences with this. Maybe you've worn a VR headset gaming. Maybe you've used it in your educational work. Anyone used this type of technology? Yeah, I see a few hands. Negative for gaming, personal things, yes. Anyone used this in the educational space yet? Yeah, a few people, great. Positive experiences? Yes, I see some shaking heads. So we have some converts in the room already. But hopefully by the end of this talk I will have you thinking about whether or not these technologies are right for you. They may not be the right answer. And I think that's part of what we need to be doing as thoughtful educators: thinking about what the right technology is for the educational problem that we are trying to solve. And so with that I'm going to jump in to give us a brief overview. But before I get to the specific technologies, I think the skeptics, when thinking about emerging technologies, will often say we have our traditional modalities, right? Why are we going to change things? They work. We already have them. We don't have to make new content. We already are all busy. Why rock the boat? It's familiar. We all know how to use PowerPoint. If it's not broken, why are we trying to change it? But then I want you to think about what some of the potential benefits are of doing things that are novel and changing things. So can we use elements of gamification to improve learner engagement? 
Can some of these emerging technologies solve some of the problems that we may have in engaging our learners by fixing issues around scalability and flexibility, and allowing our learners to have some autonomy in their learning with the availability of asynchronous content? And so just to give you a lay of the land, because virtual reality and augmented reality often kind of go hand in hand when we think about the idea of simulation, because it is a natural extension of simulation, I want to kind of take us back in history to sort of understand where we've been, and then we can kind of think more thoughtfully about where we're going. Simulation, in terms of where we are now when we think of that in-person mannequin, really started back in the early 1960s when the Resusci Anne mannequin was conceptualized and born. The first standardized patients came out in 1964. If any of you have used or seen the Harvey cardiology simulator, we still have it at our medical school. I used it as a medical student. It hasn't gone away. He came onto the educational scene back in 1968. Harvey's getting up there in years, but we still find a use for him. The first OSCE, which I think most of us are probably familiar with, came around in the mid-1970s. And then the more modern era of in-person high-fidelity simulation, with the mannequins that I think we probably most often think about as the version of simulation that we see most often, came around in the early 2000s. And then when we think about extended reality, so virtual reality, augmented reality, this arena has really exploded in the last, you know, 10, 15 years. But it also dates back to the 1960s, where we were starting to use these primitive head-mounted displays. A lot of the early work was done by the government at NASA and in the military. And then we really saw things sort of start to pick up in the mid-90s, where the gaming industry was starting to use some virtual reality headsets. Google Glass, which is an early technology that I actually dabbled around with in my fellowship time, came around in 2013. And then we started to see some of the more modern headsets, the HoloLens, and sort of this explosion of different virtual reality companies in the early to mid-2010s. And then COVID happened. And I think we can all agree that COVID, myself not excluded from this, really sort of forced us to think about, well, how can we learn in a different way? How can we use virtual content and create it in a way that is engaging for our learners? And I think we did it with forced disruption and probably in a way that was much faster than many of us would have wanted to do in a more thoughtful approach. But I think that that has really brought about the rapid adoption and now hopefully some sustained and more thoughtful integration over time. So this is what we, I think, typically think about when we think about simulation. So the mannequin in the bed, hooked up to a monitor. And there's a lot of evidence to tell us that simulation works. So this is one study from 2008 that showed that using simulation can improve adherence to AHA guidelines for ACLS. Another study from 2018 looked at how simulation can help improve procedural performance. And here we see that there were fewer complications from performance of thoracentesis among residents who had simulation-based mastery learning compared with those who were traditionally trained in procedures. 
And then more data within resuscitation as well: improved skill performance and retention on an ACLS checklist amongst residents. So what I now want to do, as a natural extrapolation of that foundational simulation education work, is to start to think about the medical education metaverse, as we can call it, and the different buckets of technology that exist therein: augmented reality, virtual reality, and then a concept that I'll tell you a little bit about, which is a little bit harder to grasp, called alternate reality. So first, virtual reality. So I think we've all, if we haven't put on a headset ourselves, seen people with these funny-looking headsets. And really what this is is recreating a fully immersive virtual world. So when you put this headset on and you're looking around in 360, everything you see is a computer-generated virtual environment. But you can have partially immersive virtual reality experiences, so these don't always have to be done in a headset. So we'll talk a little bit about what that might look like. You can do these with 360 video creation. So we've done some of this work where we have put a video camera that sort of records in 360 in the room. So you can kind of look around and see a simulation happening when you put the headset on. And we often will use these in conjunction with the use of avatars, which are computer-generated characters in a virtual world. And so what does the equipment look like? So usually, again, we think about having a headset, but these virtual reality cases can be done on a desktop or on a mobile device. Some of them may be augmented with artificial intelligence voice recognition software. There may be the use of hand controllers and motion sensors to sort of interact with and move objects and things within that virtual world. And so, again, I talked a little bit about headsets. They're often wearable. And then the software. Some platforms come as customizable software where you can kind of create your own case, whatever it is you want it to be, and customize it for your learner. Other software platforms come as sort of these pre-boxed case libraries where they have 100-some cases or 30-some cases that are sort of built with a particular level of learner in mind. And that may be the right thing for you. It may not be the right thing for you. It sort of depends what you're looking for and if they have created that pre-boxed scenario. And, again, artificial intelligence voice recognition may be there. It may not be. So what are some of the applications and where has some of this early work been done? So I will say that while there are some studies, and I'm going to show you a few, in the realm of virtual reality and how it interfaces with our profession, most of these studies are really at the pilot level. So small studies, single institution, small number of learners in a particular learner group, as sort of this pilot work to say, hey, you know, they found this useful or it was acceptable or we saw some, you know, small change in learning outcomes. But I think we're just sort of starting to see the tip of the iceberg, I hope, in what is going to be a growing body of literature in medical education. But some of the applications that we've seen so far: virtual patient encounters. 
So instead of that traditional standardized patient encounter that I think we're all familiar with, having a virtual standardized patient encounter, and I'll show you a video of what that might look like in a few slides; a lot of work with virtual reality around training in clinical resuscitations and emergency management; multiplayer, interprofessional education and teamwork training; lots of work with procedural and surgical training. I will say our surgical colleagues have really, I think, even been more at the forefront of the uptake of VR than those of us in medicine. And so they are using this not just for training but using this actually in the surgical world and really, like, integrating it into direct patient care. And then also using virtual reality not just for those technical skills and procedures and how do you resuscitate and, you know, clinical reasoning, but also around perspective taking and empathy building. So all that's great. But again, why might you want to use a virtual case if you think that it might be right for you? So it may be a way to increase engagement with your learners. So rather than talking through a case, sitting at a table, maybe they put on a headset and now they're actually in that patient room. This is really great potentially for our novice learners, where we're not actually letting them touch patients directly for patient safety reasons. I think we would all agree that that's very reasonable. But they get some of that early sense of autonomy in their clinical decision making. Reducing barriers to actually participating in those types of cases. Augmenting opportunities for additional practice. So potentially a great way to increase availability of coaching for our learners who may be struggling with a particular concept or a particular skill. And then allowing us to facilitate increased opportunities for deliberate practice and remediation. So let's see if my technology works, which is always nerve-wracking when you're giving a technology talk, if your technology doesn't work. So I'm going to show you a quick video of a case that we actually created using 360 video at Penn with some standardized patients and faculty that participated. You're going to see the 2D version, but we also videoed these in 360 so that if we wanted to, down the road, have someone put on a headset, they could actually look around and it would be as if they were a fly on the wall watching the case. And then you'll see that there's some branching logic built in as well. This is an example of a 2D self-directed video virtual case. Again we have the high quality videos. I'm going to take a listen to Gracie. The nurses are concerned about how she's breathing. Instead of facilitator prompts, learners interact directly with the platform, answering questions that allow them to practice the clinical decision making. This is followed by teaching slides to reinforce the content. Here's an example of how the student's response to questions can lead to multiple different branch points. If the student chooses albuterol, for example, they will see how it's administered and the patient's response to that intervention. This is followed by teaching slides. Since this is not the desired path, the student is then directed back to that same question to try again, thus creating a deliberate practice cycle to reinforce the desired actions. All right. So that was something we did that was completely homegrown. It wasn't overly fancy. 
It wasn't computer generated avatars, but it was something that we created for our learners who this was done back during COVID. We wanted them to see how to address a pediatric patient who was in respiratory distress with branching logic to sort of see what things would look like if they chose the correct or the incorrect intervention. But something you can still use even though we're not in COVID anymore, but once you create it and you do that up front creation, it's there. And the treatment of bronchiolitis and some of the conditions we managed doesn't change all that much. I mean, I not, you know, but certain things just sort of remain stable. And so it can be allowed for scalability and sort of exportability. That was one example. So we talked a little bit about avatars, and I'm going to show you this example, which is a software company that we work with, which we custom create some scenarios where we take a case. We're not creating it from scratch. We take a case that already exists. We adapt it to put it into a virtual standardized patient case. So these are for our medical student learners where they are getting the chance to actually do a history with this patient. You'll see what that looks like in this software program uses artificial intelligence voice recognition. Hi, Ms. Green. What brings you in today? I have a cough and shortness of breath. I'm sorry to hear that. That means a lot. Thank you. When did your symptoms start? My symptoms first started about two or three days ago. And so this platform, again, I don't have any financial relationships. I wish I did, but I don't. But this particular platform, what it does is you can sort of pre-program on the back end responses to hundreds and hundreds of questions and it all kind of goes into this AI brain and the more that students use it and they ask things in ways that I would never have fathomed that someone would ask a question about, like something random and things that would never have occurred to me but occur to our students, we build that in on the back end. They can then go in and they can order diagnostic tests. They can see the results. They can interpret them. And then we actually can build in a checklist on the back end so they can see their score on the elements of the history that they asked, things that they forgot to ask. We can give them positive points for diagnostic tests. We can take away points for things that they ordered that were unnecessary or potentially harmful. And so it gives the learner the ability to get some immediate feedback. There are bugs and there are kinks and I'm not saying it's perfect, but it's a way for them to kind of get feedback on their medical management, their clinical decision making, and their history taking, especially for early learners. We already did that. All right. So this is another screenshot of a different I don't want to play you again. Okay. What's going on? See? Technology. It's all great. All right. All right. Well, there was a screenshot there that apparently doesn't want me to show you of a software platform that we use, which is a pre-boxed set of 170 different cases that are tagged by chief complaint and organ system and you can change the level of difficulty for the learner. They're peer reviewed. You get feedback at the end on the critical actions you did, things that you did that were unnecessary, things that you did that are harmful. And this is something that we're rolling out currently to kind of see where it meets our levels of learners. 
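To make the branching-and-scoring mechanics described above concrete, here is a minimal sketch of how such a case might be represented in code. This is purely illustrative: the class names, point values, and case content are hypothetical and are not drawn from any of the platforms mentioned in the talk.

```python
# Hypothetical sketch of a branching virtual case with back-end scoring.
# Names and point values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Option:
    teaching: str   # teaching content shown after the choice is made
    desired: bool   # whether this choice is the desired path

@dataclass
class BranchPoint:
    prompt: str
    options: dict   # maps a choice label to an Option

@dataclass
class CaseState:
    score: int = 0
    log: list = field(default_factory=list)   # action log for later review with the learner

def answer(branch: BranchPoint, choice: str, state: CaseState) -> bool:
    """Score one response; undesired choices show teaching and loop back for deliberate practice."""
    opt = branch.options[choice]
    state.log.append((branch.prompt, choice))
    state.score += 1 if opt.desired else -1    # credit desired actions, deduct for others
    print(opt.teaching)
    return opt.desired                         # True -> advance, False -> retry the same question

# Example branch point, modeled loosely on the pediatric respiratory distress case above
branch = BranchPoint(
    prompt="The infant is in respiratory distress. What do you do next?",
    options={
        "give albuterol": Option("Teaching: this is not the desired intervention here.", False),
        "suction and supportive care": Option("Teaching: supportive care is the desired path.", True),
    },
)

state = CaseState()
answer(branch, "give albuterol", state)             # undesired path: teaching slide, then retry
answer(branch, "suction and supportive care", state)
print("Score:", state.score, "| Log:", state.log)
```

The same pattern extends to the back-end checklist the speaker describes: positive points for indicated history questions or tests, deductions for unnecessary or harmful orders, and a stored log that can be reviewed with the learner afterward.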
And then this is a photo of myself and two other attendings at Penn trying out a more immersive virtual reality software platform. You can kind of see on the screen in the background what we're seeing in our headsets. And we're all working together to resuscitate this patient who's having chest pain. We don't actually have to all be in the same physical room, though. We are here because we're trying it out at a virtual reality conference, but I could be here in Hawaii and my colleague could be in Philadelphia and we could all be in the same virtual room and we could take care of this patient. And so it overcomes some of these geographic issues, even if we're all in the same town. Not everybody's physically in the hospital at the same time, and our students are not even all physically at the same site. And so it's a way to do some team training potentially, despite different locations. And so just to show you a little bit of data around virtual reality, in our specialty in particular, this was a study in 2018 that came out looking at how virtual reality-based bronchoscopy training compared to mannequin-based training. And really what it showed was that, you know, it almost ended up being a non-inferiority study. So the virtual reality ended up being nearly equivalent to the mannequin-based training, sort of suggesting that maybe there is an opportunity to use some of this emerging technology that inherently might not quite seem right, but maybe in certain instances, especially as the technology gets better and the haptics or the hand feel for certain tactile things gets better, there may be more and more applications. Some people are thinking about this in patient care-facing things in the ICU: using virtual reality for calming our patients, for distracting them during procedures, helping with anxiety and delirium management, rehab, things of that nature. So if we can be transported to a tranquil place like Hawaii instead of the ICU, maybe you would be calmer and maybe that would help in certain instances. And then there's also been work looking at the use of virtual reality with empathy building. So this was a study in Academic Medicine in 2020 that looked at basically putting the participant in the shoes of someone who was experiencing discrimination, so they could actually walk in the steps of that person and feel what was happening to them. And this was one of the quotes: more people, in particular people in positions of power, should experience this VR simulation and others like it, to be in the situations others deal with on a daily basis and feel even a part of what they feel; it really impacted me. So feeling that realistic immersive experience can be transformational for some people. On to augmented reality. So virtual reality in its fully immersive state, 360, you're in the virtual world. Augmented reality is a little bit different. So I'm looking out at the real world in front of me, but I put on an augmented reality headset and I now see a superimposed computer-generated image on top of the real world. And so I'm not really sure the exact use case of this, but this is something I grabbed out of a paper where they were studying, seeing how this group of learners seeing vital signs superimposed on the real world and some coaching could help with their education. So the hardware often looks similar in terms of you're wearing a wearable headset. The headsets have different capabilities. 
Not all VR headsets have AR capability, not all AR headsets have VR capability. Some of them are mixed reality. There is some hand tracking and built-in voice commands. For us, there is the capability for the AR headsets to link to Microsoft Teams as a video streaming platform so that when you're streaming in a patient room, it's actually HIPAA protected at that time. So you could be transported into a patient room where your learner is doing a procedure. For example, they're struggling doing a line and they have their headset on and you're at home and you're not there as the attending. You have no idea what the problem is, but now you have them wearing this headset and you look down and like, oh, you need to change the angle of your needle. Just an example of a way that you could extrapolate how some of this technology might help. Same in a rapid response, right? So you're not there, you're the attending, you have no idea what's going on. People are trying to hold up the phone, like FaceTime and show you the waveform on the ventilator or trying to show you something and it's chaos, right? And I'm not saying that this is gonna be perfect, but you're actually then seeing what the learner's seeing in front of them as opposed to them looking at the patient and you're looking at the wall because they're so frazzled, right? So you could imagine the potential possibilities. Also, some of these integrate with ultrasound platforms and there's a lot of stuff happening with anatomy teaching as well. So this is a photo that I'm gonna show you from a study in a few slides, but you can see doing some procedural education around central line placement. So what this looked at was here, they're actually giving the learner prompts while they're doing the procedure to tell them what the next step is. So instead of them trying to remember, okay, well now I have my needle in the vein, what's the next step? I have to thread the wire. It actually prompts them to tell them what to do next. And so that's what this study looked at was an instructional slideshow beforehand, as they were doing the procedure in sim versus conventional sim training. But they didn't really see a huge difference to be frank. There was no change in median time to IJ cannulation, no change in median total procedure time. But what they did have was they had improved procedural adherence to the checklist of what they were supposed to do and the steps they were supposed to do. So I think many of us would say, I don't really care necessarily if it's that much longer, if they do it well and they do it the appropriate way. So then alternate reality is this concept of a computer generated parallel world used as a learning and practice environment. So what does this really mean? So we often talk about alternate reality in the context of serious games or alternate reality games. And really what these are used to do is sort of they have these problem solving and creativity sort of embedded into a game. We are really engaging the learner. And it may be something simple, you know, just, you know, my God, I'm having a, so, you know, you can just have sort of these linear games that sort of go along the way and can engage the learner in very simplistic ways. It doesn't even have to be technology enhanced. So electronic health record simulations are an example of this. Patient panel management, teamwork, decision-making, escape rooms is what I was looking for. That was my word finding difficulty. 
So this is a study from 2018 looking at an EHR-based simulation. So here what they did was the red line is weeks before they were doing the EHR sim. The blue line is after they did the sim. And they were really trying to really help the learners to use some of the tools available to them within the electronic health record to improve their usability and decision-making training that could happen within that environment. And you can see after they did the sim, things got a lot better. So modality selection. So I've told you about all these different extended reality novel technology options out there. So now what? How do you decide what's right for you? How do you decide what the right modality is for your learner group? So thinking about what am I trying to teach, going back to those core things we're all taught as educators. What are my learning objectives? What are my learning goals? What am I trying to achieve for this group of learners? So is what I'm trying to teach them primarily cognitive? Is there an important physical element that can't be recreated in the virtual world? Do they need to do something tactile that clicking through and holding hand controls isn't going to be able to accomplish? Is there an important factor of real human interaction? So while AI is great in terms of responding the same way, most of the same time to questions that are asked in the context of history, it's a bot, right? So there's no subtlety of eye contact and nonverbal communication and the way that things are said. It can't give the learner feedback on that. But what I would say is to consider what level of immersion you need. Is the technology additive? Is it gonna make it better? Or is it potentially gonna be a distractor? What is the degree of fidelity or realism that you need to achieve? Is talking through something, sitting at a table, gonna cut it? Talking through a case in a classroom setting? Or is it gonna be all the better for the learner to be actually doing it in a virtual environment? And how sophisticated is your learner? And so this is a set of photos to really kind of give you the spectrum of immersion that we can do with our learners from classroom-based to basic simulation. The bottom left is my daughter. I can tell I believe in simulation because that's my photo of my daughter, Shira, from several years ago doing simulated ACLS on a monkey. And then virtual reality to the bedside with patient care. And so what we really think about is how does increasing immersion, when is that really beneficial to the educational task at hand versus when can you get away with things that are maybe more rudimentary but appropriate for what you're trying to do? So these all sort of overlap that you can have mixed methods between the two. But thinking about increasing levels of immersion where empathy building and communication in a virtual world may be helpful, may need to be done at the bedside, but there may be some training to sort of prepare the learner for the next in-person sim. So one of the ways we think about this is that this is not the be-all, end-all of their training, right? So doing a virtual standardized patient case is not gonna adequately prepare our learners to take care of patients and to have those high-level communication skills. But what if they practice the history and they actually remember, okay, when I'm taking a history for someone who's short of breath I have to ask these questions. 
And oh, when I did the history, the virtual standardized patient told me I forgot to ask about the smoking history. Or I forgot to ask them about exposures at home in their environment. So they learn the rote mechanics so that then, when they go and they meet with the standardized patient, they're not thinking about the rote mechanics of what they have to ask in the history, but now they're focused on the nuances of the way that they ask the question so that they get more out of that standardized patient experience. And then they go to the bedside and they're better prepared for that learning experience, so that when they're meeting the patient, they're less nervous about the cognitive aspects and can think more so about the doctor-patient relationship. All that being said, I'm not gonna say that this all works perfectly all of the time. There are barriers to entry, there are risks to the use of this technology, but there are also a lot of opportunities. So when we talk about the use of virtual reality, artificial intelligence, we have to recognize that there is a risk of bias built into the use of these technologies. So all of our learners have different comfort with using technology. Maybe you have a learner who spends all their free time gaming. They're gonna pick up a headset and it's gonna be great. You may have a learner who gets motion sick when they put a headset on. And how is that gonna impact their learning experience? You need to consider access to equipment, to Wi-Fi. We've actually seen this. Some of our medical students, you would think everybody has great Wi-Fi at home. Not the case. The streaming speeds actually matter for the quality of the case and the way that the case is gonna run. It also matters in terms of having your learners all in the room at the same time. So we found this when we did a pilot with one of our medical school courses. All the learners were sitting in the room at the same time trying to do their history with their virtual patient. And guess what? It didn't understand a thing any of them said, because the one computer was picking up what the other student was saying and the other computer was picking up what the first student was saying. Because the computer doesn't know who they're supposed to be talking to and who they're supposed to be listening to. And then all the students hated it. They were like, this thing doesn't understand a word I said. And it wasn't because the actual platform didn't work. It was because the way that we actually set it up to work was not completely planned out, or we didn't realize they were all gonna be in the same room at the same time. There are gender-based differences in motion sickness with headset use, and AI-based voice detection can have built-in gender and race bias. So a few words about AI. So AI underpins much of VR, AR, and a little bit of the alternate reality world. It has the potential to propagate biases in the original data. It can also have these hallucinations. I don't know if you've heard of these AI hallucinations, where it just makes stuff up; it was never in the input and it just comes up with things. I think some of you may have read about it in the context of sort of scholarly work where it cites things that don't exist. So you have to think about the AI hallucinations. 
We saw this actually in one of our virtual standardized patients, where, I forget the specific example, but the learner had asked something about an abortion and the patient just hallucinated an answer that hadn't been built in, about like they don't believe in that or they don't do that, or I don't know what it was, but it was something politically charged that we had not built into the algorithm, and the AI just hallucinated it and we were like, where did that come from? And then thinking about privacy and data sharing considerations and laws protecting student performance data. And if you do use a third-party software platform, what are they gonna do with the student data? How is it protected? Are they sharing it? Where does it go? I've spent a lot of time with our med school legal department. These are all really important things to consider: is it safe and is it private? And so, just to check my time, a little bit left. So in terms of practical considerations, so again, we talked about matching the content to your learning objectives, matching your technology to the learning objectives. Stakeholder engagement is so critically important. So making sure that the people who are running the courses that you're working with, obviously if you're the person running the course, you're engaged, but this also means engaging your learners, your end users. Why is this important to them? Why is this helpful to them? So if you're gonna roll this out and they think that they're just doing this as something that's an add-on, it doesn't have perceived value, it's not integrated into the course or into the rotation, they're gonna view it as: this is a waste of my time. However, if you can integrate these sessions meaningfully into your curriculum, think about whether there is a task that's going to be perceived as meaningful to them, something that they're gonna get something out of. So one of the things we've done with some of the virtual standardized patient experiences is we've paired them with writing a SOAP note, so that when they get to the end, it feels more realistic to them that they didn't just do the H&P, but then they actually did a task that they would normally do in the hospital, and that task got assessed and graded by the faculty in that course. One of the common misconceptions we've had from our learners is: this is not like a real patient. Why are you taking me away from the bedside of a real patient and I'm just doing this virtual case? So we've done a lot of work, and I think you have to think about this if you're gonna use this technology, around messaging to your learners what the purpose of the technology is. Is it replacing something else? If so, what is it replacing? Because there can be a perception that we're valuing these fake technology-enhanced patients over putting them at the bedside. For what I've done in my personal work, that's not what we're trying to do. So just messaging out to your learners what the goal is, and then engaging other faculty or leaders early on in the process. Thinking about what resources you need. So hardware, software, and if you're not gonna lend out your headsets to your learners, where are they gonna do these cases? So you actually need a physical space. Even in someone's apartment, if you live in a city or you live in a small apartment, you may not actually have enough physical space to put on a VR headset and move around the room without bumping into a wall. 
So all of these are practical considerations, but on the flip side, if you have an empty classroom to do a virtual reality case, you don't need a $150,000 mannequin. You need a couple of headsets, which are a couple hundred bucks a pop, that you put on, and all you need is an empty room. So it just sort of depends what you're trying to do and what your resources are and what space is available to you. Thinking about, again, how you're integrating into your existing curricula. If you're using a learning management system, how this is gonna interface with that. We talked a lot about onboarding and preparing the learner for how these cases are going to go. So not just messaging what the purpose is but actually how you are going to get them to use the technology and not feel incredibly frustrated. So we found, as an example, when you send an email, do any of your learners read long emails? No, they don't read it. They read maybe the one line, if they open it at all, and then they go in and they just start doing it, and then they're incredibly frustrated and then they're like, this thing doesn't work. So what we found is that, this is the TikTok generation, right? So if you want your learners to actually get onboarded to using it, they're not gonna read an email. We made quick little 30-second videos of "this is how you use this" so they can watch it and then they do it. So don't write it out in an email. From my personal experience, it doesn't work. And then thinking about how you're going to support ongoing technical needs. So no matter how well prepared you are, how well you set up your learners for success, there's gonna be tech problems, there's gonna be tech glitches, and so be mentally prepared for that and think about how you're gonna support that, either how the software company's gonna help you if you're using a software company, or how you're gonna sort of manage that within the context of what you're building, and then we'll talk about assessment in a few slides. So, if you wanna create a virtual case, I suggest starting with something that already exists. Don't try to build new case content. Take something that's maybe stale or hasn't been used in a long time and revitalize it with virtual reality. Set your learning objectives, match them to the format and the modality that you want, build your questions and prompts based on your learning objectives, and then again, anticipate potential delays and technological issues. So, start early; if you think you need three months to build a case and it's your first time, leave six. Don't say that you're gonna build a virtual case now and you're gonna pilot it in the beginning of November; it's just not gonna work. So, leave yourself time so that you're not stressed. We talked a little bit about stakeholder engagement, making sure that you're aligning with the course and institutional needs, starting your discussions early, setting realistic expectations. So, I'm not gonna completely revolutionize your course, I'm not gonna make your evaluations go from a three to a five. What I am gonna be able to do is, I may improve things in the way that, for our obstetrics and gynecology course, for example, they wanted everybody to see a preeclampsia case, because not all of our learners see the same kinds of cases despite our best efforts; rotations are different, they see different things, there are different case mixes, but they wanted everybody to see preeclampsia. 
So we created a virtual reality case where they manage a patient with preeclampsia. There was some upfront time investment to build it, but now it's built, and every learner in the OB-GYN clerkship sees and manages a patient with preeclampsia. They all have the exact same experience and get the exact same computer-generated feedback. You can also think about the learner who says, well, that faculty member was really hard on me, or the back and forth about bias in how a faculty member assessed them, or people saying they did things that they didn't. Now the computer has a chat log of everything the learner asked, so you can say, you thought you asked about that, but you didn't, or you didn't ask about it until 20 minutes into the case. There is some objectivity to it, even though it may be imperfect. And then finally, solve problems, don't create them. If you have something that's working really well, that's probably not the place where you want to put virtual reality. Find something that isn't working within a course or a rotation, something you're getting feedback that learners aren't seeing, and use this technology there, so that you're solving a problem. These are some examples of different headsets. I'm not naming names because, again, I'm not commercially linked to anything, but prices range anywhere from about $300 to $3,500, so there's a big range in how much the equipment costs. There are different software companies out there, too many to name: some pre-boxed, some customized, some very niche, trying to do one very specific thing. There's not one software platform out there that solves all of our problems, at least not yet. We talked about space. Again, think about what physical environment your learner is going to be in. If they're doing a fully immersive virtual case, they need a quiet area if they're going to use voice recognition technology. Consider their ability to access the internet: if any of you work at the VA, it is impossible to have your learner connect to the Wi-Fi network over at the VA. These are practical considerations, right? Where is your learner going to be when you ask them to do this case? Implementation planning: again, allow a generous timeline, plan for pilot testing, and consider your learner orientation strategy. I already alluded to this earlier. Message your goals. Have a tech failure plan. Avoid high-stakes assessments for pilots: if this is your first time doing this, this is not where you want to use it for an OSCE, and not where you want to use it for the summative assessment of your learners at the end of their training. You want to do this in a low-stakes environment where your learners will be open to trying it out, and if something doesn't work, you can go back and make iterative improvements from there. You want multiple feedback streams: how the faculty perceived it, if it's not just you running the course, and how your learners perceived it, and we'll talk a little more about assessment in a few slides. Oh, here we are. So, in terms of assessment strategies, you want to think about objective measures of learner performance: completion of critical tasks and actions. Is that what you're trying to get out of this case? Are you trying to find out whether they asked certain history questions?
Did they do certain physical exam maneuvers? Did they order specific diagnostic tests? There are other process metrics, like how long they spent in the case, how many questions they asked, and how much money they spent; on certain platforms you can pre-program how much a particular test costs, and at the end learners can see how much money they spent on the case. Have your expert try out the case before it goes live so that you're comparing against an expert performance level. Then ask learners about perceived usefulness, ease of use, and actual use. If you can show, for example, that 150 learners completed 400-some or 600-some simulations with a virtual standardized patient, you just saved a lot of money, because you saved that many hours of standardized patient time and cost, provided the case is meeting your learning objectives and your learners find it helpful and useful. It's a way to potentially save money on the back end, even if it doesn't seem super cheap on the front end. And then think about your validity evidence as well. So, that was a lot of me talking. I'm going to summarize, and then I'd love to hear some thoughts and questions for discussion. In terms of challenges and lessons learned with these technologies: acceptability of new technology is variable, and the logistics of a rollout to large groups can be challenging. Communicating the goals of VR content is super important, and you have to think about how you're going to roll out this content to faculty with differing levels of comfort with technology. Not all faculty are going to be on board with this; you're going to have skeptics among all levels of learners and faculty. How are you going to onboard the folks who maybe aren't as ready and willing to accept it? Engage your faculty for content creation: some people will be on board right away, and others you may need to bring along once you have done a couple of pilots showing that this is useful. And importantly, you want real-time, high-quality feedback from your learners so you can fix problems and have it work better the second time around. So, in summary, XR-based learning presents vast opportunities for medical education. Like all things in med ed, you have to start with your learning goals and decide whether the added technology is actually helping or potentially being harmful. Make a plan, start early, engage your partners, align with those goals, and plan for failure; it won't work perfectly the first time, but it can be a really powerful tool. With that, I'll stop there and I'd love to take questions. Thank you.
Video Summary
The APCCMPD and CHEST Clinician Educator Forum featured three talks on topics including the importance of teamwork in critical care, improving feedback given to fellows, and the use of virtual reality and augmented reality in medical education. The speakers emphasized the need for teamwork and team cognition in critical care settings to improve decision-making and patient care. They also discussed the importance of providing specific and actionable feedback to fellows. In the final talk, the use of virtual reality and augmented reality in medical education was explored, highlighting their potential benefits in increasing learner engagement and enhancing training outcomes. These technologies offer opportunities for immersive and interactive learning experiences, allowing learners to practice clinical decision-making, improve procedural skills, and enhance empathy and communication. Implementing XR-based learning requires careful consideration of learning objectives, stakeholder engagement, and resource allocation. Technical considerations and assessment strategies should also be addressed to ensure successful implementation and evaluation of XR-based learning. While there are challenges and barriers to overcome, XR-based learning has the potential to transform medical education and improve learner outcomes. Continual feedback and refinement of the implementation are important for harnessing the benefits of XR technologies in medical education.
Meta Tag
Category: Educator Development
Session ID: 2150
Speakers: Nayla Ahmed, Abdullah Alismail, May Lee
Track: Education
Keywords: APCCMPD and CHEST Clinician Educator Forum, teamwork in critical care, feedback given to fellows, virtual reality, augmented reality, medical education, team cognition, decision-making, patient care, learner engagement
© American College of Chest Physicians®