Cheryl Cho-Phan: It's been really helpful to discuss the specific use cases because it drives how we build the evidence.
Prashni Paliwal: Every data source bears the imprint of its original purpose. We are not looking for one dataset to rule them all.
Lev Demirdjian: Clinical genomic datasets in particular allow us to correlate individual biomarkers or sets of biomarkers with outcomes and identify patients who have a poor prognosis.
Tamara Snow: So I'm excited to share some insights from our experience generating these innovative data sets and where we hope to take it moving forward.
Alex Deyle: Awesome! Welcome back to ResearchX! My name is Alex Deyle and I'm the General Manager of the clinical research business here at Flatiron Health. I'm really excited about today's episode. Taking a look at the attendees that we have here today it's so awesome to see so many folks across both biopharma as well as from many different research sites joining today. As a reminder, this episode is part of Flatiron's ResearchX 2022 season where we're exploring how integrated evidence can transform oncology research and patient care. So let's get started!
We have a great agenda lined up today and I'm excited to introduce our speakers. First, I'll be kicking us off by walking you through how Flatiron thinks about leveraging technology to accelerate clinical research. Next, Len Rosenberg, Head of Clinical Operations for the Leukemia and Lymphoma Society, will share a case study highlighting the use of EHR to EDC technology for the LLS's Beat AML trial. Finally, Nelson Lee, Principal Data Engineer at Genentech, and Lauren Sutton, Director of Product Management at Flatiron, will walk through a pilot project focused on reimagining the role of the EHR in clinical research.
We have a packed agenda, but we will have time at the end to answer any questions that you all may have for our speakers. So to that end, a few quick housekeeping items before we dive in. I would like to draw your attention to the Q&A option available through the webinar. Feel free to submit a question anytime and also reach out to us afterwards if you'd like to discuss any of today's topics or content in more detail. If you have any technical questions or issues, please let us know via the Q&A tool and we'll do our best to help address those.
And a quick disclaimer, please excuse any interruptions from our pets, or loved ones, or any of the New York City street noises outside of my window. Like many of you, a few of us are still working from home today including myself. Awesome. Before we get started, we thought we'd start by learning a little bit more about each of you. So you should see a poll pop up on your screens momentarily and don't worry, the answers here are going to be anonymous. We're not going to share your responses with other attendees.
The question that we want to start with today is: how often is your organization using innovative technology to accelerate clinical trial execution today? And the choices here are: not using innovative technology at all; using innovative technology in one to two studies; using innovative technology in three to five studies; or really starting to scale, using innovative technology in five-plus studies. Let's go ahead and close the poll and we can share the results.
Awesome. Well, thank you so much for providing your input. It is really helpful to get a pulse check on how your organizations are using innovative technology and I'm excited for you all to hear the case studies that we have on deck for today. So let's get into it. I'd like to start by talking about how we here at Flatiron think about the opportunity of using innovative technology in clinical research. As many of you know, here at Flatiron we are 10 years into our journey to improve lives by learning from the experience of every cancer patient.
In service of our mission we positioned ourselves as both a leading oncology specific EHR company via OncoEMR and a leader in the science of real world evidence specifically focusing on methods for curating and analyzing patient level data documented as part of routine care in the EHR. Through that work we've gained important learnings that have helped shape our perspective on the opportunities for innovative technology in clinical research like the importance of technology that integrates seamlessly into site workflows as well as both the opportunities and the limitations of using routinely collected EHR data to support research.
So now with those learnings we're building on our foundation to learn from the experience of even more patients by leveraging technology to bridge the gap between clinical care and clinical research. However, as you all know, clinical care and clinical research have historically been extremely siloed in part because of completely disparate technology stacks that have emerged over the last 10 to 15 years specifically EHR for clinical care and EDCs for clinical research.
Now, Flatiron has spent the last 10 years reimagining the role that the EHR can play in clinical research. And by joining forces recently with Protocol First, a leader in interoperable research technology, we're now tackling some of the most persistent challenges associated with the traditional clinical research model. But why now? What's unique about the opportunity today for innovative technology to change the way we conduct clinical research?
There are a number of external tailwinds that we believe are propelling us forward and will bring about lasting change in how clinical research is conducted. So I want to talk about a few of those today. First, interoperability. Critical to bridging the gap between clinical care and clinical research is the ability to exchange information interoperably across systems. It's been estimated that somewhere between 50% and 70% of clinical trial data, depending on the trial, is duplicated between EDC systems and the EHR.
This isn't a new problem, but there is new industry momentum behind tackling it, including ONC regulations as part of the Cures Act and a movement towards more standardized APIs to facilitate the exchange of information across systems. Also, similarly to what we heard about in the last episode of ResearchX, which focused on multimodal data, the ability to integrate different sources of data, for instance structured genomic data, into the EHR can enable more precise patient matching to clinical trials.
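To make the standardized-API idea concrete, here is a minimal sketch of pulling a structured lab value out of a FHIR R4 Observation resource, the kind of payload those standardized APIs exchange. This is illustrative only: the resource below is hand-made example data, and `extract_loinc_value` is a hypothetical helper, not part of any Flatiron or FHIR library.

```python
# Hand-made FHIR R4 Observation example (not real patient data):
# a hemoglobin lab result coded with a LOINC code.
hemoglobin_obs = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "718-7",
                    "display": "Hemoglobin [Mass/volume] in Blood"}]
    },
    "valueQuantity": {"value": 11.2, "unit": "g/dL"},
    "effectiveDateTime": "2022-03-14",
}

def extract_loinc_value(observation: dict) -> tuple[str, float, str]:
    """Return (LOINC code, numeric value, unit) from a FHIR Observation."""
    code = observation["code"]["coding"][0]["code"]
    qty = observation["valueQuantity"]
    return code, qty["value"], qty["unit"]

print(extract_loinc_value(hemoglobin_obs))  # ('718-7', 11.2, 'g/dL')
```

Because the value arrives already coded and typed, a downstream system can consume it without anyone re-reading a chart and re-typing a number.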
And Flatiron is actively working with genomic testing companies to integrate these data for use in clinical trial matching. Second, we've seen a lot of guidances come out recently from health authorities which have signaled a willingness to accommodate new approaches to clinical research. But one that's particularly relevant for us here and for the conversation that we're going to have today is actually a guidance from 2018 on the use of EHR data in clinical investigations. This guidance plus recent advances in interoperability across EHR and EDCs has really unlocked enormous potential.
The first excerpt that you see here on the top focuses on the transfer of structured data from an EHR to an EDC and how that can eliminate the need for manual source data verification. The second excerpt focuses on the creation of research modules or research tabs built directly within the EHR to facilitate the capture of more data needed for clinical research at the point of care, which in turn can increase the amount of data eligible for direct EHR to EDC transfer.
These excerpts nicely tee up the two case studies that we're going to talk about today. And during today's episode we'll share more about how we've turned this guidance into reality with our partners. Finally, we've seen a huge shift across the ecosystem in terms of the growing expectation for more representative and inclusive research. Key stakeholders expect that new treatments are researched in populations representative of those who will be receiving them in the real world. This means expanding clinical trials beyond the large academic institutions, and designing and conducting studies so as to be more inclusive of different patient populations and different treatment centers. However, this won't be possible unless we lower the operational barriers for more sites and more patients to be able to participate in research. And we're going to need new approaches and innovative technology in order to do that. So what are the specific opportunities for innovative technology that we here at Flatiron are focused on?
First, we're partnering with sponsors to design better clinical trial protocols and make data-driven site identification decisions. This isn't just about evaluating the impact of inclusion/exclusion criteria on things like target patient population sizes and representativeness, though obviously that's important. It's also about using high quality, robust, representative real world data to estimate study outcomes in a target population, inform sample size and power calculations, and define time points for interim analyses. And if we get that right, we can reduce the number of protocol amendments and accelerate study timelines.
It's also about unlocking the potential of data and technology to enable faster, more efficient patient recruitment. Unlocking the full potential of EHR and genomic data is really important. This takes a combination of structured EHR data processing, connectivity with structured genomic data sources, and machine learning models to predict eligibility based on unstructured information.
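The structured-data side of the recruitment approach described above can be sketched as a simple rule-based pre-screen. Everything here is invented for illustration: the field names, the criteria, and the `is_potentially_eligible` helper are hypothetical, and a real system would layer ML scoring of unstructured notes on top of a filter like this rather than rely on it alone.

```python
# Hypothetical trial criteria expressed over structured EHR fields.
TRIAL_CRITERIA = {
    "diagnosis": "AML",
    "min_age": 18,
    "required_biomarker": "IDH1",
}

def is_potentially_eligible(patient: dict, criteria: dict) -> bool:
    """Rule-based pre-screen: diagnosis, age, and biomarker checks."""
    return (
        patient.get("diagnosis") == criteria["diagnosis"]
        and patient.get("age", 0) >= criteria["min_age"]
        and criteria["required_biomarker"] in patient.get("biomarkers", [])
    )

patients = [
    {"id": "p1", "diagnosis": "AML", "age": 64, "biomarkers": ["IDH1"]},
    {"id": "p2", "diagnosis": "AML", "age": 71, "biomarkers": ["FLT3"]},
]
matches = [p["id"] for p in patients if is_potentially_eligible(p, TRIAL_CRITERIA)]
print(matches)  # ['p1']
```

A surfaced match like this is what something such as a research badge could present to the physician inside the EHR workflow.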
But it's also about effectively embedding technology into site workflows and integrating that workflow seamlessly into the EHR. For example, here at Flatiron we've developed what we call research badges, which surface potentially eligible patients for a clinical trial directly in a physician's EHR workflow, so that they know when to consider a patient for a specific trial prior to making their next treatment decision. If we get this right, we can enable faster and more efficient patient recruitment.
Next, we're also re-imagining clinical trial data management. The opportunity for technology here is truly a win-win across all stakeholders involved in clinical research. Current processes are incredibly burdensome on sites; they're error-prone, and they require significant time and cost for sponsors to manage effectively. We have to reduce the burden of duplicate data entry across disparate data sources. If we get that right, we can reduce transcription errors, lower costs associated with source data verification, accelerate time to database lock, and improve the site experience. Finally, it's the combination of these technologies across protocol design, site identification, patient recruitment, and data management, and the integration of these technologies within the EHR, that unlocks fundamental change. It's about cultivating technology-enabled sites, ensuring the technology embeds seamlessly into site workflows, but also integrates with how sponsors design and conduct their clinical trials. And if we get this right, we'll be able to provide more patients with the opportunity to participate in innovative research.
So in today's ResearchX episode, we'll focus on examples of how innovative technology is helping sites and sponsors reimagine clinical trial data management. Specifically, we're going to speak to innovative technologies that can help increase the amount of clinical research data captured directly in the EHR, and then directly transfer that data from the EHR to an EDC, or directly into a sponsor's data warehouse.
And with that, I'm really excited to turn it over to our first speaker. So let's bring in Len Rosenberg from LLS. And Len is going to share how EHR to EDC technology was used to support the BeatAML clinical trial with great success. Len, I'll turn it over to you.
Len Rosenberg: Thank you, Alex. And hello everyone from RTP. You can tell I'm in a hotel room, so I apologize a little bit. But doing a little traveling as they say. So I'm going to ask my assistant to advance the first slide here and jump right in. So let's go ahead.
All right, so I'm going to take it that most of the audience doesn't know much about master protocols, though they've become in vogue. We at LLS took this on about six years ago now, when we were tackling the failure to advance any drugs, essentially, for curing or treating frontline blood cancers.
And so we understood that we were going to have to go ahead and change that paradigm. Now, many of you might know the RECOVERY trial that's going on now with COVID, and master protocols are becoming more in vogue. But essentially what they do is take a targeted population through a screening protocol. In our particular case, we're screening for frontline AML. We're putting patients in after getting their bone marrow and figuring out their genomic setup, and then precision matching them centrally to a sub-study that's all part of this big umbrella. And we'll share that with you in a slide or two.
But it's clearly a complex study, because we're looking at changing the way in which traditional clinical trials were being done. Picture doing five, six, seven, eight, ten studies simultaneously. It's not going to work using the old methods. And so it's complex. We're doing early-stage evaluations of promising treatments, which is the intent of our particular study, and legacy data management systems are really not set up to do this. And of course, when you're doing all of these early-stage promising clinical trials simultaneously under a master protocol, you have to get the safety information along the way as well.
I mean, if you think about doing this all under, at that time, ICH E6(R2), that's a daunting task unless you change the paradigm. So we changed the paradigm scientifically and medically, in the way we were approaching this. But we knew from a technology perspective we had to change the way we executed the program. And that's where we ended up partnering very early on with Protocol First, now associated with, or I guess a subsidiary of, Flatiron. So the next slide, please.
Okay. So this was our perspective. We obviously had to change the way we operated from the traditional model. But we said that if we're going to invoke new technology to solve some of these problems, we had to be willing to modify as we went along. So we knew versions are not a bad thing; that's just the way you advance technology, and actually science. You learn more about the clinical trial as you move ahead.
So we knew we were going to have to put some dollars into the technology, but we also knew that if we didn't do this, it would be impossible to execute the complexities of running six to ten trials simultaneously at a site under a master protocol concept.
Now we said, yeah, there are these buzzwords of AI and NLP and so forth. And we knew that they were going to play a role in helping us execute the program. I just want to clarify: that's for us as a sponsor at LLS, different from what Flatiron might be speaking to you about. So I wanted to make that clarification. But we knew that was going to be helpful to us as we advanced, but we had to first get some of the technology solutions in place, given the magnitude and complexity of the study. So we had to find vendors that were coming out with new technology, and we had to play to their strengths in order to tackle this as we went along with the program. Next slide please.
All right. So this is a busy slide, and it's intended to be, because I'm trying to pack years and years of research into essentially one slide that conveys how complex the study was and how we moved ahead in trying to execute it. So on the top right, you have the clinical trial sites. We were up to roughly 17 to 20 of the major academic institutions in the country that partnered with us. We had the pharmaceutical companies on the right coming in, and they would give us their assets and funding, kind of in a plug-and-play model. If they had a particular sub-study that matched with a mutation of interest, we would marry those two with the central decision making. And these particular companies were coming in, in the hopes that we would actually get them a quick result, whether it failed fast or they expanded, but they knew it was a plug-and-play model.
So the beauty of this model is that once the infrastructure and the technology are set, we can swap out pharma companies along the way. The FDA partnered with us as well and gave us the green light to do a master trial in frontline AML in 2016 or so, which is quite different from the publications that have come out more recently saying, "Everyone, let's start doing platform trials." So we had actually anticipated that this was a new way of bringing promising treatments to frontline blood cancer.
And you'll see, at the bottom, we said, "Look, there are going to be some different technologies we're going to use from the outset." And the Protocol First Clinical Pipe, we'll talk about that in a minute; we knew that we weren't going to use the traditional way of collecting and processing the data. And we were also using different providers for sharing information, such as MyClin for knowledge transfer, which is kind of like the Facebook for clinical trials. Because at its height you have to keep 10 protocols top of mind for the same sites. And how do you do that when there's constant movement and amendments that happen that are purposeful? Because you're learning more about that particular investigational product, because it's in early stages. You're going to learn something about the safety, and you're going to have to continuously modify the protocol on purpose, the same way you're going to continuously modify the technology.
So let's go ahead and finish out. We had some standardized technology solutions as well, and we used a CRO as a partner. And just to let you know, the CRO was not prepared for this either, in the sense that they had their own legacy processes. So we had to, if you will, blow up the way they were doing their SOPs and the way they were managing the trial, but that was intended. Because they said they were going to be flexible and work with us, and they did, to actually help us execute the program. Next slide please.
All right. So the best part of planning a particular program from the outset is having results to share. So remember, we spun this up in 2016; we weren't happy with the progress of clinical research in frontline AML. We had to do something differently. So what did we do? We did that master trial, and about a year and a half ago or so, we published the first set of results. And what did that say? It told us that when you go ahead and do precision matching of clinical trials using this master trial approach, you actually extend survival, compared to standard of care, more than fourfold.
So as a nonprofit focused on curing or treating blood cancers, that's what we're looking for. So you can say, did it work? The answer is of course it worked, because we meaningfully changed the survival paradigm and brought more promising treatments or more indications for these treatments to the marketplace.
And then on the right, you see that we wanted to share what we did operationally, in the same vein as we shared the research. And we have shown which technologies allowed us to execute this complex study a little bit more favorably. So let's move ahead to the next slide.
All right. So in terms of specifically working with Protocol First, we knew that we wanted to digitalize the protocols from the outset. And so we used their software to actually disambiguate the protocol. What does that mean? When you take the flow chart and chop it up into procedures, and look at it based on visits at the same time, automatically, it will tell you if there are any inconsistencies between the way you were going to execute the program and the data collection forms. And if you think about this in terms of induction and consolidation and a lot of moving parts when you're treating patients that are on cycles, this was a tremendous benefit. And we actually put the CRF through this process to make sure that we had planned for all of the nuances that happen in frontline blood cancer trials.
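The consistency check Len describes can be pictured as a set comparison between the schedule of assessments and the CRF. This is a toy sketch under invented names: the visit and procedure identifiers are hypothetical, and a real disambiguation tool checks far more (timing windows, amendments, cycle logic) than this two-way diff.

```python
# Toy schedule of assessments: which procedures each visit collects.
schedule_of_assessments = {
    "screening": {"bone_marrow_biopsy", "cbc", "genomic_panel"},
    "cycle1_day1": {"cbc", "vitals"},
}
# Toy CRF: which fields the data collection forms define.
crf_fields = {"bone_marrow_biopsy", "cbc", "vitals"}

planned = set().union(*schedule_of_assessments.values())
missing_from_crf = planned - crf_fields    # collected, but no form field to hold it
unused_crf_fields = crf_fields - planned   # form field with no visit that sources it

print(missing_from_crf)   # {'genomic_panel'}
print(unused_crf_fields)  # set()
```

Flagging a mismatch like the unmapped `genomic_panel` before first patient in is exactly the kind of inconsistency the disambiguation step is meant to surface.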
Then we also said we're going to use P1 source upload. And what is this? Well, everyone understands this eSource upload thing now, but it wasn't in vogue in 2016. Our mindset was, "We don't want to go to the site. We want to actually have the site go in and upload a certified copy of the medical record entry that pertains to the EDC entry and put it into Protocol First." And people said, "Well, we don't do that." And the answer was, "If we're not coming on site to monitor, we're going to ask you to do it, if you will." Because these were our sites and they were interested in having the promising treatments, and we didn't want to spend a lot of resources going to the sites. We could centrally look at those records if sites uploaded the information and it was non-redacted.
Remember, in 2016: "Oh my God, you can't do this." Well, the answer is of course you can do it. It was in the regulations; it was allowed. And in fact, we still do this today. The non-redacted version means that the same way you send a request to a lab or a pharmacy or a transcription service, you can include the patient name, as long as access to that information is role-based and gated.
So we did that. And of course then the pandemic came, and everyone said, "Oh, let's jump in now and figure out ways to get this information, because no one's allowed on the sites." Well, guess what? We had figured this out long before there was a pandemic, long before there were guidance documents on master trials. And that's the message I want to leave the audience with: all the things we're doing now come from thinking about the pragmatic technology solutions that make sense. You have to start, you have to do them, and you get better.
The biggest one was using the P1 EDC itself, which allowed us to do quicker migrations. Because obviously you're changing with amendments all the time. It allowed us to spin up an EDC from scratch within a week, when the standard was eight weeks, 12 weeks, and so forth. And then there are all the complexities when you have to revalidate after an amendment. None of that made sense to us. We said, "We're not doing it the old way." However, we did start with Medidata, as you can probably see on the circle, but we rapidly moved away from that legacy data management system to the newer way of generating EDCs. And then finally in 2018, at least for this particular suite of technology solutions, we said, "What happens if we can move structured data sets from the EMR to the EDC?" And of course, Clinical Pipe, as people are already aware, is the solution. Was it easy to start with? No. Did we have to get a pilot institution or two to start it? Yes. Did we get Epic in the room, and Protocol First in the room, and the sites' technology groups in the room?
Yes, we did all that. And we obviously figured out a way, with the right mapping at the beginning of the program, to move the process along to the point where we have now connected these systems and figured out how to map these studies. And structured data is now flowing into the EDC. And it doesn't have to be the P1 EDC. It could be Medidata, it could be Oracle, and on and on. So we have a connector that allows us to automate the transfer of structured data. And now we have comments coming over in addition to labs, and the future is that we will apply a little bit of NLP on top of the progress notes and have the technology complete the EDC for things like side effects or AEs or other things that went on during the study. Next slide, please.
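The mapping step behind a connector like the one described can be sketched as a per-study lookup table from coded EHR values to EDC form fields. This is a hedged illustration, not any vendor's actual implementation: the coding keys, form and field names, and the `to_edc_records` helper are all invented for the example.

```python
# Hypothetical per-study mapping: (coding system, code) -> (EDC form, EDC field).
FIELD_MAP = {
    ("loinc", "718-7"): ("LAB_HEMATOLOGY", "HGB"),
    ("loinc", "6690-2"): ("LAB_HEMATOLOGY", "WBC"),
}

def to_edc_records(ehr_results: list[dict]) -> list[dict]:
    """Translate structured EHR results into EDC records via the mapping."""
    records = []
    for r in ehr_results:
        key = (r["system"], r["code"])
        if key in FIELD_MAP:  # unmapped results simply stay in the EHR
            form, field = FIELD_MAP[key]
            records.append({"form": form, "field": field,
                            "value": r["value"], "unit": r["unit"]})
    return records

ehr_results = [
    {"system": "loinc", "code": "718-7", "value": 11.2, "unit": "g/dL"},
    {"system": "loinc", "code": "2345-7", "value": 98, "unit": "mg/dL"},
]
print(to_edc_records(ehr_results))
# [{'form': 'LAB_HEMATOLOGY', 'field': 'HGB', 'value': 11.2, 'unit': 'g/dL'}]
```

The design point is that once the mapping is built for a study, every subsequent result flows without transcription, which is why the up-front mapping effort pays off across the life of the trial.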
All right. And so this should be intuitively obvious, but it's always nice to have data, right? And what we said was, "If you go ahead and automate the movement of structured data from the EMR to the EDC, it should cut down on the manual data entry." Well, of course. And so we went out and we measured it, because people would ask, "Well, what does it typically take? And how much does it take now when you map it?" Because there's some time to do the mapping. And they say, "Well, what's the overall cost savings?" So we had to first get the unit and then multiply it out by the pages and do all that other stuff. And again, to anyone listening, when somebody asks you for this, you kind of question whether they understand what automation is; would they like to do it the old way, where they actually find it, transcribe it, write it in?
And again, look at the accuracy, which is a hundred percent. Why wouldn't you want to do it in complex oncology trials? And then you factor in now: oh, guess what, we don't have any staff, okay, we can't see patients. This is the reality that's going on at the major academic centers. And it has to do with the fact that they've cut down on their ability to actually take on new clinical trials because they don't have the staff. So we say to them: if you would automate and move the data over, you could spend the smaller amount of data coordination time that you have focusing on the things you have to do, and take on more trials. Because as LLS, the worst thing that we can hear is that they can't take on any new trials when the solution is right in front of them. It's proven. And we don't understand why more sites don't adopt this type of technology, because it works, it's proven, it's accurate, it saves on resources, and it's cheaper. So, all right, next slide, please.
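The "get the unit, then multiply it out by the pages" calculation Len mentions can be sketched as back-of-envelope arithmetic. Every number below is an invented placeholder for illustration, not a measured figure from the BeatAML study.

```python
# Invented placeholder unit times; the method, not the numbers, is the point.
minutes_per_page_manual = 10.0     # find value in chart and transcribe by hand
minutes_per_page_automated = 3.0   # review data that transferred automatically
pages_per_patient = 120
patients = 50

manual_hours = minutes_per_page_manual * pages_per_patient * patients / 60
automated_hours = minutes_per_page_automated * pages_per_patient * patients / 60
saved = manual_hours - automated_hours

print(f"{saved:.0f} coordinator-hours saved ({saved / manual_hours:.0%})")
# 700 coordinator-hours saved (70%)
```

Even with one-time mapping costs added back in, a per-unit measurement multiplied across pages and patients is how a site or sponsor would estimate the overall savings for their own study.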
All right. And I kind of summarized this. I think I have another slide or so to go, and then I'll transition over. But I mentioned to you that data coordination gets significantly reduced. From the site perspective, if you still pay them the same amount of money for data coordination, but they're automating 60, 70% of it, those are huge margins for them. Why wouldn't they want to do it? And then, of course, we didn't want CRAs spending time moving from one page to the other side of the page, doing that data verification. There's no value add there. If they can focus their time on value-added services with the patient, that's the goal: enhancing the way in which you're going to communicate with the patients and so forth.
And so, again, to recap: it's obviously much reduced data entry time, and little to no queries, which, again, are another bane of CRA time and site time. And we did this because it was the smart way to do clinical trials. We didn't need COVID to tell us this. So all these things, as we said earlier, this is the roadmap, COVID or not. And it so happens that everyone's trying to come in now, while we're five years into this exercise; trying to create purpose-built solutions while the clinical trials are going on and the pandemic's going on makes no sense. Take the proven ones that work and go ahead and deploy them in your clinical trials. Next slide.
All right. And I think this is the last one. It has to do with the fact that if you automate it, the data comes in and you can visualize it. It's simple. Think about it: you're not waiting for a data coordinator to enter labs or vitals or RECIST or comments. The second it's entered into Epic or whatever at the site, it goes into the EDC and you can visualize it. So why would anyone want to run a complex early-phase three-plus-three design and be dependent on a coordinator for entry, or be dependent on any of these other components, such as a monitor, when all this information needs to come in early and we need to make critical decisions on expansion or plug and play, like we said, with the next studies? It's just a proven way to do it. There's less monitoring, there are lower DM costs, and the sites are happier.
Was it easy to do over the last three years? No. But do we have it to a point now where it can be rapidly adopted? Absolutely. So you benefit from the work that we did with Flatiron and Protocol First over the last three years, to give you a solution you can use tomorrow to actually execute better clinical trials. So let me just turn it back over. I think that was the last slide. Again, I apologize; I'm in a hotel room, as well.
Let's go ahead and turn it back over to you, Alex. And I look forward to questions later on. Thank you.
Alex Deyle: Awesome. Thank you. Thank you so much, Len. That was an amazing example of how technology can help accelerate truly innovative research programs. Before we jump to our final presentation, I just wanted to share a friendly reminder. If you do have questions, please use the Q&A tool at the bottom of the screen. We'll be collecting those throughout the remainder of this last presentation so we have those to address at the end of today's episode. And now Nelson and Lauren will be sharing how Flatiron and Genentech are re-imagining the role of the EHR for clinical research.
Nelson Lee: Thank you, Alex. Hello, everyone. It is fair to say that EHR to EDC is really a buzzword. I Googled it and saw over 700 results. Lauren and I are going to share our exciting EHR to EDC journey with you in this presentation; this is what we are going to touch on. EDC has been around for a really long time, as far as I remember, at least two decades, and it remains the most common data collection approach in clinical trials. We have observed that EDC systems continue to evolve and offer many new features to support data collection. However, the general practice still largely relies on manual data entry, despite rapid technology advancement over time, the changing business landscape, especially in the pharmaceutical and biotech industry, and the ever-growing data volume in clinical trials.
And I've read some articles based on the impact reported back in 2021: Phase III trials collect three times as much data now as they did 10 years ago. So all of this really amplifies how this manual EDC data entry practice has become inefficient, and perhaps not even sustainable in the near future. There are many reasons, and the obvious one, perhaps, is the duplicated data entry effort. Len touched on this a little while ago in his presentation. The data is entered in the source, and then it is entered one more time in the EDC. Many publications have already highlighted different aspects of this inefficiency. It could be the resources, it could be the time spent on it, the effort. And don't forget the burden on the site staff as well.
And I recall getting feedback from a research nurse at a site. She told me that at the site level, they spend so much time trying to look for the lab results in the electronic medical record. They look for the value and then enter it into the EDC. However, they still make mistakes; there are still data queries, because they look at the wrong values. It is because there is so much data they need to enter, so much data they need to collect.
So to address this problem statement, we set a really clear objective: to enable source data capture in OncoEMR and automate data transfer to the Roche/Genentech EDC environment. In short, to optimize data collection efficiency as well as improve data quality in clinical trials. We believe this goal will add value and benefit key stakeholders: the patients, the research sites and research staff, and the sponsors. So where do we start? We start with the data source, that is, the data origin: the EHR. Lauren is going to share more on that. Lauren, I'll pass it over to you.
Lauren Sutton: Great. Thanks, Nelson. So first I want to provide a little bit more context on OncoEMR. Flatiron has an electronic health record called OncoEMR that's used primarily in the community oncology market. The electronic health record is the source of clinical data for those practices, and it's the first place that they capture data for the patients they're treating. And Alex touched on this a moment ago, but as many of you are aware, in July 2018 the FDA released the guidance that you see here, Use of Electronic Health Record Data in Clinical Investigations. This guidance follows a trend we're seeing in the industry where the electronic health record can be used for more prospective research use cases. Flatiron responded to this guidance by connecting with research sites to understand how they were capturing clinical research data and what those workflows looked like.
And in response to those conversations we built and deployed research-specific workflows directly in the electronic health record, which you can see on the next slide. To briefly show you what this looks like visually, here's an example of what we built for the adverse events workflow that has been deployed in OncoEMR. The more we're able to incorporate research into everyday clinical care within the EHR, leveraging the workflows that clinicians are familiar with, not just for trial patients but for all of their patients, the easier it is for data like this to be used in downstream applications, such as the pilot project that Nelson's going to touch on in a moment.
Nelson Lee: Thank you, Lauren. So we kick-started our journey with a pilot, and the pilot was broken down into phases. Phase one was the proof of concept, addressing three key questions: can we enable intentional data collection in an EHR system, can we develop the capability to configure that data and transfer it to an EDC system, and how would this impact the business processes? Phase one is complete, and Lauren and I will share more learnings later. Phase two is a shadow study, which means that we run another study in parallel with the production study; it is currently in the planning phase. It enables us to look into the measures we have defined. The bottom line is value demonstration, as mentioned on the value proposition slide, such as eliminating duplicated data entry, minimizing data queries, and improving data timeliness and efficiency for in-scope domains such as medical history.
This slide shows the high-level data flow of the pilot. One of the key components I want to highlight in this EHR to EDC approach is the capability to customize research fields within the EHR system. These enable intentional data collection for the purposes of the trial. During the course of this collaboration we have learned a lot, and here are some of the highlight topics. I won't be able to cover every single item, but I would like to share a few. The most obvious is the lack of a common data standard between healthcare and research. The different data formats, different data concepts, and different controlled terminologies all add challenges to data mapping. Based on our own observations from the phase one findings, roughly 50% of the data elements can be mapped.
But one thing to bear in mind: what can be mapped does not mean the data is extractable. Data in the EHR system may be available, yet not extractable, because not every data element we need for the clinical trial is structured data in the EHR system. And as a sponsor, we also need to prepare for changes to our study practices in order to adapt to the new technology. The way we have been developing our eCRFs, such as the form design and how we set things up in the EDC, requires some changes. Along with that, we need to be realistic that a hybrid approach is perhaps more practical; we won't be able to cover every single eCRF from the EHR.
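Nelson's distinction between "mappable" and "extractable" data can be sketched in code. The following Python sketch is purely illustrative and uses assumed names throughout: the lab codes, the `LAB_CODE_MAP` table, and the SDTM-style target fields are invented for illustration, not the actual Flatiron, Roche, or Genentech schemas.

```python
# Illustrative sketch only: mapping a hypothetical EHR-style lab observation
# to an EDC/SDTM-style LB record. All field names and codes are assumptions.

# Hypothetical terminology map: EHR lab codes -> CDISC-style test fields.
LAB_CODE_MAP = {
    "718-7":  {"LBTESTCD": "HGB", "LBTEST": "Hemoglobin", "LBORRESU": "g/dL"},
    "6690-2": {"LBTESTCD": "WBC", "LBTEST": "Leukocytes", "LBORRESU": "10^9/L"},
}

def map_lab_observation(obs: dict):
    """Map one EHR lab observation to an SDTM-like LB row.

    Returns (row, None) on success, or (None, reason) when the element
    is unmappable or not extractable.
    """
    target = LAB_CODE_MAP.get(obs.get("code"))
    if target is None:
        # Mapping gap: no agreed terminology between healthcare and research.
        return None, f"no mapping for code {obs.get('code')!r}"
    value = obs.get("value")
    if not isinstance(value, (int, float)):
        # "Available" is not "extractable": a result buried in a free-text
        # note cannot be transferred as structured data.
        return None, "result is unstructured; manual entry still needed"
    row = dict(target)
    row["LBORRES"] = value
    row["LBDTC"] = obs.get("effectiveDateTime", "")
    return row, None

# One mappable observation, one with an unknown code.
mapped, err = map_lab_observation(
    {"code": "718-7", "value": 13.2, "effectiveDateTime": "2022-03-01"})
unmapped, reason = map_lab_observation({"code": "XYZ", "value": 5.0})
```

The unmapped and unstructured branches are exactly where the hybrid approach Nelson describes comes in: those elements fall back to manual EDC entry.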
And scalability, that's always critical, because we cannot build something for just limited applications. Can the solution scale beyond a pilot? Is it sustainable? Can it be used across core studies locally and globally? These are all important questions in order to get buy-in. Lauren, anything you would like to add?
Lauren Sutton: Yes. Thanks, Nelson. I'll highlight a few of the learnings from the Flatiron team as well, from what we experienced during this project. One is that as we design workflows to capture that intentional data at the point of care, it's really important to keep the clinician workflows top of mind. Another one that I think is important to call out is that there is a learning curve with each new EDC system and each new EHR system, and there's often little documentation to work off of. We found that these projects are most successful when teams are really open to collaborating and learning from each other. And maybe the last one I'll call out is that within the EHR we're able to build workflows that integrate research into everyday patient care, making it easier for clinicians and their research teams to remember to capture this data as they're treating patients enrolled on clinical trials at their site. That wraps up the learnings from Flatiron. Thanks, Nelson.
Nelson Lee: Thanks, Lauren. To meet the objective, the sponsor cannot do it alone; collaboration is really the key to success. We need the EHR system vendor on board, we need to look at third-party technology vendors, and the research sites and the sponsors all need to work together. Source data capture has so much potential, but we need the infrastructure to integrate research into the point of care, such as the research-specific workflows Lauren shared. We also need to set realistic expectations, because we cannot cover all of the EHR data at the very beginning. So a hybrid approach is more practical, and this may really be the reality. With that, we need to pick and choose, and then make decisions.
So our strategy starts with the most impactful data domains: which ones are most critical, for example efficacy data, and which ones we deem most useful, usually the high-volume domains such as lab data, which easily accounts for 20% to 25% of the data volume in a typical trial. Working towards a common data standard is also critical, because it helps facilitate data exchange between systems; basically, we would be talking the same language. And last but not least, a willingness to explore and leverage existing technology can fast-track the development of EHR to EDC, so we don't need to reinvent the wheel and start from scratch. In conclusion, we need to adapt for the future and challenge the way we have worked in the past. EHR to EDC is going to pave the way for new data collection strategies in clinical trials. That concludes our presentation. Thank you. Back to you, Alex.
Alex Deyle: Thank you. Thank you so much, Nelson and Lauren. It was awesome to hear the example of how you all are rethinking how more data can be captured for research trials at the point of care, in the EHR, and to think about how that then makes even more data available for transfer into EDCs to power innovative programs like the Beat AML study that Len talked about. Before we move on to Q&A, I want to take a moment to reflect on the external trends we discussed earlier. I think you can hear it in the speakers today: it truly feels like we're approaching a tipping point where patients, sites, and health authorities are going to increasingly expect and demand that sponsors deploy new approaches to conducting clinical trials, embracing technology to reduce the inefficiencies we've seen in the system for clinical research, significantly cutting down on redundant, error-prone activities, and figuring out how to design and conduct studies that are more representative and inclusive of more patients and more sites.
Both of the case studies discussed today represent great examples of how technology can bring about transformational change in how we approach clinical research. So thank you again to the speakers. With that, and without further ado, we'd love to hear from all of you, so we can jump into Q&A. Looking at the questions coming in, the first one is probably a good question to start with you, Lauren, and then I'm happy to have others jump in. You mentioned in your case study that EHR to EDC technology is further enabled by intentional data capture. Can you share more about the idea of intentional data capture and how that innovation is impacting the future of clinical research?
Lauren Sutton: Yeah, sure. Thanks, Alex. At Flatiron we've defined intentional data capture as data whose collection is above and beyond what would otherwise be captured in routine clinical care. We discussed this a bit today, but we are able to modify workflows at the point of care within the EHR, and I think Len touched on the protocol digitization component of this as well. That, combined with our ability to build and deploy workflows at the point of care to augment the data we capture in the EHR, can really drive impact in the EHR to EDC process. And I think when this is done successfully, these workflows have the ability to support adherence to the clinical trial protocol and perhaps increase data completeness in the EHR as well.
Alex Deyle: Awesome. Thank you. Next question. I think maybe Len, you can take this one. There's a question from someone saying that they can see the great benefit of EHR to EDC for sponsors and sites, but the benefit to the latter, the sites, is a bit intangible. Can you say more about the benefits to clinical sites specifically?
Len Rosenberg: Sure. Great question. Again, think about where the site adds value to the process: they want to spend their time with patients. It's all about patient centricity. We hear that as a buzzword, but for us at LLS it makes a lot of sense, because we're focused on the patient and trying to treat, prevent, or cure cancers. So at the site level, what's an added-value exercise for the research coordinator, versus the traditional mindset that's been around forever? That mindset was always, "Oh, go get the information from the medical record. Go find where that actually is." And a lot changes depending on the visit and the time point, so you'd have to dig for it, find it, and then go ahead and enter it.
Len Rosenberg: And then sometimes you have to actually help out a CRA. I've heard this many times: the CRA goes to the site, goes into the EMR, and says, "Where'd you get this piece of data from? This EMR is such a monstrosity, I don't know where to find it." So all of these inefficiencies continue at the site. And if you ask yourself at the site level, "If I could just come in and get rid of the tedious tasks, and at the same time eliminate the errors and give the decision makers near real-time data for making decisions on the clinical trials," all of those are tremendous advances relative to what the sites are doing right now.
Len Rosenberg: They can't even find enough resources at the sites to actually process the patients, let alone deal with the entry and the queries, which we can fix. So that is a direct benefit to the site. And I think when you step back, you're going to really understand that this is a tremendous change to the way in which we think about collecting data and processing data. And it can only help us going forward.
Alex Deyle: I couldn't agree more. Thank you, Len. Next question, and maybe we'll start from a sponsor perspective, so we can begin with Nelson. Many people are used to the status quo of how clinical trials are typically conducted and may be hesitant to adopt new trial technologies. How do you go about getting organizational buy-in to explore or pilot new trial technologies like EHR to EDC?
Nelson Lee: I would have to say that, fortunately, my organization has a really strong culture of promoting creativity and innovation. My take, from my experience, is really about going back to basics: I need to do my homework. First of all, I need to understand the pain points, and I gather feedback from those who are being impacted the most, so that I have something to support the proposal. I also find it very useful to be data-driven: there are many publications in the public domain that I can look at, and I can use some of their findings to support what I intend to propose.
And I think a value framework can also help when you are exploring technologies. There may be multiple technologies, and a framework helps you set priorities: which one to pursue, and for which you need key stakeholder buy-in first. Another thing that is always critical, because I always get these types of questions, is to prepare enough to anticipate them. Key stakeholders will always be interested in how this would be scaled up: is it scalable, and is it sustainable? And what about the ROI, the return on investment? Because, like any business, we have a bottom line and a financial responsibility, so we need to be prepared for that.
Alex Deyle: Thank you. I think we may have time for just one last question, so I'll ask it, and maybe we can start with Len and then Nelson. What are some of the opportunities that you are most excited about in terms of optimizing EHR technology in clinical research as you look forward? We've talked about what you all are doing today, but as you look forward to the next three to five years, what are some of the most exciting opportunities you see on the horizon? Looking at the time, we'll probably have to do a rapid fire of maybe 30 to 45 seconds each. Go ahead, Len.
Len Rosenberg: If you look forward, it's a process that doesn't need human intervention, if you really think about it. Once you set up the mapping, or set up the experiment, why do you need the human component, which is only going to be inefficient and probably make errors? The more we can create an environment where the data sets we want are pulled from the correct locations in the EMR, and focus resources on other aspects of drug development, the better and better we're going to get. It's an obvious answer, but that's where we need to go, because this process adds no value when you have people sitting there moving digits from one source to another, and as sponsors we're paying ridiculous amounts of money for that. Nelson?
Nelson Lee: Thanks, Len. Let me be really quick. I really look forward to seeing 100% data exchange between EHR and EDC. I do not want, in five years, to still need a hybrid approach. That is my hope, and that is what we are working towards. And the other part is: let's find a common language between healthcare and research so that we can understand each other.
Alex Deyle: That resonates a ton, particularly around finding a common language across all the different stakeholders involved here. All right. Well, that wraps up episode three of our ResearchX series. A huge thank you to all of our speakers for sharing their insights, and thank you to everyone who took the time out of your day to join us and listen. As a reminder, we have three more ResearchX episodes over the coming weeks. Next up, on April 13th, we'll be focusing on how novel methodologies and analytics are powering integrated evidence. And since we weren't able to get to all of the questions you posted, please know the lines of communication remain open even after we end this episode. Feel free to reach out to us at firstname.lastname@example.org with any additional questions and we'll be sure to follow up. A friendly reminder to please take the survey upon closing out of the meeting today so that we can continue to improve and make these episodes even better. Thank you again, and we'll see you next time. Stay healthy and stay safe.