How to best leverage RWE in 2021: Perspectives from early adopters

February 26, 2021
 

Across organizations, researchers are looking to assess how real-world evidence (RWE) can support an increasing number of drug development use cases. This session brought together industry perspectives across clinical development, regulatory, and market access to discuss how RWE is being applied throughout the lifecycle and what's in store for 2021.

Transcript

Shane Woods:

Okay, welcome to those of you who just joined us. We're going to give it another minute or so, as we're just seeing the participant number fly up by the second here. So bear with us for another minute and we'll get started. All right. It looks like we have a quorum of attendees on the line, so let's get started. First, thanks everyone for joining today. I'm Shane Woods. I head up the life sciences business at Flatiron Health. I'm a scientist by training. I'm really excited to introduce our panelists, three senior leaders who we've had the pleasure to work with over the years and who we think of as progressive thinkers and progressive figures within their respective organizations. Also, great to see that our panelists followed our business casual dress code. I'll gloss over the fact that we're all likely wearing sweatpants at home today, but I guess that's more of a sign of the times.

So on our panel we have Dr. Maura Dickler, who is VP of Oncology, Late Phase Development at Eli Lilly. Maura spent the majority of her career as a breast cancer researcher and clinician prior to joining Lilly in 2018. At Lilly, she leads oncology late phase development. We also have Dr. Mathias Hukkelhoven, or Math as he likes to be called. Math is SVP of Global Regulatory and Safety Sciences at Bristol Myers Squibb. Math is accountable for global regulatory strategy and execution and for pharmacovigilance at BMS. And lastly, we have Dr. Indranil Bagchi, SVP and Head of Global Value and Access at Novartis. In his role, Indranil is accountable for the overall strategy for value demonstration and market access through his efforts on pricing and reimbursement, health economic modeling, outcomes research, real-world evidence, and health policy. And in case you can't tell, he's sitting really still. His Zoom video is frozen, which is ironic when a tech company like Flatiron can't figure out the tech, but Indranil, in your frozen photo you're looking very engaged. So I think that's a good thing.

So as those introductions detail, we've got an intimidating panel of folks, but three folks that I know our teams at Flatiron have really enjoyed working with over the years. I'll do some quick housekeeping items and then we can dive into the discussion. So first, I'd like to draw your attention to the Q&A option at the bottom middle of your screen. Zoom is probably second nature for most of us at this point, but at any time during today's presentation, you can submit a question through this feature and we'll do our best to answer as many questions as we can at the end, time permitting. Also, please excuse any interruptions from our pets or loved ones. In my case, that's a one-year-old and a three-year-old who basically spend their waking hours trying to find out where dad is hiding in the apartment. So hopefully, no surprises from that duo today, but no promises.

So to start us off, I'd like to discuss how the availability of increasingly rich, high-quality data at scale has impacted the way our panelists think about evidence generation. So a pretty fundamental area. And while we know there is more to tackle as an industry going forward, the emergence of what I like to call meaningful data at scale has certainly seen early success. And we're also seeing this widening of the aperture of appropriate applications across the drug life cycle for real-world data and evidence. So Maura, maybe I'll start with you. Historically, there have been evidence gaps that we may have just accepted as the way it is. And I think that's changing. From your perspective, what are the key evidence gaps in the early and late development setting that you think real-world data is well suited or well positioned to address?

Maura Dickler:

Yeah, thank you. So obviously, I spend most of my time thinking about large phase three trials, but they also have limitations. They often have very strict eligibility criteria that require either a certain performance status or normal renal, hepatic, or bone marrow function. And so we're testing novel drugs, sometimes first-in-human through phase two and phase three, in a very carefully selected population of patients that may or may not represent the community. And ultimately, we want to bring drugs to market that are safe for all patients, including the many who have other comorbidities. So I do feel that that's one large area where we have gaps, where I do think real-world data can help us generate real-world evidence: patients who are cared for more in the community, who have hypertension, diabetes, renal insufficiency, maybe underlying liver disease, and who ultimately need these drugs when it comes to oncology. But we need to better understand how these drugs behave in terms of safety. So I think that's one very important area where our clinical trials often do not provide sufficient evidence.

Shane Woods:

Yeah. Makes a lot of sense. And, yeah, great to hear you thinking through the broadening set of applications that you're exploring at Lilly. Along those same lines, what about using this data to support trial planning or design or operations, where maybe by shifting the inclusion/exclusion criteria, you might actually be including more of those patients in the phase three, let's say?

Maura Dickler:

Yeah, well, we're increasingly trying to do that. We're trying to broaden our inclusion criteria to be more patient friendly and more representative of the general population. That's with regard to age, and also trying to improve enrollment of a broader population with broader racial, ethnic, and geographical diversity. I think diversity in large trials is very important in order to adequately test our medications, but sometimes that's just not possible. And that's where I think RWE can be incredibly helpful.

Shane Woods:

Got it. Thanks. Math, maybe going to you next. In the past couple of years, what are some of the more recent trends you're seeing in sponsors' willingness to use real-world evidence in a regulatory setting?

Mathias Hukkelhoven:

Thanks, Shane. I think there are a couple of factors that have helped bring this field forward. First, on the regulatory side, we know that in the 21st Century Cures Act, there has been a lot of focus on real-world evidence to support regulatory decision-making. And I think that has encouraged many sponsors to at least start to get that on the map in drug development. Also, when the industry, both pharma and bio, was discussing the key priorities for PDUFA VII, which will start in 2022, clearly real-world evidence came up as a big priority area. And in those discussions that have taken place now with FDA, without being able to disclose details, I can tell you that there has indeed been a lot of priority also from the FDA on advancing the field and potentially opening up pilot cases.

Secondly, I think the availability of larger, better curated data sets has increased the likelihood of unbiased or at least less biased results. Right. And I think that has brought the field forward, and clearly a company like Flatiron Health has contributed very much to the quality of the data sets that are available as real-world evidence databases. And I think also, therefore, things like large-sample studies, observational studies, and pragmatic clinical trials have begun to appear where real-world evidence is increasingly used. And then finally, the simple fact that some sponsors and some organizations have had successes with using real-world evidence, not just for post-marketing surveillance and safety registries, but to get, for instance, a new indication approved, or a change in dosing schedule, or a different patient population, at least using real-world evidence as supplementary information, and sometimes even as the key source of evidence. I think that has helped as well. And I think we need to all work together to make those examples more obvious and more numerous as well.

Shane Woods:

Yeah. That really resonates. As we get more experienced with regulatory use cases, we're seeing a much broader set of applications beyond the one that I think started things off, which was contextualizing single-arm trial data, and which actually may be more aligned with what review teams at the agency are willing to consider right now. So yeah, makes a lot of sense. I would be remiss to get through an hour discussion without mentioning COVID at some point, unfortunately. Math, we had talked a little bit about this before today, but curious, what impact do you think COVID-19 has had on regulators' willingness to maybe consider real-world evidence as part of a submission?

Mathias Hukkelhoven:

Yeah, I think COVID-19 really was a unique moment where regulators and industry needed to quickly change course, and the pandemic has significantly accelerated the timelines for development programs. We all know that for COVID-19 medical products, real-world data were often used for the first emergency use authorizations of medicinal products. At the same time, a couple of months later, once those first emergency use authorizations were given, we realized that quick and dirty cannot be the solution, right. If you use real-world data, you need to make sure again that they are curated sufficiently. Regulatory grade may be too high a standard, but we've also seen that with some of the first emergency use authorizations, the evidence later did not necessarily support them.

So I think real-world data were used and were successfully used, but we also came to realize that curated data is equally important. And then we all know that we were put into this situation with many ongoing clinical trials and with the inability to collect data from the sites or for monitors to visit the sites. We needed to fill in data gaps, and we have used real-world data to try to make up for some of those data gaps in our ongoing clinical trials. And I think we have seen from the health authorities, in various guidances, flexibility to allow that approach, of course, under certain conditions.

But I think the regulators have been permissive because they realize that with those data gaps, you need to collect additional data that can sometimes fill those gaps. And then we will also see increased safety monitoring, especially of these emergency use authorization products, and I think real-world data will play a big role there. And similarly, we know that the FDA has used the Sentinel Initiative to actually get data on some of the COVID-19 studies. So that's another example of successful use of real-world data and evidence.

Shane Woods:

Yeah. Just as a follow-up to that, when you talk about those evidence gaps that can't be filled, and that may be where the regulators have more willingness to consider real-world data and real-world evidence, do you see this as where clinical trials just aren't feasible? Is that the connection there? So that could be rare cohorts where you have to run a single-arm study, but it could also be challenges in just designing the trial, because maybe the standard of care has changed over time and it's just not even possible to do what you had intended to do.

Mathias Hukkelhoven:

Yeah. I think the strongest use cases are in rare diseases, where a randomized study may simply not be practical or feasible. And I think there's a lot of understanding for that from the agencies, even outside of the US. And I think it's a challenge for all of us to try to slowly expand that use. Right. And, I should say, where a standard-of-care arm may not be feasible anymore in a rigidly controlled clinical trial, that may be another environment. We have ourselves had an example where a study was done mostly in Asia, in East Asia, in a disease that may be slightly different and have different treatment options than in the West. And so we used real-world data actually to bridge between East and West, if you will. So that has been working as well. But I think that as we progress the field, we need to prove that our data are of high quality and that the endpoints that we gather are relevant. And then I think the regulators will follow along with us.

Shane Woods:

Yeah. The bridging study is a nice example of doing exactly that. Indranil, let's go to you. Similar question, but now thinking about the post-approval setting: what are some of the emerging trends you're seeing for HTAs and market access decision-makers when considering real-world evidence?

Indranil Bagchi:

Thank you, Shane, for the question, and really a pleasure to be here today, and apologies my video is not working. So hopefully you still have a picture. Yeah. Great question. And following up on what Maura and Math have already shared, I think in the post-approval setting, what we see is what we call evidentiary divergence, if you will. What do I mean by that? You often see regulatory approval based on single-arm studies, especially for oncology. So what I would argue is regulators are taking a progressive approach towards real-world evidence, albeit using it as contextual evidence. I think on the regulatory front, the piece that needs to happen is, how do we move it from being supportive evidence or contextual evidence to key evidence on which regulatory approvals can be based? But there's still been significant progress on the regulatory front.

On the other hand, I would at least argue progress has been limited. There has been some, and there's more coming through, but progress has been limited when it comes to payers and policymakers. Hence, evidentiary divergence, where regulatory hurdles continue to decrease while analytical hurdles increase. This should not be the case, because real-world data in essence can answer health technology assessment agencies' and payers' questions about how the products actually perform in the real world. That's really what they want to see. They want to take the efficacy and translate that into effectiveness: what is the clinical benefit? What's the humanistic benefit? And what are the cost implications? But what we need to do collectively amongst all stakeholders is to make sure we address the quality issues, the analytical hurdles, and the trust issues, which we'll hopefully get into later in the webinar today, to be able to get to a point where payers and HTAs use real-world evidence seamlessly with the regulators, to ensure as many patients around the world as possible get access to treatments.

Shane Woods:

Yeah. One follow-on to that. I think it's something like, on average, therapies are being approved in the US about six months prior to approvals in Europe. So there's an opportunity to maybe help inform the HTA decision-making going on in the EU with US data. In Flatiron's hands, we're seeing a general willingness of many HTAs to accept US data. Not all, but many, especially when there's a dearth of local meaningful data at scale. Has that also been your experience? Are you seeing some HTAs open to US data?

Indranil Bagchi:

Let me answer the question in parts, and let me cite a couple of data points. The first one is just [use] of real-world evidence in HTA submissions. And what I want to cite here is some data from CIRS, the Centre for Innovation in Regulatory Science. In one of the most recent studies that they've released, they looked at how much retrospective real-world evidence was part of submissions in the 2011 to 2015 timeframe, and then again in 2016 to 2020. So in five years, how have things evolved? And the scope of this study was seven countries: the major five in Europe, plus Australia and Canada. It's a limited sample, but a representative one. And if I can cite the data points: in most of the countries, the share of submissions that had actual real-world evidence increased over the five-year timeframe by approximately 10%, plus or minus. Which are the countries that accept the most? Australia, England, and France, which are in the mid to high twenties. So mid to high twenties percent of all HTA submissions in those three countries had some sort of real-world evidence built into the submission, whereas the rest, Canada, Germany, Italy, and Spain, had more like mid to high teens. But that gives you a bit of an understanding of how the seven countries are generally accepting real-world evidence.

Coming to the question of acceptance of US data: I don't know how many data points exist externally, but yes, you're absolutely right, Shane, that there's more willingness to accept US data, especially because, just temporally, the US launch often happens at least six months earlier in many cases because of FDA's progressive willingness to accept single-arm trials.

So yes, we are using US data more and more in submissions in ex-US countries. And there are good examples. Many of the Nordic countries, including Norway, are a good example. The UK, Italy, Canada, many parts of Europe would accept US data as part of your submission. I think the notable exceptions, if you will, are Germany and Austria. Germany has historically had a reservation about external data, and Germany and Austria, for example, are a couple that we have found, through our discussions and surveys, are still not willing to accept US data and would insist on generation of [local]. Hopefully that answers your question, Shane.

Shane Woods:

Yeah, really thoroughly. That was great. A lot of that detail I actually wasn't even aware of. So I'd like to shift us a little bit away from the broader concepts of where real-world evidence can be applied, to how your organizations are actually executing and operationalizing these data. It's actually seldom talked about externally, but I think it's a really critical area. So I sit across all of our biopharma partnerships, and so I see varied approaches depending on the organization, and our partners are generally trying different things to see what works, right. It's still early innings of real-world evidence, and so I think there's a lot of experimentation, which makes sense. Today I think we have a unique opportunity to learn about how each of our panelists has adapted their organization to incorporate real-world evidence into their broader evidence strategies, and to hear what's gone well, maybe what hasn't, and what some of the lessons learned are.

So let's start with one of the more overlooked challenges, in my opinion, which is simply operationalizing how to surface the opportunities where real-world evidence might be appropriate. Sounds really simple, but it's an area that I've seen some of our partners struggle with, at least early on. Just how do you surface those use cases? So Math, maybe we can start with you. Can you share how BMS has adapted here and is continuing to evolve its operating model to incorporate real-world data, I guess in a more systematic way, as an evidence generation option?

Mathias Hukkelhoven:

Yeah. Thanks, Shane. And that's a very important topic, right. Because if we don't generate use cases and real-world examples, then the field will not progress. So we are struggling with how best to organize this. But I think it's different between market access on one hand and drug development on the other hand. For market access and value and pricing, I think our health economics and outcomes research group is pretty knowledgeable and is pretty much at the forefront of doing a lot of studies with real-world data that help bring products forward through HTAs, as was mentioned by Indranil. And I think it's part of their DNA now. In the development organization, of which I'm part, it's still a different story. I think the challenge is really that we need to ensure that development teams, for instance, know where and when they can practically use real-world data, and how they are going to do that. It's not necessarily yet coming naturally to them.

So what we have done is a somewhat decentralized approach where the capabilities are within the functions, but we have now instilled two separate things. One is what we call the Real World Evidence Academy, which basically seeks to disseminate the latest information around real-world evidence, broadening BMS's understanding of real-world evidence and its applications in drug development, and also in market access and medical. In addition to that, we have a real-world research incubation team, and they are like a center of excellence that can advise development teams to think about possibilities, if possible early on in the development of an asset, where real-world data can be prospectively used. And that real-world research incubation team can essentially, at regular time points, visit development asset teams and chat with them about those possibilities. And I think together with the Real World Evidence Academy, that's probably a reasonable approach at the moment. But obviously, other organizations may have chosen different approaches, and I'm glad to hear about that later on.

Shane Woods:

Yeah, definitely. It almost feels like the challenge here is: do you go more center of excellence, right, and centralized, or do you distribute and almost embed those capabilities across the organization? And neither extreme is optimal, right? Because if you go with a COE, development teams may never see the real-world evidence thinking and the progress that the organization has made. So yeah, it makes a lot of sense. You've got to find that balance.

Okay. So let's take this maybe a step further and assume you've surfaced the book of potential projects, right? So we've gotten past the surfacing of opportunities to use real-world data. One of the challenges that we've seen at Flatiron, in pursuing our own science or working with academics, let's say, is that there are often too many ideas, too many questions, but not enough time to pursue them all. And it's not immediately obvious which are the best fit for real-world evidence. You have to do some homework. So Maura, maybe a question for you: I'd love to hear how you evaluate fit-for-use and the potential for success for a given RWE use case. What are some of the novel challenges for evaluation of real-world data in a development setting?

Maura Dickler:

Sure. Well, it goes back to really what you had first asked me, which is identifying gaps in the evidence. We have phase three trials that answer the big questions, but they often don't answer all of the questions. And also in oncology, there's an incredibly rapidly changing landscape. So your phase three may have read out three years ago, and now there's been a new trial that's been reported, and it may change the positioning of your drug. And now you need evidence within the new landscape to say that your drug still has efficacy, but now, let's say, after immunotherapy, instead of before immunotherapy was even in existence. So we use it in many ways. And I think the best way to prioritize is ultimately what our healthcare providers need. They often help us to identify the gaps and where they need evidence to help them give the best treatments or sequence their therapies.

A couple of examples of where we've successfully used it: we had a trial, MONARCH 1, which was really a single-arm phase two study, and we needed control arm data for regulators in the EU. And we used real-world data to generate that evidence and showed a benefit of our treatment in a more heavily pretreated patient population. But it was just straight phase two data, and unfortunately at that time, regulators were not willing to accept it. But I still think it was a great exercise that we were able to generate control arm data and show a benefit of treatment. Another way that we consider using it is that once a drug is in the marketplace, sometimes physicians use it differently. Somebody touched a little bit already on different schedules, and sometimes using RWD, you're able to create real-world evidence on that schedule, showing that it actually works, that maybe it is as efficacious and as tolerable, with no new safety signals.

And that might be a preferred way that physicians want to give it, and yet you're not going to repeat the original clinical trial, and RWE is a nice way to do that. So I think listening to our healthcare providers and having them help us identify the most important questions is one way that we can effectively prioritize that book of work. On the payer side, we always want to show that we're creating value. We want to develop drugs that ultimately bring value to patients. And our clinical trials often are focused on efficacy and safety, but not often on the value that a drug can bring. And that's where I think RWE can also be very helpful once the drug is in the community, in showing that we're really maybe reducing hospitalizations and/or improving quality of life in patients in the community.

Shane Woods:

Thanks, Maura. That's a great segue to my question for Indranil, too, which is a similar question, but in the market access setting: how do teams think about RWE's potential contribution to a positive outcome? What are some of the risk-benefit considerations that you make in the go/no-go decision to include real-world evidence in, say, a dossier submission to an HTA?

Indranil Bagchi:

Thank you, Shane, for the question. Yeah, I think for the post-approval or peri-approval phase, as we're thinking of market access, for us, incorporation of RWE into the overall package is a cross-functional, collaborative effort, and we work very closely with medical affairs in terms of developing, designing, and implementing a real-world evidence study. In the end, there are multiple different applications, as both Maura and Math alluded to through the webinar today. There are developmental considerations, there are safety considerations, commercial as well as access considerations. For us, the actual data platform sits in medical affairs, but it's a seamless collaboration, with our colleagues within the value and access team working closely with the stats people in the medical affairs team to essentially develop and implement this strategy. How do we work towards it? First of all, we take a look at the therapeutic area.

What's the unmet need in the therapeutic area, and what data is available? Because it won't be the same for all. In a therapeutic area of tremendous unmet need, where again you can get approval based on a single-arm phase two study, the need to have real-world evidence is not just paramount, it's essential. Whereas in some other areas, where you may be able to do a phase three trial, the real-world evidence can supplement, but probably won't be the key driver. So that decision, that distinction, needs to happen first. Next, when we are taking a look at the product strategy, we try to figure out, number one, when we should start the study and what evidence will be generated, because the optimum timing is also related to ongoing clinical trials, and real-world evidence should never impinge on clinical trials that are already underway.

And then last, what kind of real-world evidence or what real-world study is being done? That's also important. For example, if you're using a registry where data has already been collected and you want to go and do some analysis on it, you need to go through additional patient consent again. So you can only use a pre-existing registry so much; in many cases, setting up the study beforehand with the actual intent of collecting outcomes data is very important. Once we have that, we try to find the high-value data sets. We optimize data sharing as well as acquisition. And then of course, analytics is very important.

In the end, what are we trying to inform? We are trying to inform the disease and treatment areas. Can we do a smaller phase three and then supplement that with long-term follow-up in the real world? That's the most critical question that we're trying to answer. In addition, we also supplement the efficacy question with the effectiveness question: what does effectiveness look like in the real world? And last but not least, of course, what does it mean to patients? So as we are trying to develop patient-centric treatments, how does real-world data supplement that patient-centric approach and essentially help us develop treatments which provide better access?

Shane Woods:

That's really helpful. We have an active partnership with NICE in the UK, where we're collaborating on how these data can be appropriately used in a market access context. So it's an area that we're investing in. How important is the dialogue, and, I'm guessing, probably the early dialogue, with HTAs when you want to use real-world data or evidence in a dossier?

Indranil Bagchi:

We routinely engage with health technology assessment agencies and payers in terms of early scientific advice. So even before we finalize our clinical trial programs, we would get early scientific advice from agencies like NICE, IQWiG, CADTH, and so on and so forth. And while we're having that conversation, the discussion is not just about what the phase three program looks like, what the endpoint is, and what the length of the trial is. The discussion also includes, in the interest of meeting the patient unmet need, how we can supplement the phase three program with real-world evidence. And that's where I cited some of the earlier data in terms of willingness to accept data from other regions. Some countries are much more willing, whereas some others are not. But for us, we would gather this information as we were going into the clinical trial, through early scientific advice.

Shane Woods:

Yeah, makes sense. So I want us to take a step back and get your views on the future of RWE. This is, I guess, the fun part of the conversation. So I want us to think at the macro level about what needs to happen, both as an industry and with regulators and health authorities, to advance RWE. So a question for all of our panelists: what are some of the key uncertainties or foundational areas that we need to tackle to advance RWE? So maybe let's hear from Maura first, followed by Indranil, and then closing with Math.

Maura Dickler:

Well, one of the challenges for me on the development side, and one of the important questions that I would love to use RWD for, is efficacy. And I think that remains really uncertain. In clinical trials, there are very strict RECIST criteria that radiologists use to read scans to determine response rates, and then also progression-free survival, whether that's determined by scans or by the physician in the office. And I don't think that that's well captured in RWD. I think that radiologists in the community read imaging studies very differently than they generally do at institutions that are participating in clinical trials. Sometimes radiologists are even part of the budget, let's say, of a clinical trial, and they're actually paid for their time to review these scans, and people in the community aren't. And so the idea of applying RECIST to routine scans just wouldn't happen. And I would love to see us be able to leverage RWD for efficacy, whether it's response or progression-free survival. It's obviously better for overall survival, because you're either alive or you're dead, but many of the other endpoints are much fuzzier, and that makes assessing efficacy difficult.

Shane Woods:

Yeah. And it's an area that Flatiron is heavily investing in right now, specifically around radiographic imaging: both how we augment the core data that we have with images, but then also how we use those images to help validate and actually develop new endpoints. So it's definitely an area of focus; I really agree with everything you said. Indranil, we'll go to you next.

Indranil Bagchi:

Yeah. Thank you, Shane. So I'll touch on a few things. Your question is, what are some of the key uncertainties or foundational elements? How I would tackle this is: what's getting in the way of us using more real-world evidence, so that we can remove those barriers, right? That's essentially what you're trying to get to. Number one is quality. I mean, we need to ensure quality data sources, and there are many today, and of course Flatiron is one, but we need to increase the number of quality data sources that not just companies, but also payers, policymakers, and physician groups can tap into, to be able to generate data that can rival or supplement or complement the clinical trial data that's generated. That's number one. Number two, capabilities. Capabilities are being built, but still a lot more capabilities need to be built, including statistical and analytical capabilities, because with all the data that we have today, we're probably only at the tip of the iceberg in terms of what we are actually analyzing and using.

So how can we scale up those capabilities to be able to really visualize the data, and then put an analysis plan together to answer the right question at the right time for the right patient? And then, in terms of infrastructure hurdles, the big challenge is, of course, protocol approval and formal data transfer. What does it mean for anonymized data, and so on and so forth? And I think there are some great examples, most recently due to COVID, where we were able to accelerate many of these questions. There are great examples with COVID vaccines, where in specific countries, the UK and US to name a couple, great steps were taken in terms of prioritizing access to the data, to be able to provide fast protocol approval and the use of the database for COVID treatments.

I would hope that there are learnings we take away from this that become a permanent feature going forward. And then the last piece I'll address: in the end, all of the efforts that we make on the science and technology front will not go anywhere unless we have mutual trust between regulators, HTAs, manufacturers, and patients in terms of what real-world evidence means and how it can essentially provide the proof of efficacy and effectiveness we're looking for. More work needs to be done in terms of building trust with many different stakeholders.

Mathias Hukkelhoven:

I totally agree with Maura and Indranil. A lot of things were mentioned. The quality of the data sets and data sources, having high-quality, curated data sets, is going to be crucial. We need the right methodology for the right question, right. Fit-for-purpose, as the FDA says in its guidance. We need to show, as Indranil says, that the quality of a real-world research proposal or study is at least as good as a clinical trial, and it may actually give more information. And I know that Flatiron has done that in a number of cases, where you want to reproduce in the real-world setting the results that you got earlier from a controlled clinical trial. And you may even argue it may not be exactly the same, because it is a different setting, right?

It is the real-world setting versus the very controlled one, something that Maura talked about in the beginning. But I think it's very important that confidence and that trust in the data gets established. And I think it's still, to my mind, a binary world. There are still a lot of statisticians that fundamentally don't believe that you can get around the bias question, and then there are statisticians that do believe that with the appropriate safeguards, you can reduce bias. And I think in the health authorities, we see the same. And it goes back sometimes to the individual clinical reviewer's viewpoint. Some of them believe that it should only be used when the randomized study can absolutely not be done, whereas others think that there may be other uses, especially when issues like quality and appropriate endpoints are addressed. So I think we all need to be mindful that we are responsible for this field, because it is still an embryonic field, and by providing real-world evidence examples, I think we can further the trust and the confidence in this field.

Shane Woods:

Yeah. Thanks for that. Thanks to all of our panelists for that. So to wrap up before we dive into Q&A, we thought it would be interesting to poll the audience to see which applications you're most looking forward to using RWE in over the next year. So you should see a poll pop up on your screen. Okay, I got one on my screen. And the prompt here: where in the drug development life cycle are you most excited about applying RWE this year? And we have a few options: discovery/translational, clinical trial design and operations, regulatory, market access, and post-approval/commercial. So let's give folks another 10 seconds here before we close our poll. All right. Let's see what we've got. Interesting. So clinical trial design and operations, Maura will be happy, is the winner there, followed by regulatory and then market access applications. So interesting. Maybe that says a lot about our audience as well, who's listening in today. But thanks to the audience for submitting those. Okay. So let's-

Mathias Hukkelhoven:

That difference is within the limits of statistical error.

Shane Woods:

Okay. Yeah. Good call out. So we have about 10 minutes left, so let's shift to the Q&A portion. Thanks for all the great questions; I'm seeing them come in here. I'm going to start with a question for Maura. It reads: I'm very interested in RWE generation in populations underrepresented in clinical trial populations. So this sounds similar to what you were talking about earlier, Maura. Given the under-representation of these groups in trial datasets, how can we work towards ensuring that such data is collected, so that any differences, such as in toxicity and efficacy, are understood, and that there's equity in access and benefit to modern medicines? Great question. Hard question, but great question.

Maura Dickler:

Yeah, I saw that already, and I was thinking about it. So there are really two parts to this. I think there's our need to improve our reach in standard phase three clinical trials, studies with control arms. And I think that getting trials to sites that are more embedded in the community is very important. I think engaging with different populations is too; I think historically there's been a lack of trust for some populations, and they have not wanted to participate in clinical trials. So we really need to bridge that gap. And then also engaging the physicians that care for those communities, so that they're investigators in those trials. I think that might improve accrual for those underrepresented populations. And some of those same rules, some of those same suggestions, apply to collecting real-world datasets.

I think that we need to engage physicians to be a part of the Flatiron network, let's say, who are in those more rural and/or city-based communities where we don't often have the best reach. Maybe we need to incentivize their participation. I think it's really very, very important. It does take time to enter data into the electronic medical record, and I think that we need to think of creative ways to lessen the burden and ultimately improve the data that we collect that way as well.

Shane Woods:

Yeah, it resonates. Flatiron has two parts to the company. Obviously, Maura, the part that you see more of is on the research side, but we also have an EHR, a healthcare provider side. And that's one of the areas that we're investing in heavily: making the EHR easier to enter and structure more data in at the point of care.

Next question, maybe we can go to Math for this one. How do you assess the impact of a use case that used real-world evidence? How do you socialize impact within and outside of your organization?

Mathias Hukkelhoven:

Yeah, thanks for the question. I think that's very important, right? To show that certain use cases have been successful, but at the same time, the unsuccessful ones also need to be socialized, and if possible, the reasons why they may have failed, at least at this moment. I think the concept of the Real World Evidence Academy that I mentioned, where we collect those examples on a specific website and go to audiences like development teams and so forth to share them, goes a long way. And then the real-world evidence research incubation team that can travel, if you will, from asset team to asset team to make them aware of those possibilities: that's going to be very helpful as well.

And then I think if you take it outside of your own organization, we collectively as an industry, have a strong interest, I think in making sure that the world understands those examples. So if we can share them without obviously disclosing potentially proprietary information, but disclose the principles as much as possible. I think that goes a long way. And I think the FDA has also encouraged sponsors to do that.

Shane Woods:

Thanks, Math. I'm going to Indranil for a second. Who decides what the evidence gaps are? How do you reach consensus? Something like a multi-stakeholder Delphi panel?

Indranil Bagchi:

Yeah, the answer is already sort of in the question, right? So, who decides on what the evidence gaps are? I think, as I referred to before, in many cases most of the industry these days engages in early scientific advice, not just from a regulatory perspective, but also from a payer and HTA perspective, as we are approaching phase three. So what would be the evidence gaps of a prospective phase three program? That's something we would sit down and discuss, either jointly with, let's say, FDA and different insurance plans in the room, or in many cases in Europe with EMA and maybe EUnetHTA, or some of the HTA agencies like NICE and IQWiG, either jointly or in parallel.

We will get the scientific advice and essentially take a look at the clinical trial program and say, "Okay, with our clinical trial program, these are the evidence needs that we can meet. And these are the other ones which will either be generated through a phase four post-approval study, or in many cases supplemented with real-world evidence." So it's a mix of internal and external stakeholders coming together and essentially agreeing on what's the fit-for-purpose data set that's needed for the exact indication that's being discussed.

Shane Woods:

Thanks for that, Indranil. Okay, we have about five minutes left. There's a great question here which I think will help us wrap up the webinar: if you had a blank check to invest in new capabilities for real-world data and evidence, where would you put that money? Let's hear from Math, then maybe Maura, then Indranil. Keep it to a minute or so, and we can find out where you'd place your bets, I guess.

Mathias Hukkelhoven:

Yeah, let me start then. As I mentioned, having good data sources that are high quality is important, and one of the examples is obviously Flatiron, but there are many more providers, and those data sets do not come cheap, right? You know that, Shane. So I think a big part of that check would go to getting access to all those databases. But if something is left, I would say internally, we need to spend it on resources for how we organize real-world data and evidence generation in our development teams, by education, by pointing to success stories, and so forth. Ten or 15 years ago, we all wanted to include Japan in our global development plans. In the same way, we need, as an organization, to expect development teams to incorporate real-world evidence plans in their development plans. And that takes organizational energy and discipline. So I would certainly spend some of that money in that area.

Maura Dickler:

Maybe to build on that. I think that we spoke already a little bit about how healthcare providers are working very hard these days and are under tremendous pressure, yet really, the elements that they put into the database are so important. And in some ways, I think, and I know you mentioned Flatiron is already looking at this, you need people to be inputting the data that we really need, and they need to be able to take the time and the care, right, so that we're getting good output, good data, from all of the patients that are cared for in the community. And the physician's model right now is to see more patients, and it doesn't leave a lot of time for that.

In some ways we're so dependent upon what they put into the system, yet they're not compensated for that. Somehow I feel like we need a better fix there. And then again, I spoke about how efficacy could be so important to fill gaps as treatment landscapes change, as drugs are used after treatments that weren't available at the time of clinical trials, and to be able to get at whether your drug works after new drugs come to market and help doctors adequately sequence their treatments. First line, second line, third line, that's all based on efficacy endpoints. And so that's where I would find value in putting some dollars.

Indranil Bagchi:

I'll go quickly. So again, I highlighted some of the barriers: data, infrastructure, and then trust. For me, trust is the biggest issue. So I think, if I had a blank check, I would fund research to reinforce trust among the different stakeholders that generate as well as use real-world evidence. What kind of research can we fund, and can that generate essentially mutually trusted and agreed-upon standards for data quality? What does a GCP for real-world evidence look like? So, while Maura and Math have already alluded to the data quality piece, I would focus on the trust piece and see if we can build that. Back to you, Shane.

Shane Woods:

That's great. Thanks, everybody. Well, we're at the top of the hour; the hour went pretty quickly. I want to thank again all of our panelists, as well as all of our attendees, for joining today. If you have any outstanding questions following the session, feel free to reach out to your typical point of contact at Flatiron or email us at rwe@flatiron.com. A friendly reminder to please take the survey after closing, if you have a second, to help us improve future webinars. Have a great rest of your day. Stay healthy and stay safe. Thanks again.