This is an example of a completeness table, showing what we need to know before we analyze the cohort data. Displayed here is the proportion of patients with analyzable results from structured data. Lab values are normalized and harmonized, and non-analyzable results, such as pending, are removed. If we have a question about kidney dysfunction, we're actually in a pretty good place. We have a lot of data: 99% completeness, which means virtually everybody in this cohort had a creatinine within 30 days of starting immunotherapy. But what if we have a question about coagulation status based on INR? We need to think a little more carefully, because we only have 25% completeness. Oh dear, is that a problem with our data pipeline? Or maybe it's actually expected, because in clinical practice INR isn't a common lab to get before starting chemotherapy. And this extends beyond labs. ECOG, from a structured-data point of view, may not have great completeness either. But that may not be because physicians aren't evaluating performance status; it may just be that the workflow doesn't lend itself well to recording it as a structured piece of data.
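As an aside, the completeness calculation described here can be sketched in a few lines of Python. The field names, statuses, and 30-day window below are hypothetical illustrations, not Flatiron's actual pipeline:

```python
from datetime import date

# Hypothetical sketch: completeness = share of cohort patients with an
# analyzable result for a given lab within 30 days before starting therapy.
NON_ANALYZABLE = {"pending", "cancelled", "hemolyzed"}  # assumed status values

def completeness(patients, lab_name, window_days=30):
    """patients: list of dicts with 'start' (date therapy began) and 'labs'
    (list of dicts with 'name', 'date', 'status')."""
    n_with_result = 0
    for p in patients:
        for lab in p["labs"]:
            days_before = (p["start"] - lab["date"]).days
            if (lab["name"] == lab_name
                    and lab["status"] not in NON_ANALYZABLE
                    and 0 <= days_before <= window_days):
                n_with_result += 1
                break  # count each patient at most once
    return n_with_result / len(patients)

# Tiny made-up cohort for illustration
cohort = [
    {"start": date(2018, 6, 1), "labs": [
        {"name": "creatinine", "date": date(2018, 5, 20), "status": "final"},
        {"name": "INR", "date": date(2018, 5, 20), "status": "pending"},
    ]},
    {"start": date(2018, 7, 1), "labs": [
        {"name": "creatinine", "date": date(2018, 6, 15), "status": "final"},
    ]},
]
print(completeness(cohort, "creatinine"))  # 1.0 (both patients have one)
print(completeness(cohort, "INR"))         # 0.0 (only result is pending)
```

The point of the sketch is that the same code, run over different labs, surfaces exactly the creatinine-versus-INR contrast described in the talk.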
Another really important quality metric is clear provenance. In the art world, provenance is used to demonstrate authenticity by tracing the ownership and location of a work by Renoir or Jackson Pollock. In the RWE world, each data point is also unique and precious, and we need to know where it came from, who touched it, and why. So let's take a closer look at Joan's clinical journey. Here is Joan's structured data, including the lab report we saw earlier. Now we add in data abstracted from unstructured documents: biomarkers, such as PD-L1 status; adverse events, like colitis, that impact treatment; and endpoints such as progression. Maybe we link in an external genomics database, or use a derived variable to fill in mortality. In the future we can add in patient-reported outcomes or data from wearables. Now multiply that data by the thousands of patients in a cohort and things get complicated. Or multiply it by the 50 patients in your small cohort, where each patient is going to be under very close scrutiny. This is a place where technology can help us.
To have maximum confidence in RWE, each data point has to have a clear, traceable, and verifiable provenance, no matter where it comes from. This approach to RWE provenance and audit trails is borrowed directly from clinical trial standards. Here in the black box we know that Sue Smith abstracts real-world progression. We know which documents she looks at, the radiology report and the clinician note, and we know when she does it. We also know how well Sue did at her last testing for response assessment. If double abstraction during quality monitoring identifies a discrepancy, we can look back and try to figure out why. Does Sue need more training? Or maybe it's just a confusing case, an art-of-medicine kind of situation. Or maybe there have been changes in clinical practice, and we need to update our guidelines for abstraction and our policies and procedures, with the important caveat that you need versioning. So the key is making this clear to external stakeholders. And note that we have a black box here in the middle: that's exactly what we're trying to open up, to show the details of how we got to the RWE data points.
So, provenance answers the who, what, why, when, and how of each data point. To make all this work, we need systematic monitoring. The goal is not perfection, but having a high enough bar that we can make good regulatory decisions that benefit patients and limit harm. Quality monitoring is a real-time assessment of these factors, and it takes time, resources, people, and technology. The elements are risk-based validation; defining and enhancing best practices; quality management and review; and, most importantly, measuring and publicly reporting our quality. This quality report needs to accompany each and every version of the dataset. The report allows for transparency, identifies bias, and can help us optimize the analysis so we get to the right answer. For example, if we have a lot of missing mortality data, that can wreak havoc on our overall survival estimates, and we need to know why. In summary, quality monitoring's goal is to generate clinically meaningful and actionable regulatory-grade RWE.
We're engaged with submitting RWE across a broad spectrum of diseases, use cases, and regulators. Flatiron RWE has been used as both primary evidence and supportive evidence. We have use cases for label expansion, initial therapy approvals, trial design, and others. And we still have a lot to learn, all of us. So now I'm delighted to welcome Amy Abernethy from Flatiron, and do I see Sean Khozin in the audience? There he is. Please come up to the stage for our moderated discussion. Amy you know well: she's CMO and CSO of Flatiron. And Sean Khozin is the Associate Director of FDA's Oncology Center of Excellence and the founding director of INFORMED, an incubator for driving innovations in agile technology, digital health, and data science to advance public health. Is that right?
Sean Khozin: Yes.
Rebecca Miksad: So we've asked Amy and Sean to talk about their perspectives on what we need to get right as RWE moves into the regulatory decision-making space. We've reserved 15 minutes at the end of this session for an open Q&A so please hold your questions until then. So Amy and Sean, thinking about the checklist, what do you think is most important for us to think about in terms of getting to regulatory grade RWE?
Amy Abernethy: Well I'm going to let Sean start, and then I'll fill in a little after.
Sean Khozin: That sounds great, it's great to be here. Well, all the items on the checklist are obviously critical, and it's hard to pick which are the most important; it's like picking a favorite musician. Obviously data quality and provenance are critical in generating regulatory-grade data, and the ability to have audit trails for source document verification, at the various touch points along the journey of the data as it's being generated, is very critical. Completeness is very important. As I was standing in the back listening to your presentation, Dr. Pazdur, the Director of the Oncology Center of Excellence, was there, so I asked him which one of the items sticks out to him, and he mentioned generalizability, which is a critical concept when it comes to evaluating real-world data. Because as we've heard today and before, only about five percent of all oncology patients are in traditional clinical trials, and that can sometimes compromise the generalizability of the results of traditional trials. So that's an important concept too.
Rebecca Miksad: Amy, what would you pick out as the most important?
Amy Abernethy: Well, I really like Eminem as my musician, so sorry. There are two things on the checklist that don't necessarily get as much attention but that I think are really important, and we're seeing this in our analyses. One is longitudinality, essentially longitudinal follow-up: making sure you've got enough follow-up to see the outcomes of interest. It sounds obvious, but it's often not something that people pay enough attention to. And the other is that the standard of care is changing so quickly that the dataset has to reflect the standard of care during the period that matters, so data recency and reflection of contemporary standard of care. Those are the two.
Rebecca Miksad: And Amy, when you were talking you mentioned that we need to have the right level of oversight and quality, how do we get there? How do we benchmark what is good enough for regulatory grade RWE?
Amy Abernethy: So I think that this is ultimately something that we all have to practice together and start demonstrating: here is a question, here's what the answer looks like, and then have it inspected from all sides so people can say, I believe and have enough confidence in that to use it in the care of my patients. I think one of the issues is that good enough is one of those things you recognize when you see it, but it also means that it needs to pass the sniff test. Having a regulatory body in the US or ex-US accept something actually helps us understand what that indeed looks like, but it also needs to make sense to the clinician in the office as well as to journal editors and publishers. Within the context of good enough, I'm also curious about Sean's thoughts, both on good enough and on how you need to think about the validity of the data behind it.
Sean Khozin: That's a great question, and a very complicated one, because what is good enough when it comes to evaluating clinical trial data? We have the Office of Scientific Investigations that does site inspections, and we always see protocol deviations and anomalies in the data. What we look for are certain red flags, like fraud or mismanagement of the data such that the data loses its integrity. And for the most part that's a subjective assessment that we make in collaboration with the Office of Scientific Investigations. So I think we can apply the same principles to evaluating real-world data: there's no such thing as perfection, as perfect data, and we can use the lessons learned from protocol deviations and data integrity anomalies in traditional clinical trials to try to raise our comfort level to the same benchmark when we evaluate real-world data. That internal knowledge is there at the FDA; it's hard to articulate it and spell it out concretely, but there's a vehicle for assessing data. At the end of the day it's really all about data quality and provenance.
Rebecca Miksad: So when you say evaluate the data, what should life science companies think about in terms of when the FDA comes knocking at the door? How are they going to get evaluated?
Sean Khozin: Right, I believe there are different types of evaluations. When the data comes to the FDA, the evaluation is analytical. The assumption is made that the data has integrity after we've conducted our site inspections. So when the data is in the hands of the reviewers, it's already been vetted, and there's a certain degree of comfort level with that. What's important is what happens as the data is being generated and extracted. Those audit trails are critical, because the way the FDA, at least, evaluates data integrity is by examining data provenance and the audit trails that are left behind. Completeness is very important in that sense, and having these clear demarcations and audit trails should be a critical part of extracting and generating real-world data.
Rebecca Miksad: So, Amy, as somebody who might be presenting to the FDA or have our data presented, how do you think about building the capabilities and infrastructure to be able to demonstrate the quality?
Amy Abernethy: So, I'm going to answer this from both the Flatiron perspective and a life sciences company perspective. On the Flatiron side, we need to be demanding of ourselves that we're building the infrastructure and capacity; for example, as Zach said, we have to make sure we build the technology to do what we, and others, believe is important. On the life sciences company side, I think it's actually important that the company has team members who can think about data quality, who can look at, for example, completeness and validation reports and say, I understand what this report is telling me and how that's going to translate to my analytic plan. I think it's really important that the team prespecifies analytic plans and that this is built into the overall rubric. And then, finally, going forward, companies are going to need to continue to build out teams of members who can speak the language of data science and be embedded in HEOR, discovery, and R&D, in multiple places. I suspect, though, that at the FDA you are also thinking about what kinds of infrastructure and capacity you need to build out in the landscape of RWE. What does that look like for you?
Sean Khozin: We're dealing with a lot of the challenges other organizations are dealing with, and it's critical to incorporate data science and data scientists into clinical development teams, for industry and for the FDA. We have been working very hard to integrate data scientists and folks who are experts in advanced analytics into the review teams, and have them work shoulder to shoulder. You know, the first wave of attempting to do that at the FDA, and other places, was to incubate innovation teams. And, Amy, that has advantages and disadvantages: obviously you don't want these teams to be siloed and stuck in their own echo chambers without the ability to impact clinical development and how these protocols are being written. So ultimately, it's about integration and collaboration, because drug development with data science is a truly multi-disciplinary science, and I think we need to become more comfortable with diffusing some of the traditional boundaries around these disciplines.
Rebecca Miksad: Thank you so much, we're about to run out of time for this moderated discussion, so I wanted to ask each of you one thing that you recommend the audience pay particular attention to throughout the summit.
Sean Khozin: Well, I think the fact that real-world data really speaks to the experience of real-world patients. I go back to what I mentioned earlier: we're not capturing the experience of a large group of patients in traditional clinical trials. The mission of the FDA is to promote and protect public health, and that really means we'd like to understand how patients are doing on the drugs that we regulate, both during the drug development process and in the post-market setting. And that's the real world.
Rebecca Miksad: And Amy, what's your recommendation for the audience?
Amy Abernethy: I would look at what everyone else is doing and ask, how can I take these examples, and what does this tell me about what I can do within my own environment?
Rebecca Miksad: Thank you very much, both of you, we'll hear more from Amy and Sean later in this session.
Sean Khozin: Thanks.
Amy Abernethy: Thanks.
Rebecca Miksad: Thanks very much. I really appreciate, Sean and Amy, how you've been able to think about those checklist items and start bringing them to life. Now we have two featured presentations. The first is from Michael Kelsh of Amgen, who will discuss a successful RWE regulatory use case. Michael Kelsh is the current Executive Director of Amgen's Center for Observational Research and Oncologic Therapeutics. Dr. Kelsh oversees epidemiologic research evaluating the development, efficacy, and safety of Amgen's therapeutics. Welcome, Michael.
Michael Kelsh: Okay, well, thank you, Rebecca, for the introduction, and thank you to Flatiron for inviting me to talk about this study that we did at Amgen for our BLINCYTO product for relapsed/refractory adult acute lymphoblastic leukemia (ALL). I'm going to dive deeper into some of the themes that Amy and Zach outlined earlier about how we can use the data and what we need to think about with real-world comparators, and describe how we addressed those questions in our study.
So as we were doing this study, we saw this article come out, and we were very interested in it because of its focus on oncology drugs and the use of non-randomized trials, which was exactly what we were doing. It laid out a nice starting point for thinking about when it might be applicable to use real-world evidence. The authors point out high unmet medical need, a lack of available treatments for these patients, that the patients are well characterized and identifiable, and expected high response rates. In this kind of scenario, with rare diseases, this really offers an opportunity to consider external controls and external comparator groups.
And so, generalizing this a little further, if we take the considerations of medical need, disease urgency, patient population size, and drug effect size, you can see where the probability of using real-world data is probably higher. When there is a high unmet need, we want to get data to the regulators as quickly as we can, and when the disease is serious and life-threatening, again, we want to move the process along faster and use available data sets. And as discussed earlier about small data sets and rare diseases, it's really hard to get all the data in a trial; it can take a long time to enroll these patients, so the quicker we can bring data to view and discuss, the better.
And then, just in real-world evidence parlance, it's quite a bit easier to identify a large effect than a subtle one. So you're looking at situations where you think your therapeutic has a big impact on patients. And I'd say this list is not complete; there are clearly ethical reasons why you might not be able to do a trial, for example when patients in the control group would be exposed to very invasive screening activities they don't get any benefit from, after having to go through this long process. So these were all the factors we think pointed to the use of historical data in our case. And leukemia met those criteria: it's a rare disease, there's high unmet need in relapsed/refractory leukemia, and there weren't a lot of effective treatments available at the time for this sub-cohort of the population.
Unfortunately, we didn't have a dataset available in a claims database or an EHR database that really identified leukemia. So at that first point, can you identify the patients and follow them through, we didn't have that type of data set, for a variety of reasons: most of this treatment is done in specialty centers and is not incorporated into all these databases. So we went out and collected the data from the original investigators. We contacted eight different groups in the European Union, and we worked with three different referral centers in the U.S. They had essentially established their own internal mini-registries that we could tap into. So the data were already there historically; they had collected it especially because they knew this was a rare population, and they had a lot of focus on it. And on the point about cherry picking, which I picked up on from Amy's talk: when we asked them for the data, we didn't want them to give us their best-result patients or their worst-result patients, we wanted really all relapsed/refractory patients. So at the start of this project we said, for all patients who were relapsed/refractory over the time period, which started in 1990 in this case, we'd like you to forward those to us. We compiled the data, harmonized it by data definitions, and we had the original primary data to do this. If we had had to rely on published literature, or try to do a meta-analysis, it wasn't all there; we felt we had to go get the original data sources.
So we created a much larger data set than we actually needed to compare to the blinatumomab patients. The other important aspect is that in those databases our endpoint, complete response, was often not adequately recorded. Again, the longitudinal follow-up that was referred to, we didn't see that in claims data or other data sources, so we had to go to the primary source. Then we subset that, matched through the inclusion criteria, picked the sub-cohort of patients that were closest to our trial, and applied a number of different analytical strategies to compare the historical cohort to the blinatumomab patients.
The key issue here in the use of historical controls, and you'll see a lot of discussion around this, is comparability. How comparable are your historical data to your trial data, in terms of selection, in terms of measurement of outcomes, in terms of the standard of care patients may have received? That was the first big overriding challenge we had to deal with. Then, as I mentioned, getting access to the data was key: you have to have a patient-level data set you can go to, so we created one, and the different sites had the data we needed. There were commonly used outcome definitions, exposures, and covariates that we were able to capture and define in a standard way. And in epidemiologic terms, the big issues we had to deal with were comparability, selection bias, confounding bias, and immortal time bias, so we had to come up with both design and analytical strategies to address these biases in a reasonable way. It's never going to be perfect, but it has to be reasonable enough to seem credible to the clinicians, to the regulators, and to ourselves.
A big concern, and it was alluded to earlier, was whether the standard of care has changed across time. We had a lot of discussion about how constant the standard of care had been during the period, because there was a trade-off: the further back in time we go, the larger the sample size and the more statistically robust the analysis, but has the standard of care changed over that time? Is the standard of care different across regions? So when we have these historical comparators, are we able to combine them in a reasonable way, without too much heterogeneity, and could that affect our results?
So, here is a simple weighted analysis of complete remissions. We have three factors: we looked at their treatment, whether first or later salvage; whether they had a transplant; and age group. You see across these groups there are different CR rates. You get higher rates in those who are only in first salvage, like line two here, a 44% rate, and you get lower rates in people who are older and in a later salvage. We needed to adjust for all of this, because our patient population didn't look exactly like the historical comparator, and it was different enough that we wanted to adjust. So we took the strata-specific rates from the historical data, with the only input from the trial being the weights for the distribution of covariates, and then we get a standardized historical estimate that we can compare to our clinical trial estimate at the bottom there.
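The strata-weighted (directly standardized) estimate described here can be sketched in a few lines. These strata labels, CR rates, and trial weights are invented for illustration (only the 44% first-salvage rate echoes a number from the talk); the published analysis used the actual trial covariate distribution:

```python
# Direct standardization: apply historical CR rates per stratum,
# weighted by the trial cohort's covariate distribution.
strata = [
    # (stratum label, historical CR rate, share of trial patients in stratum)
    ("first salvage, <35y, no transplant",  0.44, 0.25),
    ("first salvage, >=35y, no transplant", 0.30, 0.25),
    ("later salvage, <35y",                 0.20, 0.30),
    ("later salvage, >=35y",                0.12, 0.20),
]

# The trial-derived weights must cover the whole cohort
assert abs(sum(w for _, _, w in strata) - 1.0) < 1e-9

weighted_cr = sum(rate * w for _, rate, w in strata)
print(f"standardized historical CR rate: {weighted_cr:.3f}")  # 0.269
```

The design choice matches what the speaker emphasizes: the only information borrowed from the trial is the covariate distribution, so the standardized rate stays a purely historical estimate, just re-weighted to look like the trial population.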
Okay, here we go. Taking it further, applying this to the outcome of mortality, survival, I'm showing you here the propensity score analysis. We did all these types of analyses for both outcomes, but just to give a flavor of the data, this was looking at the survival of blinatumomab patients versus our historical control. We matched on more factors in the propensity score weighting than we could in the simple approach. The simple approach, we felt, was transparent and easy to understand, and people could see what was happening in the data, whereas here there are more statistical methods and more mathematics applied. But we adjusted for eight or nine different factors and could see that the blinatumomab patients were experiencing lower mortality, and regardless of whether we used stabilized weights or examined the distributions of the propensity score, the results were pretty robust and consistent across the different categories.
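A minimal sketch of the stabilized propensity-score weighting idea mentioned here. The scores and outcomes below are invented; in a real analysis the scores would be estimated from the eight or nine covariates, for example with logistic regression:

```python
# Each subject: treated flag (1 = blinatumomab, 0 = historical control),
# a precomputed propensity score e(x), and an outcome (1 = died).
subjects = [
    (1, 0.8, 0), (1, 0.7, 0), (1, 0.6, 1), (1, 0.5, 0),
    (0, 0.4, 1), (0, 0.3, 1), (0, 0.3, 0), (0, 0.2, 1),
]

# Marginal probability of treatment, used to stabilize the weights
p_treated = sum(t for t, _, _ in subjects) / len(subjects)

def stabilized_weight(treated, score):
    # Stabilized IPTW: marginal treatment probability divided by the
    # probability of the treatment the subject actually received.
    return p_treated / score if treated else (1 - p_treated) / (1 - score)

def weighted_rate(group):
    # Weighted mortality within one arm
    rows = [(stabilized_weight(t, e), y) for t, e, y in subjects if t == group]
    return sum(w * y for w, y in rows) / sum(w for w, _ in rows)

print(f"weighted mortality, blinatumomab: {weighted_rate(1):.3f}")
print(f"weighted mortality, historical:   {weighted_rate(0):.3f}")
```

In this toy data the weighted mortality comes out lower in the treated arm, mirroring the direction of the result described in the talk; the weighting re-balances the two arms on the covariates captured by the propensity score.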
So here we were able to publish both of our results from the historical comparator. The first publication, on the right there, covered the broader data set, so everyone could look at our larger set of patients, and the second focused primarily on the comparison with the blinatumomab patients. A bit later, actually less than a year or so later, our phase three trial read out and was published, so it was interesting to compare how well the historical comparator matched the phase three data. I'm happy to say that on the CR rates, between the control group in the trial on the right and the historical comparator on the left, what we observed in our first study, in the historical data, was very similar to what we found in the control group in the trial. And finally, looking at survival, a very similar finding with the phase three data and our historical controls. So that left us feeling like we maybe got it right here. Since I'm running out of time, in sum: we were able to address these issues, we took these data to the FDA, we had pre-specified the analyses, and it led to a faster regulatory approval. And I can't talk about this study without recognizing the many study collaborators; it was really a collaborative effort. Working with Dr. Gökbuget in Germany, we were able to identify many different data sources. Thank you very much.
Rebecca Miksad: Thank you so much, Michael. What I really appreciated about your talk was the demonstration that achieving completeness meant you had to go back and re-abstract the data from the primary source. Sometimes it's not good enough to have what the institution had made for their mini-registry; you actually had to go do the hard work of getting that data.
So next I'm very excited to introduce Kathy Tsokas, who will be talking about the life sciences company perspective on real-world evidence in regulatory data. Kathy is the Janssen Regulatory Head of Regenerative Medicine and Advanced Therapeutics (RMAT) and the Johnson & Johnson Director of the RMAT Network, where she ensures global regulatory policy strategies contribute to and support the development of RMAT products across several therapeutic areas. Thank you so much, Kathy.
Kathy Tsokas: Okay, great, well, it's really good to be here, thank you for inviting me. My talk will be a little bit different because I'm coming from the regulatory strategy perspective, so I really need to work closely with our cross-functional and cross-disciplinary colleagues to make sure that we're addressing the regulatory strategy in regard to real-world evidence. I do want to acknowledge my colleague Rebecca Lipsitz, who is our policy lead for real-world evidence, because she helped me prepare these slides, and my colleague Barbara Colp, who is in our oncology therapeutic area and is here with me today.
From a Janssen and J&J perspective, when we think about regulatory strategy and how we can utilize real-world evidence in regulatory decision making, our mission is to effectively support innovation and patient access via the use of real-world evidence. And, as I believe Amy said earlier, our fundamental need and desire to do this is to make sure that we can bring these products to patients as soon as possible. So how can we utilize real-world evidence to do that?
I repurposed this slide; I usually use it to say that global regulatory affairs, or GRA, is involved in the end-to-end process. But really, when we think about it, it's regulatory together with the other key functions: our compound development teams and our project teams are multi-disciplinary, they have to be, in regard to our strategy. We've heard earlier today, and throughout the rest of the Flatiron summit there will be multiple examples of where real-world evidence has been utilized: label expansion, safety, market access questions. But the intent, and I think from a future perspective, Amy talked about 2023, is: how can we utilize real-world evidence for opportunities throughout the development life cycle? We're not there yet at the earlier stages, but I think we're starting to get there. At Janssen and J&J, we actually do consider how we can utilize real-world evidence to answer certain questions, and we do have early discovery scientists on the team with our late-stage colleagues.
So again, it's there, and I'm sure other companies do the same thing, but it's an evolving process and we can't do it alone; it has to be multidisciplinary. So what are our key objectives when we think about real-world evidence from a regulatory perspective? We need to make sure that real-world evidence is part of the regulatory strategy. It's not the only piece, but it has to be considered, and that's been our big push: we want to consider it from the early stage through to the late stage. We need to gain regulatory acceptance of our real-world evidence methodology. We need to enable real-world evidence as an alternative for our post-marketing requirements and commitments, and we also need to capitalize on these opportunities. As was discussed earlier today, if we don't take that risk in our strategy and the development of our products, if we don't try to see how we can utilize real-world evidence, we're not going to adequately address how we can meet our goals, our target product profiles, and how we can make sure that we're providing our products to all the patients who can actually benefit from them.
So this is a long slide, but I wanted to share what our considerations are when we sit down at a team level, across multiple discussions. We always start with our target product profile, right? That's the initial goalpost when we're developing our products, when we think, okay, this product will work in X indication; that's what we start with. So we need to consider: can real-world evidence provide some of that information? Are there retrospective data sets already out there that we can utilize to answer certain questions? It really does depend on the indication and on your patient population, and that's why I said we start with the target product profile. And I will say that at Janssen and J&J, oncology is the focus for this summit, but we look at this across our therapeutic areas.
So again, there are a lot of different things we need to consider. We need to think, from a prospective opportunity, should we start collecting data now, maybe in phase one, early on, so we can gain information and our data set will be more complete, hopefully, so that at the end we can use it for the prospective analyses we're considering. We think about the endpoints collected: where is that coming from? As was discussed earlier, electronic health records; should we consider claims data, can that be used to answer the questions we're looking at? Do we think about registries and pragmatic studies? And then, I believe it was Amy who mentioned the kinds of expertise needed; again, it goes back to making sure you have all the different functions at the table for the discussion. From a regulatory perspective, we need to make sure we have the right skillset to allow us to have those discussions and interactions with health authorities, with payers, and with market access colleagues.
So, challenges. Again, I'm not going to go through this, because the previous speakers mentioned the same challenges. But I do want to point out that all these challenges directly relate to our considerations. If we come up with a question in what we're considering, we need to recognize the challenges and how we can mitigate them, and that is part of our regulatory strategy. It's part of the questions we ask ourselves and the questions we take to our external colleagues, whether organizations or health authorities. And it was alluded to that we're only going to get to the answers if we actually take the risk, do a study, or propose a way forward, and have that discussion with health authorities to see if they'll accept it or not. Because as we've learned, whether they accept it or not, both our company and the broader industry will learn from that. And I do want to mention here that we need to make sure our informed consents are appropriate, because we're going to use that data. Going into the future, it's a different world. We need to make sure that patients and subjects understand how we utilize that data, that patient privacy is key, that the data is de-identified, and that there is no way for it to be re-identified; it's something that we, from an industry perspective, need to make sure we consider. So from an external landscape perspective, yes, there are considerations and we definitely have challenges, but I won't go through them because they were mentioned earlier.
But now is the time; there is so much potential for us. We've seen current examples, and we know there's so much more we can do in the future. PDUFA VI and the 21st Century Cures Act gave us, and gave FDA, the ability to really focus on this. We have the deliverables, and with those deliverables you've seen that activities are ongoing. Beyond the Flatiron Summit, there have already been workshops and forums asking whether real-world data is fit for purpose. How do we know that utilizing real-world evidence is going to give us the right answer? As someone said, you can't cherry-pick, so you have to go out there and decide on the methodology and what data you're going to use. That has all been part of these external discussions: making sure we don't dredge the data and just get the answer we want. Real-world evidence data quality is crucial, because if we don't feel comfortable with the data we're using, we won't be able to go to the health authorities and say, this is regulatory grade, you should use this data, it's acceptable. Data standards and guidance development are all activities that are currently ongoing.
Since I'm running out of time, these are just examples that everyone has probably seen; the first bullet was published in JAMA last year. To my earlier point about showing examples: if you can show that real-world evidence can actually replicate what a randomized controlled clinical trial found, that supports the opportunities we have. And I was missing my summary slide, which was going to speak to the point that it takes interactions between industry, regulators, and academics to make sure we're all aligned, because we are all reaching for the same goal. It is a little challenging depending on the type of indication and the data sets that are available, but it can be done, and you've seen that in the examples we've shown. Thank you.
Rebecca Miksad: Thank you so much, Kathy, and I'd also like to invite Michael Kelsh back to the stage, along with Amy Abernethy and Sean Khozin. While they're getting seated, I wanted to say, Kathy, I'm so glad you brought up the concept of patient privacy and consent, because that is a really important component and, I believe, really part of the quality monitoring concept we've discussed. You really need to make sure that patients know what is happening to their data. So thank you very much.
So this is the open Q&A session. We have two microphones in the aisles, and we invite you to ask our speakers all of your hard questions, both about their perspectives and their experiences. I'll start. Kathy, I was curious how your perspective has changed over time, as successful use cases get presented and as technology has changed to make RWE more feasible.
Kathy Tsokas: It definitely has evolved. As was mentioned earlier, it comes down to risk. As we have seen more examples, both internally and externally, of where real-world evidence could be used, it's easier and more acceptable for us to take the risk and say, we need this funding because if we can do this study we can answer X, Y, and Z questions. That is what's evolved: five years ago it would have been a harder case to make to say we need X amount of money, whereas now we can show what's actually happening. That is the big difference I've seen in the past, let's say, five years.
Rebecca Miksad: I'll ask you as well, Sean: what big differences have you seen? How has the 21st Century Cures Act impacted how the FDA is thinking about regulatory-grade RWE?
Sean Khozin: It's been a call to action. In many ways we were thinking about real-world data before the 21st Century Cures Act, but it really solidified it as a legitimate science, and that empowered a lot of folks at the FDA to expand on their existing efforts and really try to connect the dots and harmonize those efforts internally. As many of you know, and as we've said publicly, we're working on developing a framework and guidance, and a lot of that was inspired by the passage of the 21st Century Cures Act. So it was a catalyst in many ways, and it's really moving us in the right direction.
Rebecca Miksad: I see a question at the microphone.
Eric Klein: Good morning, my name is Eric Klein, I'm with Eli Lilly and Company, and this question is for Mike and Kathy. I'm wondering if you could comment on your observations or learnings with regard to your own organizational dynamics as you've brought forward concepts and watched your teams or decision makers deal with issues of certainty and uncertainty, risk, and trade-offs: why do this versus other things that could take you down the same path? Any learning you could share about how you got support for these concepts. Thank you.
Rebecca Miksad: Michael.
Michael Kelsh: So I get to start on this. That's a key issue. We've had to build internal conviction about which risks to take, and we've spent a lot of time educating about data sources and understanding the quality and limitations of the data. At Amgen we've developed a lot of tools, and hopefully we can apply those tools to real-world data. But it's been about educating our development groups and our regulatory groups on what real-world data and real-world evidence are, where we can use them effectively, where they're appropriate, and knowing the limitations and the risks given those limitations, as well as the advantages involved.
Rebecca Miksad: And when you're thinking about strategy, Kathy?
Kathy Tsokas: Yes, so from a Janssen perspective we have embraced it, meaning we've embraced the use of real-world evidence, and we have a very strong data analytics and epidemiology team. So to Mike's point, I think now we have to bring it all together, and that's what we're doing now.
One of the reasons the real-world evidence regulatory team was formed was to make sure we're including real-world evidence in our regulatory strategy, and that we are educating and meeting with our compound development teams so that, from an earlier stage, they understand how real-world evidence can be used strategically. That's where we are now internally: bringing together all the groups working on real-world evidence. Like I said, we have embraced it; we know it's important, and we see what it can do currently and in the future. But it's about bringing everyone together to make sure we're aligned and thinking five steps ahead of the data we need to use now.
Rebecca Miksad: Thank you, Kathy.
Michael Kelsh: I would just add to that: we've endorsed it very strongly too, in the sense that we've had internal summits where we're talking about how we will use this, and our regulatory policy group has joined strongly in endorsing it.
Rebecca Miksad: And Amy, your perspective: Mike described a very early adopter use case for RWE; what do you see as the future for regulatory-grade RWE?
Amy Abernethy: So, it's interesting; I think Mike actually touched on this in his talk. Really what I see is oncology as a microcosm: it's the opportunity to look at how we push forward the conversation around RWE in one therapeutic area, but I really think this is going to expand beyond just oncology. The Oncology Center of Excellence offers an opportunity within the FDA to also have all of the cross-disciplinary conversations. So again, I see the future of RWE as learning in oncology how to do this together and then expanding further.
Rebecca Miksad: Okay, the microphone, a question?
Gaurav Singal: Hi everyone, Gaurav Singal from Foundation Medicine; thanks to the panel. One of the things that feels like a major risk with using retrospective data for regulatory purposes is sort of p-hacking on steroids. I'm curious, both from your perspective at the FDA, Sean, and from the Amgen and Janssen perspectives: you can prospectively define SAPs and things like that, but what will it really take to get comfortable that that's not a major risk here?
Sean Khozin: That's a great question, and a complicated one to answer in a short amount of time, but this isn't really new to the FDA; we've used observational studies as part of regulatory decision making for a long time. There's a whole science of how to reduce uncertainty and protect against threats to internal validity when conducting observational studies, and we can extrapolate from that existing foundation, the body of knowledge about the biases that can creep in. P-hacking is sometimes unintentional, and it introduces bias into the results. So the first step is making sure the data is of high quality, so the data itself doesn't introduce bias into the results. The second step is to really leverage the work that has already been done on threats to internal validity and p-hacking in observational studies.
Rebecca Miksad: Michael, you also mentioned the concern about bias in the case you described. I'm curious what steps you took to minimize that.
Michael Kelsh: Right, so I would endorse that whole comment. I think we have to be concerned about p-hacking, but I'm more concerned about the bias issues and the data quality. We're trying to make estimates of the effect measures, thinking more about confidence intervals, the variability in our data, and how informative it can be. What we did with the BLINCYTO example was pre-specify all the tests we were going to do and all the observations we were going to look at, and also be transparent and show all the data we looked at; that's how you can avoid p-hacking. There's a big debate in the epidemiology and statistics world about p-hacking and p-values versus confidence intervals, so we focus on estimation and bias and trying to get the best sense of what the data can tell clinicians and regulators. And we did pre-specify that and have proposals ahead of time, so we're not inventing analyses after seeing the data.
Rebecca Miksad: Amy, I know you've had a lot of different conversations with stakeholders across research and industry; how do you put this together?
Amy Abernethy: There are two things that come to mind, Gaurav. One is what you saw in Sandy's comments in the video: replicability and consistency of findings are important, and that's one of the things we need to continue to think about. The other is that as data sets continue to aggregate and keep coming out, being able to conduct those analyses at various intervals and seek consistency of findings longitudinally is the other thing that is really going to help in this case.
Rebecca Miksad: A follow-up question?
Gaurav Singal: That was it, thanks.
Rebecca Miksad: Great thank you. Oh here's another one.
Boxiong Tang: Good morning, my name is Boxiong Tang from BeiGene Pharmaceutical. First of all, I really appreciate all of you spending the time on these sessions. My background is typical HEOR, and in the past most of the real-world data we used was to support reimbursement. Now, I think the future is regulatory. So one question I have, which probably applies to all of you: we in HEOR are the middlemen. On one side we have clinical development, biostatisticians, and data management, the specialists in clinical development and randomized clinical trials. On the other side, we have the claims databases and the experts in real-world data science. My challenge is: what kind of specialties can help marry these together? Because I've done a lot of teaching on the difference between real-world data and randomized clinical trials. I think Kathy made a good point: it is truly a multi-disciplinary approach. But Amy, your background and experience are probably the best example; you have expertise in both clinical science and data science. So that's my question: how do we bridge the differences between the traditional randomized clinical trial and real-world data?
Rebecca Miksad: That's a really great question, and we're just about out of time, so I'm going to give each person one sentence to answer that. And I'm gonna keep track. Alright, Sean, how would you respond?
Sean Khozin: Well, I just want to quickly say that in the FDA's official definition of real-world data, which is on our website, we don't distinguish between randomization and real-world data. Obviously, a lot of the efforts currently happening with real-world data are retrospective, which is more conducive to observational studies. However, as these efforts scale, there's nothing prohibiting us from prospective examination of real-world data, so randomization can also be incorporated into those study designs. I think this is an evolution of how we generate clinical evidence in general, and we're going beyond the boundaries of the traditional ways of generating that evidence.
Rebecca Miksad: That's a great summary, going beyond the boundaries. Alright, Kathy, you really have one sentence.
Kathy Tsokas: We work as a multi-disciplinary function; that's how we deal with it. The endpoint is the target product profile, what we want to bring to the patient, and from there we make sure we work with our data analytics, real-world evidence, and market access teams; we all work together.
Rebecca Miksad: Michael, from your experience?
Michael Kelsh: Yeah, as mentioned before, it's really a cross-functional team: you need the data scientists and the clinical perspective. You need to view it as, there's the data quality, and then there's the design and analysis you put over it. New methods are evolving all the time, and we're improving what we can do with the big data sets and the tools and software to analyze them, so we have to keep up with that and bring in epidemiologists.
Rebecca Miksad: Amy, can you wrap this up for us?
Amy Abernethy: In addition to blending the roles and being able to talk to each other, we need a glossary, a lingua franca. As the person in the middle, you need to be able to translate the language for both sides.
Rebecca Miksad: Great, well thank you so much to all of our panelists, really appreciate your insights.