Real-World Endpoints: A Discussion With Industry and FDA

The assessment and utility of real-world endpoints, such as real-world overall survival, real-world progression, and real-world response, have become an increasing focus of debate as researchers look to understand the full potential of real-world data (RWD).

The challenges of developing reliable and meaningful endpoints from RWD are many. As researchers consider the applicability of real-world endpoints for their research questions, it is important to understand the data sources that are used to create these variables and the processes that are followed to evaluate whether they are considered fit for purpose.

This panel discussion brought together experts from industry and government to discuss how to establish quality standards for real-world endpoints and where the field goes next.

Transcript

Aracelis Torres: Certainly the timeliest set of presentations I've ever seen. So, thank you everyone. We'll kick things off with our moderated panel discussion, reflecting in particular on all the talks we have just heard. We've seen a broad increase in the uptake of real-world data, and in turn of real-world endpoints and their utilization for specific use cases. So, hopefully from this discussion we'll get a better understanding of how we begin to establish quality standards with respect to endpoints, which may vary by use case. And we'll also discuss where we go from here, now that we've laid out where we are today, what the frontier tells us, and what lies ahead.

So, I'll pose the first question to Sean. We saw in Amy's presentation a number of metrics shared, especially performance metrics for small cell lung cancer and advanced non-small cell lung cancer. Curious as to your reaction and thoughts upon seeing some of those metrics across a number of domains.

Sean Khozin: Well, you know, there are some metrics that obviously are more difficult to measure and require analytical validation, but it's important to recognize that with any metric, including how we assess RECIST tumor responses in traditional clinical trials, there's a lot of volatility, and that's something we have intuitively recognized. For example, if you look at phase three registrational studies that incorporate tumor-based endpoints, progression-free survival for example, we require an independent radiology review assessment. The reason for that is that we'd like to have a second opinion, a second look at the images. And over the years we knew that there was discordance, i.e. volatility, in the estimation of tumor response, and more recently we did a meta-analysis and found that discordance is about 35%.

So these are registrational studies, highly controlled, with highly trained professional radiologists, and two radiologists looking at the same image come up with two different assessments. And that's after categorization into RECIST, which has a 50% margin of error already built in: we don't call anything a response unless a tumor shrinks more than 30% from baseline, and nothing is progression unless it grows more than 20%, so that's a 50% margin built in to account for human visual inspection. If we look at how radiologists actually measure the longest diameter of the lesions, that discordance is much higher.

So, the moral of the story is that there's a lot of volatility in how we assess what we believe to be the gold standard. Whether that's clinically meaningful or not is a separate discussion, and I think we can learn from that foundation when we think about real-world endpoints, how to think about the metrics, and how to assess what is really good enough, if you will.
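
To make the RECIST arithmetic Sean describes concrete, here is a minimal sketch in Python of the categorization thresholds. It is illustrative only: it covers target lesions measured against baseline and omits details of RECIST 1.1 such as new lesions, non-target lesions, the 5 mm absolute-growth requirement, and the nadir reference used for progression.

```python
# A simplified, illustrative sketch of the RECIST categories Sean describes.
def recist_category(pct_change_from_baseline: float) -> str:
    if pct_change_from_baseline <= -100.0:
        return "Complete response"    # all target lesions have disappeared
    if pct_change_from_baseline <= -30.0:
        return "Partial response"     # shrinks at least 30% from baseline
    if pct_change_from_baseline >= 20.0:
        return "Progressive disease"  # grows at least 20%
    return "Stable disease"           # the 50-point band in between

# Two readers measuring the same lesions can land on opposite sides of a
# threshold: a 32% vs. 28% measured shrinkage flips the call from PR to SD.
for change in (-32.0, -28.0, 10.0, 25.0):
    print(f"{change:+.0f}% change -> {recist_category(change)}")
```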

Amy Abernethy: You know, I just want to comment on what Sean just said. As he was getting to with the moral of the story, it's all messy, right? Acknowledging that it's messy is kind of the first part of the task, and then figuring out how we're gonna be transparent about the messiness and work our way through it is important. One of the things that strikes me about RECIST is that it's a consistent framework, a consistent approach to deal with the messiness. And similarly, I think in real-world endpoints what we need is a consistent framework that allows us to work our way through the practical reality that this is a messy area.

Aracelis Torres: And Mitch, from your perspective, as you're thinking through potential use cases for leveraging a real-world dataset for some of the analyses you showed, how are you assessing the trade-off of how good is good enough?

Mitch Higashi: Sure. So, you know, I showed this example from melanoma, how we have reasonable confidence in overall survival, and that study is one case; there's a lot of great work going on with Flatiron. Aracelis is doing work in lung cancer to look at the effect of censoring and potential bias in overall survival and how that could change the hazard ratio, so she and the team are pioneering some methods there. Carrie Bennette and the Flatiron team are looking at ways to use real-world data and statistical modeling to essentially build an external control arm for measuring and estimating overall survival.

So all of this combined, it's kind of like, if you will, the collection of evidence and methods that's coming together to give us a lot of confidence in overall survival. Progression-free survival, I see it again as this inter-rater and intra-rater reliability that gives us confidence there. But I also agree with a lot of Sean's comments about RECIST. Craig, in the previous session, talked about quantitative RECIST, and I think there's something there for us to explore, getting better at quantitative measures to essentially score and add quantitative information to what is effectively a measure of measurable disease.
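
As a toy illustration of the censoring point Mitch raises: a minimal, hypothetical sketch, assuming the open-source lifelines library, of how informative loss to follow-up can shift an estimated hazard ratio. The cohort, the dropout mechanism, and every number here are invented for illustration and are not drawn from the Flatiron analyses discussed.

```python
# Hypothetical sketch: informative dropout biasing a real-world hazard ratio.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
tx = rng.integers(0, 2, n)  # invented treatment flag (1 = treated)
# Exponential event times with a true treatment hazard ratio of 0.7
t_event = rng.exponential(scale=12.0 / np.where(tx == 1, 0.7, 1.0), size=n)

def estimate_hr(censor_time):
    """Fit a Cox model after applying per-patient censoring times."""
    df = pd.DataFrame({
        "T": np.minimum(t_event, censor_time),
        "E": (t_event <= censor_time).astype(int),
        "tx": tx,
    })
    return CoxPHFitter().fit(df, duration_col="T", event_col="E").hazard_ratios_["tx"]

# Scenario 1: clean administrative censoring at 24 months (non-informative),
# which should recover a hazard ratio close to the true 0.7
print(f"administrative censoring: HR = {estimate_hr(np.full(n, 24.0)):.2f}")

# Scenario 2: untreated patients disproportionately drop out just before death
# (e.g., leaving the network), so their events are never recorded
dropout = rng.random(n) < np.where(tx == 1, 0.05, 0.30)
informative_time = np.where(dropout, 0.8 * t_event, 24.0)
print(f"informative dropout:      HR = {estimate_hr(informative_time):.2f}")
```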

Aracelis Torres: And I know, Mitch, in your particular example you talked through how reproducibility, replicability, is one potential signifier or qualifier of whether something is potentially good enough. Are there other thoughts, from either Mitch or Amy, on additional signifiers of data quality with respect to real-world endpoints?

Amy Abernethy: Do you want to start Mitch?

Mitch Higashi: Sure. I can go first. Look, I think we're getting to a place where we're going to see mortality as a domain, and mortality surrogates to help validate that domain. And I also think a second one is patient-reported outcomes, right? The idea that a patient feels less sick and can schedule their work hours with more certainty is not to be taken lightly. I think that deserves its own consideration as a domain, and I think more and more surrogates need to come into play to validate that domain.

Amy Abernethy: You know, I'll add to some of what Mitch was just saying; a couple of things come to mind. First of all, the ability to bring together datasets. Mitch, you just mentioned, for example, patient-reported outcomes, and I think we need to think broadly; for example, one of the things we need to think about bringing in is claims data. So really bringing together multiple datasets, that's the first part. The second part then becomes that we need to think about essentially portfolios or packages of endpoints, because in order to tell the complete story you need to be able to see it from all sides. You need to be able to see what's the impact on disease burden and mortality, but you also need to understand what's the balance in terms of toxicity and safety, what's the balance in patient experience, and frankly, what's the impact on our economy and healthcare system as a whole, so health resource utilization outcomes are very important there.

The last thing I'll say, and I've been thinking about this the entire session, and based on Sean's talk, is triangulation. Our ability to understand how valid an endpoint itself is can be triangulated from multiple areas. And then also, having endpoints that tell the same story from multiple angles becomes another part of what we need to be thinking about in the future.

Aracelis Torres: Any additional thoughts Sean?

Sean Khozin: I agree with all the statements. And I'd like to underscore what was articulated: we can have a very logical approach to how we look at endpoints in the real world, starting with concepts that we're comfortable with, progression-free survival, overall survival, and then moving into patient-reported outcomes. And as we all know, there are many emerging ways of collecting patient experience data using sensors and wearables, smart watches.

I've recently discovered, after getting this gadget, that my resting heart rate is typically very low, about 50, and obviously that's not bradycardic, that's my n-of-1 normal. And we can all envision scenarios, those of us who have been in clinical settings where we're monitoring a patient in, say, the ICU: the heart rate is low, you're puzzled, should I re-dose the beta blocker? That could be that patient's n-of-1 normal. So to be able to go beyond what was feasible before and capture patient experience data using these emerging modalities, that's something that has a lot of value and can open up a whole new realm of opportunities and possibilities. And a phased approach, starting with what we know and what we're comfortable with, all the way to sensors and wearables, really spells out a very exciting path forward.

Aracelis Torres: And I've also heard this trend about the importance of being transparent about what's under the hood, as noted again in the keynote. One of the challenges is that the publication cycle itself is very long, and the use cases and questions that sponsors and users want answered tend to be, as of today: how can I use this, what are the metrics you can share? Is there any brainstorming we can do about how to get the information out there while the publication cycle catches up? Obviously we have the mortality publication, something we can easily reference, but the progression paper is still making its way through that peer-review process.

Mitch Higashi: Well, I'll go first. I think this forum is an excellent example of bringing us together to get a better understanding of the evolving methods and how we can apply them. And I'll say, look, we're committed to publishing our research and we're on a publication path, but there has to be a way for us to understand how the methods are evolving and how to apply them, and this is one great forum to do it.

Aracelis Torres: Any thoughts, Amy or Sean, to add?

Sean Khozin: Somebody should start a new journal, Journal of Real World Evidence.

Mitch Higashi: Editor in Chief.

Amy Abernethy: One other thing, since I'm going to use this as my forum to push one of my things forward: I think we need versioning. I think we need to be able to see it on the web and in other places where you can clearly see, with documentation, this is the version and this is the information that goes along with it. We try to figure out how to do this at Flatiron, but we still live in a publication-focused world, and figuring out how we can have confidence in the information that we see on the web, where we can all see it in a transparent way, I think is one way to get there.

Aracelis Torres: And we'll end on one last question, where do we go from here? What does the future of real-world endpoints look like, knowing what we know now? I guess we'll start with Amy, then Mitch and end with Sean.

Amy Abernethy: I think that ultimately we need to get to a place where we are continuously thinking about which endpoints tell the full story, but where we have confidence in the endpoints that are in our datasets, so we're not spending our time trying to figure out how to develop endpoints and are instead thinking about the results and the story they tell us. So I think we're in a developmental step right now.

Mitch Higashi: Yeah, I think the surrogate space is where we'll see more innovation and more disruption, right. And as these ideas evolve and more consensus forms around these different measures: real-world treatment response, does that need to be validated against RECIST? Is this a new type of endpoint? Where does it fall in the hierarchy? I think these are some of the questions we have.

Aracelis Torres: Sean?

Sean Khozin: I'll say: don't be afraid to try new, bold, adventuresome things, and consider the FDA a friend, because we like to figure out ways of understanding the patient experience better and more holistically using a variety of different data types, and turning the focus toward the real world, where the majority of adult cancer patients are being treated.

Amy Abernethy: I just want to say something about that: we've gotten so many great ideas from working together with the FDA, as well as working together with all of you, that I really just want to underscore that working with the FDA as friends has been a really helpful way of moving this forward. So thank you.

Aracelis Torres: Well, I certainly want to thank Amy, Sean, and Mitch. Certainly a lot to think about as we move forward: how do we take the next step with respect to real-world endpoints? This is certainly just the beginning, not the culmination. So thank you again, and hopefully everyone got a lot from the presentations. I will note that some of the panelists will be joining us at Ask the Experts, so please find your way there if there are any lingering questions you have or topics you want to delve into more deeply outside of this presentation. So, thank you everyone for coming and thank you for participating.
