Integrated evidence: Using multi-modal data to create new insights

March 18, 2022
 

In this episode, we dig deeper into the "what" of integrated evidence; specifically, the actual data sources and how bringing them together can yield new insights. We discuss our practical learnings from working with these modalities, and we hear how life sciences researchers are using integrated evidence today.

 

Transcript

Narrator: Previously on ResearchX.

Carolyn Starrett: We want to talk to you about how we go further with integrated evidence to achieve transformative benefits for a host of new applications.

Stephanie Reisinger: Digitization is not only changing the way healthcare is being delivered, but it's also creating a huge innovation opportunity for us within life sciences.

Somnath Sarkar: Matching real world patients with trial patients and improving trial accrual could bring speed and efficiency to randomized clinical trials.

Shane Woods: If we limit ourselves to just a single data source, say just looking at health records, we risk portraying a misleading picture of survival health outcomes.

Stephanie Reisinger: What if we can move beyond single source evidence generation by thoughtfully combining and analyzing all of these multiple real world data streams together?

Somnath Sarkar: This particular hybrid design could be a stepping stone towards a future where trial inclusion is broader and trials can be conducted more often in the community setting, meeting patients where they are.

Carolyn Starrett: This is an opportunity to start to lay out what the future path of evidence might look like.

Cheryl Cho-Phan: Welcome everybody to ResearchX. In today's episode, we'll focus on using multimodal data to create new insights, part of our introduction to integrated evidence. I'm Cheryl Cho-Phan, Medical Director at Flatiron Health. I'm excited to be here with all of you. It's wonderful to see attendees from across the industry today, from biopharma and academia to policy groups and health authorities.

This episode is part of the ResearchX 2022 season, where we're exploring how integrated evidence can transform oncology research and patient care. So let's get started. We have a great agenda lined up today and I'm excited to introduce our speakers. Prashni Paliwal, Director of Quantitative Sciences at Flatiron, will kick us off by highlighting the actual building blocks that comprise integrated evidence. Lev Demirdjian, Senior Data Scientist from Janssen, will then present on the value of integrated clinical and genomic data across the therapeutic landscape. And finally, Tamara Snow, Senior Product Manager at Flatiron, will close us out with some learnings we've uncovered from actually integrating evidence.

While we have a packed agenda, we will have time at the end for your questions to the speakers, but first, a few quick housekeeping items before we dive in. I'd like to draw your attention to the Q&A option available throughout the webinar. Feel free to submit a question anytime, and also reach out to us afterwards if you'd like to discuss any of today's content in more detail. If you have technical questions or issues, please let us know via the Q&A tool and we'll do our best to help. And though this hardly needs saying nowadays, please excuse any interruptions from our pets or loved ones. Like many of you, many of us are still working from home. Before we get started, though, we'd like to learn a bit more about you. You should see a poll pop up on your screen momentarily. Other attendees will not be able to see which responses you choose. Our poll question is: which of these data sources is your organization using today? Please select all that apply. The answer choices are clinical, genomic, imaging, claims, patient-reported outcomes, or none of the above.

Okay. Now let's close the poll and share the results. Thank you everyone for providing your input. It's really helpful to get a pulse check on everyone's familiarity and usage of various data sources as we head into the presentations today. We'll dive into some of these data sources in more detail throughout the episode. And now let's bring in Prashni Paliwal to help break the big idea of integrated evidence into very tangible chunks. Her topic is Building Blocks of Integrated Evidence.

Prashni Paliwal: Thanks Cheryl. As you might have seen in Episode One, we think of integrated evidence as evidence that's more than the sum of its parts. It's more robust as a result of bringing together multiple sources of data. 

We also discussed a framework for getting there: generate, combine, and analyze. While Episode One focused on integrating retrospective and prospective data, there's a great deal of utility in just integrating different kinds of retrospective data, which is what I'm going to focus on today.

So let's start with what the pieces are that we can use to make this big idea a reality. What are these building blocks? Integrated evidence usually involves using data for a different purpose than the one for which it was collected. But of course, the purpose for which a data source was collected determines what's in it. That original purpose leaves each data source with specific strengths and limitations, and that determines the use cases for which it's useful to add a given data source to the integrated evidence mix.

For instance, claims data is collected for administration and billing purposes, but it's regularly used to study treatment patterns, records of hospitalizations, comorbidities, concomitant medications, and so on. The strengths of claims data are that it is available in electronic format, anonymized, and relatively inexpensive. However, claims data is not designed for clinical research and might lack critical pieces of information needed for clinical research.

For instance, claims data does not include information on disease characteristics, such as confirmation of the disease diagnosis of interest. So claims data alone doesn't give us everything we need for clinical research. But if we fill out the picture by integrating claims data with other data sources like EHR data, which has a robust longitudinal view of the patient journey, we can potentially derive insights that cannot be generated from a single source alone.

Recent FDA guidance clearly states that for a well-designed study, it is critical that the study is not designed to fit a specific data source. Instead, it's important to think about how different data sources can be combined in a complementary way to generate the required evidence. Integration of data can be done for different reasons, like increasing or characterizing completeness, evaluating quality, validating different components of study design like exposure and outcomes, or building composite variables. These are just some ideas to get us started.

So let's now zoom in on some examples of using complementary data sources to generate evidence needed to make clinical decisions. First, I'd like to share Flatiron's RUBIES study. Let's look at why we did this study. We previously developed the Flatiron real-world response variable to evaluate response based on clinicians' interpretations recorded in EHR data. The intent of this abstracted variable is to provide information on the clinician's holistic assessment of response required for clinical decision-making.

However, tumor response in clinical trials, based on tumor measurements from images, is accepted as the gold standard. So we did the RUBIES study to see how closely abstracted real-world response corresponded with imaging-based response. For this study, the first step was to systematically collect routine care images. Imaging data is not captured as part of the standard EHR; therefore, in order to complete a study comparing abstracted response and imaging-based response, Flatiron needed to develop a method to capture imaging data on real-world patients.

We did this by integrating with site PACS systems, which are the databases where imaging is stored. Images were then transferred from the PACS to Flatiron's database. From there, they could be delivered to a CRO for a radiologist to read using modified RECIST criteria. Note that we needed to adapt the RECIST criteria so that they could be applied to real-world images, which gave us a real-world imaging-based response. Setting up the pipeline for images and translating RECIST criteria to real-world images was really challenging, but when it was all done, this is what we found.

RUBIES showed that the overall agreement rate between abstracted response and imaging-based response was approximately 70%. This is in the ballpark of agreement rates we have previously observed when comparing responses based on investigator and independent review in a clinical trial setting, which is quite encouraging. The results of this study provide confidence in the abstracted response variable and support the validity of abstracted response for clinical research. In addition, the study provides a framework for validating an abstracted response variable from the EHR. It is a clear example of how multimodal data provides an opportunity to contextualize and validate novel real-world endpoints. I'm also very excited to share that this work was recently accepted and will be published soon.
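For intuition, an overall agreement rate like this is simply the share of patients whose two assessments match. Here is a minimal sketch in Python with made-up data; it is not the RUBIES dataset or Flatiron's actual methodology, and the response categories are just for illustration.

```python
# Illustrative only: overall agreement between two response assessments
# for the same patients, using hypothetical data.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "abstracted_response": ["CR", "PR", "SD", "PD", "PR"],
    "imaging_based_response": ["CR", "SD", "SD", "PD", "PR"],
})

# Fraction of patients where the two assessments agree
agreement = (df["abstracted_response"] == df["imaging_based_response"]).mean()
print(f"Overall agreement rate: {agreement:.0%}")  # 80% in this toy example
```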

Similar to RUBIES, imaging data was recently used to validate response assessment in a study called RE-MIND. RE-MIND was used to generate a historical control for the L-MIND study, a single-arm Phase II trial in transplant-ineligible relapsed/refractory DLBCL patients. The objective of this real-world data study was to isolate the contribution of tafasitamab to the efficacy of the combination, that is, tafasitamab plus lenalidomide. In this study, validation of the response variable was performed using investigator and independent review of radiographic scans and relevant clinical information in a subset of the cohort. Concordance between investigator and independent assessment for responders was approximately 80%, which supported the validity of the real-world assessment. This study highlights the value of real-world data to support drug development, in which integrated evidence played a key role.

Next, I would like to touch upon how Flatiron is planning to use claims data to assess quality and completeness of selected variables in the Flatiron EHR. Though claims data is not a gold standard, it is an industry standard for several variables like treatment patterns, comorbidities, certain adverse events and hospitalization data. We also know that some of these data elements might not be systematically captured within the EHR. And claims data provides an opportunity to investigate the completeness of EHR capture. In fact, we have begun to explore using claims data to understand completeness of hospitalizations in the EHR, and early indications are promising.
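As a rough illustration of that kind of completeness check, here is a small sketch assuming hypothetical linked hospitalization tables and an exact-date match rule; a real analysis would use more careful matching windows and linkage logic.

```python
# Illustrative only: estimating how completely hospitalizations seen in claims
# are also captured in the EHR for linked patients (hypothetical tables/columns).
import pandas as pd

claims_hosp = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "admit_date": pd.to_datetime(["2021-01-05", "2021-06-20", "2021-03-14", "2021-09-01"]),
})
ehr_hosp = pd.DataFrame({
    "patient_id": [1, 2],
    "admit_date": pd.to_datetime(["2021-01-05", "2021-03-14"]),
})

# Count a claims hospitalization as "captured" if the EHR has the same patient
# and admission date (a real study would likely allow a +/- date window).
merged = claims_hosp.merge(ehr_hosp, on=["patient_id", "admit_date"],
                           how="left", indicator=True)
capture_rate = (merged["_merge"] == "both").mean()
print(f"EHR capture rate of claims hospitalizations: {capture_rate:.0%}")  # 50% here
```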

Looking forward, we have also undertaken studies to assess the completeness of other key variables like treatment and comorbidities in the Flatiron EHR using claims data. EHR-claims comparisons can increase confidence in EHR use or indicate which use cases may benefit from combining data from both sources. Our hope is that these studies will help us expand what we see in the EHR alone, which can further elucidate its appropriate use in generating integrated evidence.

What I hope you will take away from these examples is this: every data source bears the imprint of its original purpose, and those traits affect its role in your integrated evidence mix. We're not looking for one data source to rule them all. Instead, you'll want to select the right ingredients for your integrated evidence recipe according to the use case you are trying to meet. In other words, while designing a real-world data study to fill an evidence gap, keep in mind the role that each component can potentially play, consider how it was generated, and design a study that's fit for that purpose, with analytical techniques that acknowledge the original purpose of the data and that use complementary sources of data effectively.

If you make those choices intelligently, integrated evidence can accelerate research and advance patient care. Now I'd like to thank everyone for their time, and I'll pass it back to Cheryl.

Cheryl Cho-Phan: Thanks, Prashni, for that wonderful presentation. You really brought out nicely how integrating different complementary data sources is the path forward towards creating deeper insights. But you all may be wondering where genomics fits into integrated evidence. Lev Demirdjian from Janssen is next. He'll be sharing the value his team is seeing when combining clinical and genomic data sources together. Lev, it's all yours.

Lev Demirdjian: Thank you, Cheryl. So let me tee off. There I am. Okay, excellent. Thank you, Cheryl, for the introduction. I'm really quite excited to be giving this presentation on how Janssen R&D has been using a multimodal dataset like clinical genomic data in order to support our clinical development programs. And so I wanted to start off with just a roadmap of where I'm going to go with this presentation. I'm going to start with an introduction that spells out some of the use cases where Janssen R&D has leveraged the Flatiron-Foundation Medicine Clinical Genomic Database. And then the real meat, the crux of the presentation, is going to be a deep dive into those use cases. And finally, I'll conclude with a summary of my presentation.

So the key question that's really driving this presentation is how can we extract insights from real-world datasets in order to support discovery as well as our clinical development program at different stages of a drug development life cycle? And I do want to mention that while the theme of today's presentation is really around clinical genomic data, we're also utilizing other data modalities like Prashni mentioned earlier such as medical imaging in order to support a number of clinical development programs, but I'm only going to be speaking about clinical genomic data in particular.

So there have been at least three broad strategies or areas where we've leveraged such datasets. First, we've used multimodal data in order to better understand the real-world therapeutic landscape across a number of disease areas. What do patients look like in the real world? What do their treatment patterns look like? Their outcomes? Are there patient populations that have unmet medical needs or are faced with disparities in care? Secondly, can we use clinical genomic data in order to identify biomarkers that can be potential targets for therapy? Some use cases here would be understanding and predicting recurrence of disease in patients who receive curative treatments, and understanding and predicting duration of progression-free survival and other endpoints in patients who have metastatic disease.

And a third area, though not a final one since this isn't meant to be an exhaustive list, is constructing real-world external comparators or external control arms. This is to support comparative effectiveness analyses for Janssen clinical trials, as well as to help contextualize safety events using real-world data.

Now, let me dive a little deeper into each of these three use cases. To start off, the first use case is: Better Understanding the Therapeutic Landscape in the Real-World Setting. Multimodal datasets really allow us to explore some fundamental aspects of patient care in the real world. For example, we've used these kinds of data to better understand the natural history, treatment patterns, and outcomes for patients. And I'll also mention that the longitudinal nature of the data really allows us to look into how the standard of care has evolved through time. And very importantly, this data allows us to identify patient populations that may be underserved, as evidenced by differential treatment patterns or outcomes, as well as access to appropriate biomarker testing.

Now, clinical genomic datasets in particular allow us to correlate individual biomarkers or sets of biomarkers with outcomes. And this really allows us to identify patients who have a poor prognosis. I'll have a little bit more to say about this on the next slide, but I did previously talk about the longitudinality of the data, so I wanted to bring that up again: the longitudinality of the clinical genomic database has really allowed us to add a time component to these sorts of correlative analyses. So for example, tracking the development of resistance mutations, like those that develop in response to EGFR TKIs, and studying and comparing outcomes in that setting.

This naturally brings me to the second use case that I wanted to highlight, which is: Predictive and Prognostic Modeling. And there is to be sure a lot of overlap between the first use case that I talked about and this as well. So I just want to briefly go over some definitions to start off with. A prognostic factor is one that's associated with outcomes and is treatment-agnostic, while a predictive factor is associated with outcomes and is treatment-dependent. And using the clinical genomic database, we've really developed and implemented statistical and machine learning models in order to identify patients who are more likely to respond to certain therapies, as well as patients who are likely to have poor outcomes regardless of the therapy that's received. I'll say, very importantly, it's allowed us to identify and to understand, I do want to emphasize, identify and understand the factors underlying differential response.

Now, we're speaking about clinical genomic data today. So the data derived from Foundation Medicine reports has really allowed us to take both a hypothesis-driven approach to biomarker identification, where we prespecify sets of genes or biomarkers to correlate with outcomes, or alternatively to take data-driven, exploratory approaches, where we develop and implement statistical and machine learning algorithms to extract potentially complex genetic signatures that correlate with outcomes. And I do want to emphasize that this is not only an exercise in model development and validation, and to be clear, that's a critical initiative of our data sciences organization, but it's really having an impact on our clinical trial design and recruitment strategies. Quite critically, these models and the insights that we derive using them have allowed us to identify the right patients to match to the right medicines and to find and treat them earlier in the course of their diseases.

Now this brings me to the third and final use case I'm going to be speaking about, which is: Comparative Effectiveness and Safety Using Real-World External Comparators.

I know there's a wide range of terminology out there, so I'm going to be using external control arm, or ECA. And just to align on definitions very briefly, an externally controlled trial is one where we're comparing patients receiving a study treatment to comparable patients outside of the trial, for example, patients in a real-world setting receiving standard of care. An external control arm, or ECA, addresses some of the limitations of clinical trials, including those around cost and ethical considerations, for example, in some rare disease settings in oncology where patients could potentially be benefiting from a study intervention.

Now, one of the areas where JRD has leveraged ECAs is in supporting regulatory approvals, and the publication of FDA's framework for real-world evidence several years back, in 2018, really got the conversation started around key considerations for generating real-world evidence that would be considered regulatory grade. In my mind, I group these considerations into at least two categories: data-specific considerations, which include how we assess whether a dataset is fit for purpose for the relevant regulatory use, and methodology-specific considerations.

That is, can we ensure the use of appropriate statistical and epidemiological methodology to ensure robust and valid generation of real-world evidence? Now, in oncology, ECAs have become a relatively common study design element, especially in cases where clinical trials are not practical or feasible, like I previously alluded to. Now, not all data are created or treated equally, right? Namely, not all data are considered to be regulatory grade, and I'm not going to attempt to define what we mean by regulatory-grade data, but we do have some concrete guidance from the FDA about what some of the key aspects of well-designed ECAs are.

In this slide, what I really wanted to do is highlight these aspects of well-designed ECAs and how we're addressing them, checking the boxes if you will, using multimodal datasets like the Flatiron and Foundation Medicine CGDB. To start off with, well-designed ECAs should really ensure similarity of the external control arm to the trial cohort. This is done by mapping the trial criteria, so the eligibility criteria of the trial, to their real-world analogs. And of course, clinical and subject matter expertise here is quite critical. Prashni mentioned something that resonated with me: she said don't design a study to fit a specific data source.

I think this is one element where the Spotlight projects have really helped us address study design challenges, by abstracting custom data elements from patient records in order to match our trial eligibility criteria and assessment timings. Of course, we're speaking about oncology here. So in the realm of targeted oncology therapies, having the ability to define and implement biomarker-defined inclusion criteria is critical. And the genomic component of the CGDB, from the FMI testing, really provides a high level of granularity on testing results to ensure comparability of our patient populations.

Next, well-designed ECAs should have minimal bias, including selection and confounding bias. This is usually achieved using appropriate statistical techniques, for example, propensity score modeling, negative control outcomes, as well as E-values for quantifying unmeasured confounding. Of course, well-designed ECAs should have well-defined and reliable outcomes assessments, which is addressed by using validated real-world endpoints like Flatiron's mortality data. Now, I wanted to conclude by highlighting a specific case study: amivantamab compared with real-world therapies in patients with NSCLC with EGFR exon 20 insertion mutations that progressed after platinum doublet chemotherapy.

Now, as a little bit of background, amivantamab is an EGFR-MET bispecific antibody and was granted accelerated approval in May of last year by the FDA for adults with locally advanced or metastatic NSCLC and EGFR exon 20 insertion mutations whose disease had progressed on or after platinum-based chemotherapy. Exon 20 insertion mutations make up around 10% of EGFR mutations, in contrast to the more common EGFR exon 19 deletions and L858R mutations, which make up around 85 to 90% of EGFR alterations.

Notably, patients with exon 20 insertion-mutated NSCLC have had poor clinical outcomes and resistance to currently available TKIs. So this study was really aiming at comparing amivantamab to real-world therapies, using data from Flatiron and other data sources. Here, the comparator group was defined as patients who met the eligibility criteria of the CHRYSALIS trial and received physician's choice of care. Propensity score weighting was used to adjust for some key confounders like brain metastases, age, as well as ECOG performance status.
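As an illustration of the general technique, here is a minimal propensity score weighting sketch on simulated data with confounders like those named above. This is not the amivantamab analysis; the variable names, simulated relationships, and weighting choice (ATT-style weights) are assumptions for demonstration only.

```python
# Illustrative only: inverse probability of treatment weighting (IPTW)
# with a propensity model on simulated confounders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "ecog": rng.integers(0, 3, n),        # ECOG performance status 0-2 (simulated)
    "brain_mets": rng.integers(0, 2, n),  # 1 = brain metastases present (simulated)
})
# Simulated treatment assignment loosely related to the confounders
logit = -0.03 * (df["age"] - 65) - 0.4 * df["ecog"] - 0.6 * df["brain_mets"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the propensity model and form ATT-style weights:
# treated patients get weight 1, comparators get ps / (1 - ps)
X = df[["age", "ecog", "brain_mets"]]
ps = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]
df["weight"] = np.where(df["treated"], 1.0, ps / (1 - ps))

# The weighted comparator group is roughly the "size" of the treated group
print(df.groupby("treated")["weight"].sum())
```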

The results of the analysis indicated that patients on amivantamab demonstrated a 10-month longer overall survival, 5-month longer PFS, and 10-month longer time to next treatment, on average, over the comparator group. These results highlighted the relatively poor performance of the external comparator, which was treated with immune checkpoint inhibitors, single-agent chemotherapies, and EGFR TKIs; they highlighted the ineffectiveness of these agents in this patient population and really underscored the need for a more alteration-specific treatment approach in advanced NSCLC.
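To connect back to the E-values mentioned earlier, here is a minimal sketch of how an E-value quantifies how strong unmeasured confounding would need to be to explain away an observed association. The effect estimate below is purely hypothetical and is not from this study; for hazard ratios, the formula is typically applied as an approximation when the outcome is not too common.

```python
# Illustrative only: E-value (VanderWeele & Ding) for a ratio estimate > 1.
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk/rate ratio estimate greater than 1."""
    return rr + math.sqrt(rr * (rr - 1))

observed_ratio = 2.0  # hypothetical effect estimate, not from the amivantamab study
print(f"E-value: {e_value(observed_ratio):.2f}")
# An unmeasured confounder would need to be associated with both treatment and
# outcome by at least this ratio (~3.41 here) to fully explain away the effect.
```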

So now, just to wrap up, I wanted to reiterate how JRD Data Sciences has really leveraged multimodal data in order to support our discovery as well as clinical development programs. The insights that we've extracted from these sorts of datasets have helped us get a deeper understanding of the real-world therapeutic landscape: better understand patient treatment journeys, identify patients with unmet medical needs or disparities in care, and gain a deeper understanding of the prognostic and predictive value of selected biomarkers. Finally, they've allowed us to construct external comparator studies to support regulatory decision making. This isn't just the construction of ECAs, right? It's really ensuring that the data are both fit for purpose and regulatory grade. I've left my email address up on the screen, so please feel free to reach out if you have any other questions, and of course I'll be available for the Q&A. So thank you all very much.

Cheryl Cho-Phan: Thank you, Lev. It's great to see real world examples of clinical and genomic data coming together as integrated evidence at Janssen. I can say from experience that it's been really helpful to discuss the specific use cases clients like you have in mind, because it drives how we build the evidence as we continue to innovate in a thoughtful and rigorous manner. Let me just mention a friendly reminder to submit questions you may have through our Q&A tool anytime during this webinar. Now for our final presentation, Tamara will be sharing some practical tips and things to look out for from our own experience integrating data sources.

Tamara Snow: Awesome. Thanks Cheryl. So as Lev just demonstrated, integrating multiple data sources can result in a rich dataset and unlocks new and deeper insights. What I'd like to leave you with are some lessons we've learned from linking these data sources to generate integrated datasets. In short, it's way more complex than it looks. I'm excited to share some insights from our experience generating these integrated datasets and where we hope to take it moving forward. With any complex topic, I appreciate a simple analogy. To me, the process of integrating data sources has a lot in common with cooking.

As Prashni explained, it's important to start with the right ingredients, but to create a great meal, it is just as important to follow the recipe and to use the right techniques. So, to keep you from ending up like Homer here, I'll share our learnings to help you evaluate the suitability of using integrated evidence datasets to support your research. So what we've done is boil down a few key insights for you to consider when evaluating an integrated dataset across the different phases. Compared to a standalone dataset, there are nuances and technical decisions that you must keep in mind when linking multiple data sources.

I'll use the next two slides to dig into each of these different considerations and provide some real world examples to help you maximize the impact of integrated data. As Prashni discussed, every data source bears the imprint of the original purpose for which it was collected. Few data sources are collected with the intention of integration; therefore, selecting the right data source for your use case requires an understanding of how each source was collected in the first place. This slide shows a few characteristics to think about, one of which is the inclusion and exclusion criteria of each data source.

This is important since those criteria will be inherited by the integrated dataset and can cause conflicts if not thoughtfully addressed. For us at Flatiron, our clinical-only datasets require abstractor confirmation that a patient has a specific disease of interest to be included. However, Foundation Medicine's core dataset is much broader and includes patients across histologies. So when we combine the two to create the integrated clinical genomic database, we need to require both a confirmation of a specific disease in our network and a Foundation Medicine test with a histology that aligns with that disease.

That way we're integrating both clinical and genomic data that is relevant for our partners' analyses. So, moving on to the combine phase: if you saw Somnath present in Episode One, you know that creating integrated evidence isn't just combination, it's transformation. What we're showing here is a very simplified diagram of the steps it takes to link two data sources. For this synthesized evidence to be credible, how it's created cannot be a black box. It is important that each step or manipulation of the data is accounted for. That traceability is what we mean by provenance, and it is especially important to ensure accountability, trust, and data integrity.

For us at Flatiron, we've built and tested infrastructure to capture data provenance for both individual and linked datasets to provide this confidence to our partners. A good example of where provenance is especially key is in matching patients. This step has its own highly detailed process, as there is an incredible number of options for identifying matches across datasets. So choosing the approach that maximizes both accuracy and the number of patients in the overlap is a careful balance. Accuracy will largely depend on what information is available across the different datasets and the level of completeness and confidence you have in each variable. So while one data source may have a gold mine of granular information on each patient, such as social security number and zip code, that information may not be available in other data sources, so the teams will need to collaborate on a matching approach that provides the highest confidence. In terms of maximizing the overlap, it will largely depend on how you create tokens or exchange sensitive patient-level information, and how you identify any potential duplicates in each individual dataset, since some linking vendors will simply drop duplicate tokens. For example, Flatiron has patients who visit more than one Flatiron site of care, which could produce multiple tokens for that patient in our database. Thus, we have processes built out for how to consistently select the right approach for tokenizing these patients to maximize the overlapping cohort. This is important for supporting provenance, accurate matching, and the utility of the integrated dataset for one's analysis.

An additional hurdle in linking patient data is doing it while upholding strict patient privacy standards and protecting a patient's identity prior to sharing a dataset externally. We achieve this by having strict protocols on how to handle patient identifiers at each build step, as well as processes to ensure de-identification of a more granular linked data product.
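To make the tokenization and deduplication idea concrete, here is a highly simplified sketch. In practice, a third-party linking vendor performs keyed or salted hashing under much stricter privacy controls; all names, fields, and the hashing scheme below are hypothetical.

```python
# Illustrative only: simplified deterministic, token-based linkage with deduplication.
import hashlib
import pandas as pd

def make_token(first: str, last: str, dob: str) -> str:
    """Hash normalized identifiers into a non-reversible token (toy scheme)."""
    key = f"{first.strip().lower()}|{last.strip().lower()}|{dob}"
    return hashlib.sha256(key.encode()).hexdigest()

ehr = pd.DataFrame({
    "first": ["Ann", "Ann", "Bo"],          # Ann appears at two sites of care
    "last": ["Lee", "Lee", "Park"],
    "dob": ["1960-02-01", "1960-02-01", "1955-07-30"],
    "site": ["A", "B", "A"],
})
genomic = pd.DataFrame({
    "first": ["Ann", "Cy"],
    "last": ["Lee", "Diaz"],
    "dob": ["1960-02-01", "1971-11-12"],
    "assay": ["panel_v1", "panel_v1"],
})

for table in (ehr, genomic):
    table["token"] = [make_token(f, l, d)
                      for f, l, d in zip(table["first"], table["last"], table["dob"])]

# Deduplicate tokens before linking so a patient seen at two sites is neither
# dropped nor double counted, then keep only exact (deterministic) token matches.
ehr_dedup = ehr.drop_duplicates(subset="token")
linked = ehr_dedup.merge(genomic, on="token", how="inner", suffixes=("_ehr", "_gx"))
print(len(linked), "patient(s) in the overlap")  # 1 in this toy example
```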

This was especially key when we were building out our capabilities to link imaging scans to our clinical genomic database. The clinical genomic data product already has a very complex linking process, so collaborating with privacy and security experts to ensure all design decisions accurately linked the large imaging files while upholding privacy principles was key to unlocking that data product. Once you have all this data linked together, it is important to be mindful of the overlap between the various datasets and how that impacts your analytic cohort.

So for example, as of this quarter, there are about 3 million patients in the Flatiron network and about 400,000 patients in the Foundation Medicine research database. That leaves us with about 100,000 patients in the overlap, or about 25% of all Foundation Medicine-tested patients. For use cases like what Lev was describing, it's more important to have that deep genomic data from Foundation Medicine than to over-index on cohort size. But in some instances, that may not be the case. So weighing those trade-offs when deciding whether or not to use an integrated dataset is important to keep in mind.

Along with sample size, it's important to know the level of overlap and the time period in which a patient is in each data source, and the impact that has on how you analyze an integrated dataset. That is, the length of time that the patient was simultaneously in both databases is the longitudinality you have on that patient in the combined set. So for example, say there's a patient for whom you're looking to link both clinical and claims data. Patient X has maybe three years in the claims dataset and another three years in the clinical dataset; does that actually mean you'll end up with an integrated dataset for that patient for all three years?

Well, it's really only the case if it's the same three years. If it isn't, your longitudinality is only the time all of your source datasets overlap for that patient, and this can impact the power of your analyses. Last, but definitely not least, just because a data source on its own is representative of the broader patient population, it doesn't mean that the integrated dataset will be. For example, if you have a claims data source that only covers a certain geographic region and a nationally representative genomic data source, then linking the two will limit you to just the subset of patients in the area covered by both the claims and genomic data sources, which may not be representative of the broader population. Thus, it's important to have a good understanding of the representativeness of an integrated dataset and how it impacts the generalizability of analyses using that cohort. A team of us are actually working on a publication now to describe the representativeness of the clinical genomic database across data products, so we're excited to have it available this year for our partners to really help inform their analyses.
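A minimal sketch of the windowing arithmetic described above, with purely hypothetical coverage dates: the usable longitudinal window for a linked patient is the intersection of the coverage windows in each source.

```python
# Illustrative only: overlap of a patient's coverage windows across two sources.
from datetime import date

claims_window = (date(2015, 1, 1), date(2017, 12, 31))  # 3 years of claims coverage
ehr_window = (date(2017, 1, 1), date(2019, 12, 31))     # 3 years of EHR coverage

overlap_start = max(claims_window[0], ehr_window[0])
overlap_end = min(claims_window[1], ehr_window[1])
overlap_days = max(0, (overlap_end - overlap_start).days + 1)

print(f"Overlap: {overlap_start} to {overlap_end} ({overlap_days / 365.25:.1f} years)")
# Roughly 1 year here, even though each source covers 3 years on its own.
```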

So I've talked quite a bit about the importance of everything from understanding your ingredients to following a detailed recipe to build a robust integrated dataset. These learnings have really helped us build a strong foundation for how to integrate multiple data sources, which has unlocked exciting research opportunities, like Lev discussed earlier.

So today, there are even more types of ingredients, with healthcare information being generated from personal wearables to deep molecular insights. And as we integrate more data sources, we can create richer evidence that gives us a more complete picture of the patient experience. In the future, we're excited to take advantage of this opportunity by exploring new integrated datasets and variables, like linking the clinical genomic database to digital pathology.

So we're really looking forward to collaborating with everyone across the industry to advance this exciting work in the coming years. I really appreciate everyone taking the time to listen and I'll hand it back to Cheryl.

Cheryl Cho-Phan: Fantastic. Thanks so much, Tamara. That's a really thoughtful integrated evidence checklist. Thank you for distilling the complexity introduced when cooking up a rich data source, where the ingredients together are better than any one alone. So now, let's begin our Q&A discussion. We're seeing some great questions come in. Let me kick things off with a question for Prashni. Prashni, integrating multiple data sources sounds like pooling. Is that what you mean by integrated evidence?

Prashni Paliwal: Pooling is definitely one of the ways to create integrated evidence. However, integrated evidence, as a bucket, is much more than just pooling. Usually, in pooling you'll have the same data model and you increase the sample size, but like today's discussion, what Tamara and Lev talked about was more about increasing the dimensionality of our data, adding more data components so that we can learn about additional variables and generate more evidence than we could from just one data source. So to me, pooling is a subset of integrated evidence, but integrated evidence is more than that. Actually, we'll be discussing considerations around pooling data in one of our upcoming episodes, in which we'll talk about our work with Janssen on pooling methodologies.

Prashni Paliwal: I think what we need to be really aware of and careful about is that, when we are pooling and linking data, it's not as straightforward as it sounds; it can be really hard and we need to be thoughtful about that, which I think Tamara laid out quite well.

Cheryl Cho-Phan: Totally resonates. Thank you. Here's a question for Tamara. Why is linking necessary for data sources like genomic tests and radiology images, which are part of the EHR?

Tamara Snow: Thanks, Cheryl. Great question. In short, linking really allows for more or deeper information that isn't available in a single source, like we talked about. So some good examples are that, while we might see scanned PDFs of NGS test results in the EHR, linking to the source genomic data can provide more detailed and harmonized annotations, as well as additional variables that are in development that you wouldn't see in a static PDF report.

In terms of radiology, a good example is linking scanned-in radiology reports; they're definitely not always linked together within the EHR, even though they relate to the same patient and the same pathway. So if you want to leverage information from both, to answer your question, you have to go through the linking process to get all the information together.

Cheryl Cho-Phan: Great. Thank you. Lev, here's a question for you. Based on your experience, what organizational structure and capabilities need to be in place to use integrated evidence?

Lev Demirdjian: Yeah. I think it's a good question. My initial thought here is the right people and the right teams. If I had to say it in one sentence, it would be the right people and the right teams. Let me explain that a little bit more. I'm a scientist, right? So ideally, I'll be getting whatever data I need to solve the scientific questions of interest, but it's not like we wave a magic wand over a dataset and it's cleaned and processed and available. There's a whole team of engineers and scientists working to harmonize, process, and clean that data and put it in a format that really allows the scientist to answer the relevant questions.

And in parallel, working closely and integrating tightly with the clinical teams, as well as the regulatory teams, to understand what the actual medical or scientific question is and what the regulatory intentions are, these are quite critical ingredients for using integrated evidence. In terms of scientists, too, we have a phrase that we use a lot within JRD Data Sciences: the bilingual data scientist. I really like that because it emphasizes that we value the skill set in data science as well as the subject matter expertise in oncology. So being able to make that communication and that translation, I think, is quite critical.

There's also one other component that comes to mind. It's really having the willingness to push the envelope, having the culture of trying new things and developing new statistical and machine learning methods. When we're talking about multimodal datasets and integrated evidence, it necessarily adds a layer of complexity over a traditional, one-dimensional analysis, right? So just because there are no standard methods out there doesn't mean we can't sit there and invent them. I think having that attitude of developing what we need in order to use multimodal datasets to solve complex scientific problems is also quite important.

Cheryl Cho-Phan: That's a great answer. I love that spirit of innovation. Prashni, here's a question for you. When doing integrated evidence, how do you balance data incompleteness and sample size?

Prashni Paliwal: Thanks, Cheryl. I think that's a great question and we think about both of these components on a day-to-day basis. Especially in real-world data, data completeness can be a big issue and a big challenge. We do have statistical methodologies to deal with some of that, but at the end of the day, for the key variables, you do need a certain level of completeness. And depending on what you're using this data for, the sample size requirements can be based on that.

To me, when you start designing a study, these are the things you want to think about: whether you have the level of completeness and the sample size you need to generate the evidence you're after. Whether you're using it for internal decision-making or for testing a hypothesis for a regulatory use case will determine how you're going to design your study.

So to me, at the core of everything is data. You can't deal with sample size and incompleteness as an afterthought. While you are designing the study, that is what you are trying to look for. And if you look at the FDA guidance that was published recently, it says very clearly that these are the components that, for regulatory use cases, regulators want us to think about when we're designing the study. So I don't think there's a magic wand you can wave to deal with completeness or sample size. You have to be thoughtful about it when you're designing your study.

Cheryl Cho-Phan: That's a very thoughtful answer; it sort of links with the complexity of it all, too. Thank you. Tamara, here's a question for you. How much of your matching and linking is deterministic and how much is probabilistic? A follow-up to that is, under what circumstances can you implement deterministic linkage? I think that's trying to get at privacy issues.

Tamara Snow: Yeah. Another great question. So all of the data products that we've been talking about throughout this discussion are done through deterministic matching, in order to find that exact match across data sources. For the second question, in terms of all the privacy restrictions, you have to be super mindful of them when handling PHI, and that was one of the points I tried to hit on earlier. For example, how we address this in the CGDB is that we use a third party to find matches across Flatiron and Foundation Medicine so we are not sharing PHI across companies.

It's very siloed. We use a third party who helps us do the linking and matching, so neither party ever has any of that information, which helps uphold those privacy guardrails around sharing information while still allowing us to create that exact match across datasets. So we feel confident that we're pairing the right clinical and genomic information to that patient, and you can feel more confident in the accuracy when you're trying to take advantage of these integrated datasets.

Cheryl Cho-Phan: Yeah. I could speak to how much we talk about that, internally, as well. Great. Lev, here's a question for you. Can you speak a bit more about assessing representativeness of the integrated dataset? Are you considering geographic region, country, and diversity of patients, as well?

Lev Demirdjian: It absolutely depends on the specific question we're trying to solve and the specific disease area, but we do consider geographic representativeness. Of course, when we're talking about something like an external control arm, trying to match as closely as possible to the patient population in our trials is something we aim to do, and geographic location could be one component of that. Alternatively, we could be interested, and I hinted at this a little bit, in disparities in care. I think we could really dive into that and get a flavor of it by looking at the geographic representativeness of the patients that make up our patient populations. So I think it's a great point. It's something we do explore in a variety of different contexts.

Cheryl Cho-Phan: Thanks, Lev. Prashni, here's a question. We aim for insights to be generalizable. How do you think about and control for various possible biases, especially selection bias, availability bias, etc., when making inferences from Flatiron data and also the CGDB subset?

Prashni Paliwal: Really good question, again. It's so case-dependent, right? When you're thinking about your analytic cohort and the way you arrive at it, transparency around that, and being thoughtful about how implementing your inclusion/exclusion criteria can introduce selection bias and how to avoid it, is really critical. We also have publications on Flatiron data about representativeness, which give you that picture of how representative our data is. It's quite representative.

However, I think, depending on the scientific question you are answering, sometimes representativeness or generalizability is critical and sometimes it might not be as critical. What we usually do in our Spotlight and other projects for a specific scientific research question is make sure that we do our cohort selection without any exposure to outcomes, and do it in as unbiased a manner as we can. Even for the CGDB, we have done a lot of research around, if you're using treatment data to select a cohort, what kinds of things we need to think about to make sure the cohort being selected is not biased in any manner.

Even if you do all this, this is real-world data; there can be biases for numerous reasons. I think what is really important is, when you are analyzing and drawing inference from this data, to be aware of them and then interpret the data accordingly, with those limitations and caveats in mind. So doing things right and designing the study in a manner that minimizes selection bias, and then, when you are interpreting your results, making sure you are aware of any other biases that could impact your results one way or the other; to me, this is the way you limit the bias in your results.

Cheryl Cho-Phan: Great. Thanks so much, Prashni. Let's see. Okay. I think that may be it. That's a wrap for Episode Two. Thanks, everyone. Thanks very much to all of our speakers for sharing your insights, and thank you to all of you for joining us today. As a reminder, we've got four more episodes over the coming weeks. Next up, on March 30th, we'll be focusing on the future of clinical research. Finally, since we weren't able to get to all of the questions, please know that the lines of communication remain open even after we end this episode; feel free to reach out and get in touch with us at rwe@flatiron.com. And a last friendly reminder to please take the survey upon closing out to help us improve future webinars. Thank you and see you next time. Stay healthy, stay safe.