The Promise of Real-World Evidence
- Real-World Evidence
- Electronic Health Records
- Outcomes Research
- Data Science
- Machine Learning
- Clinical Trials
- Label Expansion
In this opening keynote at the 2018 Flatiron Research Summit, Dr. Amy Abernethy takes a futurist outlook on the promise of real-world evidence, particularly as it relates to the potential impact on the drug development process.
Flatiron Co-Founder Zach Weinberg followed Dr. Abernethy's message with a reminder that the future of RWE is only realized proof-point by proof-point. The first label expansion, the first novel biomarker, the first new drug application with real-world evidence at its core starts with a calculated, educated risk. As we learn more about where and how RWE can accelerate research and improve the lives of cancer patients, we need to look to each other within the oncology research community to learn from our partners’ achievements and challenges.
Amy Abernethy: I'm an old outcomes researcher. I remember the days when the question was, what does overall survival look like for doublet chemotherapy in advanced non-small cell lung cancer? Some of you remember this too. We used claims data. Because we couldn't discern small cell from non-small cell, we found those patients who had received etoposide and said, "That's the small cell cohort," subtracted them from the dataset, and assumed what was left over were non-small cell patients. We then analyzed from there but didn't even have complete enough mortality data to have confidence in our estimates of survival.
Now we have the data. We have the depth of clinical understanding to be able to have confidence that this is, indeed, a non-small cell patient. We've got better and better endpoints data. And, indeed, this is the space we're moving forward in as we talk about real world data science. The outcomes research of old now is evolving into what we call real world evidence, and what we're here to do today is to explore this conversation of real world evidence together.
In December of 2016, the FDA published in The New England Journal a definitional piece to help us know what real world evidence is. In this paper, it reminded us that the purpose and the goal of real world evidence is to capture data about patients and their outcomes in the context of real world settings, the day-to-day care that's received in clinics all over the country and all over the world, what's sitting right in front of us: patients with cardiovascular disease, patients with HIV, patients who otherwise might have been excluded from our traditional clinical trials.
They also reminded us that real world evidence is based on data coming from many different places: electronic health records, claims, genomics datasets. They reminded us that real world evidence isn't just retrospective. It can be prospective data such as pragmatic clinical trials. But they also reminded us that the totality of the evidence, in order to know how to care for this patient, is a combination of real world evidence plus traditional clinical trials to understand with confidence how drugs perform and how we can care for patients in the future.
This conversation, this move from outcomes research to RWE, was actually enabled by the HITECH Act. In 2009, the HITECH Act made it so that Medicare could provide incentives to oncologists and physicians all across the United States to put electronic health records into their practices. What we saw in oncology was the uptake of EHRs from 10% to now over 95%. It means that there is a pool of data now being collected as a part of every single interface with a patient that reflects what happens in the real world.
So, we've got an immense set of data, and as Flatiron, and as you, we started analyzing those data and we could track what happens, such as the uptake of immunotherapy and targeted therapies in lung cancer. But we still had a hard time, and continue to have a hard time, having confidence about exactly what types of research, and what purposes, we can put our real world data science to. Until 2016. In 2016, the 21st Century Cures Act was signed, and this has really been a game changer for us. It's now not just pie in the sky that we can use data from electronic health records that have been carefully cleaned up to start to understand how patients perform; in fact, there is a piece of legislation that compels the FDA to help give us confidence about when we can use these data, to write guidances and help us understand, for example, whether or not these data can be used for label expansions, post-marketing commitments, and other purposes. This has really been a game changer. Which brings us to today and the Summit.
The whole idea is to explore where we are together and imagine the future, imagine 2023. At least in my head, I imagine 2023, five years from now, as a time when there are standards and guidances that we've all helped to inform that give us guardrails about when we can use real world data and real world evidence for regulatory submissions and other key research output, where I imagine that we're taking retrospective data, that data that's already being collected in electronic health records and other places, and merging it with prospective data that we intentionally collect after patient consent in order to get a full picture of this drug and how it's performing. I imagine that the conversation isn't about the exact data anymore, but rather we acknowledge that we have the data available and we can pull from multiple different data sources to answer the question at hand. And the real conversation is now about confident analyses, interpretation of results, and getting those results into practice every single day.
I also imagine, in 2023, that you're answering the questions that matter to you and your business most in days and weeks, not months and years, where the R&D dollar goes further and farther and faster. And so, really, what I see, as I get to 2023, is that we're gonna walk a journey of the evolution of real world evidence and real world data science together and that together we're gonna start to inform what it looks like and how we have confidence in the findings.
So, let's look at this a little more closely. What are the different perspectives, the different jobs, that come together in order to make 2023 happen? Well, one of the roles, my role, is the outcomes researcher. Maybe many of you here in the audience are outcomes researchers, HEOR, the OR person on the team, and I think as outcomes researchers, we're used to thinking about working with large datasets and large populations. But usually, it's one dataset or another dataset. In the outcomes research of the future, our work is going to extend so that we're talking about the intersection across datasets, retrospective and prospective, and we're thinking about how we take advantage of that intersection to move the space forward. As outcomes researchers, we have a larger portfolio of endpoints, and we think about the multiple different kinds of endpoints that we can use, because we've also got those intersected datasets with more and more endpoints in them. And then, interestingly, the outcomes researcher of tomorrow isn't thinking just about datasets with 50,000 patients but, actually, datasets with 50 patients. And so the outcomes researcher of tomorrow is a population researcher and a small cohort researcher.
What about the data scientist of today and tomorrow? I'm gonna put on my data science hat, and I think as data scientists, today we're gonna continue to talk in the language of informatics and data pipelines and analysis and analytic techniques, and tomorrow we're gonna be responsible for a larger portfolio of mathematical applications to make sense of the data. We're gonna have advanced statistics coupled with machine learning and other approaches so that we know with confidence what this information means. As the data scientist of tomorrow, I'm gonna need to be responsible for understanding the features of every single data point and how that data point performs. Because not all data is equal in terms of its quality and reliability, and I need to think about the differences across the longitudinal landscape of what's happening in healthcare. And, interestingly, the role of the outcomes researcher and the data scientist are gonna start to blur. The two are gonna work together because we're all part of the same team, and we each have a different perspective and inform the results.
Now, I'm gonna put on my clinical trialist hat. So, as I think about my role as a clinical trialist and clinical trial design, I'm usually thinking about how do I plan that randomized control trial or adaptive trial to get to the result and answer the question at hand? As a clinical trialist of tomorrow and in 2023, I'm gonna be thinking about a combination of my prospective clinical trials approaches, some of which are gonna look more traditional, essentially explanatory trials, and some of which are gonna look more pragmatic, but I'm also gonna be thinking about how, as a clinical trialist, I leverage retrospective real world data in the best way possible so that I only spend my clinical trial's resources on the part that has to be done prospectively in a traditional trial or pragmatic trial. And, interestingly, again, the role of the outcomes researcher and the data scientist and the clinical trialist are gonna start to blur. We're each gonna have a unique perspective, but we're gonna bring those perspectives to the table and be able to talk within a similar language so that we all understand each other.
And now I'm gonna put on my clinician hat. As the clinician at the table, my job is to provide clinical input, to shape the landscape of what's important to answer, and also to help interpret results and communicate them. Tomorrow, I'm not just a clinician but a clinician who also has to speak the language of data science and understand outcomes research. I've probably had a long history in clinical trials, but I need to now think about the full methodological landscape. I really can't provide good input and help with good decision making without that.
But then, as those roles blur in 2023, still with my clinical hat on, I'm gonna be the person who helps to communicate this in a confident way to other clinicians and also to talk about what this means for patients and how we talk about it in a clinic.
And, importantly, we can't forget the other really critical hat at the table, the business hat and the executive hat, who also enable and push this whole space forward. Today, as a business person and an executive, I'm taking calculated, conservative risks because my responsibility is to advance the strategic aims of my company, to make sure that the bottom line improves, and to make sure that we do this in a way that there aren't risks that put us at peril. As this space moves forward, I need to be careful about how I advance my company's agenda within the space of real world evidence and where I support it and how I push those risks forward. But I am gonna have to take risks. Today, I'm taking risks, and I'm gonna take more as we move to 2023. And so, for example, being able to support teams by saying, "Real world evidence needs to be a part of your regulatory package, and I'll help you and make sure that there are funds behind you to allow that work to get done."
I imagine in 2023, because real world evidence and this evolving landscape of evidence generation is applicable across my entire company, I will start to be thinking about how do I have discovery sciences and early R&D talk to late stage development and outcomes research. Because they're all part of the evolving and continuous conversation. I'm gonna think about how I provide the governance framework and a set of confident guidelines so everybody knows how to work together. This is the landscape we see evolving.
So, for many of you in the audience, some of the things that I've described are what you're already seeing happening at your company or in your roles and in your studies today. For others, you haven't actually had the chance yet to see this inside your own company, although you're having the conversation. The goal of the Research Summit is to give us a forum where, essentially, in a pre-competitive way we can talk about where all these things are and how we move this conversation forward together, acknowledging that in 2023, we really are a blended team with blended tasks and individual perspectives that we're gonna bring to the table in a confident way.
So, as we imagine 2023 and the kinds of questions that might be answered, come on a journey with me. Alright. In December of 2016 or so, I was at my first FDA meeting as a Flatiron employee. The FDA executive who was sitting at the end of the table said something that I have never been able to get out of my head. As a matter of fact, I remember the moment so well I was sitting to his right. And he said, "When we first started talking, as the FDA, about real world evidence, we thought we were talking about population science and big, big datasets. And what we're seeing right now is that the real world evidence of today is small cohorts.”
It's interesting when you look at 21st Century Cures, because 21st Century Cures reminds us of that too: one of the use cases it points to is label expansion. And when we think about label expansion in oncology, what we realize is most of the label expansion opportunities are gonna be crossing boundaries of indications, often within the context of mutation-informed science. And so these are tiny cohorts in the context of precision medicine. It's 2023, or maybe 2018, and you have a drug in your portfolio. I'm gonna call it Icanimab. Don't you like that? So, Icanimab has been developed for melanoma that is JAK2 mutated. And it's now been approved for that indication, it's been on the market for about two years, and you notice that oncologists around the world, but maybe a few in the United States, have started to prescribe it for patients who have got JAK2 mutated gastric carcinoma. Now, JAK2 mutated gastric carcinoma only occurs in about 1% of gastric carcinoma patients. You're scratching your head, and then you realize that at ESMO last year there was an abstract from one of the French groups that had suggested that Icanimab works in this scenario. And you see that there is now some increasing use of Icanimab since ESMO 2017 and, in fact, it ticks up yet again when there's another abstract at ASCO GI back in February. And so what you're seeing is increasing use of Icanimab for JAK2 mutated gastric cancer. You're wondering, as a team working together inside your company, is this an indication we should be going after? And as we're thinking about going after this indication, do we do this with retrospective data or do we need to do a prospective trial? Is this an indication that's even big enough for us to worry about? And does this drug even work well enough in this particular clinical setting?
I go back to that meeting at the FDA in December 2016, and the FDA executive is sitting to my left, and I remember he said, "In this world of small cohort science, no cherry picking." He was very clear. Because they don't want to just see those cases that we've chosen that look really good. We want to know, in the totality of all patients with JAK2 mutated gastric carcinoma who've received Icanimab, how do they perform? And, in fact, how do they perform against patients who haven't seen Icanimab? And so, you need to know, in working together with Flatiron or any real world data company, how do I have confidence that all of the patients that you possibly can get your hands on, all cases, are reflected in this cohort?
At Flatiron, we've been working on a series of approaches. In 2018, those approaches include diving into our datasets of patients with gastric carcinoma, identifying patients whose next-generation sequencing shows the JAK2 alteration, and finding that cohort. In the future, it may include more and more machine learning and other approaches to search far and wide across our datasets for additional patients who perhaps received next-generation sequencing at places like Harris and other places that we can't always see. But, importantly, one of the key features of this small cohort science and these small indications is gonna be making sure that we can find all of the patients. A second key feature of this story, today and in the future, is that the datasets are gonna need to be complete. It turns out that when we go looking for these JAK2 mutated gastric cancer patients, we only find 60, so you need to make sure that for every patient in that cohort you have as much of the information available as possible. How you know it's complete is gonna be really critical, and so today at Flatiron we build data visualization techniques where everybody can look at the same screen, see all data available, and make sure that we all believe that the data that are needed are, indeed, available for each individual patient in the overall dataset. So, the data tell the complete story.
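The two steps described here, finding every patient with the alteration and then auditing field-level completeness, can be sketched roughly as follows. Everything in this snippet is illustrative: the record layout, field names, and values are invented for the example and are not Flatiron's actual data model.

```python
# Illustrative sketch of small-cohort selection plus a field-level
# completeness check. Record fields and values are hypothetical.

def select_cohort(patients, disease, gene):
    """Keep patients with the disease whose NGS report shows the alteration."""
    return [
        p for p in patients
        if p["disease"] == disease and gene in p.get("ngs_alterations", [])
    ]

def completeness(cohort, fields):
    """Fraction of patients with a non-missing value for each field."""
    n = len(cohort)
    return {
        f: sum(1 for p in cohort if p.get(f) is not None) / n
        for f in fields
    }

patients = [
    {"disease": "gastric",  "ngs_alterations": ["JAK2"], "stage": "IV"},
    {"disease": "gastric",  "ngs_alterations": ["HER2"], "stage": "III"},
    {"disease": "melanoma", "ngs_alterations": ["JAK2"], "stage": "IV"},
    {"disease": "gastric",  "ngs_alterations": ["JAK2"], "stage": None},
]

cohort = select_cohort(patients, "gastric", "JAK2")
print(len(cohort))                      # 2 patients in this toy example
print(completeness(cohort, ["stage"]))  # {'stage': 0.5}
```

A completeness table like this is essentially a machine-readable version of the shared data-visualization screen the talk describes: everyone can see which fields are missing for which patients before any analysis starts.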
Another thing about this landscape of small cohort science is that the analysis is gonna be both qualitative and quantitative. What you find when you're looking at this analysis is that for the patients who have seen Icanimab versus those patients who have not, the overall response rate seems to be double. But the Kaplan-Meiers are not statistically significant, and you're not quite sure what to make of it. However, because you've got the deep dive of reviewing the narratives in the chart, redacted documentation of what, for example, the medical case notes show, the radiology reports, or the pathology reports, you can now have, with confidence, an interpretation to put those overall response rate and Kaplan-Meier results into context. The clinician on your team helps you read that information. The quantitative scientist on the team helps you make sure that all of those data points are reliable and that the analysis is done well. And the clinical trialist on the team starts to think about what the clinical trials of the future might look like. Now, together, with all the team, you start to decide: should we plan a prospective study? Should we submit these data to the FDA? How do we think about this? And that's a decision as a team.
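The two quantitative reads mentioned here, overall response rate and a Kaplan-Meier estimate, can be computed in a few lines. This is a toy sketch with invented data; a real analysis would also need censoring-aware hypothesis tests (for example a log-rank test) alongside the qualitative chart review the talk emphasizes.

```python
# Toy sketch: overall response rate (ORR) and a product-limit
# Kaplan-Meier survival estimate. All data values are invented.

def orr(responses):
    """Fraction of patients with a complete or partial response."""
    return sum(1 for r in responses if r in ("CR", "PR")) / len(responses)

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each event time.
    events[i] is True for a death, False for censoring."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        n_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

print(orr(["CR", "PR", "SD", "PD"]))  # 0.5
# Times in months; the second patient at month 5 is censored.
print(kaplan_meier([3, 5, 5, 8], [True, True, False, True]))
```

With 60 patients, the curves separating but the test not reaching significance is exactly the situation described above, which is why the narrative chart review carries so much interpretive weight in small cohorts.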
Now, you might be wondering, does Icanimab work in JAK2 mutated patients beyond melanoma and gastric cancer? I know I was certainly wondering that too, and maybe it's uterine cancer, maybe it's sarcoma, maybe it's brain or lymphoma. So, the availability of datasets where you've now got, essentially, cross-tumor visibility by individual mutation becomes really important. At Flatiron, we have something called the Clinicogenomics Database, or CGDB, which gives that level of detail across diseases and also by individual genomic alteration. Importantly, we need to keep updating our processes, as Flatiron but also as a team of scientists working together. So, for example, at Flatiron, one of the things that we're working on within the context of the Clinicogenomics Database is building a pan-tumor data model so that you can see, with all the details about the disease, by uterine cancer and ovarian cancer and sarcoma and gastric and melanoma and brain, pan-disease endpoints that you can look at with confidence so that, ultimately, you can think about a tissue agnostic indication for your JAK2 inhibitor.
Imagine 2023, where we can actually take that dataset, the retrospective clinicogenomics dataset, and pair it with consent from patients and prospective data collection so that after consent we have biologic samples that you can now use to inform your discovery sciences. We have the addition of other details about the patient's clinical care that may be helpful in telling your story. You can plan for further clinical trials. And maybe one of the clinical trials that you plan for is a single arm study of Icanimab in JAK2 altered gastric cancer. It turns out that, as many of you know, gastric cancer is almost 10% of cancers worldwide. This might be a huge indication for you, and it feels so high-risk and important that you've decided you wanna do the single-arm trial in order to complement your RWE package. We might work together with you as Flatiron to find all those JAK2 mutated gastric cancer patients in the US. You might reach out to other data networks. And then, at the Summit, you'll hear about work being done to start to think about how we can build along with you external control arms that can act as your comparator in your JAK2 gastric cancer single-arm trial. As we do that, you're gonna hear methods that we've been developing and that others have been developing to understand how to best align the inclusion and exclusion criteria, how we can think about the best way to, for example, appropriately balance and match groups, and the endpoints and the confidence that you can have within the endpoints in these external control arms.
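The mechanics of building an external control arm, applying the trial's inclusion and exclusion criteria to real-world patients and then matching them to trial patients, can be sketched very simply. This is a hypothetical illustration: the eligibility rules, the single matching covariate (age), and the greedy 1:1 nearest-neighbor approach are stand-ins, since real methods balance many covariates, often via propensity scores.

```python
# Hypothetical sketch: eligibility screening plus greedy 1:1
# nearest-neighbor matching on age. Criteria and data are invented.

def eligible(p, min_age=18, allowed_ecog=(0, 1)):
    """Mirror simple trial inclusion criteria (illustrative only)."""
    return p["age"] >= min_age and p["ecog"] in allowed_ecog

def match_controls(trial_arm, real_world):
    """Pair each trial patient with the closest eligible real-world control."""
    pool = [p for p in real_world if eligible(p)]
    matches = []
    for t in trial_arm:
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c["age"] - t["age"]))
        pool.remove(best)  # sample without replacement
        matches.append((t["id"], best["id"]))
    return matches

trial = [{"id": "T1", "age": 61}, {"id": "T2", "age": 48}]
rw = [
    {"id": "R1", "age": 60, "ecog": 1},
    {"id": "R2", "age": 75, "ecog": 3},  # excluded: ECOG too high
    {"id": "R3", "age": 50, "ecog": 0},
]
print(match_controls(trial, rw))  # [('T1', 'R1'), ('T2', 'R3')]
```

Even this toy version shows why aligning inclusion and exclusion criteria comes first: the matching step can only ever be as fair as the eligibility screen that feeds it.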
Importantly, this is an evolving conversation and not meant to say, "Here is the final," or, "This is the way to do it." And I think it's a really important example, this concept of external control arms, of where we need to think about this together. But as we look towards 2023, we might even be thinking further. Not just about using retrospective real world data to form an external control arm to complement your single-arm study, but maybe you still need to randomize some patients prospectively so that what you've got is a five to one or a three to one randomization and then you're using the retrospective data to merge with the prospective data in that randomized trial to build out a full hybrid control arm of the future. And that's where I think the story is really starting to go. So, these stories may sound optimistic, but we have many examples evolving and emerging today. They're within our reach. We need to work together to make it happen. It's gonna make all of our science better, it's gonna make all of our drug development programs move faster, and I really think it's gonna be a world that we all get to engage in. So, that really is the purpose of the Summit, to have that conversation. Because, at the end of the day, it's the patient today for whom we do this, and it's the patient of tomorrow with whom we do this. And this is the purpose of real world data science. So, I implore you to have conversations, share best practices, have a lot of fun, and thank you so much for being here.
Zach Weinberg: Hi, everyone. Really sucks to follow Amy. It's like the hardest job in the world. I always like to start with a little story. So, I called my wife last night and I said I've got two really big problems. Number one, Amy is talking in front of me, and number two, I don't have a joke. And I need a joke. And she goes, "Well, who's in the audience?" And I said, well, it's mostly physicians and scientists and engineers. She gave me the best advice possible. She said just, "Well, pander to the audience." So, here's my joke. It's not gonna be good. Why are statistical programming languages the best programming languages? Because they are. Okay. It's a pretty good one, I think. So, first and foremost, I want to join Amy in welcoming everyone to our first Research Summit. Our expectation, our hope is that this is gonna be a recurring summit, something we do multiple times, either maybe once a year or once every two years, but thank you everyone for joining us today.
So, Amy talked about where real world evidence is going, what the 40,000 foot view looks like over the next five years. What I wanna do is kind of zoom in and talk about the 400 foot view, and in particular, what we're gonna do over the next 48 hours and how we're gonna talk about real world evidence today and tomorrow. So, I want to start, first and foremost, with why are we here in Washington, D.C.? Or, I guess, technically, Maryland. Two trends I want to cover. The first trend has to do with the growing demand for evidence in oncology. And for folks that have worked with Flatiron before, you've probably seen some version of this slide, but generally speaking, we have three key trends that are increasing the demand for evidence in oncology and, in particular, in oncology therapeutics.
First and foremost, we have more drugs coming to market than ever before. R&D is getting better, tools for scientists are getting better, and we expect to see the number of new therapies that are in development grow exponentially. Second is we're beginning to use those therapies in combination with each other, with different mechanisms of action. And then, finally, we have better diagnostic tools such as next-gen sequencing in order to sub-segment individual populations. So, this is like a great problem to have, right? This is the science moving ahead. But when I look at this, and when we first learned about this problem six or seven years ago, I thought, this is a math problem. This is multiple therapies in combination with each other in really novel cohorts. The number of questions that we're gonna need to answer is going to grow, and it's not gonna grow linearly, it's gonna grow exponentially. And this is what's coming.
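The "math problem" here is easy to make concrete: questions multiply across therapy combinations and biomarker-defined cohorts. The counts below are arbitrary, chosen only to show the shape of the growth.

```python
# Back-of-envelope illustration: the number of evidence questions grows
# multiplicatively with drug combinations and biomarker-defined cohorts.
from math import comb

def n_questions(n_drugs, combo_size, n_cohorts):
    """Distinct (therapy combination, biomarker cohort) questions."""
    return comb(n_drugs, combo_size) * n_cohorts

# Doubling the drug count far more than doubles the pairwise questions:
print(n_questions(10, 2, 20))  # 45 pairs x 20 cohorts = 900
print(n_questions(20, 2, 20))  # 190 pairs x 20 cohorts = 3800
```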
What's interesting about this is it's not just a pharma problem, it's not just a biotech problem or a physician problem or a patient problem. This is a regulatory challenge, as well. And groups like the FDA or EMA or payers or health authorities are all recognizing that this is going to be a challenge for them over the next 10 years. How do we make decisions? What's the dataset that we have to make those decisions?
So, Amy talked about this rapid adoption of electronic health records in oncology. It is actually true in healthcare, broadly. This is the second trend. And what's interesting about this trend, and I'll tie it back to Flatiron for a second, had this trend not happened, had the HITECH Act not been passed, I don't think Flatiron as a company would have been able to exist. This is the infrastructure. This is kind of the fundamental shift in how data is captured that's allowed Flatiron as a company to even appear in the first place. And I've said this publicly and with Nat and our investors. If we had tried to start this company even maybe three or four years before we did, I don't think it would have worked. But now, what's interesting about this EHR adoption is we can think about aggregating source data or electronic health record data at scale, and we can do this without having to get on a plane and go to a cancer center and pull a chart out of a manila folder. We can actually begin to use computers to aggregate this data.
But as everyone knows, the evidence is only as good as the source data, and there's a tremendous amount of post-processing work that we need to do to get from source to regulatory-grade. And in particular, there's two problems here. One is rigorously processing the individual data points for high quality. The other is developing analytic methods in order to use this data, because it is observational and it is retrospective.
So, the goal for the next 48 hours and subsequent conferences, as well, is to focus, in particular, on this middle layer where that question mark sits. So, for Flatiron, and again for the next 48 hours, what we want to do is bring transparency to this process. What is in that question mark? What are all the steps that are happening? How are we doing things? And actually show, both with Flatiron examples as well as with our partners' examples, of what's going well and also what's not going well.
I want to talk for two seconds about what this process looks like. I pinged our product team and said, hey, can you give me just a timeline view of what happens from source data to analytics and interpretation? And she pinged back and said, "Well, this is gonna be a really complicated thing to put on a slide because there's actually a tremendous number of steps." This is like the 20,000 foot view of what actually goes on. We take an initial study design, we select a cohort, we do a whole bunch of work to actually generate that dataset, we do QA and QC on the dataset, it loops back around, and then, finally, we do a set of analytics and interpretation. But there is a tremendous amount of depth and detail in each and every single one of these steps, and that's what we're gonna focus on: actually going into that detail.
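The high-level flow described here, design, cohort selection, dataset generation, QA/QC with a loop back, then analysis, can be sketched as a simple pipeline. Every function and field name in this sketch is a stand-in, not Flatiron's actual process.

```python
# Simplified sketch of the described pipeline: design -> cohort ->
# dataset -> QA/QC (looping back on failure) -> analysis.

def run_study(design, patients, qc_check, analyze, max_qc_loops=3):
    """Run one study end to end; re-abstract if QC finds problems."""
    cohort = [p for p in patients if design["include"](p)]
    dataset = [design["abstract"](p) for p in cohort]
    for _ in range(max_qc_loops):
        problems = qc_check(dataset)
        if not problems:
            return analyze(dataset)
        # In practice, a failed QC round sends records back for re-abstraction.
        dataset = [design["abstract"](p) for p in cohort]
    raise RuntimeError("QC did not converge")

design = {
    "include": lambda p: p["disease"] == "gastric",
    "abstract": lambda p: {"age": p["age"]},
}
result = run_study(design, [{"disease": "gastric", "age": 60}],
                   qc_check=lambda d: [],          # pretend QC passes
                   analyze=lambda d: len(d))       # trivial "analysis"
print(result)  # 1
```

The point of the loop is the one the slide makes: QA/QC is not a final gate but a cycle that can send work back upstream before any analysis runs.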
So, before we do that, I wanna talk for two seconds about what the conference is not. This is not meant to be a Flatiron sales conference. You've heard from Amy and myself, and from here on out we wanna talk about the science. It's not a dog and pony show. Even though it may look like one. That's not a real dog with a real hat, by the way. So, while we do have Flatiron presentations, we also have over 19 non-Flatiron presenters for this conference, and for future conferences we hope that number increases over time. We want the percentage of people up on the stage that are Flatiron employees to actually come down as the science gets better.
So, let's talk about the goals for two seconds. Three core goals, and these won't come as a surprise. First and foremost, we wanna pull back the curtain on real world data and analytic methods. So, how do we actually do this? Let's show some of the details. The second is we wanna talk about applications of real world evidence. So, Amy laid out a compelling five-year vision. What are the current applications? What's working, and also what's not working? Where does RWE not actually have a place? And then, finally, what we're excited about, in getting everyone in a room together, is that this is the first inning. This is a long journey here over the next 10 years, and we wanna hear from our partners and our customers on different ideas, ways that we can share data and methods and learnings so we can improve this and do this work faster.
So, I'm gonna anchor, hopefully, the rest of the conference on this idea called the regulatory-grade quality checklist. Some folks may be familiar with this idea. We published it in December, I believe, and we've made some tweaks and improvements based on feedback and additional learnings, so this is a publication that we expect to live and to grow as we learn more. So, I wanna start with what the checklist looks like today. And, in particular, what I wanna do is define each of these individual terms. So, what do we mean when we say clinical depth or provenance or scalability?
So, I'm gonna start with clinical depth. Clinical depth is asking the question does the dataset capture the important features and clinically relevant characteristics of disease, of treatment, and of outcome? For completeness, is the information available on a sufficient proportion of patients to enable clinically meaningful analyses? Have we benchmarked this data against some sort of gold standard? Longitudinality, do we have the ability to follow an individual patient longitudinally over time throughout their disease course? So, from diagnosis through outcome. Timeliness or recency, can we monitor the evolving treatment landscape and can we do analyses and access insights on a timely cohort? So, is this a cohort that is 30 days old or two years old or five years old? Provenance, do we have traceability through the stack? Can we actually take an individual data point and trace it all the way back to the initial source document? Scalability, can we actually take this data model and can we scale it to larger cohorts, to different populations? Can we do this across different source systems, so across multiple electronic health records? Generalizability, is the population representative of the broader patient population of interest? Have we found potential biases? Have we identified them? Have we addressed them? And then maybe most importantly, quality monitoring. So, do we have processes and methods to actually check for data accuracy and quality?
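One way to make a checklist like this operational is to encode each dimension as an automated check over dataset metadata. The dimension names below follow the talk, but every threshold and metadata field is invented purely for illustration; the real checklist items are qualitative questions, not fixed cutoffs.

```python
# Hypothetical sketch: checklist dimensions as automated metadata checks.
# Thresholds and metadata fields are invented for illustration.

CHECKLIST = {
    "completeness":     lambda m: m["pct_with_stage"] >= 0.85,
    "recency":          lambda m: m["days_since_refresh"] <= 30,
    "provenance":       lambda m: m["source_links_traceable"],
    "generalizability": lambda m: m["n_practices"] >= 50,
}

def audit(metadata):
    """Return the checklist dimensions a dataset fails."""
    return [name for name, check in CHECKLIST.items() if not check(metadata)]

meta = {"pct_with_stage": 0.9, "days_since_refresh": 14,
        "source_links_traceable": True, "n_practices": 12}
print(audit(meta))  # ['generalizability']
```

Encoding the dimensions this way also makes "quality monitoring" concrete: the same audit can re-run on every data refresh instead of once per study.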
This is a public document, something we are putting out into the field, and we hope it guides the conversation around real world evidence over the next few years. So, for the next two days, our goal is to take this checklist and bring it to life through the various sessions, getting into the details. Before we get to the actual substance, I want to give a quick preview of some of the upcoming sessions and how they relate back to the checklist. This is not meant to be an exhaustive overview; these are just a few of the sessions, but I want to highlight a few things. In the next session, the one following me, we're going to talk about leveraging real world evidence for regulatory use. We're going to focus on a few key features of regulatory-grade RWE: completeness of data, provenance of data, and quality monitoring systems. You're going to hear from leaders from FDA, Amgen, and Janssen, and get their perspectives on using real world evidence for regulatory purposes. Maybe the one I'm most excited about is our machine learning and NLP session. Our Flatiron engineering and product teams are going to talk about where ML and NLP work, where they don't, and what the right ways are to apply these modern technologies. We're also going to talk about how Flatiron uses these approaches to reduce the burden of data processing, how we scale our cohorts, and how we identify sub-cohorts using novel machine learning techniques.
Tomorrow, there are going to be two sessions, in the morning I believe, about constructing external control arms. This is obviously one of the most impactful, if not the highest-impact, opportunities in real world evidence, but it's still early days, so we want to share our learnings about what's working and what's not. We're really optimistic about the opportunity to use real world evidence as an external control, as a complement or potentially a supplement to clinical trial data, and we want to talk about how we're actually doing this.
Maybe the most interesting thing at the conference is our Abstraction Lab. Most folks know about unstructured data processing, and they realize that we at Flatiron use a massive distributed team of expert workers to look through this data, a process we call abstraction. What most people have never actually seen is how that process works under the hood; they've never seen our tools. So, right outside the conference we have a set of essentially fake patients where you can come in and actually use the Flatiron tool set to do abstraction. Most importantly, we're going to track who is the best at this. Gotta create a little bit of competition here. We're also going to give out prizes to the top three winners. I was asked earlier if the prize would be free data. I asked our CFO; he laughed and didn't even give me an answer, so I think that's a no. But we are going to give out prizes to the top abstractors during the session.
And then, finally, tomorrow afternoon we're going to close the conference with Dr. Bobby Green, who you saw in the video earlier. Bobby is a medical oncologist and a Flatiron employee who oversees the clinical accuracy and the feature set for physicians in our electronic health record. What's really interesting about Bobby is that he is also a practicing medical oncologist: he sees patients one day a week at a clinic in Palm Beach, Florida, and he's an active user of the EMR. So, when you watch Bobby do rounds with patients and do his documentation, the system he's putting that data into is Flatiron's own EHR. He's going to talk about the potential impact of real world evidence on patients and how to get better therapies to market faster.
So, we are really excited to have everybody here. We thought attendance was going to be more like 100, and we got to about 170 or so, which is great. There are 33 different organizations represented, and we have 13 case studies. This conference has already somewhat exceeded our expectations in terms of attendance, people, and cases. Most important for us are the non-Flatiron presenters, and we have 19 of them. As I mentioned before, we hope that in future sessions the majority of speakers are non-Flatiron employees, so many that they no longer fit on one slide, or maybe the heads get really small, some way we can fit 50 or 60 people on the slide. And then finally, before I hand it off, I just want to thank everybody for attending and, in particular, for sharing. I know that in many of the sessions we're going to talk about things that customers typically would not share with each other. This is what we're really excited about. We think these case studies, these examples, these methods need to come out into the open, and we need to see transparency in how everyone is doing things.
Flatiron Research Summit
November 7, 2018
- Amy Abernethy, MD, PhD — Former Chief Medical Officer, Chief Scientific Officer & SVP Oncology at Flatiron Health
- Zach Weinberg — Co-Founder, President & COO at Flatiron Health