
David Rhew | Advancing health equity through data & AI

On December 22, 2022, the Peking University Global Health and Development Forum 2022 was held with the main theme of Digital Transformation and Development Divides. Co-organized by the Beijing Forum, the Asian Development Bank, and the PKU Institute for Global Health and Development, the Forum brought together world-leading scholars, policy researchers, and industry leaders from China and the international community to share their insights and recommendations on the thematic topics, attracting over 10,000 online viewers. David C. Rhew, Global Chief Medical Officer and Vice President of Healthcare for Microsoft, delivered a keynote speech at the session on Health Digitalization and Development Divides.


I'm a physician technologist and a health services researcher. For the past 25 years, I've been spending my time investigating how technology can be used to improve access to care, quality of care, patient safety and ways that we can improve the experience for both patients and providers.

There are three things I've observed during my time investigating healthcare, and they apply across the world. First, there is significant variation in care. Second, a lot of that variation is unwarranted. Third, those who are most vulnerable, those with the fewest advantages, are often the ones who receive the lowest quality of care and have the greatest opportunity for improvement.

What I'd like to talk to you about is how we can close those gaps using data and AI to address health equity. The concept of health equity is something many have talked about. Generally, it means giving every individual a fair opportunity to achieve the highest level of health. In many ways, we have approached this by starting with equality: providing a level playing field and making sure that everyone has equal opportunities. But in doing so, we have found that not everyone starts in the same place, and there are barriers.

So as we think about things such as artificial intelligence, we need to do more for certain individuals so that they can have the same opportunities as others. That brings us to the different sub-categories I'd like to discuss with you. As we think about how we can achieve health equity through data and AI, I would first like to talk about the opportunity to apply responsible AI.

The others are making AI more accessible; including more diversity in the data sets that support AI algorithms; ensuring that what we build is trustworthy for the individuals and the people applying it; and finally, making sure it actually has an impact on the populations of interest.

Let's jump right in and talk a little about responsible AI. Responsible AI has many different facets. One is simply whether the algorithm performs as designed: what is its level of error? We can also talk about fairness, counterfactual analysis, and critical decision-making. For each of these, we now have open-source tools we can use to determine whether an algorithm is performing in a responsible manner.

Let's use an example. One of these open-source tools is Fairlearn, which comes from the Microsoft responsible AI toolkit and is available for free. Fairlearn has many facets, but the one I want to highlight is that we can take AI algorithms and map them across different subpopulations to examine both the accuracy of each algorithm and its disparity, or divergence, between groups. If you look at the graph to the right, what we want is the highest level of accuracy with the lowest level of divergence. As we map out the different algorithms, you can see that each performs slightly differently, but we may want to choose, for a particular population, the one with the best mix of high accuracy and low disparity.
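As a minimal sketch of how such a mapping might be produced in practice, assuming a set of already-trained candidate models and a held-out test set with a subpopulation column (all names below are illustrative), Fairlearn's MetricFrame can report both overall accuracy and the accuracy gap between groups:

```python
# Minimal sketch: score candidate models by overall accuracy and by the
# accuracy gap across subpopulations, then compare the trade-offs.
# The models, data, and the subpopulation column are illustrative assumptions.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

def accuracy_and_disparity(model, X, y, sensitive_features):
    """Return (overall accuracy, largest accuracy gap between subgroups)."""
    y_pred = model.predict(X)
    frame = MetricFrame(
        metrics=accuracy_score,
        y_true=y,
        y_pred=y_pred,
        sensitive_features=sensitive_features,
    )
    return frame.overall, frame.difference()

# candidate_models: a hypothetical dict of trained classifiers, e.g.
# {"logreg": ..., "gbm": ...}; ethnicity is a hypothetical subpopulation column.
# results = {name: accuracy_and_disparity(m, X_test, y_test, ethnicity)
#            for name, m in candidate_models.items()}
# Favor models that sit in the high-accuracy, low-disparity corner of the map.
```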

The other thing about responsible AI is that we are on a journey, and that journey starts with developing and validating the algorithms. There may be a process for getting them cleared through government agencies. But then it is all about adoption, real-world evidence, and understanding what impact this is having. As you can see from this graph, we are really on the left-hand side: a lot of this is still in the R&D phase, where we are just starting to develop and validate algorithms. But as we move from research to deployment, there is a big opportunity for us to understand the impact AI is having on the various populations of interest.

I would also like to talk about accessibility. Accessibility is something we often think of simply as making it easier to access the technology. But for certain individuals, that technology can only be used if we do more. For instance, if you have low vision, decreased hearing, challenges with understanding and interacting with people or with learning, or limited mobility, these are all factors that can affect your ability to gain access to these technologies and make good use of them.

What we have realized is that we can do more to help these individuals achieve a higher level of function so they can benefit from the technologies that we're building. I want to give you a couple of examples. The first is around how we can help those with visual impairment through the use of AI. I'm going to show this video and give you a sense of how this can be used.

Example one:

I lost my sight when I was seven. Shortly after that, I went to a school for the blind, and that's where I was introduced to talking computers. That really opened up a whole new world of opportunities. I joined Microsoft 10 years ago as a software engineer. I love making things which improve people's lives. One of the things I've always dreamt of since I was at university was this idea of something that could tell you at any moment what's going on around you. Narrative from the app: I think it's a man jumping in the air, doing a trick on a skateboard.

I teamed up with like-minded engineers to make an app which lets you know who and what is around you. It's built on top of the Microsoft Intelligence APIs, which makes it so much easier to make this kind of thing. The app runs on smartphones, but also on the Pivothead SMART glasses. When you are talking to a bigger group, sometimes you can talk and talk and there's no response, and you think, "Is everyone listening really well or are they half asleep?", which you never know. Narrative from the app: I see two faces, a 40-year-old man with a beard looking surprised, a 20-year-old woman looking happy.

The app can describe the general age and gender of the people around me and what their emotions are, which is incredible. One of the things that's most useful about the app is the ability to read out text. For example, the app would say: Hello, good afternoon, here's your menu. I can use the app on my phone to take a picture of the menu, and it will guide me on how to take the correct photo. Narrative from the app: Move camera to the bottom right and away from the document. Then it will recognize the text and read me the headings. Narrative from the app: I see appetizers, salads, paninis, pizzas, pastas.

Years ago, this was science fiction. I never thought it would be something that you could actually do, but artificial intelligence is improving at an ever faster rate. I'm really excited to see where we can take this.

As engineers, we're always standing on the shoulders of giants, building on top of what went before. In this case, we've drawn on years of work at Microsoft Research to pull this off. For me, it's about taking that far-off dream and building it one step at a time. I think this is just the beginning.

Example one ends.

It's a wonderful story about how AI can help those with visual impairment. We also know that AI can help those who have speech impairments. Let me show you an example of how that can be applied: our product called VoiceIt. VoiceIt lets people control devices with voice commands, communicate easily with dictation and email, collaborate effectively with live transcription in remote work, and express themselves in telehealth video conferences. With technology integrations, VoiceIt can help us pursue our biggest dreams.

Just think about how this technology can be used to help the millions of individuals who have had a stroke or another condition that impairs their speech. I also want to talk a bit about diversity in clinical trials, because a large number of the information sources we use are based on clinical trials, yet if we think about who enrolls, it's a very select group. How can we increase that diversity and include the populations that are often not willing or eager to participate?

One mechanism we have found is the use of a chatbot. A chatbot gives us a way to provide individualized recommendations, and in the process of asking people questions in order to answer theirs, we can match their responses against certain criteria to determine eligibility.
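To make the idea concrete, here is a deliberately small sketch, with made-up trials and criteria, of how answers gathered by a chatbot could be checked against eligibility rules; it illustrates the pattern rather than any actual system:

```python
# Hypothetical sketch: match a user's chatbot answers against trial criteria.
from dataclasses import dataclass

@dataclass
class Trial:
    name: str
    criteria: dict  # answer key -> predicate the answer must satisfy

def eligible_trials(answers, trials):
    """Return the trials whose every criterion is satisfied by the answers collected so far."""
    return [
        t.name
        for t in trials
        if all(key in answers and check(answers[key]) for key, check in t.criteria.items())
    ]

# Made-up example: a convalescent-plasma study.
trials = [Trial("plasma-donor-study",
                {"had_covid": lambda v: v is True,
                 "days_since_recovery": lambda v: v >= 14})]
answers = {"had_covid": True, "days_since_recovery": 21}
print(eligible_trials(answers, trials))  # ['plasma-donor-study']
```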

We actually saw this during the pandemic. Early on, there were a lot of questions about whether plasma could be used to treat COVID. Many individuals were asking questions through chatbots about whether they should be seen by a practitioner or stay at home, and there was often an opportunity to apply those criteria and figure out whether the person could potentially be eligible for a clinical trial.

We've also seen mechanisms where the chatbot is pushed into the background and the front-end interface is an individual or an avatar presented to the person. There's a great example from the World Health Organization, which launched Florence during the pandemic. Florence was designed specifically to help address questions and concerns about smoking and COVID. This avatar, which is extremely realistic and responds to individuals in multiple languages, also has the ability to be empathetic. By looking at the individual, with their permission, it can take 16 points on their face and determine their emotions; if they're angry, frustrated, or sad, the avatar responds in an empathetic manner. It's extremely engaging, and on the back end it has the technology to provide the right information at the right time.

With that as the front end, we now have the ability to engage individuals across different languages and cultures and answer their questions, while at the same time matching them to clinical trials. That involves multiple technologies on the back end. First, there is a set of databases we need to access. There are information sources that are often free text, where we use text analytics. We first match the text, and then we need to convert it into formats that can be understood by electronic health records and other systems.

To give an example of what I mean by text analytics: we look at all the different words being captured, whether through voice or typed text, and in that process they are mapped to standardized nomenclatures, which allows us to understand the underlying concepts.
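As a rough illustration of that mapping step, assuming nothing more than a tiny lookup table (real systems use clinical NLP services and full terminologies such as SNOMED CT), free-text phrases can be resolved to standardized codes like this:

```python
# Deliberately simplified: resolve free-text phrases to standardized concepts.
# The lookup below is illustrative; production systems use full terminologies.
CONCEPT_LOOKUP = {
    "heart attack": ("SNOMED CT", "22298006", "Myocardial infarction"),
    "high blood pressure": ("SNOMED CT", "38341003", "Hypertensive disorder"),
}

def extract_concepts(text):
    """Return standardized concepts mentioned in a piece of free text (voice or typed)."""
    text = text.lower()
    return [concept for phrase, concept in CONCEPT_LOOKUP.items() if phrase in text]

print(extract_concepts("My father had a heart attack and high blood pressure."))
# [('SNOMED CT', '22298006', 'Myocardial infarction'),
#  ('SNOMED CT', '38341003', 'Hypertensive disorder')]
```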

Let me give you a quick example: a demo of a project we worked on with War on Cancer. War on Cancer is a global social network providing support to anyone affected by cancer. By leveraging Microsoft's clinical trials matching technology, War on Cancer built a clinical trial finder experience, helping patients find potentially suitable clinical trials for their condition. The technology dynamically selects the most differentiating criteria that appear in the trial subset and generates qualification questions in language patients can understand. It then uses patient responses to qualify them, filtering out irrelevant trials and recommending potentially suitable clinical trials.
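One way to read "dynamically selects the most differentiating criteria" is as a question-ordering heuristic: ask about the criterion whose answer will split the remaining trial subset most evenly, so each answer narrows the list as much as possible. The sketch below shows that heuristic with made-up trials; it is an assumption about the general approach, not War on Cancer's actual implementation:

```python
# Illustrative heuristic: pick the unasked criterion whose "yes"/"no" answer
# splits the remaining trials most evenly, so every answer removes the most trials.
def most_differentiating(trials, asked):
    """trials: list of dicts mapping criterion -> required answer (True/False)."""
    criteria = {c for t in trials for c in t if c not in asked}

    def survivors_if_yes(criterion):
        # Trials that remain plausible after a "yes" answer to this criterion.
        return sum(1 for t in trials if t.get(criterion, True))

    return min(criteria,
               key=lambda c: abs(survivors_if_yes(c) - len(trials) / 2),
               default=None)

trials = [
    {"age_over_18": True, "prior_chemotherapy": False},
    {"age_over_18": True, "prior_chemotherapy": True},
    {"age_over_18": True, "metastatic": True},
]
print(most_differentiating(trials, asked=set()))  # 'prior_chemotherapy'
```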

It's a great example of how we can use AI to help increase diversity in clinical trials. We also recognize that we need to increase diversity in the data sets we use to train and validate AI in particular. That can only be done through higher levels of trust.

What we're now looking at is how we can create federated learning processes that are privacy-preserving. This is extremely important not just for healthcare but, as you can imagine, for other industries and for government, because organizations often want their data to stay where it resides rather than being sent out to another location.

What we have found is that we can do this not only for data at rest and data in transit, but can also create a higher level of confidentiality and privacy for data in use, specifically at the chip and memory level. That can be done by embedding these chips into the servers, which creates a new root of trust with customer-verifiable remote attestation.
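Below is a minimal sketch of the federated pattern itself, assuming a toy linear model and simulated sites: the raw data never leaves each site, and only the model updates are averaged. The confidential-computing layer (enclaves and attestation) discussed above is not modeled here:

```python
# Toy federated averaging: each site takes a local gradient step on its own
# data, and only the resulting model weights are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One least-squares gradient step on a single site's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(site_weights, site_sizes):
    """Aggregate site models, weighted by how much data each site holds."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])                 # hidden ground truth
sites = []
for _ in range(3):                                   # three simulated hospitals
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

weights = np.zeros(3)
for _ in range(100):                                 # federated rounds
    updates = [local_update(weights, X, y) for X, y in sites]
    weights = federated_average(updates, [len(y) for _, y in sites])
print(weights)  # approaches true_w without any raw data ever being pooled
```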

There are some real key value propositions around this. Think about what typically happens when we de-identify information: we lose information that could be relevant to one's outcomes, specifically things like your post code or zip code, which has a huge impact on outcomes, as well as gender and ethnicity.

If we want to include those in our analyses, and specifically make sure we have included the factors that can impact outcomes most, then we need to keep them in the data sets. Let's not strip them out; let's keep them in and do the computations inside a privacy-preserving enclave. This also allows us to address highly sensitive data such as genomic data, which inherently cannot be de-identified, while at the same time protecting the data and the model IP. There are a significant number of benefits.

Let's now talk a bit about the impact this can have. As we think about populations, oftentimes those where the potential impact is highest are the most vulnerable. I'll use cancer as an example. With cancer, we typically find that many factors are associated with the disease; cardiovascular disease is another great example, and these factors are often more prevalent in underserved populations. Imagine the typical course: an individual goes for a normal checkup and, in that course, may get a routine CT colonography. Let's imagine the CT was read as showing no cancer, so they move along. Then a couple of years later they have a heart attack, or maybe a stroke, and they die. That is an unfortunate sequence of events, and unfortunately it happens fairly frequently.

Now imagine if, at the time we had that CT of the abdomen, we started looking at biomarkers for other things in addition to whether or not there is cancer. We know this is possible because we have seen it in other studies. By looking at the subcutaneous fat, the muscle mass, the bones, the aorta, and so on, we can identify biomarkers for cardiovascular disease, osteoporosis, liver disease, cancer, and more.

That is the opportunity for us to start thinking about how we can create new risk indices. Those risk indices allow us to say: with this CT and some of these other factors, we have now identified that you are at higher risk for cardiovascular disease, and because of that we are going to recommend screening, lifestyle changes, and interventions, hopefully leading to a positive outcome.
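As a purely hypothetical illustration of what such a risk index could look like, the sketch below combines a few opportunistic CT biomarkers into a single score; the biomarkers, normalization ranges, and weights are placeholders, not a validated clinical index:

```python
# Toy composite risk index from opportunistic CT biomarkers.
# All thresholds and weights are placeholders for illustration only.
def cardiovascular_risk_index(aortic_calcium_score, visceral_fat_cm2, muscle_area_cm2):
    """Higher calcium and visceral fat raise the score; more muscle lowers it."""
    score = (
        0.5 * min(aortic_calcium_score / 400.0, 1.0)     # placeholder cap and weight
        + 0.3 * min(visceral_fat_cm2 / 200.0, 1.0)
        + 0.2 * (1.0 - min(muscle_area_cm2 / 150.0, 1.0))
    )
    return round(score, 2)  # 0 (lowest) to 1 (highest) on this toy scale

# A score above some chosen threshold would trigger screening and lifestyle interventions.
print(cardiovascular_risk_index(aortic_calcium_score=600,
                                visceral_fat_cm2=180,
                                muscle_area_cm2=90))
```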

Consider lung cancer, a very important condition where we often fail to recognize that there are high-risk individuals who should be screened with CT scans. In the United States, it's those who have smoked for 20 or more years, or who used to smoke and quit within the past 15 years, and who fall within a certain age group. This group should receive these CTs, and with those scans there is an opportunity to apply this approach. We're starting to see that this is something we can actually do today: we can perform these screenings and also start looking for and identifying other conditions, helping us find opportunities for improvement.
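A simple sketch of that eligibility logic might look like the following; the specific thresholds (a 20-year smoking history, a 15-year quit window, and a 50 to 80 age band) are stated here as assumptions and should be checked against whichever screening guideline applies locally:

```python
# Sketch of low-dose CT screening eligibility; thresholds are assumptions.
def eligible_for_lung_ct(age, years_smoked, years_since_quit=None,
                         min_age=50, max_age=80):
    """True if the person falls in the high-risk group described above."""
    heavy_history = years_smoked >= 20
    current_or_recent = years_since_quit is None or years_since_quit <= 15
    return heavy_history and current_or_recent and (min_age <= age <= max_age)

print(eligible_for_lung_ct(age=62, years_smoked=25, years_since_quit=5))    # True
print(eligible_for_lung_ct(age=45, years_smoked=30))                        # False: outside the age band
```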

Let me close with some general thoughts on what we just talked about. We discussed general categories for how we can improve health equity through data and AI: responsible AI, by making it more transparent, explainable, and unbiased; accessibility, by finding different ways to improve and elevate performance for individuals who may have disabilities such as visual and speech impairments; diversity of data sets, ensured through technologies that provide engagement and matching; and trustworthy data that is secure and privacy-protected. Finally, it's about actually having an impact by implementing these in the populations that are of highest interest, which are also the most likely to be vulnerable and to suffer bad outcomes from these diseases.