In this episode of the podcast, Seth interviews Muhammad Ahmad. Muhammad is an Affiliate Assistant Professor in the Department of Computer Science at the University of Washington and a Research Scientist at KenSci. His research areas are machine learning in healthcare and accountability and ethics in AI. His recent work is focused on foundations of machine learning and cross-cultural perspectives on AI. Muhammad’s work combines academic rigor with extensive experience in deploying machine learning systems at scale in the healthcare sector, and thus first-hand knowledge of many moral and ethical dilemmas that come with it. He has published over 50 research papers in machine learning and artificial intelligence. He has a PhD in Computer Science from the University of Minnesota. This episode will be the second part of our look at race and healthcare.
In this interview, Seth and Muhammad explore a variety of topics. Muhammad talks about how he eventually turned his focus toward healthcare, specifically addressing discrepancies in care between different ethnic groups. He explains many technical problems in machine learning, including the challenges of applying different definitions of fairness. The key questions of this episode are: how has data served to both obscure and to illuminate discrepancies in healthcare? How can machine learning enhance our understanding of complex social problems?
Music: “Dreams” from Bensound.com
Seth Villegas 0:05
Welcome to the DIGETHIX podcast. My name is Seth Villegas and it’s a pleasure to share today’s conversation with you. In this episode of the podcast, I interview Dr. Muhammad Ahmad. Muhammad is an affiliate Assistant Professor in the Department of Computer Science at the University of Washington, and he’s also a research scientist at KenSci. His research areas are machine learning in healthcare and accountability and ethics in AI. His recent work is focused on foundations of machine learning and cross-cultural perspectives on AI. Muhammad’s work combines academic rigor with extensive experience in deploying machine learning systems at scale in the healthcare sector, and thus he has first-hand knowledge of many moral and ethical dilemmas that come with it. He has published over 50 research papers in machine learning and artificial intelligence, and he has a PhD in computer science from the University of Minnesota. This episode will be the second part of our look at race in healthcare. If you’re interested in this topic, I highly recommend that you also look at our interview with shaunesse’ jacobs to get her perspective on the topic. That interview serves as a companion to this one, and it explores the information problem of patient self-reports on their own condition, something that Muhammad will also touch on.
In this interview, Muhammad and I explore a variety of topics. We talk a little bit about his background, and how we became acquainted through a program called Sinai and Synapses. Muhammad also talks about how he eventually turned his focus toward healthcare, specifically addressing discrepancies in care between different ethnic groups. He explains many of the technical problems in machine learning, including the challenges of applying different definitions of fairness. The key questions of this episode are: how has data served to both obscure and to illuminate discrepancies in healthcare? How can machine learning enhance our understanding of complex social problems? This podcast would not have been possible without the help of the DIGETHIX team, Nicole Smith and Louise Salinas. The intro and outro track, “Dreams,” was composed by Benjamin Tissot through Bensound.com. This episode has been cut and edited by Talia Smith. Now I’m pleased to present you with my interview with Muhammad Ahmad.
So I’m really glad to be joined by Muhammad today. We had a bit of technical difficulties getting into the podcast, which I think is always important to note on a podcast of this kind, which is about technology, and about everything that goes into technology. We just could not get Zoom to work in the way that we hoped, but I’m really glad that you were patient with it, and, you know, we were able to get at least something going here.
Muhammad Ahmad 2:39
Yeah, that’s the thing, technology does not always work the way that you intend it to.
Seth Villegas 2:45
It definitely doesn’t. So Muhammad, I thought it’d be great to start by, you know... I know you through this program called Sinai and Synapses. Sinai and Synapses is a kind of religion-and-science organization, hoping to foster relationships between experts, particularly in science and technology and other fields. I think that part of what’s really cool about that program is, you know, like, I was able to meet you and kind of hear about the work that you’ve been doing. But also, you know, we get to meet different kinds of people from all different sorts of backgrounds, doing really different sorts of work, and having important conversations about things that are really relevant today. So I guess I was wondering, how did you first get acquainted with Sinai and Synapses? When did you first hear about it? Like, why were you interested in applying to be a fellow?
Muhammad Ahmad 3:29
So I’ve been interested in this space of intersection between religion and technology for, I would say, almost forever. And it was during one of those sessions where I was browsing a number of resources online that I came across Sinai and Synapses. And it just seemed quite interesting to me, and very relevant to my interests. And that’s how I got acquainted, and I applied. So, just to add to that: another interest that I have is how different traditions, whether they’re religious or secular traditions, think about similar types of existential, moral, and practical questions. That was my other interest that led me to Sinai and Synapses.
Seth Villegas 4:23
Okay, so is there a particular existential question that interests you? Are you just talking about, like, why-are-we-here sorts of things, like really big stuff? Or what is it exactly that you’re interested in exploring?
Muhammad Ahmad 4:35
I would say all of the above. Okay, so to give you an example and elaborate on that: when I finished my PhD in computer science, I actually enrolled at a Lutheran seminary to study systematic Christian theology at a master’s level. So I did that for a year. And my motivation for doing that was to get an insider’s perspective on a religious tradition which was different from my own. Like, different religious and even cultural traditions have different, yet related, takes on questions of, you know, what is the meaning of existence? Why are we here? And even, where are we going? My foray into that was to see, not just as an outsider, how I think about these questions, or how people in my tradition see the world, but what’s the self-understanding of another tradition?
Seth Villegas 5:35
Okay, so that’s really interesting. I think, first of all, you know, maybe not even people inside of their own tradition necessarily try to get master’s-degree-level knowledge, you know, at any point in their lives. I come from an evangelical tradition, which sometimes can be a bit suspicious of people who go to seminary, perhaps. My particular background wasn’t necessarily like that, but I know that is some people’s story. I did notice that you didn’t talk as much about what your own tradition actually is. Would you mind elaborating a little bit?
Muhammad Ahmad 6:05
Ah, yes, yes. So I come from an Islamic background. I grew up in Pakistan, and I came to the US right after high school. I have to say I have some familiarity with Islamic theology and its historical learning, with how Muslims have thought about foundational questions of life.
Seth Villegas 6:29
Okay. And so what made you interested in Christian theology specifically? Was that just a part of, you know, being in the United States? Or is it a tradition you’re particularly interested in?
Muhammad Ahmad 6:42
So I can actually point to a particular event that led to my interest. It was towards the end of my undergrad, when this Catholic priest, Scott Alexander, who is actually the head of the Islamic Studies Department at the Catholic Theological Seminary in Chicago, came to Rochester. This was in Rochester, New York, which is where I was doing my undergrad. He was at the mosque, and he gave a sermon about certain theological concepts, if I remember correctly, related to mercy in the Islamic tradition. And I mean, unless you saw that he was visibly wearing Catholic garb, you could not guess that he was actually Christian. Not only was he using the standard terminology, but just his enthusiasm for the Islamic tradition was, for lack of a better term, overflowing. And then I went to another one of his talks that he gave at a church, on a similar topic. So yeah, at the mosque, somebody at the end of his talk asked him: you are visibly a Christian clergyman, and yet you talk about another religious tradition in such positive and glowing ways. How do you reconcile these two things? And his answer was that you cannot really understand people unless you understand how they understand themselves. So that was my inspiration slash motivation. I wanted to do something similar, but in the opposite direction.
Seth Villegas 8:23
Okay, that’s definitely fascinating. Especially since you have a PhD in computer science; I don’t think it’s necessarily something someone would think to ask about, perhaps. I guess, do these sorts of conversations happen where you work, you know, where you do research? Or is it just sort of a side thing?
Muhammad Ahmad 8:41
This has mainly been a side thing. But besides that, I’ve been working in the area of machine learning for the last 10 years or so, ever since I graduated, and in my everyday practical work I come across things which have ethical and moral implications. And I would say that, with progress in artificial intelligence, many questions which, again for lack of a better term, were traditionally relegated to the domain of philosophy are becoming, quote unquote, practical. One thing that I even like to say sometimes is that we are living in a truly marvelous era, where questions of philosophical and moral speculation, many of these questions, have become engineering problems.
Seth Villegas 9:33
It’s interesting that you put it that way, because I think if we had a traditional philosopher here, they’d probably say that those philosophical questions remain, and that there’s an engineering problem, right? So it’s interesting that there’s kind of a transposition, right, like a shift in where that conversation is actually taking place. And if I’m being honest, I think that’s also one of the reasons why people can be scared at times. I’m not sure specifically what machine learning problems you’re thinking about, but the one that comes to mind for me is something like self-driving cars, right? Something that has to make these kinds of moral and ethical decisions. And they’re really difficult problems, too. It seems a little much to expect an engineer to be able to solve that through code, let’s say.
Muhammad Ahmad 10:19
Correct. And you are absolutely correct that there are multiple lenses through which to look at this. The reason that I mentioned that these are engineering problems was not in the sense that an engineer has to solve them, but rather in the sense that traditionally a lot of these were theoretical exercises, and now we are confronted with scenarios, with dilemmas, where we have to find practical solutions which need to be implemented in code. So it may not be the case that an engineer would devise solutions to these dilemmas. But that said, with respect to self-driving cars, we are being forced into scenarios where we have to find practical, somewhat optimal solutions with tangible trade-offs. So in that sense, these become engineering problems. That said, if you look at the trolley problem, a lot of ink has been spilled on the subject in the last 70 years, and a lot of people outside of philosophy thought that this is just a theoretical exercise. Well, that’s not really the case anymore. You can go further back in time, and we find that thought experiments like these have been discussed in different religious and philosophical traditions in the past. So for example, there’s a text, I believe a 12th- or 13th-century text, by the Muslim philosopher Imam Ghazali, who talks about similar scenarios. Instead of a trolley or a self-driving car, he talks about the trade-off between saving the lives of people in a ship. So it’s not that these are purely engineering problems, but we live in a time where we are forced to confront them from an engineering perspective.
Seth Villegas 12:15
Yeah, certainly. And I think, just to clarify for people who may not be as familiar, the trolley problem is referring to a situation in which you have two sets of tracks, right, and there’s a trolley. Which is interesting, because I think that shows how outdated the metaphor perhaps is; I’m not sure where trolleys are outside of maybe San Francisco now. But, you know, this trolley is on a particular path, and it’s about to kill, I think, like, five people, right, or you know, a group of people, but you have the ability to switch it onto a different track, which will only kill one person. And it’s this problem of: is it better to take the action, knowing that someone will still die? And that seems to come up a lot; we mention self-driving cars a lot. But I think you’re exactly right, you know, the trolley problem is the philosopher Philippa Foot’s way of putting that, but obviously it’s more abstract than that. There were already people thinking about this, as you said, in the twelfth or thirteenth century, which is amazing; I didn’t actually know that. But it is interesting that this comes up in such a practical way, because I think one of the things about the trolley problem is that it’s always criticized for being really impractical. It’s like, you know, what sort of devilish person has set up this moral contrivance to kind of make things difficult for people, right? Kind of a no-win situation, so to speak. But, you know, in these real-world situations, is it really the same kind of no-win situation? Like, why do you think that the trolley problem is even applicable?
Muhammad Ahmad 13:51
Yeah, a couple of comments regarding that. I mean, the history of science and even mathematics is full of questions, and even entire fields, where for the longest time they did not find any, quote unquote, practical relevance. So for example, if you look at number theory: up until the twentieth century, people really did not have any applications for number theory, and yet, certainly for at least two thousand years before that, people had been studying number theory in various parts of the world. And in the twentieth century, we discovered that, well, you can actually use this in cryptography. So beyond that, coming back to the trolley problem example. I mean, I can see why somebody would say that, well, it’s impractical, or maybe you will never be in a scenario where you have, let’s say, driverless cars. In my mind, that’s a technical problem better left to the real world; whether deployment of such systems is possible or not will be decided on technical merit. It doesn’t matter what I say or another person says. There are many more mundane examples of, let’s say, AI in healthcare that I see in my own work almost on a day-to-day basis which have this philosophical relevance. And many of these problems predate AI and machine learning. So an example of one mundane use case would be risk stratification. That’s just one way of saying that, in any situation where you have multiple people, multiple patients, and a limited set of resources, you have to determine which people or group of people to prioritize over others. So for example, let’s say an emergency room department: five people show up. Now, one very naive way to think about fairness would be, well, first come, first served. But then you discover that some people are visibly bleeding.
So maybe they should be prioritized. In the context of AI, this is where things get interesting. Historically, I mean, we always had to do prioritization like this. And it’s not just mundane examples like the emergency department, or triaging in hospitals in general; it’s even something as serious as, say, determining who should get a kidney transplant, or an organ transplant in general, first. That’s another risk stratification problem. Perhaps in the context of AI, what becomes interesting is that now we have massive amounts of data; you can consider factors that humans cannot consider. And so, presumably, at least the hope is that maybe we can do better risk stratification as compared to a human, in this case a physician. Because at the end of the day, historically and now, doctors have to make a call on whether patient X should be seen first versus patient Y, or whether a particular patient should get a kidney transplant first as compared to others, because resources in the real world, whether they’re physical resources, time, or human resources, are limited. So this problem has been with us for a long time. And practically, in almost any part of the world, we have a large human population, and the set of experts, in this case physicians and other healthcare staff, who can attend to their needs is limited; monetary resources are limited. So that’s always there. But in the context of AI and machine learning, we can ask additional questions: when we are doing the allocation, how are these algorithms doing the allocation? How do we know if they are being fair or unfair? And what is even the meaning of fairness in this context? So it becomes much more complicated, rightfully so, when you get AI and machine learning involved.
Seth Villegas 18:10
Okay, so if I was going to zoom out a little bit and try to describe what it is that you just said: it’s using machine learning, right, which is a statistical process of decision-making, ultimately. I mean, you probably have a better definition of machine learning than I do. But it’s the application of that to a particular domain, right? So in this case, hospital allocation of resources, maybe transplants; there are kind of different sub-problems within that. And I guess one of the things that immediately comes to mind is: how would we know that the machine learning isn’t replicating a process that’s already unfair? Which I think leads back to what you were just saying about, you know, how would you know that it’s fair in the first place, right, even absent the machine learning process? Like, how would you gather that kind of data? Because that seems to be, I’m not sure, a little bit more insubstantial in a way, right? Or it requires a qualitative evaluation of what it is that’s going on, more so than just running numbers on somebody’s beats per minute, or how old they are, or, you know, other things like that.
Muhammad Ahmad 19:18
So the problem is slightly more complicated than that. Depending upon which particular problem you’re addressing, there are different ways to quantify the desired outcome. In the case of, let’s say, hospital admissions or ER admissions, one thing that you can look at is, let’s say, survival rate or quality of service; there are different ways to quantify that. You can do something similar for, let’s say, organ transplants. And I mean, you can never have a best possible outcome, but at least you can still talk about relative outcomes. So you can do, let’s say, some sort of a test where you can see that prior to deploying this machine learning or AI system, you had a success rate of 60%, and now you have 85%. And that would mean that this algorithm, this new AI system, works better as compared to before. So that’s the dimension of performance: you can see, well, you have improved over human expertise, human intuition. The second dimension that you have to think about is fairness. One way to get at that is to interrogate your machine learning system to see why it’s making certain decisions. And the problem over here is that not all machine learning systems are transparent. By that I mean, as they are making a prediction or a recommendation, they’ll just make that prediction; you will just get that output from them. You won’t be able to open up the algorithm (the term is “black box”) and see why the decision or the recommendation or the prediction is being made. That can be problematic in this context, not just in healthcare, but in any context where, let’s say, the stakes are high. So one really famous example is the use of algorithms, or even decision-making systems, in sentencing in the judicial system.
So there was a high-profile case a few years ago, where they showed that a system which was being used in a particular county in Florida was biased against minorities, mainly African Americans and Native Americans. It was recommending stricter sentences for these minorities, and it was also being wrong more often for these groups. So that’s just one straightforward way to think about fairness. Within fairness, the other element is that there are even different ways to define what constitutes fairness. So for example, one very straightforward way to define fairness would be that if you have two individuals, then, all other things being equal, your model should make the same prediction for them; or that two individuals with similar characteristics belonging to different groups, let’s say African American versus Caucasian, should have similar predictive rates. Again, model performance should be similar. Where this runs into problems is when we think about the larger system these problems are embedded in: there are systematic biases, there are historical injustices. So think about a system which determines whether somebody should get a loan or not. You build a model which computes some number and says, well, if your score, your number, is greater than 700, you get a loan. But then you realize that, well, there have historically been systematic biases against African Americans: a lack of economic opportunity, hundreds of years of slavery, and then segregation and redlining. Then, from the perspective of maybe being more fair in terms of the outcome, maybe you should have a different threshold for African Americans as compared to other populations. So already we have three different definitions of fairness.
And within the machine learning literature, it has been shown mathematically that when we take these different notions of fairness and formalize them in the form of equations, we get a mathematical result which is known as the impossibility theorem of fairness. That is, it’s not possible to simultaneously satisfy all three of these notions of fairness. What does that mean in practice? It means that even in the real world, without even considering subjective elements, it’s not possible to have a perfect notion of fairness. So one implication is that you now have to look at it case by case: for this particular case, for this particular scenario, maybe one notion of fairness is more applicable as compared to others. That’s where, even when you try to avoid an element of subjectivity, you have to introduce it.
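[Editor’s note: the conflict Muhammad describes can be seen concretely in a small sketch. The numbers below are invented, and the metrics are standard formalizations (selection rate, true-positive rate, precision) rather than his exact phrasing: when two groups have different base rates, a classifier can be equally precise for both groups while the other two criteria still diverge.]

```python
# Toy illustration of conflicting fairness metrics for a hypothetical
# binary classifier applied to two groups with different base rates.
# All data below is invented for illustration, not from any real study.

def selection_rate(pairs):
    """P(prediction = 1): how often the model flags someone."""
    return sum(yhat for _, yhat in pairs) / len(pairs)

def true_positive_rate(pairs):
    """P(prediction = 1 | label = 1): the equal-opportunity criterion."""
    pos = [(y, yhat) for y, yhat in pairs if y == 1]
    return sum(yhat for _, yhat in pos) / len(pos)

def precision(pairs):
    """P(label = 1 | prediction = 1): predictive parity / calibration."""
    flagged = [(y, yhat) for y, yhat in pairs if yhat == 1]
    return sum(y for y, _ in flagged) / len(flagged)

# (label, prediction) pairs; group_a has a 50% base rate, group_b 20%.
group_a = [(1, 1)] * 4 + [(1, 0)] * 1 + [(0, 1)] * 1 + [(0, 0)] * 4
group_b = [(1, 1)] * 4 + [(0, 1)] * 1 + [(0, 0)] * 15

for name, group in [("A", group_a), ("B", group_b)]:
    print(name, selection_rate(group), true_positive_rate(group),
          precision(group))
```

Here precision comes out to 0.8 for both groups, yet the selection rates (0.5 vs 0.25) and true-positive rates (0.8 vs 1.0) differ: with unequal base rates, equalizing one criterion leaves the others unequal, which is the impossibility result in miniature.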
Seth Villegas 24:29
Okay, so you definitely just gave us a lot of different examples, and I think that was a really good explanation of this problem, right: when you try to weigh these different priorities, you may satisfy a particular definition of fairness while violating a different conception of fairness. I actually think, you know, ProPublica is the one who initially broke the case about the differences in sentences, and it’s interesting because the company kind of defended their model by saying that there was a form of statistical parity between their sentences, even if there was kind of a discrepancy over racial differences. And so it’s interesting, even in that way, that, you know, there seems to be really on-the-nose unfairness, right? But they can defend it in this kind of mathematical way, which is itself strange. But I think you really hit the nail on the head in terms of: if you’re just trying to, say, use machine learning on a process like that, one that’s already unfair, then it’s just replicating that process. That’s a problem. But I think one of the things that you didn’t necessarily mention is that people can think that the metric is a bit more objective than it actually is. And so they’ll end up deferring to it more so than they would if they were just using their own judgment, right? So they might make a sentence harsher, for instance, for an African American person on the basis of this machine learning tool, which is supposedly objective, when it’s potentially not.
Muhammad Ahmad 26:04
Yes. So there are two interrelated phenomena going on here. One is automation bias, which is present not just in the context of machine learning, but in computing in general. Here the problem is that people, over the course of time, especially in the context of decision support systems where they get input from a computer, get conditioned to the extent that they even stop believing in their own expertise; they become too reliant on these systems and start questioning themselves. There was an experiment in the context of radiology, and radiology has been one field where AI models have been spectacularly successful over the last six, seven years. In these experiments, what the researchers did was basically divide expert radiologists into two groups. In one case, they were given input from an AI system which had, let’s say, far better predictive performance as compared to humans. And even in cases where this model was deliberately given incorrect information, people started second-guessing themselves and making more mistakes, as compared to when no input was given. So that’s automation bias. The second dimension, the dark side of using machine learning and AI in real-world scenarios, is that it also gives people something to hide behind. Historically, you can say that, well, people knew there was systematic racism, or maybe individual people were racist. But now people can actually hide behind algorithms and computers and these electronic systems. They can just say, well, it’s not that I’m not objective, or that I’m discriminating; it’s the output which is coming from these models. And if it’s coming from a model, from a computer, then it must be objective.
And the reason for that is that the general public has this image that data is neutral, which is far from the truth. The reason being that any data, especially data which touches the human world, will have an element of bias one way or the other. Maybe certain types of information are not being collected, or maybe certain information is encoded in a certain way, or there’s even the interpretation of data. So one example that comes to mind is this now-classic study, done just a few years ago, where these machine learning experts took some data related to, I believe, knee surgery, and created a model based on a set of inputs regarding patient characteristics, and also the pain that a patient was experiencing. What they discovered was that the model performed really well, except for African Americans and for patients who came from a lower socioeconomic status. And that was very puzzling for them, because the representation of these populations in the data was pretty good. So one thing that they noticed was that pain was encoded in two different ways in the data: one was what the doctor was reporting, and one was what the patient was reporting. Previously, in the first model, they were using what the doctor was reporting. So they got rid of that variable and substituted what the patients were reporting instead, and then all of a sudden the differences in performance disappeared. What it turns out was that when these physicians heard what African American patients were reporting their pain level to be, they were re-encoding that as a lower level of pain. It could be conscious or unconscious bias. This idea, and it’s been widely reported in the medical literature, is that a lot of non-African American physicians have this bias that African Americans have a higher threshold for pain.
So this is an example of the data being encoded in an incorrect manner. There are problems even with respect to how data is interpreted. So here’s a kind of real-world example: let’s say the model says that African American patients cost less. One way to interpret that is that they utilize the healthcare system less and do not have certain conditions. Another way to interpret the same data is that maybe the system actually values these patients less, and that is what’s being shown in the data.
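[Editor’s note: the knee-surgery story amounts to a per-group performance audit: compute the model’s error separately for each group under each encoding of the pain variable, and see whether the gap moves. A minimal sketch, with all numbers invented purely for illustration and group names kept generic:]

```python
# Per-group error audit for two hypothetical encodings of a pain label.
# Each pair is (true pain score, model-predicted pain score), 0-10 scale.

def mean_abs_error(pairs):
    """Average absolute gap between true and predicted scores."""
    return sum(abs(y - yhat) for y, yhat in pairs) / len(pairs)

# Encoding 1: model trained on doctor-reported pain.
doctor_encoded = {
    "group_1": [(7, 6), (5, 5), (8, 7)],
    "group_2": [(7, 3), (6, 2), (8, 4)],  # large systematic gap
}
# Encoding 2: same patients, model trained on patient-reported pain.
patient_encoded = {
    "group_1": [(7, 6), (5, 5), (8, 7)],
    "group_2": [(7, 6), (6, 5), (8, 8)],  # the gap disappears
}

for name, data in [("doctor-reported", doctor_encoded),
                   ("patient-reported", patient_encoded)]:
    errors = {g: round(mean_abs_error(pairs), 2)
              for g, pairs in data.items()}
    print(name, errors)
```

The point of the audit is the comparison, not the absolute numbers: if one group’s error is much larger under one encoding and the gap vanishes when the variable is swapped, the encoding itself, not the group, is the likely source of the disparity.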
Seth Villegas 30:59
Yeah, I think you’re definitely speaking about a couple of really important things, the first of which is that there’s this really important aspect of interpretation, right? And that happens at a variety of levels. So even just between a doctor and their patient, right, in how they’re reporting the pain levels themselves. So the data that goes into the model isn’t exactly neutral, per se, and there’s just a variety of issues that can arise all over this process. So I guess for you, where do you come into these machine learning problems? Like, how is it that you approach them? And is there something that you’re trying to solve?
Muhammad Ahmad 31:40
When it comes to the problem of, let’s say, fairness or transparency, there are different places where you can, quote unquote, intervene, or try to at least partially rectify the problem. There’s addressing it at the level of data. So, for example, let’s say you have a model which predicts diabetes, and then you discover that in your data, let’s say, less than 2% of your population is South Asian, and maybe that’s the reason why your model is not performing really well for South Asian populations; its accuracy is really low for South Asians as compared to others. You can rectify that by maybe going back to the field and collecting more data for South Asians who have diabetes. So you can rectify that at the data level. There could also be problems at the model level. Maybe there are certain algorithms which are more biased as compared to others, so you can intervene with respect to the choice of algorithms that you want to use. Or, if you already have a model, there are certain techniques where you can convert biased models into relatively unbiased ones. And then lastly, if you have a model which is, let’s say, a black-box model, one thing that you can do is maybe convert it into a model which is transparent, or try to figure out different ways in which you can elicit explanations from these models. So if the model is saying that, let’s say, a particular patient has less than six months to live, and given that it’s a very life-altering decision slash prediction, then you really want to know, well, why is that prediction being made? And so you complement your model with additional models which can provide an explanation regarding why that prediction is being made. So, if I were to give a summary: there’s no one solution. Think about this as a pipeline; there are different parts of the pipeline where you can intervene.
And then lastly, I think it’s very important to have conversations with the different stakeholders involved, and also the end users and the people whose lives will be affected. So if a physician or, let’s say, another healthcare professional is going to use the machine learning system, then they should, at a high level, know what the semantics behind the predictions are, what the meaning of, let’s say, predictive performance is, and when to trust the system. And when I say other stakeholders should be involved: suppose, let’s say, it’s minority populations who will be predominantly affected, or let’s say people from certain socioeconomic backgrounds will be adversely affected by a machine learning model. Then, while you’re creating these models, these stakeholders, the people who will be affected, should be part of the conversations regarding the system. So one way of thinking about this: machine learning in any given field is not merely a technical problem to be solved; the way to think about it is that it’s a systems-level problem. Of course, there’s the technical aspect of it, and no doubt that’s a very important aspect, but also, at a social level, a psychological level, even at an institutional level, how is it going to affect individuals? I think that’s the right way to think about machine learning in the real world.
Seth Villegas 35:28
Okay, I’ve noticed a particular theme in everything described so far: you tend to describe things that involve, you know, healthcare and racial discrimination of some kind. How did you get interested in those particular problems? And if you’d be willing, take us a little bit through your history, you know, your interests and how you got involved in looking at those things.
Muhammad Ahmad 35:50
So I’ll start with my interest in machine learning in general, and artificial intelligence. I’ve been interested in, again, going back to existential slash foundational questions, and this idea of replicating human thought, ever since, I would say, even childhood. Like, the oldest piece of writing that I have is from 1995, where I have some notes related to artificial intelligence. That, over the course of time, evolved into different things, from abstract philosophical questions to more practical things. So my PhD work was actually in a very different area: it was in modeling human behavior in massive online games using machine learning. Again, online games are a very different world from healthcare, where I am currently. And after that, I’ve worked in applied machine learning in different areas: in the gaming industry, of course, but also in geographic information systems and retail. For example, I was a data scientist at Groupon, and I’ve also worked in biomedical devices, so on and so forth. So one thing that I’ve realized is that these may be different domains, but in the underlying tools that you use, there’s a commonality. I’ve been in the applied AI and healthcare space for four years now, just over four years, I would say. And the reason that I moved into this space is that, I was at Groupon at that time, and I was just thinking a lot about the kind of work that I was doing. There’s this quote from a former Google engineer that I really like, where he says that it’s a pity that the best minds of our generation are spending their time trying to convince people to click on ads. And I realized that the work I was doing at Groupon, from a technical perspective, sure, that’s very interesting and fascinating, but at the end of the day, that’s exactly what I was doing: selling stuff to folks.
And, again, I’m not implying that there’s anything wrong with selling stuff; it’s a perfectly noble profession. But at the end of the day, if you really think about what the Google engineer was saying, there are many other areas which have a pressing need. So that’s when I decided to transition to healthcare. I thought that would be an area where I could combine my expertise with this quest for addressing the more, quote unquote, foundational questions of life, and also make a positive change in the world: how are these systems affecting the lives of ordinary people, right? I mean, if you think about the healthcare system in the US, we spend the most money, in aggregate as well as per capita, in the whole world. And yet, if you look at the outcomes, they are somewhat abysmal as compared to literally all of the other industrialized countries in the world. So that was my motivation for coming into this field. And then, within the context of healthcare, I realized that, without realizing it, we actually have a tiered healthcare system: different people in different strata of society are getting different treatment, different levels of healthcare, depending upon their background, their race, sex, ethnicity. That was somewhat troubling. And I also realized that maybe technology can offer a partial fix. That said, I do not believe in techno-utopianism, so I don’t think that technology has solutions for everything. But I do think that technology can help us illuminate certain problems. At the very minimum, it can help us quantify, or at least partially quantify, some of these problems and help us gauge the magnitude of the problem. In computer science, there’s almost an adage that if you can quantify something, then you can change it.
So that’s how I ended up here.
Seth Villegas 40:51
It’s really interesting to hear you describe that. I don’t know, is it accurate to say that perhaps the data and the technology itself is starting to reveal more of this problem than may have been known otherwise?
Muhammad Ahmad 41:04
Yes, that’s a good way to characterize it. So we have known about these structural and institutional problems. But I think what data is doing is that data is a reflection of society. Like, in that well-known case that ProPublica revealed, where the model was discriminating against African Americans and Native Americans, it’s not that the model itself is discriminating; it’s a reflection of society. If you’re building a model based on, let’s say, previous judgments, then, well, if the previous judgments against, let’s say, African Americans in the judicial system were biased, then you are going to end up with a biased system. And if you’re not using historical data, if, let’s say, you’re using a set of questionnaires, which in this case they were also doing, then that also reveals the bias of, let’s say, a particular stratum of society. For example, after ProPublica came out with their study, there was an uproar, and the organization which had created this model was forced, at least from a PR perspective, to justify some of the questions. One of the questions was: do you know anyone who has been incarcerated? I mean, from a sentencing perspective, how is that relevant to a crime that the person is likely to commit in the future, whether they know someone? You could have ten extended family members who committed a crime and still be the most law-abiding citizen in the world. So questions like these, they’re really just guilt by association. So that’s one aspect of it. And then the second aspect, as I mentioned, is that if you start with biased data, you are going to end up with biased outcomes, and in this case the biased data is a reflection of society. And if you go further back in time, there’s the issue of data representativeness.
So in machine learning, that has become a de facto standard thing that we look for. And I’m actually surprised about institutional review boards. It’s standard practice that, you know, if you want to do a study involving human subjects, you go to the Institutional Review Board, or IRB, and they need to sign off. What really surprises me is that, for a lot of these studies, why isn’t a requirement for proportional representation a standard element? Just to give you a couple of examples, one of my favorite examples in this context is a study which was done at Rockefeller University, I believe in the early to mid 90s, regarding, if I remember correctly, the effect of obesity on uterine and breast cancer. And they literally had hundreds of men and not a single woman in that study. I mean, you don’t have to be a biologist or a physician to see that that just doesn’t make sense. That said, in the last 30 years, things have improved. So if I were to talk about a more recent example: the studies which were done on creating the vaccines for COVID are fairly representative of the population in general. So things have improved, but we still have a long way to go. To cut a long story short: yes, I would say that absolutely, what these models and data are revealing is a reflection of society.
Seth Villegas 45:09
The application of statistics itself, I think, is also something that’s maybe not well understood all the time. Earlier, you talked about guilt by association, and I think that this, you know, can also apply to other kinds of studies, where just because you can say something about a group, it doesn’t necessarily mean you can say anything about any individual person within that group, right? Not with any definitive accuracy. And at times, with some of these models, at least to me as kind of an outsider, there seems to be an element of determinism, perhaps, of really trying to fit people into a model when they may not fit into it at all. And I guess that’s partially scary to me, because I’m not a quantitative thinker, per se, but it would seem that there are things that are kind of hard to measure, and there’s always going to be individual cases where, you know, we’ve kind of gotten it wrong with whatever model we’re using.
Muhammad Ahmad 46:10
Yeah, yeah. So there’s an element of correlation as causation in some of these studies. One particular study which raised a lot of eyebrows a few years ago was one where researchers created a model which, just by looking at an image, could predict, I think with 80 or 90% accuracy, whether someone was, quote unquote, good or not. The problem with that is that it’s using limited data, and when they say it’s accurate 80 to 90% of the time, it’s only accurate for that particular data; it’s most likely not generalizable. And I guess the problem is that it’s taking certain correlations and just generalizing them to the whole world. This is not a new tendency. The history of science in the 19th century, and even the 20th century up till the 1960s, is just full of these false correlations which people generalized to make, quote unquote, scientifically grounded racist theories regarding the superiority or inferiority of one race versus another. So the danger is that we have been there before, and now people can use machine learning and AI to make the same mistakes. That said, in the climate that we live in today, at least publicly, when something like this is deployed there is a lot of uproar, and these things get rolled back. I think one danger is that these models could be used implicitly, out of the public’s sight, whether by governments or by corporations, and basically be misused in the end. So by analogy, some of these uses are the 21st century version of phrenology. For those who may not be familiar with it, phrenology is this discarded idea from the 18th and 19th centuries that you can measure a person’s personality or intelligence based on different parts of their cranium.
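The failure mode Muhammad describes, a model that scores well on its own limited data because it latches onto a correlation that doesn’t hold elsewhere, is easy to demonstrate with made-up numbers. Everything below is synthetic and purely illustrative: a "classifier" that just thresholds a feature which happens to track the label in one sample, evaluated on a fresh sample where that correlation is absent:

```python
# Minimal sketch (synthetic data): a spurious correlation that fails to generalize.
import numpy as np

rng = np.random.default_rng(1)
n = 200

# In-sample data: a background feature happens to correlate with the label.
y_train = rng.integers(0, 2, size=n)
spurious = y_train.astype(float) + rng.normal(0, 0.3, size=n)

def predict(x):
    # The "model": threshold the feature at 0.5 and call that a prediction.
    return (x > 0.5).astype(int)

train_acc = (predict(spurious) == y_train).mean()

# Fresh sample from a world where the correlation does not hold:
# the feature is now unrelated to the label.
y_new = rng.integers(0, 2, size=n)
unrelated = rng.normal(0.5, 0.5, size=n)
new_acc = (predict(unrelated) == y_new).mean()

print(f"in-sample accuracy: {train_acc:.2f}")            # high
print(f"out-of-distribution accuracy: {new_acc:.2f}")    # near chance
```

The headline "80 to 90% accuracy" in such studies is a statement about the first number; the second number is what matters when the model meets the rest of the world.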
Seth Villegas 48:24
Yes, certainly. It’s one of those really strange things, I think, especially how it went into racial science, the measuring of different skulls. And it’s just bizarre. We don’t need to go deep into phrenology here, but it kind of led to this shorthand, right? And it’s not that the assumptions within the idea itself were completely wrongheaded, but the shortcuts that were taken in order to use the model came down to the application of skull size, and it’s just, you know, almost completely absurd that that would correlate exactly with intelligence. So I think as we’re kind of wrapping up here, it would be great to hear, if you wouldn’t mind telling us, a little bit about, you know, what should the public be thinking when they’re encountering these different kinds of problems in their everyday life?
Muhammad Ahmad 49:14
So public literacy about AI and machine learning, I think, is extremely important. And the reason I say that is that there are literally models out there which are affecting the lives of billions of people, making decisions on your behalf, nudging you in certain directions, and most people are not even aware that this is already happening. And then the question is, well, how can we change it? I think the one danger which is already here is: how can people change something if they don’t even know that it exists? Think about, you know, the algorithms that you encounter, or rather that encounter you, in social media. They are designed to maximize people’s engagement, which is meant to maximize revenue for whatever company they’re deployed for, without regard to other social outcomes. The same in the case of YouTube: in maximizing your engagement, it’s not very difficult to get sucked into rabbit holes, being fed more and more extreme content, or creating these social bubbles on Facebook. The interesting part, I would say, with respect to these is that when all these platforms started, that was not their intention. But that said, at the end of the day, the goal is to make money, so these other, wider societal concerns sadly become secondary. The other thing is just the direction in which technology is headed in the near future; I think it’s extremely important for the average citizen to know where things are going. I hate to be an alarmist, but here’s the thing. One thing that I think people should know is that there has been some speculation regarding the use of deepfakes in the near future, and some researchers even think that in four or five years, max, the majority of videos on the internet will be deepfakes. So the basic assumptions regarding the reliability of photographs and videos just go out of the window.
And, I mean, there’s technical work, and I personally know people who are working on this: well, how do you distinguish these fake videos and images from the real ones? The problem is that you then run into the issue that, even if you tag something as false, if you already believe in something, you’re inclined to accept it anyway. All of us have this bias, confirmation bias: if you encounter a piece of evidence which gels well with or confirms what you already know, then you’re more likely to believe it. And, you know, a person could just say, well, should I believe my eyes, or should I believe some random professor at an Ivy League university? So pretty soon we will run into that particular problem. We will also have customized experiences. By that, what I mean is: think about a presidential debate in 2028. Let’s say you and I are listening to this particular debate. The speech that each of us hears could be customized to our individual tastes. So the speech that I hear could be very different from the speech that you will be hearing, because based on my background, my demographics, even how I am reacting to the speech in real time, these speeches can be generated by an AI system on the fly. So just imagine the effect these two different things that I talked about would have on our democratic system, or any democratic system, to be precise. As a society, we have to think about things like: how do we deal with evidence in such a world? And not just that, but how do we reach across the aisle to the other side and come up with a common understanding of the world? I think these are going to be some of the very major challenges at a societal, or even a global, level going forward.
Seth Villegas 53:49
Well, thank you. I think that definitely leaves a lot for us to think about, and I suspect that, even just given the amount of material we’ve covered today, we’ll probably need to talk again in the future, so I hope you’ll be open to that if it’s possible. Thank you so much. Thank you for listening to this conversation with Muhammad Ahmad. You can find more information about DIGETHIX on our website, digethix.org, and more information about our sponsoring organization, the Center for Mind and Culture, at mindandculture.org. If you’d like to respond to this episode, you can email us at email@example.com, or you can find us on social media @digethix. You can also find this information in the description for this episode. In this episode, Muhammad takes us through the research on race in healthcare. He shows us that it may be possible to explain certain discrepancies, such as differences between patients and doctors in evaluating levels of pain, and he shows how it is possible to clean up that data. If we take into account last week’s conversation, it seems possible to better account for the experience of patients. The problem, then, is not the application of data, but rather the connection between data and what that data is supposed to represent. This is an especially big issue now, if we consider that some people may trust their data over the words of actual patients, especially if those patients are from minority backgrounds. In other words, the use of data can at times distort our actual understanding of the situation on the ground. For that reason, it is necessary to evaluate and assess data continually, in order to determine if it is actually serving the purpose it is supposed to serve. Done well, we might even be able to uncover and understand the true character of exceedingly complex social problems, such as discrepancies in healthcare between ethnic groups today. I hope to hear from you before our next conversation. This is Seth, signing off.
Transcribed by https://otter.ai