In this episode of the Podcast, I am interviewing Dr. F. LeRon Shults. Dr. Shults is Professor at the Institute for Global Development and Social Planning at the University of Agder, and he is the Scientific Director of the Center for Modeling Social Systems at NORCE in Kristiansand, Norway.
In this interview, I talk to Dr. Shults about the discipline of computer simulation and modeling. We go over the misconceptions around simulation and how practitioners go about building simulations in the first place. While these new kinds of tools are no doubt powerful, there may also be new problems that arise with them. The key question for this episode is this: are there dangers to understanding a complex process? How and why could that be the case?
LeRon has a new book, available for download through Brill Academic Open Publishing, called, Practicing Safe Sects: Religious Reproduction in Scientific and Philosophical Perspective.
Music: “Dreams” from Bensound.com
Seth Villegas 0:04
Welcome to the DIGETHIX podcast. My name is Seth Villegas, and it is a pleasure to share today’s conversation with you. In this episode of the podcast, I’m interviewing Dr. LeRon Shults. Dr. Shults is Professor at the Institute for Global Development and Social Planning at the University of Agder, and he is the Scientific Director of the Center for Modeling Social Systems at NORCE in Kristiansand, Norway. His many books and articles address religion and human life in the context of the contemporary human and physical sciences. He is working with the Center for Mind and Culture, the sponsor of this podcast, on extending the networks supporting the biocultural study of religion in a variety of research areas, including secularism, naturalism, compassion, and political and religious ideology. LeRon has a new book available for download through Brill Academic Open Publishing called Practicing Safe Sects: Religious Reproduction in Scientific and Philosophical Perspective. You can find a link to more information on this book in the description for this episode.
In this interview, I talk to Dr. Shults about the discipline of computer simulation and modeling. We go over the misconceptions around simulation and how practitioners go about building simulations in the first place. While these new kinds of tools are no doubt powerful, there may also be new problems that arise with them. The key question for this episode is this: are there dangers to understanding a complex process? How and why could that be the case? This podcast would not have been possible without the help of the DIGETHIX team. Special thanks to Nicole Smith and Louise Salinas. The intro and outro track, “Dreams,” was composed by Benjamin Tissot and is available through Bensound.com.
So hello again. I’m really pleased to be joined by LeRon today. LeRon is one of the… I believe you’re one of the directors here at the Center for Mind and Culture?
F. LeRon Shults 2:07
Senior Researcher, I think is my official title.
Seth Villegas 2:10
Senior researcher, sorry. All I knew is that you’re above me.
F. LeRon Shults 2:16
Actually, I believe this might be my 10th anniversary with CMAC.
Seth Villegas 2:20
Oh, wow. I didn’t realize CMAC was that old.
F. LeRon Shults 2:23
Well, it was the Institute for the Biocultural Study of Religion before it was CMAC. And so I’ve been a senior researcher since, yeah, since 2011.
Seth Villegas 2:33
I thought today it’d be great to talk to you a little bit about simulation. I think that there are a lot of misconceptions about what simulation actually is, probably in part because of some really compelling cinema. When most people hear the word simulation, they’re thinking of something like The Matrix, you know, being able to simulate all of reality, in part because Elon Musk has said things about that on the Joe Rogan podcast while smoking weed, right? But then there’s also, you know, the computers running simulations in Star Trek, I don’t know, a million a second or something like that.
F. LeRon Shults 3:06
And Rick and Morty, Rick and Morty.
Seth Villegas 3:08
That’s right. And so, you know, there’s kind of all this science fiction around simulation. But I think it’d be really great to talk about what academic research on simulation actually looks like, and maybe a project or two that you’ve been involved in.
F. LeRon Shults 3:23
Sure, yeah. Well, I think maybe the first important thing to say is that simulation occurs in all kinds of academic disciplines. It’s been used in physics and chemistry and traffic design, and so forth. So it’s very common in all kinds of sciences, but it’s relatively new in the social sciences. It was already being used in the 60s and 70s, especially among archaeologists, in some sort of initial simulations. But in the last, I would say, 10 or 15 years or so, it’s really taken off. So you shouldn’t imagine The Matrix, or Rick and Morty with Rick’s battery in his car; basically, think of it as a sort of digital twin of a society, a virtual or artificial society. You have programmed within a computer individual agents who can be more or less complex. They might be super simple and just make one decision based on whether they’re close to somebody, as in the famous segregation model by Schelling. Or, as in our models, we try to be as cognitively realistic as we can be: our agents might be more or less religious, they would be gendered, they would grow older, they might have jobs, they might have different levels of prejudice or aggression or all kinds of other variables, depending on what you want to study. Then they’re in social networks, they interact with each other, and you can alter the environment in which they interact. Some of our models, for example, have parameters in the environment such as contagion threats, or predators, or natural hazards like tornadoes or volcanoes, or cultural-other threats, which can be seen as either a threat of war or maybe too many migrants coming into an area, which can make some people feel threatened.
So basically, our simulations are digital worlds, virtual realities as it were, in which all these agents are interacting with each other under different sets of conditions. Then we explore those conditions and see what happens when you alter the environment or when you alter their attitudes or their variables.
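The Schelling segregation model mentioned above can be sketched in a few lines of code. This is a minimal illustration under generic assumptions (the grid size, tolerance, and relocation rule here are arbitrary choices), not code from any of the models discussed in the episode:

```python
# Minimal Schelling-style segregation model: agents of two types sit on
# a grid and relocate to a random vacancy when too few of their
# neighbours share their type. Even a mild preference tends to produce
# strongly segregated clusters at the macro level.
import random

def schelling(size=20, empty=0.1, tolerance=0.3, steps=50, seed=1):
    rng = random.Random(seed)
    cells = [(x, y) for x in range(size) for y in range(size)]
    # Each cell holds type 0, type 1, or None (vacant).
    grid = {c: (rng.choice([0, 1]) if rng.random() > empty else None)
            for c in cells}

    def unhappy(c):
        x, y = c
        nbrs = [grid.get((x + dx, y + dy))
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
        same = sum(1 for n in nbrs if n == grid[c])
        occupied = sum(1 for n in nbrs if n is not None)
        return occupied > 0 and same / occupied < tolerance

    for _ in range(steps):
        movers = [c for c in cells if grid[c] is not None and unhappy(c)]
        vacancies = [c for c in cells if grid[c] is None]
        rng.shuffle(movers)
        for c in movers:
            if not vacancies:
                break
            dest = vacancies.pop(rng.randrange(len(vacancies)))
            grid[dest], grid[c] = grid[c], None
            vacancies.append(c)
    return grid
```

The point of the exercise is the one made in the interview: segregation is never programmed in directly; it emerges from each agent’s single local decision.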
Seth Villegas
Would I be correct in saying that… there’s this old thing that I had to program back when I was learning C, this game called Life, in which you have little nodes, and whether they continue or not depends on whether they’re next to other nodes? Would you say that’s kind of a really low-level simulation of a kind?
F. LeRon Shults
Yeah, yes. Because you have entities that are connected and that follow different algorithmic rules. So in that sense, yeah.
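The game in question is Conway’s Game of Life: each cell lives or dies each generation based solely on its eight neighbours. A minimal sketch of the standard rules (this is a generic implementation, not tied to any model from the episode):

```python
# Conway's Game of Life: a cell is alive in the next generation if it
# has exactly 3 live neighbours, or if it is alive and has exactly 2.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row, which oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

As LeRon notes, this is a simulation in exactly the relevant sense: connected entities following a simple algorithmic rule, from which larger patterns emerge.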
Seth Villegas 6:12
Okay, so it sounds like it’s really important, then, that you… well, you have those algorithmic rules, and I’m imagining those are embedded in some sort of larger model, perhaps as part of a theory. Are you trying to test out different kinds of theories when you’re doing this?
F. LeRon Shults 6:25
It can be; that’s one of the uses of modeling, to test out a theory, or even just to formalize the theory, just to take the theory from English, say, into computer code, which forces you to be really clear about your assumptions and your definitions and the relationships between the variables. In our experience with different theorists, we’ve found that they always come away understanding their own theories better, and realizing places where maybe they weren’t as clear about the connections within their theories. So modeling can be used simply to clarify conceptually, but it can also be used to try to explain: either retrodictively, what is most likely to have happened in the past, or to explain the mechanisms that are most likely to be at work when you have a shift in a society. Or you can even, in some cases, do forecasting models: under certain conditions, here is the most likely future scenario to occur within your artificial society.
Seth Villegas 7:28
I understand that you’re using these bigger social theories, and, as you said, taking them from English, so from something like a book or an article, and trying to implement them in code, which sounds challenging enough. But some of the things you mentioned earlier, things like prejudice, are really complex social interactions and emotions. I guess one of the reasons why I bring up prejudice in particular is that in our own society there’s lots of debate about how subtle it is, right? These kinds of interactions may be coded in certain ways. And I imagine if you’re going to look at any social phenomenon, you’re going to run into similar questions of whether that is really the thing that’s being measured, because people are really complicated. I imagine one of the reasons why this hasn’t been applied to the social sciences before is just because it seems really hard to capture that.
F. LeRon Shults 8:18
Yes, really good question. This is a natural concern that not just social scientists, and not just qualitative social scientists, but humanities scholars and even quantitative sociologists can initially get worried about, because you can’t capture, at least now, all the intricacies and nuances and interpretive details. So I sometimes like to think of it with the analogy of a map. A map abstracts from the real world and gives you just the information you need, the grossest-level patterns, to get from one place to another. So if I wanted to travel from here in Kristiansand, which is in southern Norway, up north to Oslo, I would want the map to tell me where the mountains are, where the rivers are, and the bridges and the roads. But I wouldn’t need the map to show me where the molehills are, or the grass, or the quantum fluctuations in the water, right? I just need it to abstract the basic structures that are important for the purpose I’m using the map for. So a famous phrase you often hear in the modeling community is: every model is wrong, but some are useful. No model is going to capture everything exactly, perfectly. But some models can abstract basic structures or causal mechanisms in a way that does shed light on at least some of the mechanisms and conditions that are important factors in a social phenomenon.
Seth Villegas 9:58
So it’s interesting that you said that every model is wrong but some of them are useful, because I think that raises another question: how would you even know it was wrong? For instance, how would you test it against something? Do you try to correlate it with real-world data? Do you go through scenarios in which you already have data? What does the process actually look like for even seeing where a model is useful?
F. LeRon Shults 10:21
Yeah, well, the main sense in which every model is wrong is an epistemological one: you can’t capture all the contingencies and intricacies of the real world, because by definition you’re being reductive and abstracting. So in that sense, it’s necessarily wrong. What makes it more or less useful, or you could even say right, in the sense that it really simulates the real world, is exactly what you hinted at there: the use of real-world data. The gold standard would be beautiful, lovely, detailed data at the individual level, in a massive panel with a large sample, right? But you can also get validation through, maybe not quite the other extreme, but what we sometimes call face validation. Here you have experts in whatever field it is, and for decades they’ve been researching, exploring, doing empirical work, so they have an intuitive sense of how the phenomena they’re experts on work. Then you can explore the parameters and run simulations, and this is sometimes called calibrating the model: you work with the subject matter experts until the model reacts in the way that they say the real world would when you change the parameters that you’re discussing.
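The calibration loop described above can be illustrated with a toy parameter sweep: run the model repeatedly at each parameter setting and check whether the output moves in the direction the subject matter experts expect. Everything here (the stand-in model, the parameter name, the expected monotone relationship) is a hypothetical illustration, not any of the Center’s actual models:

```python
# Toy "face validation" check: sweep a parameter, average several runs
# per setting, and verify the output shifts the way experts expect.
import random

def toy_model(threat_level, seed):
    """Hypothetical stand-in model: higher threat -> more conflict,
    plus some run-to-run noise."""
    rng = random.Random(seed)
    return 0.8 * threat_level + 0.2 * rng.random()

def sweep(param_values, runs=20):
    """Average the model output over `runs` seeds at each setting."""
    return {p: sum(toy_model(p, s) for s in range(runs)) / runs
            for p in param_values}

results = sweep([0.0, 0.25, 0.5, 0.75, 1.0])
# Expert expectation (assumed here): conflict rises with threat level.
levels = sorted(results)
monotone = all(results[a] < results[b]
               for a, b in zip(levels, levels[1:]))
```

In practice the check is qualitative and iterative, with the experts in the loop; the sweep-and-compare structure is the part this sketch captures.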
Seth Villegas 11:48
I think this is something you’ve actually mentioned quite a bit: there’s a lot more integration of experts and their expertise into this process of simulation than most people think. In part, I study people who are really interested in computers, this group of people called transhumanists, who are trying to completely remove the human element. But from what I’ve heard you say so far, that element seems to be really crucial in actually checking the models, making sure that the simulations even seem right.
F. LeRon Shults 12:18
This is an approach that we call human simulation. With Wesley Wildman and Saikou Diallo and Andreas Tolk, we published a book last year called Human Simulation, where we describe this method that we’ve used in many different kinds of models. And this is where the ethical issues really start to emerge, I think. All the way from the assumptions that are driving the construction of the model to the purposes of the model itself, you have implicit or sometimes explicit ethical ramifications. So we want not just subject matter experts; where we’re studying phenomena that are, you might say, socially relevant, addressing societal challenges that a lot of people are concerned about, such as prejudice or conflict, what we try to do is bring not just subject matter experts but stakeholders into the dialogue from the beginning, and even change agents where possible. Basically, we think that computer modeling and simulation can be used to surface the ethical assumptions that people have, the moral judgments that people have, because it forces you to say really clearly how you’re defining, for example, immigrants, or minority, or prejudice. It forces real clarity, which can help you realize moral assumptions or ethical implications in the very framing itself. And then on the other side, when the model is done: what’s the purpose of the model? Why are you doing simulation runs? To find out what? In order to influence what? I mean, insofar as our model on mutually escalating religious conflict really does disclose some of the mechanisms and conditions under which you get two religious groups fighting, that could be used to help resolve or avoid religious conflict, but it could also be used to cause or promote such conflict, right?
So every model, every social simulation I should say, brings with it not just ethical assumptions but also ethical implications in the actual use and purpose of the model.
Seth Villegas 14:48
If we could take a little bit of a step back, you mentioned stakeholders, so I think that might be important to clarify. What is a stakeholder, first of all, and how would you even know who the stakeholders are? You mentioned, say, perhaps immigrants. Are these just the people being simulated? How do you figure out who you’re talking about, who you’re dealing with, and who it affects?
F. LeRon Shults 15:12
That question itself already really draws out the difficulty of this. I mean, ideally, you would have as many people as possible who have a stake in the issue involved in the conceptualization of the model, the calibration and validation of the model, and the use of the model. But the problem is one of time and energy and finances. And even if we had time and energy and resources, it’s not as easy as you might think to get a whole lot of people interested in your simulation, because people have their own lives; even policy professionals are busy. Even people who are deeply concerned about, for example, prejudice or religious conflict: why should they spend the time it takes to talk with us and work with us in developing models? So it requires us to do the best we can to find the stakeholders we think are most relevant in each case, to always make the source code freely available, and, once the model is done, to try to bring in as many people as possible to give critical feedback on it.
Seth Villegas 16:26
Okay, so I think it makes a bit more sense, then, why you were talking so much about assumptions, the things that you assume really affect the model, and about how it’s going to be used, how it’s deployed, and all that. Because, for instance, one of the things that I was just thinking about, a little tongue in cheek, so forgive me: why are red sports cars always causing traffic jams in traffic models, right? There’s a specific image of the type of person who’s cutting people off, and that’s been reproduced in the model. And if you’ve seen some of these videos, they’ll zoom out, and there are just red sports cars everywhere, cutting people off, and suddenly there’s a traffic jam.
F. LeRon Shults 17:04
I’m not familiar with that. That’s interesting.
Seth Villegas 17:06
You know, I just thought it was kind of a really intuitive example of what you’re saying: there’s a certain type of person who creates a certain type of problem, and then we’re going to replicate that problem. What we’re showing is the model, and, you know, it can be repeated over and over.
F. LeRon Shults 17:22
To stick with that example: if you have real data showing that red sports cars are the most common type of car causing traffic jams of this sort, and you have evidence of that, then you develop the model in a way that tries to simulate the emergence of traffic jams using people who drive red cars with certain sorts of variables. In the same way, in our models, if we’re trying to simulate the emergence of religious conflict, then we use empirical evidence from psychology, sociology, whatever fields are most relevant, so that we can design the simulation so that it matches the sorts of variables and behaviors that seem to be causal mechanisms in the real world.
Seth Villegas 18:09
I think that’s a good place to delve in more, then. You mentioned this problem of mutually escalating conflict in these different situations. Could you maybe help us take a little bit of a deep dive into who the groups involved are and what sorts of situations you’re talking about, and maybe afterwards, how you would plan to use that to defuse some of that tension?
F. LeRon Shults 18:29
Yeah, I’ve been referring to what we call, or what I call anyway, the MERV model: mutually escalating religious violence. The idea here was to try to understand some of the causes of escalating intergroup conflict when you’ve got two groups that have different religious worldviews, ideologies, and ritual practices. Not to go into too much detail, but in this case the simulated agents have psychological variables related to, for example, identity fusion, which a lot of the research on religious conflict and radicalization points to, and here you’ve got qualitative evidence as well as psychological experiments and survey research. Each of the agents has a different level of feeling fused with their group. That research, not computer modeling but other research, shows that people who are more fused, whose sense of identity is identified with their group, are more likely to be willing to kill or be killed for the sake of their group, or to engage in radicalized violence. There were other social psychological theories that informed the agents, but just to use that as an example: each of the agents in the model has a different level of identity fusion, belongs to one of the two different groups, and also has different levels of tolerance for cultural others, in other words prejudice, and different tolerance for anxieties related to contagion, or predation, or natural hazards. That model was validated in relationship to data from the Troubles in Northern Ireland, about 30 years of conflict between Roman Catholic and Protestant groups, where there were also political factors, obviously, but religion was an identity marker, and also the Gujarat riots in the early 2000s in India, which was a conflict between Muslims and Hindus.
So, basically, that model found that you’re most likely to have mutually escalating conflict when the distribution of the groups is around 70-30, so a 70% majority and a 30% minority, and when, for the majority of the simulated individuals, specifically their cultural-otherness anxiety and their contagion anxiety, their tolerance for those two threats, is surpassed. That’s when you’re likely to get mutually escalating religious violence. So not predation, and not natural hazards. And if you’ve got groups around 50-50, you don’t get it, or 90-10, you don’t get it. So the model worked from the micro level up: we didn’t program religious conflict, right? We programmed, at the psychological level, these tendencies and ways of interacting and responding to other individuals in your networks. And from that there emerged the macro-level phenomenon of mutually escalating religious conflict. And it matches the real world, in which you have about those percentages of majority and minority, and those levels of threat going over a threshold that the population could tolerate.
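The shape of that finding can be condensed into a small predicate. To be clear, this is a hypothetical reconstruction of the finding’s logic for illustration, not the published MERV code; the numeric bounds and tolerance values here are assumptions:

```python
# Sketch of the MERV finding's logic: escalation is predicted only when
# there is a clear majority/minority split (around 70/30) AND both
# cultural-otherness anxiety and contagion anxiety exceed the
# population's tolerance for those threats.

def escalation_risk(majority_share, otherness_anxiety, contagion_anxiety,
                    otherness_tolerance=0.5, contagion_tolerance=0.5):
    """Return True when conditions resemble those under which the model
    produced mutually escalating religious violence."""
    lopsided = 0.6 <= majority_share <= 0.8   # roughly a 70/30 split
    threats_exceeded = (otherness_anxiety > otherness_tolerance
                        and contagion_anxiety > contagion_tolerance)
    return lopsided and threats_exceeded
```

As in the interview: a 50-50 or 90-10 split does not escalate even under high anxiety, and neither does a 70-30 split when only one of the two anxieties is elevated.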
Seth Villegas 21:57
So you have these two different groups in a model, and you can kind of play with how many of them there are relative to one another. And one of the key traits is basically how much they identify with their group. I think another key part of identity fusion theory is that people within that group, if the group is attacked, personally feel attacked, right? They identify in a way that’s very, very connected to whatever it is that’s going on. And so it sounds like that ratio you’re describing, the 70-30, seems to be the perfect ratio. Well, not the perfect ratio, but one of the thresholds, I guess we could say, at which this kind of escalating violence starts.
F. LeRon Shults 22:38
Yes, that’s right.
Seth Villegas 22:39
Again, that almost sounds miraculous, given what you had said about using these kinds of psychological variables. And I guess, again, I think that sounds really difficult to conceptualize at some point, because it’s so foreign. Even when we think about simulation, the quantitative stuff of, say, maybe molecules swirling around seems a lot easier. What does it look like, then, when these agents are interacting with each other? Because this is a specific type of model, right? Do they have a virtual space that they’re living in? How do the ratios even play out in the virtual space?
F. LeRon Shults 23:14
Yeah, I mean, here you can’t imagine The Matrix, or even The Sims, or Rick and Morty. We don’t have a lot of visualizations of our models; in part that’s because of the computational power that would be required, but it’s also because we don’t really need them. The agents interact in, or you could say it this way, the distance between them is also virtually represented in the algorithms and in the models. So there are visualizations, but they’re not overwhelmingly compelling, I can put it that way. What makes the models compelling, I think, is rather the data that’s produced: it shows that you’re able to get to the macro phenomena that you’re trying to explain from the micro-level variables and meso-level interactions. And in social science, this has been a kind of Holy Grail: first of all, theoretically, how do you link the micro and the macro level? And then secondly, causality: not just that there’s a correlation between these variables, but what causes what. And this is what models can shed light on in a way that other types of methodologies can’t.
Seth Villegas 23:53
I can definitely understand why that would be the holy grail, because if you really figured out all the starting conditions, if it really worked that way, then you would just yield the thing: you’d have a really good representation of whatever the phenomenon was that you’re trying to capture. Would you say you’ve been rather successful, then, in trying to represent those things through, say, MERV, the mutually escalating religious violence model? Are there still things you’d like to fine-tune?
F. LeRon Shults 24:52
Oh, definitely things to fine-tune. You could say it this way, Seth: the models that have been successful are those that have been published, but there have certainly been lots of models that we’ve spent quite a bit of time and effort working on that weren’t successful; we weren’t able to simulate the emergence we were hoping for. Some of those we’re still working on; others are sitting in files, because we don’t have the time and energy to keep trying. And it could also be that there’s something wrong with the theory, or that we don’t have enough data. But we have been successful in accomplishing what we set out to do, which is to produce some models that have relatively complex cognitive agents in social networks, and that have been able to simulate the emergence of the macro-level phenomena that are relevant to the kinds of societal challenges that we hope to help address.
Seth Villegas 25:21
Would you say, then, that you found some of the problems a bit easier to simulate than others? Does that relate to the kinds of data that you had? I mean, you mentioned that maybe the theory just isn’t a good explanation, which could also be true, but it would also seem that there are just different types of things that maybe you can or can’t simulate easily.
F. LeRon Shults 26:05
Yes, exactly right. It has to do with data, it has to do with good theory, and with having the time and resources, and computer modelers who are creative and intuitive at formalizing and programming the theories in a way that they can be connected to the data. So you can imagine a kind of triangle where you go back and forth between the angles: a theory on one side, data on another, and the programming of the architecture itself. You have to be constantly moving back and forth between data and theory and the architecture until you say, okay, now is the place to stop, we’ve got enough. The architecture has been able to capture enough of the theory, validated by enough of the data, that we can say something interesting.
Seth Villegas 26:52
What you just described sounds like you need a whole team of people to even make simulation possible, right? I mean, you’ve already mentioned subject matter experts, you mentioned people who can kind of translate that to computer programmers, and then you even suggested that the computer programmers themselves might have to have this intuition and creativity when it comes to just implementing the thing. That sounds like a very tall order, to say the least.
F. LeRon Shults 27:17
Yes, it is. But I should say that there are a lot of single individuals who have done simulations. For decades, people have developed their own simulations, implemented their own theories, done their own coding, and done really good work. But what we’ve found is that if you bring together these multidisciplinary teams, you usually end up with something that’s better than any one individual could produce on their own.
Seth Villegas 27:47
If we could go back a little bit to what you mentioned earlier about trying to connect the social sciences and the humanities to coding: I don’t know, maybe just at this point of the conversation it’s hit me that if I were going to take my theory on transhumanists, and kind of the way their beliefs work, and try to validate that, I don’t even know if I could explain it in a way that would translate to code. You know, there are no formulas in my head about how that would work. There are no ratios or anything. It’s just lots of hunches about things. I have real-world data, I have things that I’ve read, but how would I even go about trying to explain to you, okay, this is what the model would look like?
F. LeRon Shults 28:25
And this goes back to where you need programmers who are creative and intuitive. Basically, most of the subject matter experts that we’ve worked with are as you describe: they have not formalized their theories and are not familiar with computer modeling. So they simply answer questions asked by the computer modelers, or by me or Wesley or one of the other people on our team who’s been on both sides and has been through the process before. But it’s really important, in my experience, that the modelers themselves are open and intuitive, able to tease out from the subject matter expert what it is; in other words, to help them formalize, to formalize it for them, if I can put it that way.
Seth Villegas 28:33
I do think that’s really important. I guess I won’t go into the specifics, but one of my colleagues here at CMAC was working on a modeling problem, and the programmer that she was working with kept getting certain things wrong about how the model was supposed to work, even things as simple as a negative sign that should have been positive, so basically getting a whole correlation wrong: if this goes up, this other thing should be going up, and instead it had the reverse pattern of what it was supposed to. And so it seemed like there are a lot of areas for miscommunication there, just because most programmers that I know don’t have the kind of social science expertise, because it’s nearly impossible to have it. I mean, you can’t expect someone to have years of programming experience and a PhD-level education.
F. LeRon Shults 29:59
Exactly. This is why teams are so important. And I think another thing about teams is that if you have more than just two people working on it, then you’re more likely to catch mistakes of that type.
Seth Villegas 30:11
Oh, sorry, if we could talk a little bit more, then, about your background: how did you get started with modeling? You mentioned you’ve been a part of CMAC, or at least a version of CMAC, for the past 10 years. So maybe you could tell us a little bit more about the kinds of projects you’ve been involved in, how you got involved, maybe how you met Wesley, you know, kind of wherever you want to start, really.
F. LeRon Shults 30:29
Sure. Well, I actually met Wesley in 1994, at the American Academy of Religion, and over the years we became friends and interacted in different ways, mostly at the level of philosophy of religion and religious studies. But then, I want to say back in maybe 2012 or ’13, or something like that, I was part of an interdisciplinary team, based at Stanford, that was doing archaeological research at Çatalhöyük in Turkey. They had anthropologists, psychologists, sociologists, philosophers, and they had already had funding for several years, and they had just gotten funding to go for another three years. And they asked me if I would come back. And I thought to myself, you know, it’s been fun, but I have lots of other things to do, so I think I’ll only do it if they’ll also invite Wesley. So I asked them if they were open to that, and they said, well, yeah, what would that involve? And so I called Wesley and I said, would you want to join me in going to Turkey in the summer for three years, studying the origins of religion, the cognitive science of religion, and so forth? And he said, okay, if we can do computer models. And I had no idea what he was talking about, but I said okay. That’s how it started. And then that first summer that we were near Konya in Turkey, Wesley started explaining computer models to me, and we actually started developing two or three related to the Neolithic transition. Briefly, we simulated the shift from hunter-gatherers to sedentary agricultural collectives. So that was the beginning for me. And I’ve often said it was like finding the methodology I was separated from at birth, because I was trained as a philosopher, in philosophy of religion, but was always as interdisciplinary as I could be.
Computer modeling gives you an architecture, a kind of scaffolding, where you can bring together multiple disciplines and explore their interconnection and their interaction, and then run simulation experiments on whether the way in which you've theorized the connections really makes sense, really does lead to the emergence of what you're hoping to explain. So that's the short version. Then Wesley and I got various grants over the years to develop additional models, and we currently have several grants under which we're developing models on things such as religious conversion, religious change, the relationship between religion and pro-sociality or altruism, ideology, ideological polarization, and other issues not even related to religion, such as the spread of misinformation or anxiety in the wake of the COVID-19 pandemic. So that's the short version of how it started and how we got where we are.
Seth Villegas 31:27
You know, one of the things that you said really stuck out to me: that it's the methodology you were separated from at birth, especially because, you know, you mentioned kind of having this background in philosophy of religion, that there's this real appeal to the way that computer modeling works, right, even the way in which you obtain results. Perhaps you could say a little bit more about that, in part because I did my undergraduate at Stanford, right. And so, you know, it's very natural to be around programmers at a very high level, but it can also be intimidating, I think, on both sides, right? You know, humanities people kind of do this sort of ethereal work that doesn't make any sense to programmers. But programmers also have this kind of concrete quality to their products that can give humanities people a little bit of a run for their money. And so I guess, what did you find in those models that was so appealing, that you've devoted so much of your work to following that methodology?
F. LeRon Shults 33:41
Great question. I don't think it would have worked if I had not been doing it with Wesley. Because Wesley is one of those rare people who knows both computer modeling and programming, but is also trained in the humanities. I was trained only in the humanities originally, and psychology, but not quantitative psychology. So it was new, and it was overwhelming. But basically, the way we did it is, I didn't initially need to understand modeling, or the different programs that were used. All I had to do was explain what I thought and defend it. So Wesley would just keep asking me questions. We spent four days, almost nonstop, in a hotel there in southeastern Turkey, where he just grilled me on theory after theory after theory to try to make sense of what was happening during the Neolithic. And so basically all I had to do was be a philosopher of religion with interests in anthropology and psychology and sociology, and describe the theories and my argument for how I thought they were connected and were influencing each other. And then over time, we, and it's not just me, Wesley too, formalized that into a system dynamics model that was able to simulate the emergence of the Neolithic transition.
Seth Villegas 35:55
Is the system dynamics model the same as the type of thing that you were talking about earlier, with agents and whatnot?
F. LeRon Shults 36:02
No, not exactly. An agent-based model has, as you'd expect from the name, agents that interact. System dynamics models don't have individual agents, but rather variables within a system that influence each other. So, for example, you'd have a dynamic related to the cognitive tendency to detect supernatural agents, or willingness to give up your freedom in order to be in a collective, or social complexity. These would be different systemic variables that would have to be connected to each other using algorithms. In the early days, I didn't write the algorithms. I just talked about how I saw things connected, and Wesley, who is obviously a theorist too, turned that into formalized differential equations that would shape the causal architecture of that particular model.
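To make the distinction concrete, the kind of structure Dr. Shults describes here can be sketched in a few lines of Python. This is a toy illustration only: the variable names, couplings, and coefficients below are invented for the example, and are not taken from the actual Neolithic model discussed in the interview.

```python
# Minimal system dynamics sketch: no individual agents, just a few
# coupled state variables updated by simple difference equations
# (Euler steps). All names and coefficients are illustrative.

def step(state, dt=0.1):
    """Advance the three systemic variables by one time step."""
    agent_detection, collectivism, complexity = state
    # Hypothetical couplings: each variable nudges the others.
    d_detect = 0.05 * complexity - 0.02 * agent_detection
    d_collect = 0.04 * agent_detection - 0.01 * collectivism
    d_complex = 0.03 * collectivism - 0.02 * complexity
    return (agent_detection + dt * d_detect,
            collectivism + dt * d_collect,
            complexity + dt * d_complex)

def run(initial, steps=1000):
    """Integrate forward and record the full trajectory."""
    state = initial
    history = [state]
    for _ in range(steps):
        state = step(state)
        history.append(state)
    return history

trajectory = run((0.1, 0.1, 0.1))
```

In a real system dynamics model the couplings would be theoretically motivated differential equations, calibrated and defended by the interdisciplinary team, but the basic shape, variables influencing variables through formal equations, is the same.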
Seth Villegas 36:56
You know, as you're kind of reminiscing, it almost sounds like there's this real, I don't know, invigoration, perhaps, to this process, because from what I know of Wesley, he's my advisor, I'm sure you were asked really specific questions that you may not have necessarily thought about.
F. LeRon Shults 37:11
Nonstop, all the time. Yes. And that's one of the things that makes it so fascinating, for me anyway. Being forced to think through things at this level of detail leads to questions you'd never thought of before, connections that you never even realized you needed to make explicit. And as a philosopher driven toward clarity, it both forces clarity and enables clarity. For me, it is a truly invigorating process.
Seth Villegas 37:42
I'm in the middle of writing my prospectus right now, right, which is my dissertation proposal. And one of the things that I've noticed is, you know, the parts where I'm a little bit fuzzy in my mind, I always kind of hope they'll get past Wesley, but they usually don't. And so I guess I'm kind of wondering, that is normally, like you'd say, done through your peers, through criticism of one another, that sort of process of getting more specific about things. It would seem that the academy tries to do that naturally, or it has mechanisms to do that. It's set up to do that.
F. LeRon Shults 38:15
I would say the ideal, the ideal, is that we would want others to contest and challenge our ideas, so that we have to be more clear in defending them, or in letting go of them.
Seth Villegas 38:27
Maybe we could talk a little bit more about that, because I don't always see that as much. In fact, what I usually see, I did debate in high school, and so I think about this kind of lack of contact between different kinds of arguments, different kinds of models, where they kind of live in parallel worlds with parallel explanations. And so people just kind of have their little project that they're working on, but they can be somewhat insulated from one another. And as a result, I don't know if they're as fruitful as they could be, because in part that insulation protects them from those kinds of specific questions, I think.
F. LeRon Shults 39:01
I completely agree. I think it's an ideal of most academics and academic disciplines that you be open to having your theories contested. But you're absolutely right: scholars are human too. And so it's all too easy to be drawn into an echo chamber, in which everyone presupposes a certain theory, or a certain reading, and it's just arguing about who's interpreting it best, and the possibility that something's drastically wrong is never considered, because there isn't anyone outside being critical. My early training was in theology. I was a theologian for many years, and perhaps that discipline is an especially good example of this. You just take for granted that this or that holy text, or particular sample of holy texts, is true, and then you just argue indefinitely about how to interpret it. But the possibility that the text is not true, or that the supernatural agents that you imagine communicated that text are not real? That's never considered.
Seth Villegas 40:07
I think this also, you know, as someone who kind of grew up Protestant, I think that kind of speaks to the spiraling out of different interpretations, and the inability of those communities to talk to one another. I mean, there are efforts within those circles toward more ecumenism, right, which is, you know, kind of crossing those boundaries. But there is real investment, I think, in individual interpretations.
And so I think we see that, you know, not just in the academy, but in a lot of spaces. I mean, even talking about the online discourse today, I think we kind of see similar sorts of problems, where, if you made a bad assumption, there's no way to really know. And so I think we can kind of bring this full circle a little bit to how simulation works, where you're always starting with these parameters, and then you're trying to see if those parameters actually play themselves out in the way that they're supposed to. And if they don't, you know that you made a mistake. And so those errors are actually extraordinarily helpful.
F. LeRon Shults 41:05
Yes, yes. So we're all biased; all human beings are biased. One thing that computer modeling helps to do is reveal those biases, and that is another reason I think it's really helpful. One chapter of the book on human simulation is called "Simulation as a Lingua Franca": if you have the time and energy and cooperative partners to put together a team of the sort that we've been discussing, then computer modeling can be a kind of tool that integrates theories in a way that's fascinating, but also one that teases out the biases of the various disciplines and the individuals involved. And if you have relatively open-minded people as part of the team, that can be a fascinating process of discovery and alteration of the theories themselves.
Seth Villegas 42:01
I was wondering if you could tell us a little bit more about what it is you mean by bringing up those biases, in part because I can imagine the model just doesn't play out the way that you thought, so you go back and you think, oh, why did I think it was like this? Or when is it in the process that those things are actually uncovered?
F. LeRon Shults 42:17
That's one of the places where it's uncovered. Another is in the actual formalization itself. So when you're asked, very concretely, why do you think this level of identity fusion is likely to cause this kind of reaction, to this sort of person, under these conditions? I mean, that's not the kind of question you normally get if you're just sort of writing about identity fusion. So even being forced to think, wait a minute, why do I think that this might have a causal relationship to that? And then at the other extreme, the other side of the production process of the model, when you're running simulations and you realize that you have been able to simulate the emergence of a phenomenon, you might discover that what you thought was causing something isn't what was causing it, in the model, that is to say. So assumptions can be challenged from beginning to end, and throughout the middle.
Seth Villegas 43:13
I think one of the things I probably should have asked about earlier is that there are lots of probabilities at work in these models. I assume you're not just running a simulation once, but many, many times.
F. LeRon Shults 43:23
Yes, thousands and thousands and thousands of times.
Seth Villegas 43:26
Do you end up presenting that as a kind of aggregate? Do you have, like, a specific, I don't know, key simulation of some kind? What is it that goes into the paper?
F. LeRon Shults 43:35
Well, it depends on a lot of things, but just to stick with the model we've been discussing: you can't describe every simulation run. I forget how many simulation runs we did for that one, so I won't give a number; I know some of ours have been over 2 million, but I can't remember the exact figure. The idea is you run what's called a parameter sweep. So you initialize your model in all the possible configurations you could have, from a 1%–99% population split to 50–50, every different level of tolerance for the different threats, different levels of religiosity, different preferences for, or tendencies to, interpret things as supernaturally caused, or different desires or needs for connecting with homophilous others, people who are part of your in-group, in ritual interactions. So you explore all of the parameter space, the multidimensional state space of the model, and run simulations to discover what are sometimes called attractors, or attractor spaces, that you tend to get over multiple runs. If you do an optimization experiment, you want to find out what it is in the state space that leads to the optimal, and I don't mean ethically optimal, but optimal relative to the goal, say, of mutually escalating religious conflict. And then the scanning of the parameter space told us the features that led to it: about a 70–30 population split, and an initialization of agents with tolerance levels where they're especially unable to tolerate cultural otherness threats and contagion threats.
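The mechanics of a parameter sweep like the one Dr. Shults describes can be sketched schematically. The model function, parameter names, and outcome measure below are stand-ins invented for illustration; the real model, its parameters, and its 70–30 finding are far more complex than this toy.

```python
import itertools
import random

def toy_model(majority_share, tolerance, rng):
    """Stand-in stochastic model: returns 1 if 'escalating conflict'
    emerges in this run, 0 otherwise. The (invented) rule makes conflict
    more likely near a 70-30 population split and at low tolerance."""
    imbalance_effect = 1.0 - abs(majority_share - 0.7)
    risk = imbalance_effect * (1.0 - tolerance)
    return 1 if rng.random() < risk else 0

def sweep(replications=500, seed=42):
    """Run the toy model across every parameter configuration, with many
    stochastic replications each, and record the mean outcome."""
    rng = random.Random(seed)
    results = {}
    shares = [0.5, 0.6, 0.7, 0.8, 0.9]       # majority population share
    tolerances = [0.1, 0.3, 0.5, 0.7, 0.9]   # tolerance of otherness
    for share, tol in itertools.product(shares, tolerances):
        runs = [toy_model(share, tol, rng) for _ in range(replications)]
        results[(share, tol)] = sum(runs) / replications
    return results

results = sweep()
# The configuration with the highest conflict rate plays the role of an
# "attractor" in this toy parameter space.
worst = max(results, key=results.get)
```

Real sweeps explore many more dimensions and run into the millions of simulations, which is why the published result is an aggregate over the parameter space rather than any single run.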
Seth Villegas 45:31
Maybe one of the places we can start to land this conversation is something you mentioned much earlier: if the model actually works and helps you understand something, in the sense that it's useful, there are lots of different ways in which you can deploy that knowledge. Say you could use it to reduce religious conflict; but you could also hypothetically use it to stir up religious conflict. I think that raises larger issues about the better we understand these mechanisms. And I think this is also something we see in analogous problems in the social media space, right? The better we're able to manipulate people as a result of the kinds of conditions we can put them in, what does that say about us and about the way that we deploy those things? So is this something that you're really worried about?
F. LeRon Shults 46:18
Another great question. Every time I've ever given a public presentation on our models, one of the first questions that comes up is the question about ethics. And the way I try to respond is that, like any other technology, genetic engineering, nuclear power, and so forth, it can be used for good or bad, and people are using simulations for good, or for what we would consider bad, and they're not going to stop, just like they're not going to stop trying to develop nuclear weapons or genetic engineering. So what we try to do with our teams and in our publications is to say: let's not try only to get out in the forefront of having the most explanatory power and the most cognitive complexity in our models, though that's one of our goals, but also to be on the forefront of the ethical discussion. So Wesley and I have published several articles and book chapters on the ethics of artificial societies, on the ethics of computing and the future of humanity. We're trying to say the technology is going forward no matter what; we can either not be part of the conversation, or try to be part of the conversation and have it be as open to as many types of people and individuals as possible, which forces a kind of awareness. So even going back to your example of social media: I think the more people realize that they're being manipulated, the more they realize the effect of echo chambers, for example, then over time, the more resistance they'll have to it, or resilience against it. So being open and surfacing the ethical issues can, we think, itself be a contribution.
Seth Villegas 48:10
Okay, I'm really glad to hear you say that, in part because that's one of the things that this podcast is trying to do, right, is to kind of talk about these things. But also, I think that there are larger issues around, say, the deployment of knowledge, that kind of understanding of it, in part because, you know, tech companies have been really big on, say, quantitative psychology. All these things that you've mentioned that go into simulation are also deployed in other areas, and in different kinds of ways, but they're not nearly as transparent. Because you've even mentioned you're trying to make things open source, making sure that as many people are involved as possible. And that can be really difficult, in part because, you know, simulation is really hard to understand, but also because a kind of social knowledge doesn't seem as dangerous as something like a nuclear war.
F. LeRon Shults 48:53
Right. Very good point. Yes.
Seth Villegas 48:55
I think that all of those things make it a little bit more tricky in terms of, you know, how do you keep the public informed on the things that are going on? And I think I can talk to my own parents, I don't mean to put them on the spot, but neither of them has a college degree, yet I find them to be very intuitive about certain kinds of things. And so talking through things with them, maybe even in a way similar to what you're saying, of asking really pointed questions about things, you know, trying to really get into what it is I think about something, can be really fruitful. Oftentimes, we can't always explain what it is that we're working on, or why.
F. LeRon Shults 49:27
And that's true for the other technologies, genetic engineering or nuclear power, whatever it might be. And, of course, artificial intelligence itself, or the more traditional idea of a general AI, around which there's a lot of discussion of the ethical implications: what if we create an AI? Will it destroy the world? Who will have power over it? And so forth. But what I like to point out is that we already have multi-agent artificial intelligence, and the ethical implications of the kinds of models that we do can't wait. They're already being used, and they're already influencing and shaping people's attitudes and behaviors. But I think they can be used, they could be used, to discover new pathways for opening up societies, finding transitions toward sustainability, opening up new ways of interacting with people that are less conflictual, less damaging to the planet and to individuals.
Seth Villegas 50:29
I think that's a really hopeful note for us to end on: that the real power of this can really help society. And I think that's certainly my hope as well. I think the reason that we go into producing this kind of knowledge in the first place is that it would ultimately be helpful, that these kinds of issues wouldn't be worsened by our efforts, and that by ultimately uncovering these kinds of patterns, you'd be able to avert problems before they break out.
F. LeRon Shults 50:53
That’s the goal. Yes.
Seth Villegas 50:56
All right. Well, thank you so much, LeRon. I really appreciate you taking the time to talk to me today.
F. LeRon Shults 50:59
Thank you. Very enjoyable, Seth.
Seth Villegas 51:03
Thank you for listening to my conversation with Dr. Shults. Simulation is powerful, in part because it allows us to map out a complex process in further detail than we were able to before. However, while those simulations help us to understand our world, they are also built with particular assumptions about our world in mind. So while they help us to come to new understandings, they are themselves reflections of how it is that we already see something. And as Dr. Shults says, the world is simply too complex for us to map it completely accurately. I hope that you've had a chance to explore the usefulness of simulations, and also some of their many limitations. You can find more information about DIGETHIX on our website, digethix.org, and more information about our sponsoring organization, the Center for Mind and Culture, at mindandculture.org. If you'd like to respond to this episode, you can email us at firstname.lastname@example.org. You can also find this information in the description of this episode. I hope to talk to many of you again in the next episode. This is Seth, signing off.
Transcribed by https://otter.ai