Seth talks to Bernd Dürrwächter, one of DIGETHIX’s advisory board members and main collaborators. Bernd is principal at AnalyticDimensions.com, a consultancy for big data analytics & data science projects. He studied computer science at the Frankfurt University of Applied Sciences from 1988-1992. Bernd is a seasoned practitioner in software engineering, IT architectures, business intelligence & data analytics solutions.
Seth and Bernd discuss the challenges to education. They highlight how difficult it is to come to a consensus around values. The key questions for this episode are: given the value problem, how can a podcast like DIGETHIX make concrete suggestions? How can stimulating discussion work together with education to better equip the next generation of technologists?
Music: “Dreams” from Bensound.com
Seth Villegas 0:05
Welcome to the DIGETHIX podcast. My name is Seth Villegas. I’m a PhD candidate at Boston University working on the philosophical ethics of emerging and experimental technologies. Here on the podcast we talk to scholars and industry experts with an eye towards the future. Today’s episode will be a little bit different than the ones we’ve had previously, as I will be having a discussion with one of my main collaborators here at DIGETHIX, Bernd Dürrwächter. Bernd is one of the principal members of AnalyticDimensions.com, a consultancy for big data analytics & data science projects. He studied computer science at the Frankfurt University of Applied Sciences from 1988-1992. Bernd is a seasoned practitioner in software engineering, IT architectures, business intelligence & data analytics solutions.
In this episode, Bernd and I talk about the challenges to education. In particular, we highlight how difficult it is to come to a consensus around values. The key questions for this episode are: given the value problem, how can a podcast like DIGETHIX make concrete suggestions? How can stimulating discussion work together with education to better equip the next generation of technologists?
This podcast would not have been possible without the help of the DIGETHIX team, Nicole Smith and Louise Salinas. The intro and outro track, “Dreams,” was composed by Benjamin Tissot through bensound.com. This episode has been cut and edited by Talia Smith. Our website is digethix.org. You can also find us on Facebook and Twitter, @digethix, and on Instagram, @digethixfuture. You can also email us at email@example.com. Now I am pleased to present you with my discussion with Bernd Dürrwächter.
What I kind of want to talk about today, since we don’t have a recorded episode on this, but we did a lot of pre-work on it before the podcast started, is this aspect of education and technology, something you’ve talked a lot about and thought a lot about in the past. So I know that, for instance, you’re working on professional development: what would a curriculum look like, what larger framework could you build a curriculum out of, and what are the challenges of building that into a curriculum? And I’ve been working on some parallel stuff, but that’s aimed more at undergraduates, so a very different level of thing. I’ve been mainly doing that for the computing and data sciences unit here at Boston University. And I think it’s been useful for us to talk to each other, in part because, in an ideal world, students would learn things in primary school, through high school, through university, that would then be very useful as the baseline of skills out of which to draw the things that they might need, right, the more abstract things, so that when they come to professional curriculum development, the kinds of things that you’ve been doing, they’ll be able to process that information, digest it, and know how to apply it within their immediate context. But for the most part, what we’ve been learning (and Wesley and I actually had a consult with a professor at Georgia Tech about a week ago) is that most of the time, if you are going to have that sort of ethics curriculum, it’s going to be a very small part of a course, or maybe it is a course, but it remains at this sort of elementary level. And what I mean by that is just that it’s the basics. It’s, you know, let’s try not to hurt people with the kinds of technologies that we have; here are some ethical guidelines, maybe. But, you know, you just sort of leave it at that.
And I think we’re engaging in a kind of cultural situation that’s complicated enough that that doesn’t seem to be quite sufficient, in part because there are different kinds of technologies running in all different areas. There are highly centralized things, there are highly decentralized things, there are governments versus corporations; there are all these sorts of geopolitical, societal, and cultural forces at play in this question. And so as a starting question for you, Bernd: where do you see yourself in this process? And what do you think we can do to start to address the kinds of questions that people might have if they wanted to do something right? Again, we’re kind of assuming that these are people who are trying to give a good faith effort to actually make a difference where they work.
Bernd Dürrwächter 5:01
Yeah, to me the biggest challenge, and the point of interest and opportunity, is in bridging that gap between the engineering and technology side and the humanities. The way the American education system is, it’s very bifurcated: early on you decide you either do a technical, functional job, or you do one that’s more people-related, right? The work you’re doing, where you try to incorporate some of the ethics, is by definition a very human-centered aspect, and that is completely neglected in engineering and technology, what they call STEM education. The opportunity there is to bridge the gap, right? I mean, I’ve heard that the humanities are actually on the decline in general; if you look at what all the university programs offer, there’s just not as much demand, but everything is STEM, STEM, STEM: science, technology, engineering, and math. And bringing it back a little bit: your job as an engineer is not just to get something functionally done, right? If I give you a problem and you solve it in a technical fashion, you can still step back and ask what the actual problem is. You can take a little bit of ownership as an engineer to make sure that the problem you’re solving is actually a human problem, solved to the benefit of humans, not just, I do it because it’s technically possible. My whole identity and engineering pride is that I’m able to do something; whether it adds value to human lives is a whole different way of thinking. So in a way, you’re stepping back. And I said earlier it’s an American system, because at least when I grew up in Germany, it wasn’t so specialized; the philosophy part was always there.
Even if I said I majored in math and physics, I always had to have my language-based, history-based, and philosophy-based classes. Even though I said I wanted to specialize in science, they still forced us to stay, call it, holistic, no matter what we specialized in. And I think that’s the kind of gap we’re trying to close, where you go in and teach ethics in an engineering course. And maybe in the future, put people together, right, have people commingle regardless of what they study, so that the engineers hang out with the humanities people and vice versa. Then there will be some level of effect, where the less technical people hang out more with the technical ones, instead of everyone isolating themselves. You know what I mean?
Seth Villegas 7:18
Yeah. And just to go back to what you were saying before about the over-specialization, the way the American system works, I think you’re exactly right that the humanities in the American system in particular are on the decline. And actually, one of the interviews I did was with Wesley Wildman, who works here at CMAC, and one of the things that he says is that maybe the humanities are rightfully declining, because they haven’t done anything useful, right? Like there’s nothing useful actually coming out of them. And if I think back to my own education at Stanford, the humanities were often seen as an easy A, so to speak, right? Like there are no real methods, there’s nothing you can really do. How would you even know if you were wrong? How can you read a story wrong? You know, what does that matter? And it’s just really funny, because if you take those things really seriously, you can start to see how complicated everything is. So for instance, one of the things that I do research on is chatbots of people who have recently died. And there’s a question there, because you really could use machine learning processes to replicate, you know, how somebody would chat, based off maybe the history of all their messages. But there are already a bunch of problems that we can see. First off, how do you get access to those things? Can you get permission from someone who’s already deceased? Like, who has that data? But second off, is that something you even should be doing? Should it be the case that we hold on to people desperately when they pass away, to kind of preserve them, maybe even in a digital form? It’s not just this harmless sort of thing. And maybe it is harmless.
I mean, we’d have to wait well into the future to see what the impact of using the technology is, but at least for me, there are a lot of questions there. Just having the technology is actually kind of an admission of all these other values that people can be uncritical about. And that’s actually where the humanities should be really valuable: in bringing those things to the surface, like, oh, you value this sort of thing, so let’s do that, right? And if we just sort of make whatever, especially for a quick buck or something, it’s not clear to me that we’re going to end up with the kind of society that we would ideally want.
Bernd Dürrwächter 9:40
When I say humanities, I also don’t mean the over-specialization, like, let’s say you want to become a psychologist, or any job like that, like a teacher, anybody that’s not specialized in technology. What happens then is going to the other extreme, where you have to really deep-dive into all the humanities, and then an engineer will say most of this stuff’s not relevant. Like you said, the next layer needs to be a level where, when we say humanities, it’s the basic level, not the deep dive, not going into the depths of philosophy, literature, and history, but looking at something like my own schooling. And that’s probably also more of a European thing. If I compare the two cultures, Continental versus American, there are two aspects that are very unique in the American system. It’s very positive, very optimism-focused; you don’t really acknowledge problems. I see this at work: nobody wants to hear about problems. Everybody wants a positive goal, we’ll work towards it, and we’re going to ignore the problems; we don’t want to dwell on them. There’s a little bit of, we’ll worry about it when we get there, right? Like, there’s no point thinking about what could be problems, because it’s counter to the idea of being an optimist. And the other thing, and it always sounds so arrogant: there’s no sense of history. The United States has a 250-year history as a country, and almost no European country has consistently existed that long in the same form, right? They’ve all gone through transformations between different forms of state. And yet, if I look at my own schooling, whenever we looked at what could go wrong, we usually went back to historic precedent, all the way back: not necessarily World War Two, but the first wave of democracy. How did that work out for the Greeks? Why didn’t direct democracy work?
We always went back: we’ve tried this before as mankind, and it didn’t work so well. Why didn’t it work? What should we do differently? And in American culture, you really don’t worry about the historical part. It’s like, let’s try this, let’s not get bogged down with past experience; let’s try it, and then just change it if it doesn’t work out. And that is a real challenge to what we’re doing, where we try to proactively think about what could go wrong, even though we already have a lot of precedents individually. Culturally, at least on the business side, nobody really wants to talk about risks, or what could go wrong before it happens. And even then, it’s: let’s just fix it, let’s move forward, let’s move on, let’s not dwell on it.
Seth Villegas 11:58
One of the things I often think about is that one of the reasons there are business ethics classes is because of this period in history called the Gilded Age. It’s an ugly point in American history where, right at the beginning of the Industrial Revolution, everything is taking off, and there are big barons of industry just kind of doing whatever, right? And we had to come to some decisions over time of, hey, these sorts of labor practices, these sorts of production practices, these are just not things we’re going to tolerate; these are not good ideas. And I remember having a conversation a few years ago, before I came to my PhD program, of like, oh, I could see myself teaching technological ethics in 20 years, as a result of whatever happens within those 20 years. And I think what we’re trying to do now is, well, I think it would be better if we got ahead of some of those problems, so we don’t ruin everything, in part because these technologies are so powerful. To give another quick example, machine learning is a way of using statistics, and basically, you don’t have to understand the underlying process, right? You just use the algorithms to get the kind of outcome that you want. So if we use machine learning on something like Facebook or Twitter, and we say, okay, look, when we do these sorts of inputs, it gets people to use the platform more, and you do that in the kind of feedback loop that we talked about last time, what ends up happening is that you’re getting the outcome you want without looking at the underlying process. And then you have something that may be addictive, manipulative, right? Because you just know that it works, without ever thinking about how it works, or whether it’s right that it works. And those things can all be potentially problematic in all sorts of ways.
If we really want to have people at the center of this, a good experience for people, if we want to put people first for our technologies rather than simply their effectiveness, we have to start to pull back and ask some of these questions and think about, well, how do we get ahead of these problems? And an even more basic question: how would we even know if we were doing something that was bad?
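The engagement feedback loop Seth describes can be sketched in a few lines of code. This is a hypothetical illustration, not code from any real platform: an epsilon-greedy loop that reinforces whichever content variant maximizes a measured engagement signal, while never representing how or why that variant works. The variant names ("calm", "outrage") and their response probabilities are invented for the example.

```python
import random

def simulated_engagement(variant):
    """Stand-in for real user behavior; the optimizer never sees this logic."""
    base = {"calm": 0.30, "outrage": 0.70}  # provocative content "works" better
    return 1 if random.random() < base[variant] else 0

def optimize(steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy loop: mostly exploit the best-measured variant,
    occasionally explore the other one."""
    random.seed(seed)
    counts = {"calm": 0, "outrage": 0}
    rewards = {"calm": 0, "outrage": 0}
    for _ in range(steps):
        if random.random() < epsilon:
            choice = random.choice(["calm", "outrage"])
        else:
            # exploit: pick the variant with the higher observed engagement rate
            choice = max(counts, key=lambda v: rewards[v] / counts[v] if counts[v] else 0.0)
        counts[choice] += 1
        rewards[choice] += simulated_engagement(choice)
    return counts

counts = optimize()
```

Run enough iterations and the loop concentrates almost entirely on the higher-engagement variant. Nothing in the optimizer models the content itself or its effects on people; the "addictive, manipulative" outcome Seth worries about can emerge purely from optimizing the metric.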
Bernd Dürrwächter 14:15
So this goes back to: I don’t think there’s a broad consensus on what’s going wrong. When you watch these hearings, where Google and Facebook talk in front of the government, there are always issues raised, but a lot of people actually don’t think anything’s wrong. I was just reading an investment report on Facebook; Zuckerberg is a billionaire CEO because it makes so much money and shareholder value. There’s not enough societal consensus. If you asked any European whether the Holocaust was bad, 99% of people would say, yeah, it was bad; only a few straggling neo-Nazis would hold out. For the most part, it was terrible: 50 million people died in World War Two, right? And so if I look at modern data privacy, why are Europeans so sensitive about it? Well, the Nazis used all the data they could get their hands on to find out who was Jewish; people died because of the violation of data privacy. You don’t really have that here. Facebook did a bunch of stuff, but it’s not really that tangible what made it bad, and there’s no such historic precedent. Some people claim the genocide in Myanmar, where they say a million people died, was only possible because the perpetrators coordinated themselves on Facebook, but it’s abstracted: it wasn’t really Facebook’s fault, you know. It’s kind of like suing a machete manufacturer for the genocide that happened in Sudan, where they used Chinese-made machetes. So it’s really hard to pin down in modern America what actually makes it bad. We can get a consensus where you have a really massive event that everybody can agree is bad, but I don’t think we have a consensus when you talk ethics: this is bad, this is good.
And that’s one of the struggles, I think: a lot of people see value in these platforms. Like, I like Google, I like Facebook, they enrich my life, right? I don’t care if they use my data; I don’t have any beef with them. That’s not my opinion, but that’s what I run into: you make the assumption that what they do is not right, and some people say, I don’t think so. It’s hard to propagate a value system that not everybody agrees with.
Seth Villegas 16:17
I think there are two things at work in what you’re saying. The first I can think of is that we live in a multicultural society, and one of the consequences of that is that it’s simply harder to come to an agreement on what our core ethical foundations are going to be. You can see this most clearly in the very fraught American political situation, just because you have such a big divergence, and also there’s the gamification of politics here, right, where people don’t see the need to cooperate as much as they did before. That’s also certainly happening. And so there’s this kind of intangible effect of the technology: there seems to be a correlational aspect between the inflaming of those tensions and the kind of social media technologies that we have. But the relationship between those two things may be abstract; maybe it’s not direct, it just sort of happened. It’s kind of like the frog in the boiling water; we’re just kind of at that point now. And maybe we’re not even at that point yet, which is scary to think about, how much worse things could be. The second part of it, and this builds off the multicultural thing, is: if you don’t have a normative ethic, then how do you manage those disagreements? And we’re not really at a great point now where we can manage them skillfully, in part because the technologies operate in a monopolistic fashion, in which if you don’t conform to those rules, you kind of get squashed. There are other sorts of technologies, right, besides Twitter and Facebook, that try to do similar things, but there’s a kind of technology cartel that will smash them if they get too big or too influential.
And so one of the things you can do in a multicultural society is have these little divergences, little silos of people who operate by their own rules. But you can’t have those things, or maybe those things can only happen when you have something that’s flexible enough, not these kind of highly centralized things that everyone feels forced to use. I mean, not everyone’s on Facebook, obviously. But the fact that if you want to have a platform like this one, for instance, you have to engage with those big platforms, regardless of how you feel about them, is one of the issues that we’re actually trying to talk about here.
Bernd Dürrwächter 18:41
It’s interesting that you tie the difficulty of coming up with a shared value system back to the modern political environment, where we don’t have a shared value system on so many other levels outside of the digital realm. But I find it curious, almost ironic, how everybody has a beef with the social media platforms, both on the right and the left of the political spectrum; nobody really likes what they’re doing, but for different reasons. And we’re talking about, you mentioned, monopolies, so there’s a sense that they should probably be regulated, or some laws should be made to guide that. But it goes back to: we don’t have the shared value system at all. Nobody likes what they’re doing, whether you’re on the right, on the left, or in the center, but what do you want to tell them to do, right? Everybody could scream censorship, or lack of censorship. It’s almost like, they need to stop doing what they’re doing, but in which direction should they go? So take the term that you used, normative: then what is the norm? A lot of books point out that classical societies usually had norms, and I think our modern society, because of the multicultural reality, combined with being democratic, liberal, and progressive, I don’t think we have a precedent in history where you have 330 million voices all entitled to their own opinion. Historically, even America up until World War Two had a common narrative: we all need to pull on the same string, right? We don’t have time for individualism. But modern individualism almost makes it impossible to have some form of consensus, unless you’re in one of the extreme camps, right? There seems to be that filter bubble effect, where when you come to the far fringes, there seems to be cohesion within the fringe.
Seth Villegas 20:19
Earlier, you actually talked about direct democracy. And one of the things I think is interesting in this question of norms, and how to have norms, is: let’s say you had a poll for naming one of your cruise ships. What ends up happening is, people gather on the internet, you know, maybe on 4chan, maybe somewhere else, and they’ll name the boat something like Boaty McBoatface. And the thing is, I find that funny, in part because of my generation. But I can also see how, for the people who made that poll, it wasn’t their intention that their luxury liner would be named something like Boaty McBoatface. It’s this kind of sarcastic, sardonic mockery of whatever the norm was supposed to be. And so we see this kind of complex negotiation at multiple levels, even for things that may be more innocuous: maybe these people aren’t taking this process seriously, and in the age of the internet, is it actually important that they take the process seriously? Does that matter? Does it matter that we have things like that now, or is it just sort of irrelevant? And honestly, I don’t think anyone really knows how to answer those questions, in part because it is really strange now that you have such big differences in lifestyles. Again, referring to how things are in America: you have really blue areas in cities, really red areas in rural spaces, and this really complex tension of, well, how is it that you have the same rules for different kinds of locales? And this applies also to internet spaces: well, is it okay that someone has a subcommunity around something we disagree with? That’s, I think, kind of a fraught issue. Anyway, we were trying to talk about education, but these things are all kind of in the background of what’s actually going on with these technologies.
Bernd Dürrwächter 22:22
If you focus on education, you find exactly the same problem, that everybody has a different value system. Just look at education: some people want to teach science and others want to teach theology. I think in Kansas this is a big debate, evolution versus creationism. That’s a real struggle, right? And it’s pretty equal locally: when you’re in Kansas, it’s like a 50-50 split, and two different accounts of where we think life came from will be taught, even though by most standards one is a scientific discussion and the other is a theological discussion, right? I mean, I learned about the story of creationism as well, but it was framed as religion. It was framed as: we want you to understand that there’s this way of thinking, but we’re not claiming that this is science. There’s science and there’s theology, and both are important. And I think you’re in a unique position there with CMAC and your studies, since a lot of Wesley’s work focuses on reconciling the two: it’s important to have a belief system that’s not necessarily rooted in science while still reconciling it with the real world. It’s a holistic relationship, rather than this either-or. And the other thing I wanted to throw in, in terms of the polarization: I think there’s also a generational conflict. You have this classic situation where older people tell younger people what’s the right thing to do. And now you have this split: you have the digital natives who do things, and a lot of what older people tell them comes from the older people not being familiar with it. They try to make up rules just because they don’t understand it. Just listen to the congressional hearings and the things being said there, the back-and-forth between boomers and millennials.
So it’s kind of funny for me to watch both sides: one side sounds like my parents, the other one sounds like my kids, and I can relate to both. One source of resistance is that when most people hear ethics, they think, oh gee, it’s going to be a boomer or some older person trying to tell me how to live, right? You mentioned something about this rebelliousness in naming the boat, the whole lol culture; it’s one of rebellion. If you look at, for example, Facebook culture, like move fast and break things, right, it’s so anti-establishment. And if they don’t come to their own recognition that ethics is important, there’s no way to get through to them. Why would they stand for some elitist or older people telling them what to do? You almost have to go to the next generation. I see this with my Gen Z daughter, who already sees the side effects, who doesn’t look at Facebook anymore, right? Because they’re aware of what’s going on. They’re actually highly informed, they’re digital natives, but somehow more critical thinking goes into it. That first generation will live with the fallout of what the generation before them has done, like what you said: you want to build your expertise based on what happened in the last 20 years, the first 20 years of social media. And that’s a generational conflict. I see this from my perspective: if I tell people what the right things to do are, I’m just an old person telling them. How do you make this inclusive? How do you get everybody to buy in? What would be the forum? Education, to me, always sounds like an old person talking down to me. When we say education, some people would associate it with brainwashing, right, somebody imposing their system on you.
So I wonder whether education is the right word, or the right forum, versus something more like intergenerational exchange, or dialogue, or whatever you want to call it. The public school system as it is doesn’t have enough momentum even with much more mundane topics; there’s so much struggle within the school system over what should be taught, how it should be taught, and to whom. And then, you know, some students come out of the public school system and go on to specialized schools that already have that on the agenda.
Seth Villegas 25:56
I appreciate what you said about generational conflict, especially because I’ve also seen something similar. People who are a bit younger than me tend to, like you’re saying, just not be on Facebook, because they’re a lot more aware of the complications that come from being involved in it. And it’s definitely super interesting to see that they have a kind of sensitivity that I don’t have, even though I definitely consider myself, you know, one of the first digital natives, one of the people who really grew up with the internet. I do appreciate what you said, too, that when people talk about education, it can seem like talking down to people. And I definitely think that’s not what we’re trying to do, not on this podcast, and certainly not in the kinds of materials that we’re developing; rather, we’re focusing more on tools, more on skills. And there are a couple of reasons for that. One of them is just practical, in the sense that we actually don’t know what’s going to be relevant to the next generation of technologies; people are going to be facing problems that no one has ever seen or thought about before. And this is something that comes up a lot with technology in particular. So anticipating that they should do X in situation Y just isn’t really going to work. And I think the second part of it is out of real respect for the people that we’re trying to help. I mean, the way I see it, we’re kind of all in this together, and I see this at multiple universities across the country: okay, we’re trying to figure this out, because we know we need to figure out how to equip this generation of computer scientists, engineers, data scientists, all these kinds of technical people, for whatever is coming next that we can’t quite see.
And we have to be deferential in some sense; we just have to hope that they’ll have the wisdom to pull through whatever is going to happen that nobody knows about. That’s definitely scary. But I want to be respectful to those people, in part because, as you’re saying, they may already see things that I can’t see because of my position. So giving them what I can, and maybe even getting out of the way when I need to, is really important.
Bernd Dürrwächter 28:17
One of the important parts is not to generalize in the first place, and you’re a good example of that: you’re not really typical of your generation. You are a Facebook user, but you’re actually thinking very critically, right? So what are we doing? It’s going toward advocacy. I don’t think it’s education, because it’s more like, we have a concern. We’re concerned citizens, and we want to advocate the dialogue, the thinking, right? We don’t want to be prescriptive about what the right thing is; we want to stimulate the dialogue, to invite people to think through this jointly. And what we’re looking for are the change agents, other people who are concerned, because then you automatically get people from different generations, right? Not the stereotypical setup, but more like: I’m 25 years old, I’m a college student, and I’m concerned; I’m embedded in the culture of the young people, I know how to speak the language, while understanding the cross-generational concern. And I think that’ll be the key: to draw in these similarly concerned people who think critically. If somebody wants to put a positive spin on it, all the better. The way I understand our mission is to stimulate the dialogue and invite people in; typically, the more diverse perspectives you have, the better, when you try to solve a big problem, right? So by definition, it’s almost not an educational effort; it’s more of, I would call it, advocacy. It’s like: hey, we’re concerned, and we’re interested in solving it; we don’t just want to ring the alarm bells. Let’s think this through; maybe somebody has the solution. Maybe we’ll find some people who say, yep, I thought this through too, and here’s my idea, right? It’s almost like brainstorming, a think tank, ideation. I don’t think our main issue is to raise awareness of what went wrong.
I think a lot of the stuff that went wrong was already in the public conscience. The next question is like, what do we do about it? Right?
Seth Villegas 29:58
Yeah, I do think that we do play a role in raising awareness on some things; we may just have information that other people don’t have. And that’s important to recognize, in part because of our positions. You’ve worked in technology for a long time; you have this really unique experience. I, as you mentioned, have this unique experience of coming out of a theology and religion background into more secular philosophy, and of asking how you apply that to the domain of ethics, in part because there’s a real normative quality required there, in the absence of “we should just be making technologies to make as much money as possible,” which I don’t think is necessarily the right way to go about it. I’m certainly not against people making money off their products. But having them work in a way that’s good for everyone is, I think, better in the long run. And that has partly to do with me being a bit of an optimist: oh wow, we might really be able to make technology that can serve our society for the better. That’s a deeply seated belief, something that’s almost non-falsifiable, and you’ve mentioned this to me before. The only reason I’m bringing it up now is that if I didn’t think something could come out of stimulating these kinds of conversations, I probably wouldn’t do it at all. But I do think there is some light at the end of the tunnel, and that there may be a way to negotiate the boundaries of these things so that they work for us.
And maybe that means getting in touch with people who’ve already thought about these things, people like Jaron Lanier, people like Tristan Harris, who are trying to raise awareness of these problems, so that we can say: hey, we’re okay with getting services for our data if the data is X, Y, and Z, instead of if the data is “to be named later” and could be anything and everything. Actually, one of the things I’ve noticed, even as I’ve been using my phone over the past weeks, is prompts like, “Is it okay if we use this data, and it will be used in this way?” And I thought: oh, this is different; I’ve never seen this before. And I like that I can see that something’s happening. I don’t quite know what’s happening, but it’s important to keep that going, so that it doesn’t feel as detached and ethereal, as intangible, as it is. And as people start to recognize those things and put them together, then we can say: okay, this works, this doesn’t work, and build best practices for going forward.
Bernd Dürrwächter 32:45
You said something earlier about how we have some information others don’t have. Be careful with that, because information is available to anybody these days; the challenge is focus, bringing people’s attention to what’s right. We can say: well, I studied this, so I focus on this, and maybe I’m an educator, so I want to bring your attention to this. But if I tell you, “I have proprietary information, you need me,” I think that would break the spirit of the modern way people learn. But I was thinking: I said earlier I had an aversion to the word education, but I think here’s where education could happen. A lot of people confuse morality with ethics. Ethics is the process. It doesn’t prescribe how you should live; it’s more like, how do I inquire, how do I determine what’s the right thing to do? Not, “this is the right thing to do,” right? And I think people are confused about this. I ran into it when I was telling people what I was working on; they automatically said, “Oh, you’re going to be a preacher.” It’s like, you clearly don’t understand what ethics means. Ethics is a process to figure out what the right thing is. And I think that requires education: let’s just learn how to put yourself in other people’s shoes; let’s just learn what empathy means. I’m not telling you what to do, but the first step is to consider that you live with other people, that you share this planet, this country, this city with other people. There’s nothing lecturing about that; that’s a fact. And you can relay this with, “how would you feel if others did this to you?” Consider these other people outside of you. We learned this when we were five years old, right, that you share with the other kids. To me, that’s an educational effort, and that’s the stuff that should go into the schools, into primary schools.
But it has nothing to do with technology, nothing to do with any rules; it’s just the fact that you automatically live by things you assume are okay. And no matter who you are, there will be things that you don’t find tolerable, that you don’t find okay. Well, there goes your value system: you have a value system. It’s just raising the awareness to think in those terms, bringing it into the conscious versus the subconscious. That, I think, could be called education, and you want to have it as early as possible in a young person’s life. But the question is, how do you get that through the school boards, or through the state level, whoever makes the curriculum for the public schools, the Board of Education and so on? How do you convince those people of this most basic kind of critical thinking? I can only think now of a couple of states, especially more ideologically focused ones, where critical thinking isn’t exactly a top priority or even desired. But anyway, I’ll stop here because I will become judgmental.
Seth Villegas 35:09
I think this is actually a pretty important point. And one of the reasons it’s important is that teaching people critical thinking is, first of all, really hard. It’s really hard to assess; there are numerous difficulties, even at the teaching level, with getting someone to the point where you’d say, okay, this person is a competent critical thinker. But I think for us in particular, we’re not looking to impose a particular value system on anyone. We have a value system, and I don’t think we need to apologize for that. But part of critical thinking is, as you were saying, first noticing what those values are. And at least for us, one of our values actually is not assuming that we know what’s better for someone in their own situation. I don’t even think we necessarily need to worry as much about how you educate every American child from K through 12 and beyond, through whatever schooling they might go into. But what we can do, within our own contexts, is make things that are helpful for the people we have influence over, the people who might be interacting with classes and these professional trainings. And then, out of those things, if they are helpful, maybe that’s when you start to pass those kinds of practices on to other people. But I think it is essential, as I was saying before, that we maintain that sort of respect for the people who are listening to this, people in positions with lots of experience. And when I mentioned earlier that we have information other people don’t have, I guess what I meant is that we have our own experience and our own situated perspective, which is unique. That’s something we can always offer to people, regardless of what else is available.
And it’s because we have that, and we acknowledge it, that we can see that other people probably have that too. That’s why this is supposed to be a conversation, why we invite people to come on, why you have different kinds of experts. Because it’s only by starting that sort of dialogue, and finding the commonalities between what’s successful in those different contexts, that we might eventually have something that could act as a norm. It’s only through a painstaking process of exploration that that becomes possible.
Bernd Dürrwächter 37:30
You said something earlier about it being useful to the people we want to advocate to. That’s something I ran into on the professional side. Whenever I try to advocate for more ethical thinking, or to start with critical thinking, the first question is always: “What’s in it for me? I have limited time, I have a limited budget. You’re proposing that I should train my team on a limited budget; those are competing priorities. What’s the cost-benefit? Why should I do this? Why is this useful for me, my team, my company?” And this is probably something we will have to address one way or another: not just “you should do this,” but “why is this good for you?” And it’s actually fairly easy to do some of the value props. For example, why should I worry about data privacy? Because there’s a ten-million contractual penalty if you violate student data; there are laws around this, and if you do this, you get monetary fines, which is bad for business. I know this is not exactly ethics, it’s more punitive and legal, but you can start framing it: imagine what people will think of your company if you do this. Look at what happened to Facebook; it cost them a lot of money, a lot of goodwill was lost. Do you want to tarnish your company’s reputation by doing these things? It’s not exactly the right angle to get them to think ethically, but it’s a first start to the conversation: behaving unethically will have real costs to your life, to your business. It’s actually consequential. And once you have their attention, then you can nudge them more towards “it’s just the right thing to do.” But if you start the conversation with “it’s the right thing to do,” it’s: “Yeah, I know, but I don’t have the money, time, or people for it.” They will empathize verbally and then say, “My reality looks different. I would like to, but...”
But you can frame it so that the cost of not doing it is higher than the cost of doing it.
Seth Villegas 39:13
And to go along with that, even what you mentioned earlier about the generational compact plays into this. Okay, let’s say there are numerous egregious data privacy violations by a particular company; we could probably just name one at random and find some. Those people grow up, though, right? People see those things; they may opt out of those systems, but even more so, they may choose not to develop those things. They may choose to develop competitors. They may choose to develop technologies that work differently. You can even see this in the increased interest in things like cryptography, in how you protect people’s data, and these kinds of different impulses going on in the technology space. It’s not clear to me that if you’re Target and you keep failing to protect credit card data, people will continue to shop there. This is something that’s affected my dad: he just doesn’t use his credit card there ever; he only uses cash. And that’s understandable as a result of that history. If you have enough people paying attention to those things that are happening, at some point it’s going to affect how the business actually works, and whether it’s able to work at all. It may be a little abstract to think of those kinds of consequences, but ultimately businesses do need to be thinking about these things if they’re going to be viable in the long run.
Bernd Dürrwächter 40:42
The only problem with that, and I hate to introduce the notion, is that it buys into the argument that the market will self-regulate, that the consumer will vote with their feet, and then all companies have to do is learn to put it in their marketing. To follow up: DuckDuckGo used to be a search engine, and they got taken over by an advertising company. Hardly anybody knows that. They ran a massive advertising campaign, with posters at airports and everywhere: “we’re for data privacy.” It’s almost like in Mars Attacks: “Don’t run, we are your friends,” and then they shoot people. They got taken over by an ad broker who harvests the data while doing the advertising, living off the brand name that people got used to, while doing the exact opposite. So when you let the market decide, companies just become smarter at pretending to be what people expect them to be. They’re a good example: they put so much into the perception that people have all kinds of beliefs about that company that are actually not true. I used to work there, so I can speak with some authority; I have some insider knowledge. A lot of companies have become so good at the PR that if you just leave it up to the market, if consumers just vote with their feet, a lot of consumers will not have the insight to pass that judgment. So even if market dynamics are one way to hold them accountable, there always needs to be some form of oversight or guidance. Not oversight in the sense of controlling them, but forcing transparency. That’s actually one of my beliefs, even with ethics. For example, putting the label on food about what’s actually in there: we’re not telling you what you can put in there, we’re just forcing companies to disclose what’s in there, and then you have the informed consumer. Being informed is the first step.
And that seems like something we should all be able to agree on. But there’s just too much counter-movement. If it’s not in the interest of a company to do this, they have the resources and creativity to come up with ways around whatever the original intent, the spirit of the rule, was.
Seth Villegas 42:38
I think that’s definitely a really great point, in part because it’s really hard to dismiss the potential impact of things like marketing, and of people who are really able to manage brands effectively on the internet today. Like you’re saying, you can give an impression of something, or, as in the case of DuckDuckGo, you have a kind of corporate takeover of an established brand by something that no longer does what it did. Once it’s taken over, from the outside it looks completely the same, but there’s no way, through your interactions with the technology on the surface, to see what’s actually happening. And I do think that analytical tools and trustworthy sources are certainly necessary. But honestly, the thing I see the most is that people just don’t trust where their information is coming from. This is another thing about fake news and why it’s important to have critical thinking: you have to really dig to know something these days, and that’s just a lot more work than people have time for, or want to put the time and effort into. If we had the technological tools to actually help us assess those things, that would be great, but they don’t seem to be quite there yet.
Bernd Dürrwächter 44:04
But we have to be careful: that’s not a battle we can fight; that would mean solving the bigger debate about information warfare. You said something earlier I wanted to come back to, in terms of critical thinking, and it actually ties into the overall theme of education, because I think in this culture, American culture, education is often confused with relaying a lot of knowledge. If you look at the schooling system, you have to memorize things, or figure out where to find them; a lot of education is “we will share a lot of information with you, and maybe this is how you apply it.” But I remember in my schooling there were a lot of open-ended questions: we want to teach you how to ask questions; we’re not just going to teach you what you should know. There was part of that too, like historical events, but there was a lot of “this is the kind of question you should ask,” “this is how you can find things, but what you should search for is up to you.” Open-ended, and about which question to ask. That gives people agency. If somebody talks down to you, “you should do this,” no; we encourage you to find out what you think people should be doing and why. There’s a whole critical thinking process there. And I know it’s an uphill battle the older a person gets, but I do think you can teach critical thinking, because there are enough countries and cultures; I can only ever refer to the place I grew up. In Germany, it was in literally every class for 12 years of education: look at what happened in history, how many people had to die. Why did that happen? Because people followed the leader like sheep, they were stealing people’s information, they didn’t respect human rights. This happened; why, and what can we learn from it? That happened; what can we learn from it? Why was there the plague in the medieval era?
Why did the Black Death happen? A lot of medical students learn what people did back then: emptying your urine into the street versus having a toilet, building sewer systems. How do you make sure that cities don’t burn down? Don’t make the houses out of wood. So there’s a lot of that. My point was, I want to say you can teach critical thinking; the question is, how do you teach an adult? If it wasn’t baked into the system when they were a young person, how do you gently introduce it to people who are established in life, without coming across as lecturing, or as a nuisance competing with all their other priorities for attention? I don’t have an answer for that, but I’m optimistic that it can be done; maybe you just have to package it in an entertaining way, on YouTube or something like that.
Seth Villegas 46:32
One of the ways we have tried to address this is through the development of case studies. What I mean by that is, we look at something that happened, and we just try to work through it. I was looking at it from the perspective of: okay, if I was here, in the middle of this situation, if I was working at Cambridge Analytica, how would I know that something fishy was going on? There are lots of different ways you could know that, like if you get a sense that maybe numbers are being fudged. But the thing that actually drives me the craziest about that situation is that Cambridge Analytica discovered a loophole in which they could harvest not only the data of the person whose profile they got access to through the use of a quiz, but, through this problem in Facebook’s own security, also the data of people connected to that person, a kind of friends-of-friends sort of thing. And that multiplies the level of damage so much, because it means that at that point, anything can happen, really. You have access to so much more data, and it’s sometimes maybe not even possible to know how much, just because of the nature of the vulnerability. So looking at that, we have to see: okay, there’s a bunch of conscious decisions that were made, from the discovery of this exploit to the use of it to generate all this data for a specific process of political manipulation, inflaming people’s emotions and whatnot. Actually stepping back to look at all those things and asking: okay, if we see these sorts of situations in the future, how do we deal with them, in a kind of process way? Because, again, we can’t mandate particular actions for a particular situation, because it won’t necessarily repeat itself. But rather: here’s what happened in this situation, and what can we actually learn from it, now that we’ve managed to pull it apart?
Bernd Dürrwächter 48:37
That goes back to the critical thinking, because from what I’ve seen, it’s hard to prove intent. When a company or individuals commit a crime, a lot of whether it’s acknowledged as a crime, and the punishment, goes by the level of intentionality behind it. And companies have gotten really good at almost testing the waters: doing something that they pretty much know is not okay, but doing it in a very lightweight version so that they can pull it back. I’m thinking of the case of how Facebook in Australia tried to manipulate teenagers: they tried to exploit situational, emotional vulnerabilities, like they’d just been left by their boyfriend or whatever, and then nudge them to buy this or that product. And they did it in a lightweight version so they could say, “Oh, it was just a rogue employee,” or “we didn’t mean to do this.” And the thing you just mentioned, about being able to get all the data from the whole network: I’m not so sure if that was an accident, or if it was “let’s test the waters, and later we’ll say it was unintended.” I hear this a lot from companies: something that clearly benefited them is later labeled as, “Oh, it was a mistake; we didn’t mean to do that.” Because in the eye of the law, whenever you say “we didn’t mean to do that,” you get off the hook more easily than if you say, “Yeah, we deliberately tried to harm people.” And that goes back to the point that it’s a cultural thing. People need to recognize, more and more, what benefits these companies and why they are doing it, versus what benefits a broader view of humanity. That requires a certain level of accountability, not just for what they do, but for what their intentions are. And you see a little bit of that with the European Union, where Uber and Google violated laws and got, say, a 150,000-euro fine, and the companies said: that’s just the cost of doing business; we’ll do it.
And then the European authorities said: that wasn’t the intent; the intent was that you stop doing it. So they said, we’ll charge you 2% of your global revenue; we want it to really hurt. Or the Uber executive who actually got arrested in France, because he was openly defying the law: you’re not getting the point of our punishment; we want you to stop doing this. You have criminal intent, basically, under French law; it’s not just what you do, it’s the end of why you’re doing it; your overall intention is criminal or unacceptable. And I think a lot of people need to be able to think this through and not just be sidetracked with “look, we’re doing good.” Google had it in their motto, “don’t be evil,” but that’s all it was, a marketing slogan, if you watch the behavior. That’s my point: people need to think deeper, beyond the slogans of advertising, and ask: what is it that they’re really after? What benefits Facebook or Google, not just what they do on the surface? “We donate a little bit of money here, we do this”; people are so easily distracted by the brand building, right?
Seth Villegas 51:20
I do want to be a little careful here, in the sense that I think we both know people who work at these companies, right? And I can’t help but also imagine the people in those positions, in part because part of why it’s tricky to look at corporate behavior is that you do have people along the chain, whatever the workflow is, and it can get really complicated, especially if you have behavior dictated at higher levels. What I basically mean is, you have people at the top who may be dictating things that the people working further down the chain at a big company may not necessarily see or be able to affect much. But there’s no doubt that that happens, right? If I was working at a startup that I thought was questionable, at least I could leave. But at one of these big companies these days, it’s hard to see how your own work is tied into those larger behaviors. Which is why, even as you describe it as a culture, I’m not quite sure what to do about that, personally, because there are a lot of things that are really particular to Silicon Valley. And that’s partly why it’s scary to have them maybe dictating digital rules for people everywhere. You know what I mean?
Bernd Dürrwächter 52:38
It goes back to that first step being accountability: not dictating rules, but just being transparent about what you do. In Germany, or in Europe, in the 90s, in manufacturing, there was this thing, ISO 9001. It was basically a certification process, and all you did was describe what you do; it didn’t prescribe any standards. You’d be audited on how your whole supply chain works, and you’d make that transparent. And then, based on that, contracts were given to companies. Everybody could still do whatever they wanted, but they had to be open about it: “we do this, we do that, this is how we treat our laborers,” and so on, like with the finances. And they weren’t forced at all; companies could choose to certify themselves, hire an audit company, and then it became a competitive factor: hey, we’re a company that’s transparent; here are the books, you can look in our books. It’s almost like a B Corp: you don’t have to be a B Corp, but you can choose to be one, and once you do, you have to adhere to certain rules. The choice to be that becomes a competitive advantage; when you do this, in a more liberal state, so to speak, people are more likely to do business with you, but the choice is still yours. Once you make it, though, you are forced to be transparent; you can’t say one thing and do another. That has nothing to do with telling a company what to do or breaking the free market, but with being honest, which of course is an ethical principle. Some people will argue the American way is advertising, bending the truth in favor of the brand message. And so you almost get into a circle of problems: the first step to being ethical is to be transparent and honest, but then you’re already prescribing something that a lot of people might not agree with. They talk about trade secrets; remember the guy from Clearview: “it’s a trade secret.”
That’s part of the capitalism we have: trade secrets and proprietary information. It’s like, yeah, but now I’m going to hold you accountable to that; apply the same logic to your customer. And that’s equity, the principle of equity. But no matter which way you spin it, you run into, “you have a value system and you’re trying to impose it on me.” All I ask is that you’re consistent, that you treat others by the same rules you apply to yourself, and that you’re transparent about your intentions. But with that, I’m already prescribing an ethical value system. It’s hard to find a starting point.
Seth Villegas 54:47
Yeah, and in that sense, it’s going to be so hard to prescribe something to everyone that’s going to work in every case. But the sorts of things I believe we’re sensitive to, the ones we harp on over and over, are the ones that would probably have more societal utility if they were followed. So, for instance, I would personally really find it beneficial to have more transparent explanations of the ways different technologies work. Part of the problem is that we don’t know how they work, and, as you’re saying, you can hide behind things being proprietary. Because of that sort of defense, we’re left in what I think Shannon Vallor calls technosocial opacity. What that means is that it’s just really hard to perceive things; it’s like looking through a really dark glass or something. You can kind of see what’s under the surface, these larger trends, but it’s all very unclear. If I was going to start with anything, that would definitely be one of the things I would love to see going forward.
Bernd Dürrwächter 55:59
And this is definitely shrouded in some form of paternalism, too. I’ve heard this many times; I’ve heard other critics, some scholars who have really focused on this, say the same. There is a paternalism of “it’s too complicated for you to understand.” It’s like, no, you just don’t put any effort into explaining. And part of my motivation for getting involved with the digital ethics effort with Wesley is that one of my business focuses is to help non-technical people understand technology. Specifically, I like to help traditional technology managers: if you use any AI or predictive algorithm and they say it’s a black box, I’ll help you unwind it; that’s actually not true. If you have a piece of software that takes in data and makes a prediction, it’s not a black box. If you don’t know how to code, if you don’t know how to read data, you probably don’t understand what it’s doing, but to an expert, nothing is a black box. If you put a nuclear engineer in a nuclear plant, they perfectly understand how it works; if you put me there, it’s a black box. And that goes back to the motivation: for a lot of companies or organizations, or maybe individual managers or company owners, it goes against their interests. For example, Clearview: it stands to reason that if they’re not really interested in explaining what exactly they’re doing, they themselves know it’s probably not okay. They’re worried about being transparent because they know people would judge them in a different light. So you’re almost making yourself guilty by your shrouding in secrecy, right? But nothing is really too hard to understand.
So, going back to the education theme: if transparency of algorithms, or at least of data processes, is important, and yes, it will be too complicated in its raw form for the average person, then some educational effort should be there. And it can be either the companies themselves or some audit trail, some third party, like you have in the financial industry. Why do you need a certified public accountant to sign off on the books of a bank or investment company? We already have that in the financial industry. Every once in a while there are scandals and corruption, but overall it seems to work: when publicly traded companies publish quarterly and annual reports, a lot of people rely on what’s documented there, because it got signed off by an accounting firm. You could introduce the same method for technology: independent companies who explain what happens, who certify it without giving away all the trade secrets. So both interests are served: there would be transparency without giving away the trade secrets. It could be modeled after the financial industry. But you still need the acknowledgement: can we do this? Can we do anything similar to this? And the will has to be there from those who are the gatekeepers.
Seth Villegas 58:34
Yeah, I think those sorts of intermediate solutions are definitely things we should look at as well, in part because a certification would be good, and I think that kind of public education would be good too, especially for people who are genuinely interested in those things: they want to know what’s going on, they want to deep-dive, and maybe they can’t. And I think this just goes back to how there are different kinds of ethos. By that I just mean different people have different ethical principles. There are whole communities that are completely open source, where it’s like, okay, here we have this; anyone can work on it, anyone can have the code, it’s right here. And that’s very different from “no one can see how the technology works.” Part of the reason I think they don’t want to have a lot of transparency isn’t because of how the tech works, but because of all the other things the technology does that, if you knew about them, you wouldn’t be so happy about. That’s personally where my objection starts to come in: well, what is it that you’re doing with that data? What is it that you’re analyzing? Why are you analyzing it? What sorts of questions are you asking? And without that, it’s really hard for me to sign off on anything.
Bernd Dürrwächter 59:51
I just kept thinking about how to start the debate, or make a forceful argument. It's like: you're doing something that a lot of people are affected by. In our democratic tradition, whenever chemical companies, pharmaceutical companies, or even banks do stuff that affects a lot of people and can have bad consequences, we regulate it. Banks have been regulated for quite a while, because back in the wild west days there were a lot of issues, you know, banks losing people's money. So we've had all kinds of banking laws, and we have the antitrust laws because of the monopoly abuses of a century ago. So there are all these historic precedents. One of the discussions is whether Facebook and Google are public utilities, right? If you have 2 billion users, you clearly have some public function. If you're able to influence elections, you clearly have a public footprint of some sort, whether or not a judge would have it that way. But it's indisputable that Google and Facebook and Amazon have a public impact, right? Whether that's positive or negative is debatable. But as such: we regulate the water supply, we regulate electricity, we regulate car traffic, like you have to have a driver's license and the car has to be insured, right? There's so much precedent from other public utilities. And no matter whether Mark Zuckerberg says it's a media company, an entertainment company, or whatever, the public utility argument is almost indisputable. Everything boils down to the CEO of Facebook declaring whether something's good or bad; that's acting like a king. You're not representative, you know; the 2 billion people didn't vote you into power just because they use your platform. So one of the leverage arguments could be that the representative forum for regulation would be the elected politicians, right? That's the existing mechanism.
Remember how Google or Facebook paid lip service to having an ethics board, and they just put their own people in there, which kind of defeated the purpose of what an ethics board is supposed to be?
Seth Villegas 1:01:34
I was actually just thinking about that. Because every time they, say, have an ethics board or an advisory board on something, that board always comes back a couple weeks later, like, yeah, we think they did the right thing.
Bernd Dürrwächter 1:01:46
Yeah, and then it blows up when the next scandal breaks, and they didn't see it coming because they had their tunnel vision. That would be, to me, a feasible argument that nobody could put down: who are you that you think you can unilaterally make decisions? And, you know, I know they have superficial vetting processes or beta programs or focus groups. It's like, no: if you run an empire that affects 2 billion people, and I argue they affect more than the 2 billion users they have, they affect me, and I'm not on Facebook. But that means, by definition, there needs to be a board, some form of oversight, that is actually representative of all the interests of the people who are affected by it. I have that from my training, the concept of stakeholders, and you see that in the modern discussion about stakeholder capitalism: not just the people who financially profit from that company, but the people who are affected by that company's activity. And maybe that's something we can piggyback on. Basically, whatever it is that you do, you shouldn't be the one to decide whether it's good or bad for mankind, because that's actually one of the biggest problems that I see: that somebody like Mark Zuckerberg unilaterally declares, I declare that this is good for mankind. It's like, you're in no position to decide this. Given our modern understanding of democracy and free society, it shouldn't hinge on one person; we used to call these kinds of forms of governance absolutist monarchy. And yet, it's like, why do Americans listen to one guy? That's a king, and I thought the king got abolished, right? And that's the kind of dialogue I think we can build momentum on: just get people to realize, like, look at what you bought into. You bought into a 250-year-old model of governance that we abolished for various reasons.
Seth Villegas 1:03:20
Yeah, and I think that's one of the points where we can start to lead into this conversation: things really could be different. And I think that's an important part of whatever I hope it is that we do, just kind of placing that possibility in people's minds. Like, oh, hey, these things work in a certain way, but they're designed, you know, to work that way. And they don't have to work this way; they could work some other way. So why not, you know, design those things in a way that would really work, right? So that we don't end up in the kind of dystopian cyberpunk reality with way more arbitrary rules than the ones that we have now.
Bernd Dürrwächter 1:04:01
I wouldn't just say it could work a different way; should it, or should it persist? I mean, nothing ever really works out as intended; history usually ends up somewhere accidentally. The point, the question to be asked of the current system, is: should it be that way? Why, right? Like, should we keep it or should we change it? Not just what is possible, but what should it be? Or is it acceptable the way it is, if we as a society decided that it should be that way? And if not, what should it be? That starts the thinking process without sounding paternalistic, like we know better, because then we get in the same lane, like, oh, this is better than that.
Seth Villegas 1:04:43
Yeah. And that brings us back to the normative question. And maybe that's where we should actually end this: you know, we're not trying to impose norms, but we do think that we need that process to ask those questions. And if we can't ask those questions, we really need to be able to.
Thank you for listening to this conversation with Bernd Dürrwächter. You can find more information about DIGETHIX on our website, digethix.org, and more information about our sponsoring organization, the Center for Mind and Culture at mindandculture.org. If you’d like to respond to this episode, you can email us at firstname.lastname@example.org. Or you can find us on Facebook and Twitter, @digethix, or on Instagram, @digethixfuture.
In this episode, Bernd and I covered a variety of topics. We returned again and again to the importance of education, especially since the technological problems of the future will likely be different than anything we have experienced thus far. Given that, we have to consider what lessons we can learn from the present to build better systems for the future. Our hope with this podcast is that we may be able to create the context for better conversations, and that we may be able to model the kind of analysis that is useful for pulling apart situations involving technology. Perhaps then we can even get ahead of whatever issues may eventually emerge from our increasingly technological world. I hope to hear from you before our next conversation. This is Seth, signing off.
Transcribed by https://otter.ai