Seth reviews the first set of episodes of the DIGETHIX podcast, discussing his experience and takeaways from this season. He walks through the cover art for each episode, designed by Alex Nielsen and Nicole Smith, as a way into the range of themes the season covered.
The podcast will be on a break for three weeks, with regular uploads resuming on September 1, 2021, and other content coming out on our other social media channels in the meantime.
Seth Villegas 0:00
Welcome to the DIGETHIX podcast. My name is Seth Villegas, and I’m a PhD candidate at Boston University working on the philosophical ethics of emerging and experimental technologies. Here on the podcast, we break down issues of ethics and technology from both a technical and a sociopolitical perspective. Our goal is to help the next generation of technologists build better systems and avoid potential ethical pitfalls. On today’s episode, we’re actually going to take more of a big-picture look at the podcast, and do a bit of a recap of what we’ve done so far, where we’re at, and where we’re going. This is the tenth episode we’ve released this season, and we’re going to be taking about three weeks off. What that means, in essence, is that our next episode after this one will come out on September 1. So we’ll be gone for about three weeks without regular uploads or anything like that, but we will still try to put out some things on our other social media, just not full podcast episodes. And to reassure you that we are still recording: we already have five or six interviews ready to be edited, and I’m looking forward to those coming out in September, October, and beyond. So we’re still doing everything we can here, and I highly recommend that, if you’re not already, you check us out on Facebook or on Twitter, @digethix, and also on Instagram, @digethixfuture. We will probably be putting most of the content that comes out from this point on on those platforms, at least for the next few weeks, and we’ll get back to regular uploads on all the podcasting platforms that we’re on, once again on September 1.
So what we’re going to do now is basically a quick, zoomed-out recap of where we’ve been so far, and I actually want to do this by talking about the logo art for each of the episodes. I wanted to bring the art up in part because we’ve put a lot of effort into thinking about the aesthetics of this; in fact, we had a whole episode on aesthetics, Episode Four. So let’s talk about Episode One. In Episode One, I had the chance to interview Dr. Wesley Wildman, who also serves as the executive director of the Center for Mind and Culture, which sponsors this podcast. He is also my advisor, and I’m really grateful to him for the opportunity to do something like this. One of the initial concepts we had for the art was what we called the photo guy, or the camera guy. It’s basically this kind of bro-looking guy looking out at, among other things, this giant QR code, the DIGETHIX QR code in particular, just sort of looming down; it’s been filtered so that it looks rather ominous. In a lot of ways, it’s kind of a funny photo. Alex Nielsen was the person who initially designed this one, and it was interesting hearing from him over and over again how much he sort of hated it, in part because there’s this real, potentially oppressive feeling to it. This is actually something he also mentioned in the interview that we did together: there’s a sense of doing something you maybe shouldn’t be doing when you alter these photos in these ways, taking nice photos from photo aggregation services that make them freely available, and adding digital elements so they look almost like an augmented reality sort of thing. It would be really crazy if you were just walking around and seeing floating QR codes everywhere.
But there’s also something being expressed there about the kind of overall environment we’re living in, where this stuff is just sort of, you know, in your face. I’m really glad Wesley talked to us in this episode about his own motivations for why this sort of thing is necessary. As for our background: it’s mainly in the humanities, especially on the philosophy side, and what that means is that we can break things down in a way that is abstracted out from whatever the particular circumstances are. What I mean by that is that it’s difficult to say what the next generation of technical and ethical problems will be within those technical systems.
But if you can say, hey, these sorts of paths don’t work, or this is or isn’t a good thing, then perhaps we can get closer to making systems that work better. Actually, one of the things we’re going to be talking about a lot in the upcoming season is completely new and radical technologies, say in the cryptocurrency space, like these things called non-fungible tokens. The reason I bring those up is that there’s a lot of deliberate experimentation there with radically different technologies, and the governance protocols of those technologies also vary in lots of cases. We don’t need to get into that now, but the fact that we can do those sorts of things makes it really important, at least to me, to think about, well, how would we want to design things if we really could do anything? And I see a lot of that in the crypto space now.
Moving on to the second episode: Episode Two was one I had the chance to do with Kate Stockly. Kate, as I mentioned in that episode, is someone I’ve known for a long time now, someone I sort of looked up to in the program. It was really interesting seeing her work, in part because it speaks to the potential application of technology to other domains, right, using technology for different kinds of brain stimulation and whatnot, really kind of radical stuff. The episode art we designed for this one has one person standing off to one side, and we cropped it so that this person is around the same height as a radio tower in the background, again with that sort of computer vision look on the person’s face. We’re going to come back over and over again to what’s usually called a surveillance aesthetic. One of the reasons we emphasize this is that these really are systems that watch us. You can notice it even in the first photo of camera guy: this notion of cameras being seeing machines, of things seeing us, of us being seen, of distortions. Camera guy is blurry; the person in the tower photo is just kind of dark, right, you can’t make out any features, but it’s still clearly a person. These sorts of motifs come up over and over again, and I think that speaks a lot to this sense of, hey, what’s happening here? We’re in a new space where we don’t really know what’s going to happen next. For someone like me, that’s partially really exciting, but there can also be some anxiety there. How is it that things are going to get better? Are they actually going to get worse?
One of the big things we’re going to keep exploring on the podcast is a sense of which future technologies are on the horizon, to at least place an idea of them in your mind before they become more mainstream. And, you know, maybe none of these things ever becomes mainstream, right? For instance, there’s no guarantee that people will actually be super interested in something like ultrasound stimulation. But the fact that it exists and has been developed makes it still worth paying attention to, at least a little bit, because it speaks to the kinds of experiences that people want to have, and we’ll always want to keep those things in mind. This is something that Shannon Vallor talks about: what does it mean to live a good life? How can we use technology to create a life worth living, a life worth wanting? That’s going to be a question we revisit over and over again.
Episode Three was one I did with Dr. F. LeRon Shults. LeRon is someone I’ve really enjoyed talking to over the years. He has a theology and philosophy background and a really unique story; I would definitely check that episode out if you’re interested in that. The art we went with for this one actually deals with data tracking, which wasn’t exactly the stuff he was talking about, so here’s how I’d explain the twist. The reason I went with data tracking has a lot to do with the way in which simulation research can be used to break down people’s lives. For instance, if you had a machine that was actually powerful enough to go through your entire day and simulate what would happen under different kinds of choices, people would use it. In fact, people do this in games all the time, right, where they go back in just to see what happens. But actual simulation research doesn’t really work that way at all. There are two big kinds of models; the one to focus on here is agent-based models. You could loosely think of something like The Sims, even though that’s not really how it works: there are basically little agents that can be next to each other, and depending on the kinds of interactions that happen between the different agents in the model, they do different things. One of the ways LeRon uses this is to model something called mutually escalating religious violence, or MERV, basically trying to find the sorts of circumstances that lead to conflict. One of the reasons it’s important to do something like that is to try to prevent those kinds of conflicts from breaking out in the future. But as LeRon rightly points out, the knowledge itself is agnostic to its use.
And what I mean by that is, if you’re able to figure out the circumstances for something to occur, then whether you want that thing to occur, or you want to prevent it from occurring, is really up to you. Knowledge itself is powerful in that it can be deployed for all different sorts of purposes, and that’s something we increasingly have to keep in mind as data plays a bigger and bigger role in our lives.
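To make the idea of an agent-based model a little more concrete, here is a minimal, purely illustrative sketch in Python. This is not LeRon’s actual MERV model: the agents, the `threat` parameter, and all the numbers are invented for illustration. It only shows the basic mechanic the episode describes, namely simple agents interacting pairwise, with out-group contact under perceived threat ratcheting tension up.

```python
import random

class Agent:
    """A toy agent with a group identity and an anxiety level."""
    def __init__(self, group):
        self.group = group    # e.g. "A" or "B"
        self.anxiety = 0.1    # starts low

def step(agents, threat, rng):
    """One round: pair two random agents. Out-group contact under
    perceived threat raises both agents' anxiety; otherwise contact
    calms them slightly."""
    a, b = rng.sample(agents, 2)
    if a.group != b.group and rng.random() < threat:
        a.anxiety = min(1.0, a.anxiety + 0.05)
        b.anxiety = min(1.0, b.anxiety + 0.05)
    else:
        a.anxiety = max(0.0, a.anxiety - 0.01)
        b.anxiety = max(0.0, b.anxiety - 0.01)

def run(n_agents=20, n_steps=500, threat=0.8, seed=42):
    """Run the toy model and return the population's average anxiety."""
    rng = random.Random(seed)
    agents = [Agent("A" if i % 2 == 0 else "B") for i in range(n_agents)]
    for _ in range(n_steps):
        step(agents, threat, rng)
    return sum(ag.anxiety for ag in agents) / n_agents
```

Even a toy like this shows the point made above: once you can identify the circumstances that drive escalation (here, the `threat` parameter), that knowledge can be used either to defuse conflict or to provoke it.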
Moving on to the next episode: Episode Four is one I did with Alex Nielsen. Alex is one of our advisors, and he’s actually done a lot of background media work for us, teaching me specifically how to do a podcast, so I’m really grateful that he took the time to talk to me. The logo art we chose for this one is also some of his work; we call it beach girl. It’s a girl kind of looking back, and it looks like it could be a nice photo, but the person who’s supposed to be in the foreground is actually out of focus. As Alex says in that episode, it’s a reflection of a kind of technical failure. But despite that, we still went with the box, right, the computer vision box with the lines over the eyes, and it looks kind of ominous. So it’s in this moment of technical failure that we have this grasping of the machine to examine what it is that it’s looking at. If you want to know more about why we talk about aesthetics, I really highly recommend listening to that episode, but just to summarize it here: I really want the art we’re using to be expressive of intuitions and other things that people are feeling. So maybe you can’t articulate something, but you can feel responsive to something that you’re seeing in the art itself. And, you know, we’re living in a world that’s increasingly digitally mediated, and the kinds of aesthetics that look better on computers are the ones that are probably going to win out. Whether you think that’s good or bad, it’s just something to keep in mind going forward.
Episode Five was an interview I did with shaunesse’ jacobs. shaunesse’ is one of my colleagues here at Boston University and mainly studies the problem of Black maternal mortality, which basically refers to the rate at which people die in childbirth. This isn’t something I’d really thought a lot about, but the drastic disparities in childbirth mortality rates between races are pretty staggering. shaunesse’ mentioned in the interview how you can really see rates of survival here comparable to those of non-industrialized nations. That’s a really troubling fact, so you want to step back and think: okay, why would the data be showing us this? One of the things I really want to highlight from that episode is this really important notion of trust. You have to have trust in your doctor and in the institutions around you, and this is increasingly important in a kind of information economy. What I mean by that is, if you have false information going into a given system, that system is not going to function very well over time; it’s going to degrade and not be very effective at all. So on the one hand, you have people who don’t have a lot of trust in institutions: if I’m not really willing to share information about myself with, say, my doctor, that’s going to be a problem for that person treating me. On the other hand, if I feel like my doctor is systematically discounting the amount of pain that I’m in, it’s hard for a real relationship to come out of that. I shared a small story in that episode about an experience I had with a doctor, where I had an injury and the doctor wasn’t as helpful as they might have been otherwise, as the doctor was pretty young.
And something I’ve actually thought a lot about, especially as I’ve gotten help with other issues from a very seasoned person, is looking back to my early years in high school, when I usually had experiences with doctors who were fresh out of med school. Of course, I didn’t know that at the time, but looking back, it makes a lot of sense: these were people with maybe only up to a few years of actual clinical experience. One of the things I’ve been thinking a lot about is that if you have a problem that’s outside the textbook definition, or the textbook parameters of how something should work, it can be really hard to diagnose and to get at whatever the novelty of the situation actually is. It’s not until you work with someone who’s more seasoned, who has more experience, someone you can really trust, that you can get past whatever that initial issue is. The logo art we designed for this one is interesting: we have a surgeon, and there are all these things that look like heart monitor traces, but the way they’ve been laid out almost looks like barbed wire, which speaks to the sort of pain that surrounds these hospital situations.
Episode Six was a follow-up to the previous one; since we had two episodes on healthcare, we wanted to put them together. The art on this one shows someone looking at some scans, with a larger readout of information coming in. Muhammad Ahmad is a scientist, and he looks at things differently: where shaunesse’ was more focused on the experience of having one’s pain discounted, Muhammad was really looking at the data side that goes along with that. It was really troubling to hear his explanation of how, if you were to take only the patients’ levels of reported pain, as opposed to what the doctors thought their pain actually was, a lot of the bias in this process actually got washed out. This isn’t to say that people don’t exaggerate, or maybe even underreport, the amount of pain they might be in at any given instance. But what it does mean is that there’s some sort of problem going on between doctors and patients, where, for whatever reason, the doctor doesn’t believe what the patient is saying, and this is something we can actually see in the data. I think these sorts of uses of data to highlight those kinds of problems are going to be really powerful tools for getting to a more stable place, where we can start to build those levels of trust back up between people. Muhammad also has a lot of other experience and has thought so much about what it means to use these systems: if you have data that’s already corrupted or biased in some way, are there ways to clean that up? What other solutions need to be made?
Because while it’s great that we can use data to highlight problems, and maybe even use it to fix those problems, we also can’t blindly adopt larger processes, because of the ways in which they might be molded and shaped by things that just don’t work in a fair way.
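As a toy illustration of the kind of comparison described above (mine, not Muhammad’s actual methodology), you can check whether the gap between what a patient reports and what gets recorded in the chart differs systematically across groups. The records, the group labels, and the numbers below are entirely made up for illustration.

```python
# Hypothetical chart data: (group, self_reported_pain, doctor_recorded_pain),
# each pain score on a 0-10 scale. All values are invented.
records = [
    ("X", 8, 7), ("X", 6, 6), ("X", 9, 8),
    ("Y", 8, 5), ("Y", 6, 4), ("Y", 9, 6),
]

def mean_discount(records, group):
    """Average gap between what patients say and what ends up in the chart."""
    gaps = [reported - recorded for g, reported, recorded in records if g == group]
    return sum(gaps) / len(gaps)

# A systematic difference in this gap across groups suggests the *recorded*
# scores carry a bias that the self-reports alone would not.
discount_x = mean_discount(records, "X")
discount_y = mean_discount(records, "Y")
```

In this made-up data, group Y’s reported pain is discounted by a much larger margin than group X’s, which is the sort of pattern that would show up in the charted scores but wash out if you analyzed only the self-reports.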
Episode Seven was quite interesting for me, in part because Dr. Noreen Herzfeld is someone who works on both the spiritual side of things and the technical side of things, and I think that gives her a really unique perspective. First, because of her years of experience, she can tell us about the broader history of AI and the way in which intelligence has been thought about, and that’s illuminating in and of itself. But there’s also her theological background, which informs her sense that there’s something really special about people, about being human, about emotions, about these visceral experiences that we have. I really enjoyed hearing her concerns for people who are going to grow up even more enmeshed in the kind of technosphere than I was growing up, and perhaps even than you were, in part because those people are living now with access to things that just weren’t everywhere before. Even just thinking about the ways in which phones are used, there are going to be whole different generational expectations around social interactions and how things are supposed to work, and I can see some of that in my own life. I really liked the way in which Nicole Smith, who designed a lot of the past logo art, handled the one for this episode: two people are giving each other a hug, but they’re still on their phones, and Nicole did a great job of showing the kinds of notifications coming up to distract them from what’s going on. I love that image, in part because it really expresses the kind of problem we’re living in: that we can be right next to someone and still be distracted.
And you know, I notice this, say, in people who have phantom ring syndrome, where you just think your phone’s buzzing but it’s not, or who are mentally tethered to these things in ways that are potentially problematic. These habits are nonetheless a real part of our reality, and so we have to keep thinking about what’s been gained here and what’s been lost, what the trade-off is. Unless we can accurately evaluate those things, we won’t really be able to make choices about how these things affect us.
Episode Eight was with my longtime collaborator, Bernd Dürrwächter. When I say collaborator, I mean someone I’ve been working with for a long time; we have a structure where we’ll talk at least once a week about technology and about what’s going on. One of the things we think a lot about is education. For instance, he’s in the middle of developing a kind of corporate training, and I do some stuff in the classroom, and it’s been really great to bounce things off each other. What can be taught at what level? What can you teach in a university? What can you teach in a corporate training? How do those things have to look different? What’s accessible to people, what makes sense, especially if we’re talking about engineers and coders? He’s somebody who’s in industry, and I’m more on the academic side, so we really have a lot to gain by being able to talk to each other about these different kinds of things, and I really enjoy being able to bounce things off of him. I do want to emphasize what we talked about in that episode regarding process. We’re in the middle of trying to develop analytical tools and processes that help us think through whatever circumstances are going on at any given time. I think anyone who says they have all the solutions is kind of pulling our leg. Instead, what we really need to think about is how we keep our wits about us in such a way that we don’t get pulled in by the promises of a given technology, because we need to always have a critical eye toward how things work, so that we can know if those processes are distorting us, or if they’ve changed in such a way that they become exploitative, as Bernd mentions with DuckDuckGo, a brand that had a particular focus but, in his telling, eventually got subverted once it was bought out.
Finally, getting to the last episode I’m going to talk about here; this will be a bit of a shorter one. For Episode Nine, the way Nicole drew this one up, we have someone on a computer, it’s a very dark image, and a recording is in progress. I think this speaks to what we’re experiencing with NSO right now, and not just the Pegasus spyware and all that, but this notion of, hey, even if you’re not being watched, the potential for someone to get an eye on everything that you’re doing, and everything that you’ve done, is there now. I think it can be hard to gain a sense of real privacy unless you’re completely unplugged. We live in a world now where there are cameras everywhere, where people will tape up their webcams because they don’t want to be spied on. Just because the possibility exists doesn’t mean it’s happening, but the fact that there is a possibility means we always have to be wary of those things. And while I’m not super convinced that we need to be worried about, say, military-grade spyware infecting our phones, the takeaway here is: how do we renegotiate the boundaries of these things? Are there ways in which we can say, okay, we need privacy in these areas? What does it mean to have freedom of speech and freedom of thought in an era in which we do experience these levels of surveillance? To be honest, I’m not really sure what that means exactly. But the fact that we have to go around assuming that we’re probably being watched in some sense, well, I don’t personally find that to be ideal; I would prefer if we were able to create systems that didn’t have to work like that. So with that, we’re wrapping up our time here, and thank you again for tuning in.
We also want this podcast to be a place that works for you, and the kinds of people I’m hoping to talk to about these things are the ones who want to get involved in whatever this next generation of systems is. In part, you know, I’m still pretty young; we’re going to have to live with all the things that are created in the next five to ten years, and also the ones that have been created in the last five or ten years. So how can we make those things better? How can we really keep in mind a lot of the concerns that have been brought up over the course of this season and beyond? It really is my hope that we can continue to push technologies forward in a way that works better. I think the episodes we have coming out in September, once we’re back, will gesture toward this a little bit, and I’ll keep talking about it and thinking about it, in part because my research interest really is in what people want to do with these future technologies, and what those future technologies will actually be able to do for us. If we can get them to serve us in a way that makes sense, well, I think it’ll be better for everyone in the long run.
So rather than continue to ramble on, I just want to thank you again for taking the time to tune in today. If you want to get a hold of us, you can email us at email@example.com, and find us on Facebook and Twitter, @digethix, and on Instagram, @digethixfuture. If you are a technologist and you’re interested in, say, coming on the podcast, or if there’s something you want to debate or talk about, I’m more than happy to do any of those kinds of things, as long as we can find the time to do it. With that, I hope you have a great rest of your day. Here’s to continuing to talk about these things; it’ll be increasingly important to understand both how systems work and the larger societal impact of those systems once they’re deployed. This is Seth, signing off.
Transcribed by https://otter.ai