DIGETHIX Podcast Episode 7: Artificial Intelligence, Emotion, and What Makes Us Human with Dr. Noreen Herzfeld

In this episode of the podcast, Seth interviews Dr. Noreen Herzfeld. Noreen is the Nicholas and Bernice Reuter Professor of Science and Religion at St. John’s University and the College of St. Benedict. She holds degrees in both Computer Science and Mathematics and has a Ph.D. in Theology from the Graduate Theological Union in Berkeley. 

 

Seth talks to Noreen about the history of artificial intelligence and about cultural attitudes towards technology, especially those in transhumanism. She stresses that emotions are a key difference between us and machines and that emotions may be essential to what makes us human. The key questions for this episode are: what does it mean for a machine system to be considered intelligent? Is education really a matter of a simple information transfer?

 

Potentially helpful sources:
https://www.loc.gov/loc/brain/emotion/Kagan.html
https://www.simplypsychology.org/behaviorism.html
https://mitsloan.mit.edu/ideas-made-to-matter/emotion-ai-explained
https://news.stanford.edu/2021/02/23/four-causes-zoom-fatigue-solutions/
https://www.scientificamerican.com/article/feeling-our-emotions/

 

Music: “Dreams” from Bensound.com

Episode Transcript

Seth Villegas 0:06
Welcome to the DIGETHIX podcast. My name is Seth Villegas. I’m a PhD candidate at Boston University working on the philosophical ethics of emerging and experimental technologies. Here on the podcast we talk to scholars and industry experts with an eye towards the future. Today I’m interviewing Dr. Noreen Herzfeld. Noreen is the Nicholas and Bernice Reuter Professor of Science and Religion at St. John’s University and the College of St. Benedict. She holds degrees in both Computer Science and Mathematics and has a PhD in Theology from the Graduate Theological Union in Berkeley.

 

In this episode of the podcast, I talk to Noreen about the history of artificial intelligence and about cultural attitudes towards technology, especially those in transhumanism. She stresses that emotions are a key difference between us and machines, and that emotions may be essential to what makes us human. The key questions for this episode are: what does it mean for a machine system to be considered intelligent? Is education really a matter of a simple information transfer?

 

This podcast would not have been possible without the help of the DIGETHIX team, Nicole Smith and Luis Salinas. The intro and outro track, “Dreams,” was composed by Benjamin Tissot, through bensound.com. This episode has been cut and edited by Talia Smith. Our website is digethix.org. You can also find us on Facebook and Twitter @digethix and on Instagram @digethixfuture. You can email us at digethix@mindandculture.org. Now I am pleased to present you with my interview with Dr. Noreen Herzfeld.

Hello, I’m really pleased to be joined today by Dr. Noreen Herzfeld. Dr. Herzfeld, I was hoping you could tell us a little bit about your background first, if you wouldn’t mind. I know you have a background in both computer science and in theology, so I was wondering what kind of journey you’ve been on to gain a specialization in both of those things?

 

Noreen Herzfeld 2:02
Well, I began in computer science, and I was teaching computer science here at St. John’s University. I was actually teaching a course on artificial intelligence. And I got to my first sabbatical and was thinking about a sabbatical project. And the project that came to mind for me was that I had not found anything in the literature that discussed why we wanted to create what was called in those days a strong artificial intelligence. I think today people more often use the term artificial general intelligence, or AGI. In other words, an intelligent computer that does more than one thing, that has the same kind of generalized intelligence that human beings have, where you can take learning from one area in your life and apply it to another. And it seemed to me that we were trying to turn the computer into something of an image of ourselves rather than just a tool. And I wondered about the motivations behind doing that, because it seemed to me that the computer was at its most useful for us as human beings when we used it as a tool to do the things that we don’t do very well, you know, like crunching big numbers and stuff. I went off on my sabbatical happily thinking, okay, I’m going to kind of capture this question, and found that I couldn’t do it. Because this wasn’t really a question about computers; it was a question about human beings and human motivations. And I realized that to tackle such a question, I would have to either do it psychologically or theologically, because I was thinking of the question in terms of one being an image of another. I suddenly thought, well, where have I heard that? That’s like Genesis 1, you know, the idea that we were created in the image of God. So I decided to study a little bit of theology. Because we have a seminary graduate program in theology on campus, I thought, I’ll just sit in on a couple of courses, get enough background to get an angle on this question.
You know, way leads on to way, and the next thing I knew I was off getting a doctorate in theology. So that’s kind of how it happened. I did not intend really to go into theology at first, just to pick up enough theology to tackle a question that was bothering me.

 

Seth Villegas 4:53
I think something that you mentioned early in your comment was about strong AI and this later shift to artificial general intelligence. And oftentimes, we don’t actually investigate the ways that these terms have changed. But it is interesting, because artificial intelligence is a kind of buzzword now related to machine learning, and anything that kind of acts on its own is known as artificial intelligence. And there are all these other ideas that have become infused into what, in a prior era, wouldn’t even have been considered an artificial intelligence, but just an algorithm of a kind. And so people are tending to place a lot more weight on the capacity of those services to fulfill a particular kind of need, in all kinds of ways, say, in connecting businesses to each other and connecting people to each other. And so I was just wondering, what is it that you think is driving this urge to name things as artificial intelligence that previously we would have just thought of as computers kind of moving different bits of data around?

 

Noreen Herzfeld 6:02
Artificial intelligence has always been a somewhat nebulous term. In fact, one of the earlier researchers at MIT, when asked to define artificial intelligence, basically said it’s anything we haven’t managed to get the computer to do yet. So, you know, early on, people thought a program that would play chess, or a program that could solve calculus problems, was artificial intelligence. But it seems that as soon as we get a computer to do something, we then push it out of that category. The interesting thing has been that in the last few years, instead of narrowing the category of artificial intelligence every time the computer does something, we’ve been expanding it, and I think that’s been basically a marketing ploy. Artificial intelligence has become kind of a buzzword. It means something vague in the human consciousness about how computers can help us do things. And that’s led to a lot of marketers wanting to call just about any computer program artificial intelligence. So in some ways, we’ve gone from having such a narrow definition that as soon as the computer did something, we said, well, but that isn’t really intelligence, to now such a broad definition that almost every computer application could fall under it. A problem with having one definition for AI is that we don’t have a definition for intelligence. We don’t really know what it is. And you can see that when you look at the disputes that people have about animal intelligence: How conscious are different animals? How intelligent are different animals? In many ways, I think the best definition for intelligence comes from US Supreme Court Justice Potter Stewart, who was asked to define pornography. And he basically said, well, I can’t define it, but I know it when I see it. And we’re kind of like that with intelligence. We can’t really define it, but we say, well, I know it when I see it.

 

Seth Villegas 8:36
Perhaps another word that might be interesting to bring into this conversation, then, is reason, in part because, when you’re talking about imaging ourselves, in your book In Our Image you specifically talked about how there’s this desire to create a completely rational being driven by reason alone. And so I feel like we’re not just talking about what intelligence is, but that there’s this transcendental concept, transcendental in the sense that it doesn’t exist in any particular person but is some larger thing that it would be possible to tap into, while humans, in being, say, embodied, emotional creatures, are, I don’t know, tainted, perhaps. I kind of see these ideas floating around: if only we could reason strictly through logic, if only we could strip ourselves down to just that, then we could be that much more intelligent, or something like that.

 

Noreen Herzfeld 9:35
The fact is, psychologists over about the last decade have shown us that such a thing is actually absolutely impossible. There could never be a Mr. Spock, who is all reason and logic and doesn’t have any emotions. And of course, if you watch the old Star Trek episodes, Spock has all sorts of emotions. Antonio Damasio, for example, has worked with people who have brain damage in areas that affect expressing and feeling emotion. And he’s found that such patients tend to lose agency, in the sense that they can’t make even the simplest decision, like what to have for lunch. You know, should I have the soup or the sandwich? Because if you have no emotion, no feeling, you have no real drive to choose one thing or another, to do one thing rather than another. And so a person who is totally stripped of emotion ends up also being stripped of many things that we have lumped under reason: the ability to make choices, the ability to make decisions. We can’t live without our emotions.

 

Seth Villegas 11:15
In your interactions with colleagues in the computer science department, or within different parts of tech, what do you think it is that drives people to have this kind of animosity towards their emotions, such that they would rather be driven by something like logic, in the way that a computer is? Because, again, if we’re talking about this idea of what it means to be made, say, in the image of God, or to make something in the image of a machine, this seems to be a critical idea: if I could push myself to behave in a different kind of way, or to think in a different kind of way, then, I don’t know, perhaps I could solve the kinds of problems I wouldn’t otherwise be able to solve. And if I were to bring up, say, a group of people like the transhumanists, so people who are perhaps looking to escape embodiment and that sort of thing, I was wondering if you had any insight into what might motivate them, you know, the kind of emotional side of that that they may not necessarily want to talk about?

 

Noreen Herzfeld 12:21
Well, first of all, ever since the ancient Greeks, we have thought about our emotions as being that unruly side of us that we can’t quite control or get a handle on. The ancient Greeks would describe it this way: if you think about the person as a chariot that’s being driven, the emotions are these wild horses that are yanking the body of the chariot around, and it’s up to the mind, the charioteer, to rein these horses in and to keep them on the straight and narrow so that they don’t drive the chariot to ruin. But interestingly enough, there’s a move in robotics right now to try to find a way to actually give robots emotion, because we can’t really interact with them well if they don’t show any emotion or have any emotion. Now, the problem is, you can’t truly give anything that’s disembodied emotion the way humans have emotion, because emotion is a four-step process. If you go back to the psychologist Jerome Kagan, he describes emotion as having four stages. The first is a sensory perception. So think about the emotion of fear, for example. Suppose you’re out walking in the Adirondacks and you hear a rustling behind you. You get a sensory perception; you hear something. The second stage is you feel something bodily. Your heart is going to start beating faster; your body’s going to get the shot of adrenaline that prepares it for fight or flight. And this is long before your cerebral cortex, your brain, kicks in and says, rustling noise behind me. You get that rush of adrenaline and bodily feeling first. Then you analyze it: you think, might be a bear, better turn around. And the fourth and last step is you do something; you have a response. Now, a computer can do stages one, three, and four. It can have incoming sensory perceptions of a sort.
Okay, it can analyze them, and then it can respond. What it doesn’t have is the feeling. And yet we recognize that that feeling is at the core of emotion. You know, I’m old enough to remember how Bill Clinton was kind of famous for saying to people, I feel your pain. And that was a way of saying, I feel a genuine emotion for you; I’m not faking it. And there are people who mostly fake their emotional responses, and we call these people sociopaths. They get a stimulus, they don’t actually feel much, but if they’re intelligent, they analyze the stimulus and think what would be the appropriate response. And yet we catch on, maybe not immediately, but after a while, that there’s no there there, you know, there’s no genuine feeling. Robots, or computers, are in a sense going to be like sociopaths: they get the stimulus, they calculate a response. But as human beings, we catch on pretty quickly that there’s no real emotion, no real feeling there. So emotion is going to be the real stumbling block to making computers truly in our image. Now, you were asking about a drive. And I think you put your finger on the real drive in your last sentence, when you asked, is this a way to overcome the body? The idea here is maybe partly, yes, to overcome the unruly emotional horses of the body. But I think it’s mostly to overcome illness, frailty, and death. And so you get futurists such as Ray Kurzweil, who dream about this idea that if we get artificial intelligence to a certain point, we may also get it to a point where we’ll be able to somehow upload the patterns of our brains, and therefore have some kind of an escape from this mortal coil of the body that does not involve any kind of religious belief. That’s a do-it-yourself project.
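[Editor's note: the four-stage model Herzfeld describes can be made concrete with a small sketch. This is a hypothetical illustration, not anyone's actual robotics code; all function names and inputs are invented. The point it demonstrates is hers: stages one, three, and four are straightforwardly computable, while stage two, the bodily feeling, has no computational analogue in a disembodied system.]

```python
# A minimal sketch of Jerome Kagan's four-stage model of emotion, as
# summarized in the interview. A machine can implement stages 1, 3, and 4;
# stage 2, the bodily feeling, is the missing middle.

def stage1_perceive(environment):
    """Stage 1: sensory perception (e.g., a rustling sound behind you)."""
    return environment.get("sound")

def stage2_feel(percept):
    """Stage 2: bodily feeling (racing heart, shot of adrenaline).
    A disembodied system has nothing to put here."""
    return None  # no body, no feeling

def stage3_appraise(percept):
    """Stage 3: cognitive appraisal (might be a bear)."""
    return "threat" if percept == "rustling" else "neutral"

def stage4_respond(appraisal):
    """Stage 4: behavioral response (better turn around)."""
    return "flee" if appraisal == "threat" else "continue"

environment = {"sound": "rustling"}
percept = stage1_perceive(environment)
feeling = stage2_feel(percept)       # always empty: the sociopath problem
appraisal = stage3_appraise(percept)
response = stage4_respond(appraisal)
print(feeling, appraisal, response)  # None threat flee
```

The stimulus-appraisal-response loop runs fine, which is exactly why such a system can imitate emotion convincingly for a while; what it calculates is the appropriate response, never the felt state.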

 

Seth Villegas 17:46
I think you’ve given us a lot of information to digest there. If I was going to try and break it apart into a few different stages, I would say, first, I think that there are some within robotics and computer science communities who think of emotions solely as an algorithm, as a response to a specific kind of stimulus, and actually really downplay the subjective experience of those things. And so they’d actually be in a different psychological camp, say, in behaviorism or something like that, which talks specifically about how people look from the outside. I think the reason that’s somewhat popular today, especially in the more popular spheres, maybe not in academia itself, is because you don’t have to make any reference to what it is that people are experiencing inside, but can observe them strictly on what it is that they’re doing. And I think that downplaying people’s subjective experience is really patronizing. Perhaps it’s not really the right response to say, I know that you say that you feel this way about what you’re doing, but you’re actually doing it for this other reason that’s outside of your subjective experience. And within computer science there’s that kind of affective computing; even someone like Hans Moravec, who’s also in the kind of Kurzweilian vein, talks in Mind Children about having robots emote when they need help, or something like that, to have these kinds of really low-level emotional algorithmic responses. But that still lacks, I think, the kind of subjective experience that you’re talking about. So, to return to what you said earlier, does that then imply that there’s something special about humans, that they have a subjective experience that robots can’t have?

 

Noreen Herzfeld 19:35
Yeah, we’ve got bodies. That’s special. That’s really special. It is our bodies that allow us to be what we are, because it is through our bodies that we relate to our environment. Now, you can say, well, doesn’t a robot have a body? It’s just a different body, a body of silicon. But ultimately it’s not alive. And ultimately, it’s also not a free agent; it’s programmed by someone else to give the responses that it gives. Now, you will have people, and I think you can put Moravec in that camp, but several others, even Dan Dennett, the philosopher, in this camp of saying, well, we are too, you know, we just don’t like to recognize our own programming, and consciousness and subjective experience are just an epiphenomenon that arises once you get to a certain complexity of algorithmic understanding. The problem that I see with that is, first of all, they tend to grossly simplify the human being, and the role of the human body. For example, Kurzweil talks about mapping the connectome of the human brain. But now we’ve realized that the brain by itself is not who we are. In other words, we have a great deal of neural structure in our gut. Do we have to add that in as well? What about the microbiota? We’re now learning that a change in our microbiota can actually change our minds, that a very unstable microbiota can lead people to clinical depression. And so now you say, wow, do we have to include that as well? Or are you going to say, no, we’re just going to have this really narrow copy of what it means to be human? But then you’re cutting out the fact that you actually can think with your gut, and trust your gut, that we are actually entire ecosystems. And if you cut out part of that ecosystem, you’re going to cut out part of what it means to be human.

 

Seth Villegas 22:12
I think this is a great point to talk a little bit about developments in the way that we think about ourselves. So, for instance, we had this push towards neuroscience and thinking, oh, wow, the brain can do all of these really complicated things; so when there were folks saying, oh, you know, I just go with my gut, that seemed completely unreasonable based on what we then knew about people; and then there’s a yet further discovery that, no, wait, you actually do have cravings and feelings that come directly from your gut. And I just find that sort of thing fascinating. And unless people actually keep track of those different kinds of transitions in thought, we fail to grasp what it is that we might lose by just utterly dismissing something that might be called folk wisdom, and whatnot.

Noreen Herzfeld 23:03
Exactly. I think one problem is that people who have been working in computer science and AI, and I’ve certainly been one of them, do tend to focus first on the mind, but secondly, also only on specific parts of the mind, the sort of higher functions of the mind. And it’s very easy to get caught in a narrow way of thinking: well, that’s what it is, that’s what it’s all about to be human, that’s what makes us special. I think you would find that if you talk to artists and musicians, they would have much more appreciation for the body. A piano player is going to recognize that it doesn’t all come from their mind, that they’ve also got muscle memory in their hands, in their arms. A singer, who uses their entire body, realizes this isn’t just a mental activity. And I think you find, and this is a whole other topic, of course, that in many ways women have been characterized by men mostly in terms of their bodies. But also, I think, women who give birth, who nurse their young, tend to be much more in tune with the fact that they are an entire body, not just a head that is sort of carried around by the rest of the body. So it’s one thing that I find rather interesting: most of the people who have thought, if we could just be pure reason, if we could just be disembodied minds, it’s all men who think that way.

 

Seth Villegas 24:58
That’s a really interesting point. And I especially like what you said about, say, athletes or artists, so people who have different extensions of themselves. I actually really enjoy playing sports, and it confounds my mind at times when I do things that I can’t reason through, right? Like, I can’t understand how I even took a particular action in a particular moment. And it’s interesting to be baffled by my own body, that sort of thing where I can’t, you know, logic my way through whatever actions I took. And at that point, how do we even describe that agency? You know, how is it that we’re able to, say, get the ball in a particular instance, in whatever it is we happen to be playing at the time, or, say, if someone happens to catch a picture that’s falling behind them? We have all these sensations that are outside of us that we don’t even really think about necessarily. And I think, especially this year, where so many of us have been communicating over Zoom, it’s easy to forget those kinds of things. I’m actually so far less engaged by these meetings through a computer than I would be normally if we were in person, even if all the same people are there, just because of the flatness of the information that’s presented. It isn’t that interesting to me; I always find my attention wandering. Whereas if I’m with, say, a group of 10 people around a conference table, it’s far more interesting, because you’re paying attention to body language, you’re paying attention to facial cues; there are all these little things, even paying attention to who’s looking at whom at a particular moment, who’s acknowledging comments. And it’s actually interesting at that time, versus, this is the semantic content that we have to get through, and let’s just get through the semantic content. It becomes kind of boring.

 

Noreen Herzfeld 26:50
Yeah. Interestingly enough, I think the pandemic, and our moving so much of our activity online, has helped us to see that, no, we don’t just communicate with our words; we don’t even just communicate with our facial expressions. So yes, Zoom helps, better than a phone call, perhaps. But as you pointed out, we are still missing a huge percentage of the communication that we would be receiving if we were actually physically together. And many people, while they don’t really stop and think about what they might be missing, still have just this reaction that they know they’re missing a lot, that this just isn’t nearly as engaging or stimulating. Frankly, it’s pretty boring. I’ve got a conference coming up at the end of next week; it’s going to be a three-day conference, and frankly, I’m dreading it. Whereas normally, if we were all going to be meeting together, I would be looking forward to this with a great deal of anticipation. But I recognize that it’s going to be very difficult to maintain attention for three days, almost impossible. I know I’ll be reading my email, and I’ll be turning my camera off and getting another cup of tea and letting the dog out and doing things; I’m not going to be present the same way I would be if I were with those people in person. And the other thing is, you lose so much of the spontaneity. Often when I go to a conference, the best part happens in those hallway conversations, over the tea break, or the idle chat that you’re having at dinner, which suddenly turns to something or brings up something that triggers new ideas or a deeper conversation. And those things just don’t happen, at least not to the same extent, online.

 

Seth Villegas 29:04
There are two main settings where I think we’re going to be really debating, as a society, what things are going to look like going forward. First off, what does the future of work look like? Is it really the case that people need to gather together in a particular place, given, say, commuting times and whatnot, and even what you’re maybe able to accomplish on your own, say, at home versus at a desk in an office? And the second setting is education. What is it that you really need to be in person to be taught? Or is it the case that by sitting in front of your computer, like we’re doing now, you can just, I don’t know, download that information into your brain, and then that’s sufficient to say that you’ve been educated on a particular topic? So I was wondering if you could comment on that: what do you think is going to happen going forward?

 

Noreen Herzfeld 29:57
You know, if education were just communicating information, we would have stopped having in-person schools a long time ago; correspondence courses would have taken off. They didn’t. Then television came, and people thought, oh, educational television, it’s all going to happen that way. And that didn’t really work. You know, it’s not that you can’t convey some information that way. But really, so much of education, I can’t remember who said it, but they said it has to be caught, not taught. What we get from being in person with a teacher or a mentor is something of their enthusiasm, something of their delight in a topic, that makes us want that same enthusiasm, that makes us want that same delight, that spurs us to find the information, not because we just want the information in some dry way, but because we’ve fallen in love with history, or poetry, or computers, or the beauty of an elegant mathematical theorem. It really does come back to emotion. In just giving information, the emotional part gets lost, and where the emotion is lost, so is the drive, and the agency. So I think it goes back to what I was saying earlier: without being able to convey that emotion, and we do that best in person, with our whole bodies, we can convey information, but we can’t really convey what it is that makes us love the subject that we do.

 

Seth Villegas 32:10
This is a really important idea, in part because I think we may be coming on a generation that has different ideas about those sorts of things. So, for instance, I’ve given a couple of guest lectures to a computer science lab. And when I’ve done that and offered to, say, have something in person, it’d be a little bit inconvenient, but at least we could be there together, and I’m only going to be there once, so maybe we can make it a special thing, the students actually weren’t interested in having something in person if they could just have it over Zoom and recorded, say, to even go over it later. To me, it would seem that there are students who are kind of buying into this idea that I just need to get the information into my brain, and it doesn’t really matter if I connect with the instructor, if I’m there to ask questions. As long as I get the content, I’ll get to where I need to get to anyway. So why does it matter if education looks this way or that way? How would you respond to the people who think like that?

 

Noreen Herzfeld 33:15
Well, I do think you’re right. And I think it’s a tragic byproduct of the way we have immersed these young people, ever since they were little kids, in a digital world. The thing is, they don’t know what they’re missing, is how I would react to them. Yeah, they can get information that way. But what they don’t know is that they may very well be missing catching that spark of delight, the spark of loving something, or just this connection between two human beings. Sadly, many young people would prefer to just text their friends, or relate to their friends through the medium of the computer, because they say, well, it’s safer. But life isn’t supposed to be that safe. And honestly, I think life becoming too safe in that way often then leads people to seek a release from that safety, through extreme sports or something like that, because they know they’re missing something, and they’re looking for it. To use kind of a simple or silly example: I find that, yes, if I want information, and I’m writing a book right now, I’m in the final stages, so I’m looking for particular pieces of information, I can find it quickly and easily online, sitting right here at my kitchen table. And that’s handy, and it’s nice. I also know that when I actually go to the library to find a physical book on something, sometimes, yeah, it’s a quick trip: I slip into the library, I grab the book I need, I leave. But often one book leads to another. I look at that book, but then I see other books that are near it on the shelf, and I find that, oh, there might actually even be a better one than the one I was looking for. Or, as I’m carrying my books upstairs to check them out, I pass the display of new books that have come in to the library, and there’s something that I didn’t even know the library had ordered, that I maybe didn’t even know existed, that either leads me down a new avenue.
That turns out to be something really interesting that I want to read. Or, and this has happened more than once, it turns out to be exactly what I needed, and I didn’t even know it existed. Yes, we sometimes find that serendipitous article online when we’re just browsing around. But often we lose the serendipitous encounter with another human being, or with a physical object, like a book, that we wouldn’t have seen otherwise.

 

Seth Villegas 36:22
Right now, I’m in the middle of doing my dissertation research, and actually one of the things I’ve really missed is that we can’t really just go to the library now whenever we feel like it, unless you have a particular purpose. And one of the things that often happens, and this is something I would point out to anyone who does similar things, is that you’re really limited by the particular words, and by how whatever algorithm is handling those words chooses to categorize things. And so it can actually be incredibly limiting to feel like you’re in the right domain, oh, these are the right terms to be looking for. Whereas if you were in a physical library, someone has actually curated things, in a sense, through the Dewey Decimal System: oh, these things actually are related to each other, even though they use different words to refer to whatever it is that they’re talking about. And it actually is far easier to look through books in person, through a bunch of titles, than it is through, say, a stack of 10 library results as they come through here at BU, and then through 10 more. And by the time you get to the 15th one, it’s already so far off that it’s not even what you’re looking for anymore. Whereas at least if you’re in a library, they’d all be more or less related, more than they are even on the computer system. So it’s funny to think about that, that there’s this kind of larger system that may seem antiquated but may actually be better suited, though personally, I think it’s good that they work differently, because I do tend to find different kinds of things through both of them.

 

Noreen Herzfeld 37:54
They can be complementary, yes. You brought up the word curated, and of course, this is a huge issue with computers right now: how much of the information online is not curated, particularly people’s news feeds and such, which, as we know here in the United States, is leading to political problems and to the fact that we have separated into bubbles, where we believe different sets of facts because we’re getting different news, different information that has either been curated by different people, or that is just totally uncurated Facebook posts and tweets and things that we then tend to filter ourselves, rather than having experts filter for us.

 

Seth Villegas 39:05
To bring this back to what you mentioned earlier about emotions, I do think that’s an interesting part of this particular issue, because so much of the way that social media and our news media work today is through specifically targeting our emotions for particular kinds of reactions. It would seem, then, that it’s not just acknowledging that we have emotions that’s important, but also a kind of gentleness. Perhaps there has to be some sort of different treatment that goes along with that acknowledgement, rather than simply trying to get people engaged with a particular platform regardless of how that affects them. To refer back to what you said even earlier, that approach seems to really downplay the importance of the subjective experience of the person who is continually having their emotions inflamed through their experience with the technology they’re interacting with.

 

Noreen Herzfeld 40:01
Oh, they’re not downplaying the subjective experience of the person at all; they are manipulating and using the subjective experience of the person. I mean, we have to realize that the algorithms that underlie things like Facebook and Twitter, and we label these algorithms artificial intelligence, whether they work on their own or not, which is disputable, they are built on a capitalist economic model. In other words, their whole point is to engage your emotions. And the stronger the emotion you have, the more likely you are to stay on the platform longer, to, you know, move from one YouTube video to the next. Their sole motivation is to keep you hooked, to get you addicted, and to keep your eyeballs on that screen, and therefore passing over the ads that they put in front of you, ads they have targeted to you from things you’ve already looked at, or from a profile that they have of you. It’s pure manipulation for no goal other than to get you to empty your pocketbook.

 

Seth Villegas 41:28
I know that machine learning is behind a lot of the way these algorithms work, and I think machine learning is important to mention here, because the people who build these algorithms may know what they want out of the algorithm, but they’re not always in tune with how it is working. And so, because of that, there’s this continual push towards the outcome, kind of regardless of all the other priorities that might come into play. So I was wondering: is there a way these systems could work that wasn’t so manipulative? Because in one sense, I’m not angry that YouTube shows me videos I might be interested in, or even that Amazon shows me things I might be interested in buying. But this kind of continuous push to press my buttons all the time crosses a bright line at some point. I’m wondering if there’s a way to retrace that back so that these services could actually work more for people, rather than, say, for the interests of the companies and getting at our pocketbooks, as you mentioned earlier?

 

Noreen Herzfeld 42:38
Yeah. I mean, obviously, the algorithms could vary, too, and they could be maximizing different things. But right now, the algorithms are tuned to maximize the profit of these businesses. And as long as that is the case, and that’s going to be the case as long as these businesses exist in the kind of capitalist system we’re in, they’re not going to retune the algorithms unless they are forced to do so by some kind of regulation.

 

Seth Villegas 43:14
Do you think regulation is the main way forward, then?

 

Noreen Herzfeld 43:18
I do, I think regulation is absolutely necessary, because otherwise the companies are going to do what is in their best interest financially. And if that means that they study psychological articles to figure out the best ways to addict people, that is precisely what they are doing, and will continue to do. This is not in the best interest of any of us, who are meant to be sucked in and stay on these platforms. And if the best way to keep people on the platforms is by continually upping the emotional temperature, that’s exactly what they’re going to do, and that is not in the best interest of living in a harmonious society. So yeah, without regulation, we’re just going to have more of what we’ve already got.

 

Seth Villegas 44:15
To ask another question, then: for people who might hear this, for regular, everyday people, obviously pushing towards regulation, and towards better kinds of regulations, is one option they have. But what else might they do in their everyday lives to resist, say, having their buttons pushed, or just playing into how these technologies work?

 

Noreen Herzfeld 44:39
Well, I think one thing that we can do is be a little more mindful about our use of technology. I think some people are starting to do this. Certainly, my iPad tells me every week what my consumption was, how much screen time I had. But that in itself is probably not enough. People can simply be mindful enough to say, okay, I’m not going to look at screens after 9pm, or I’m only going to allow myself to read emails in this half hour in the morning and this one in the afternoon. I mean, there are ways that we can minimize our consumption. But I think more than that, because that’s looking at kind of the negative side, the “can I give it up” side, we need to emphasize the positive side, especially as we come out of this pandemic: emphasize people being physically together, people working on projects together, people being out in the real world together. As we get involved in other things, in physical activities and social activities, we’re just not going to need to spend that much time staring at Facebook or Twitter or Instagram. We’re not going to find our satisfaction there; we’re going to find it in the other things that we are doing.

 

Seth Villegas 46:28
The positive side definitely sounds good. But right now I’m actually working in Residence Life, so I live with students, and one of the things that I see over and over again is that even if I’m in the elevator with someone, they’ll pull out their phone, not because they have to look at something, but because they’re uncomfortable. And say I take an Uber or a Lyft ride somewhere: I always wait a second to see if the driver wants to talk to me, just because it’s polite. But I’m also kind of in this in-between stage where I’m not completely uncomfortable with that, because these things were developed as I was being socialized, so there’s a time before them and a time after them. And I often get comments from drivers: oh, wow, it’s so great to actually talk to somebody. To turn this back again to younger people, who are living with different kinds of expectations, there certainly is this positive side to interacting with people. But there is also, as you said earlier, this real discomfort that comes with those situations, because it doesn’t feel as safe as, say, texting someone, where I’d rather have a text conversation than a phone conversation, or even an in-person conversation. So if we’re looking at these larger social trends, do you have any thoughts on how it is we get back to those positive things, given the kind of danger, and “dangerous” is not even the right word, the discomfort that comes with these uncontrolled interactions that aren’t mediated through particular technologies?

 

Noreen Herzfeld 48:09
Yeah, and I’m not sure danger is such a bad word. The sociologist Jean Twenge has done some good work in this area with her book iGen, which looks at young people who have had smartphones ever since they were little kids. And she has documented how much less social activity they engage in, and how much more screen time or mediated conversation. And as those numbers go up, so does the incidence of depression among these young people, so it actually is a danger. One of the things that I’ve found during the pandemic is a realization of how much even those little conversations with strangers, like the conversation you mentioned with your Uber driver, or the little conversation you might have at the checkout counter in the grocery store, with the clerk or with the person in line behind you, how much these actually do raise our mood and improve our day. And we lost an awful lot of that during the pandemic, when we were ordering our groceries and we weren’t Ubering around to go places. We’re going to have to get that back, and we’re going to have to help our young people get it in the first place. It’s not going to be easy, but it’s something that we really need to work at. And I know that if I had had a smartphone when I was a teenager, I’d have whipped it out all the time, because I was shy and felt socially awkward, and it would have been an easy escape from those feelings. The best we can do is really try to engage young people, even if they do have their phones out; try to help them be in situations where they have to engage with others in ways that aren’t mediated. For ourselves, we’re all going to have to work just a little harder now at feeling comfortable with spontaneous conversations, because we haven’t had them for over a year.
But I think the pandemic has also helped a lot of us recognize how much we’ve missed them, and how much we’ve missed the genuine human contact, and the full 100% of communication that happens when we are in person with people.

 

Seth Villegas 51:02
Certainly, I completely agree with that. Something that I think is often missing in these kinds of conversations, especially as society is changing, is that even something as simple as going to the grocery store no longer requires choosing to interact with a person, and this is something I’ve actually thought a lot about: they have automated checkout machines now where you can do everything yourself. If you really think about those kinds of interactions, you have to do things in a very specific set of steps, and you have to almost make yourself more machine-like so that the machine can read everything that you’re doing. I think people miss that at times: as they continue to only interact with these kinds of technologies, they are really becoming more legible to them, becoming more like them in certain ways. And I think that’s a really important thing to note: if you were having, say, a particular issue with your self-checkout machine, you can’t have a conversation with it about what the problem is. All you can do is try to retrace the steps, to make yourself more readable to what the machine can actually do, which is actually quite limited. It’s interesting that there’s so much marketing around artificial intelligence, but the kinds of machines that we’re actually able to work with have such a narrow domain that we sometimes miss just how narrow it actually is.

 

Noreen Herzfeld 52:33
Yeah, and I think there are two problems. One: I’m thinking of something from Jaron Lanier, who’s kind of the resident guru at Microsoft and has written an interesting little book called Ten Arguments for Deleting Your Social Media Accounts Right Now. He says, you know, if a computer ever passes the Turing test, it’s not going to be because the computer reached human-level intelligence; it’s going to be because we have dumbed ourselves down so much that we now accept the computer’s level of intelligence as being the norm. He points out that we are more flexible than computers, and if we interact mostly with computers, we’re the ones who are more likely to change, not the computer. I think that’s an important observation. The second observation that I would make is that all those little interactions with people throughout the day, when you interact with the bank teller, when you interact with the grocery clerk, when you interact even with your neighbors just being outside on a hot day, broaden our circle of intimacy, our circle of humanity. You know, we and our neighbors might belong to different political parties, for example, or different religious traditions, and yet, through these little interactions, we come to see them as human just like us. And this expands our circle of care. When we don’t have those interactions, that circle of care tends to contract, and then it’s so much easier to think in terms of us and them, to think of other people as very much other, and perhaps as the enemy, than we would if we had these small interactions with them on a day-to-day basis.

 

Seth Villegas 54:58
Those small interactions really help us to build a kind of graciousness with the people around us. I actually remember a conversation a few years ago with someone I knew, an undergrad here at Boston University, about going to the corner grocery store, and he would often get really annoyed at, say, how things were there. One thing I’ve learned is, well, that’s a job people often take right after college; you can have long hours and tough shifts, so it can actually be really hard to deal with things. This was a 24-hour grocery store, too, so I could just imagine all the things that would make actually working there difficult. And it was strange to me, because he kept imagining other people in terms of their utility. I think that gets back to what we were saying about the way machines work, the way technologies work: there’s this specific thing that I want to have happen, and if it’s not happening, something’s wrong, rather than, oh, maybe this person is having a tough day, or is dealing with a million other things; maybe I should cut them some slack in this particular instance. Actually, everyone would really benefit from that, rather than getting more frustrated because you’re not getting what you want immediately. And I do kind of worry about that sort of thing as well.

 

Noreen Herzfeld 56:19
Yeah, I do, too. And I’m glad you mentioned this getting what you want immediately. We’re so used to getting an instant response from our machines that we tend to get really frustrated if things slow down. I notice this with my students in computer science: when the system gets slow, they start getting really frustrated that their program isn’t compiling instantly, which of course makes me chuckle, because back in the day, when I first started programming computers, you submitted your program and you waited overnight for them to run the program and give you the result in the morning. So it’s like, okay, you can’t put up with a twenty-second delay? But we do get used to instant gratification, and then we start to want that from each other. We lose patience. One of the things that I’ve thought about is that, just like most people, as I admit to my students, I play online games. With many of them, the goal is always to increase your efficiency in some way, to have a better score, and often, increasing your efficiency or getting a better score means being able to accomplish tasks faster. I have had to ask myself: am I habituating myself to a need for speed when I play these games over and over again, when my goal is always to be able to do it faster, faster, faster? How much does that habituation then spill over into my day-to-day life, where I suddenly find myself being frustrated, being impatient, if things don’t happen immediately, if they don’t happen faster and faster? I sometimes find myself sort of gaming out my day, like planning: well, let’s see, if I do this, and then I do that... oh no, I have to do that chore before this one, because it’s going to be more efficient. It’s almost like I’m plotting out a video-game efficiency route for my day, and I sometimes have to stop myself and say, you know, maybe that’s not the goal.
Maybe that shouldn’t be the goal. Maybe I need to just relax a little bit. Because if I’m stuck in this mode of it’s got to be fast, it’s got to be efficient, I’ve got to get from point A to point B by the shortest route, how much am I going to miss? How many of the little serendipitous encounters, whether with other human beings, whether with nature, whether with my own thoughts when I allow myself time to slow down and daydream, time to be bored, what am I going to miss? I think we actually miss quite a bit.

 

Seth Villegas 59:38
I think that’s a really great thought for us to end on, just sort of reflecting on what you said, on the things that we miss when we’re constantly pushing ourselves to be as efficient as possible. So thank you, Noreen. I really appreciate you taking the time to talk to me today. No, my pleasure, Seth. I enjoyed it.

Thank you for listening to this conversation with Dr. Noreen Herzfeld. You can find more information about digital ethics on our website, digethix.org, and more information about our sponsor organization, the Center for Mind and Culture, at mindandculture.org. To get in touch, you can find us on Facebook and Twitter, @digethix, and on Instagram, @digethixfuture. You can also email us at digethix@mindandculture.org. In this episode, Noreen and I talked about how technology has changed the ways in which we interact with each other, and how our expectations of technology have changed. Noreen asks whether the purpose of education is simply a matter of downloading the right information into students’ brains. Noreen also points out how much we benefit from the small interactions that we have with other people, things that are often unplanned. While technology is powerful, she asks us to think about what may be lost if we blindly adopt and accept everything that comes to us. In particular, Noreen asks how we can help the next generation navigate all the technological challenges that we are currently in the midst of. If we regain our sense of the humanity of other people, even when they feel removed from us by the kinds of devices that we use every single day, we may be able to readjust our expectations not only of our technologies, but also of each other. I hope to hear from you before our next conversation.

This is Seth signing off.

 

Transcribed by https://otter.ai