Archive FM

AMDG: A Jesuit Podcast

Does AI Make You Less Human? This Philosopher Has Answers

Think back to the early days of ChatGPT and generative AI. It was a topic discussed on seemingly every podcast and countless news segments. Nearly every one of them started those segments with some elaborate introduction about the risks and opportunities that the new technology posed, how the way we communicate with one another would be irrevocably changed, how we would no longer be able to differentiate the writing of humans from that of computers. And then, to conclude the intro, the host would say something along the lines of, “Bet you didn’t realize everything I just said was written by ChatGPT.” Don’t worry—we didn’t do that here. All that clunky writing is your host's. But for a second, you were unsure. Even now, you might be wondering if you can trust us, if you can take us at our word. And that, our guest today says, is a problem. Dr. Joseph Vukov is an associate professor of philosophy and the associate director of the Hank Center for the Catholic Intellectual Heritage at Loyola University Chicago. His latest book—and the topic of today’s podcast—is called “Staying Human in an Era of Artificial Intelligence.” Joe points to this erosion of trust as just one of the threats AI poses to our ability to stay human. But he doesn’t stop there. Throughout our conversation, he takes on this idea that just because AI can write something that sounds vaguely human doesn’t at all mean it’s eroding the building blocks of our humanity. All the same, as people of faith responding to the signs of the times, continuing to reflect on AI and its inevitable role in our present and future is important. And that’s what we do today. It’s a fun conversation. If you want to learn more about Joe and his work, visit josephvukov.com and check out the links below.

Get his book: https://www.amazon.com/Staying-Human-Era-Artificial-Intelligence/dp/1565485998

Learn about his course: https://www.scienceforhumans.com/
Duration:
49m
Broadcast on:
11 Sep 2024
Audio Format:
mp3

From the Jesuit Media Lab, this is AMDG and I'm Eric Clayton. Think back to the early days of ChatGPT and generative AI. It was a topic discussed on seemingly every podcast and countless news segments. Nearly every one of them started these segments with some elaborate introduction about the risks and opportunities that the new technology posed, how the way we communicate with one another would be irrevocably changed, how we would no longer be able to differentiate the writing of humans from that of computers. And then, to conclude the intro, the host would say something along the lines of "Bet you didn't realize everything I just said was written by ChatGPT." Don't worry, I didn't do that here. All that clunky writing is my own. But for a second, you were unsure. Even now, you might be wondering if you can trust me, if you can take me at my word. And that, our guest today says, is a problem. Dr. Joseph Vukov is an associate professor of philosophy and the associate director of the Hank Center for the Catholic Intellectual Heritage at Loyola University Chicago. His latest book, and the topic of today's podcast, is called Staying Human in an Era of Artificial Intelligence. Joe points to this erosion of trust as just one of the threats AI poses to our ability to stay human, but he doesn't stop there. Throughout our conversation, he takes on this idea that just because AI can write something that sounds vaguely human doesn't at all mean it's eroding the building blocks of our humanity. All the same, as people of faith responding to the signs of the times, continuing to reflect on AI and its inevitable role in our present and future is important. And that's what we do today. It's a fun conversation. If you want to learn more about Joe and his work, visit josephvukov.com or check out the links in the notes. And now, here's Joe. Dr. Joseph Vukov, welcome to AMDG. We're so glad you're with us today. So glad to be here. I'm really excited for this conversation. 
Dude, I'm so excited. We're going to talk today about your new book, Staying Human in an Era of Artificial Intelligence, which is a topic we've been kind of coming back to again and again on this podcast. It's everywhere now. It's so important, right? And your book, it's super accessible. It's very fun to read. It's accessible in a way that makes you want to get to the end quickly, which I really appreciated and enjoyed, and I feel informed as a result. So thank you for that. Great. Thank you. Let's get into it then. So to start, folks, we hear AI all the time, but I think offering a definition from which we can build would be helpful. And more importantly, a definition that answers this question: why would people of faith be concerned about AI or thinking about AI? Yeah, I'll give you a technical definition first, then something maybe a little bit less formal. The Organisation for Economic Co-operation and Development defines it this way: an AI system is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. So that's kind of a mouthful. The way I understand what AI is, is that it's a prediction-making computer program. And it can make those predictions in ways that we're maybe more familiar with. So if you do any shopping on Amazon, watch a YouTube video, or basically engage with the internet at any point in any way right now, you're giving those companies data that they in turn use to make predictions, right? So maybe they make predictions about what book you're going to want to read next. Maybe they make predictions about what video you're going to want to watch next if you're watching YouTube. And that's a form of AI. It's using data, in that case your viewing habits and the viewing habits of scores of other people, to predict what you're going to watch next, what book you're going to want to read next. 
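To make that "prediction-making computer program" idea concrete, here is a minimal sketch. The data and code are invented for illustration and are not taken from the episode or the book: a toy recommender that "trains" on made-up viewing histories and then predicts the most likely next video.

```python
# A toy illustration of "AI as a prediction-making computer program":
# predict what a viewer watches next from the viewing habits of many
# other (hypothetical) people. All data here is made up for the example.
from collections import Counter, defaultdict

# Each inner list is one viewer's watch history, in order.
histories = [
    ["cats", "dogs", "cats", "birds"],
    ["cats", "dogs", "dogs", "cats"],
    ["dogs", "cats", "birds", "cats"],
]

# "Training": count how often each video follows each other video.
follows = defaultdict(Counter)
for history in histories:
    for current, nxt in zip(history, history[1:]):
        follows[current][nxt] += 1

def predict_next(video):
    """Predict the most likely next video after `video`, or None if unseen."""
    counts = follows[video]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("dogs"))  # the video most often watched after "dogs"
```

Real recommender systems work from vastly more data and far more sophisticated models, but the basic shape, learn patterns from past behavior and then predict, is the same.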
But of course, in recent years, the last year or two, we've seen a lot more talk and a lot more press about these AI systems that do things like write entire essays (ChatGPT) or create images from scratch that are original images in some way. But what those are really doing, too, is making more sophisticated predictions. In that case, what's going on is, instead of just making predictions about the next book you're going to want to read, you're telling it, you know, write me an essay about, I don't know, what Aristotle might think about artificial intelligence, right? And it's looking at all these essays and all this data about Aristotle on one hand, artificial intelligence on the other, and maybe some ethics as well. And then it's making a prediction about how that might unfold. Same thing when you see these images created by AI. What's it doing? It's just looking at scores and scores of images, it's been "trained" on those images (that's the technical term), and then it's making a prediction based on all of them and saying here's what that kind of image might look like. The second question, about why we should be thinking about it and why people of faith specifically should be thinking about it: I think two reasons. The first one is that, as with any other new technology, we need to think about how that technology is going to be integrated into our lives. Should it be integrated into our lives? In which ways should we be using it and in which ways shouldn't we? And by the we here, I mean both socially, but also on an individual level. But also, and I think this is maybe different from previous technologies, AI really encourages us to ask questions about who we are as human beings. With other technologies, when we interact with them, it's not quite like holding a mirror up to ourselves, right? 
Even fairly sophisticated technologies like social media, they're really impressive, but we don't look at them and say that kind of seems like another human, right? That doesn't even really make any sense. Whereas when we're interacting with an AI, it sort of disturbingly strikes us as a little bit human-like, and so we're led to ask these questions, as people generally or as people of faith specifically: what is it that makes us distinct? Why aren't we the same as this thing I'm interacting with, and is this in fact a real similarity or just something superficial? Yeah, that's really helpful. And even just the omnipresence of AI, like it's in nearly everything now or will be in the next couple of years. And so we have to deal with the signs of the times, right? We've got to figure out how we're going to exist in this world. Exactly. Yeah, and there's a lot of hype around things like ChatGPT and the image creation. I mean, I think it's really interesting that Pope Francis has become one of the memes of the image creation, right? We've seen so many Pope Francis images created using AI. And his coats. Yeah, oh, yeah. The really fancy coats, and they look completely real, so it's pretty wild. I think there's rightly hype over those applications because they are something new and something interesting. And I really do think something kind of existentially disturbing, because we've never seen something like that before. But like you were just saying, Eric, it's not just the flashy things that AI is getting integrated into. One example: if you've opened up your email in the last 10 years, you'll start writing an email and then the email app will complete the sentence for you. That's a form of AI. It's an artificial intelligence algorithm predicting how your email might be written. YouTube and Amazon are using AI, and more and more AI is getting integrated into banking and into healthcare. 
So I think when we talk about AI, it's good to focus on the big flashy things, but also to make sure that we're carefully thinking about the less flashy ways in which it's showing up in other areas of our lives as well. Yeah. The email example, when I read that in your book, I was like, oh my gosh, you're right. That's a really helpful, simple way to see it. I want to ask you to unpack, again, an example you use in your book, talking about algorithms, because I think one of the things about AI that's certainly kind of dumbfounded me, and I think probably other people as well, is this idea that, oh, no one knows how it works. And you're like, well, what? What have we unleashed? Like, how is it possible that you create something and no one knows how it works? But you kind of set up a scene where it's as though we're trying to land a plane by twisting a million knobs, right? So can you maybe, with the goal of helping listeners understand how it's possible we could have this thing that no one fully understands, drop us into that scene in the plane there so that people can kind of use that example in their own reflections on this new technology? Yeah. So when you're building an AI, there are really two main parts you need to know about. The first is what's called the training data, and the training data is just lots and lots of information or data that the AI is looking at to identify patterns in. So again, the one that all of us are familiar with is something like your YouTube preferences, right? It's looking at data about how you and others watch videos to predict what videos you're going to want to watch next. For different kinds of AI, it's going to be different training data. So it might be medical records if you're training an AI in health care. Something like ChatGPT is just going to be trained on lots and lots and lots of text. Okay. So that's the first part. The second part is the parameters that allow that AI to make the predictions that it's making. 
So the way I like to think about this, and I should say, Eric, I'm not a computer scientist, so I'm coming to this as an amateur too, but I think about it as a dial, right? So you can think about, okay, we're going to turn the dial this way and now it's going to make this kind of prediction, or correlate this set of data with that set of data. Now we're going to tweak the dial a little bit that way. And you could think about a human tweaking four or five dials in a way that you're kind of trying to get the mix right, right? If anyone's ever done any audio mixing, you know, you're kind of boosting the bass a little, cutting the treble. Oh, that sounds about right. In the case of AI, we're not talking about four parameters to dial it in. We're rather talking about millions of parameters, right? Just scores and scores of parameters. What's more, whenever you're changing one of those dials, you're subtly affecting the way the other dials work. So again, think about an audio mix where, when you change the bass, it actually subtly changes the volume, the treble, and the mids, right? But now think about having scores and scores more dials than that, and every time you adjust a dial, it's changing the other ones as well. Here's another layer of complication: in an AI that's really sophisticated, it's not a human changing those dials. It's the program itself that's automatically adjusting the dials as it goes. So what this means is that because you've got so many parameters, because the machine is the thing doing the adjusting, and because when you adjust one parameter, it can affect other parameters in subtle ways, it means that ultimately, at the end of the day, well, any specific instance of an output from an artificial intelligence you might be able to trace back and say here's what was going on. 
But the big picture, tracing all the web of parameters and dialing-in and things like that in a way that you can actually understand what's going on, is just impossible. So the upshot here, and it's really counterintuitive, is that when we get an AI dialed in right and it's doing what it's supposed to do, you actually really can't understand the full picture of what's going on, which is kind of surprising, because we've created something that is not fully transparent to us anymore. Yeah, which is a little terrifying, but I like that image of the dials because it helps me understand, at least, oh, okay, this is how we could get there. Like we're constantly trying to course correct. Even that idea of audio mixing, because I know when I'm working on a video or a podcast or whatever, I'm just kind of tweaking here and there and can't ever quite replicate what I've done. Exactly. And that's a case with a lot fewer dials, too, and you're the one doing the turning, and even then we lose track of how we dialed it in; it's easy to forget and easy to lose track of. I just assume it's because I'm terrible at audio mixing, but maybe you're right, maybe it's because I am on my way to becoming an AI. Well, early in your book, you point to both this current reality and this ongoing potential of AI to unravel, or really to further unravel, our trust in one another, right? And what better example than this idea that we don't even know how it works entirely, you know, and then all of a sudden it's taking on the persona of, you know, presidential candidates or whatever. So you say this, quote: the better AI gets, the more difficult it will be to tell if we're engaging with a human, and the more difficult this becomes, the more distrust will be sown and the less human we'll become. This in fact is one of the most pressing threats to staying human in an era of AI. End quote. 
I like this idea of kind of centering on trust. So I'm wondering if you can say a little bit more about why trust is necessary to our existence as human beings and how this kind of weaves into the story of AI. Yeah, it really is a central feature of almost every human relationship. And you can think about this with some specific examples and how AI might interfere with it. So I work at a university, which means I interact a lot with students. And we do talk about AI a lot in class, and we talk about, you know, what works, what doesn't work, what misuse might look like, and that sort of thing. But there's something that's changed in the last couple of years, not just in my mind but in the minds of a lot of my colleagues, as we talk about this all the time, which is: when we get an essay from a student, there's always just a little voice in the back of your head that's wondering, did Eric write this, or did he phone it in and just use ChatGPT to write all or part of this essay? And even if you didn't, even if no one in the class did, all of a sudden there's just this little distrust that's been sown in between teacher and student that, if you let it get out of hand, I think can really undermine the whole educational experience. Because if all of a sudden you have teachers who are completely distrustful of their students, that's no good. Students are going to pick up on that and think, well, this teacher doesn't even trust me; like, why would I put in effort? For something that might land a little bit closer if you're not in educational spaces, I'll give you a personal example here that'll probably generalize. My father-in-law is a letter writer. And literally every week, and we've been married for 13 years now, every week, my wife and I and my family will get a letter from my father-in-law that's handwritten, to kind of tell us about the week and ask how we're doing. That's pretty cool. Yeah, no, it's amazing. 
Let's say that a year from now, all of a sudden, those go from handwritten to typed, and I'm wondering, you know, I wonder if Bob is using a little AI to help out with this. And you could think in your own life about this too. You know, maybe a friend of yours always checks in with a text or an email, and in the last year or so there's a little shift in tone, and you start wondering, are they really writing to me anymore or not? And then all of a sudden this mistrust has taken hold, which can really eat away at those relationships. Me with my father-in-law, you with a friend that you keep in contact with over email. So I really do think that this trust is foundational to almost all human relationships, and the fact is that AI can interfere with that trust. And again, here's what's so pernicious: even if no one involved is using the AI, the suspicion and the possibility of AI use is enough to inject just a little bit of distrust into the conversation, which in turn can chip away at those relationships. It's interesting that in the Spiritual Exercises of St. Ignatius, trust is really foundational to the relationship between director and directee, right? The director has to trust that the Holy Spirit is guiding the person they're directing, and the directee has to really trust in the advice of their director. I don't know if AI is going to upend that relationship anytime soon, but I think it's another really good paradigmatic example of how any relationship that's going to be worth its salt has to have trust at its foundation. And because AI is capable of undermining that trust, of challenging it, one of the biggest issues as we're entering this new era of AI is to make sure that we're aware of that and that we're doing what we can to mitigate the distrust that can creep in. Yeah. 
I mean, yeah, it's almost obvious in the sense that of course trust is important to relationships, but then what's not obvious, and demands, as you said, reflection, is this new injection of technology; I like the word pernicious. And you use the exercises as an example. I always think of what Ignatius says: the evil spirit likes to work where it's not transparent, in the shadows, where things can't be fully seen, which is perhaps what you just described as how algorithms are formed. So it's a little, I don't know, it's a little unnerving in that sense. I don't know if you agree or not. No, it is. And I think that's my big worry. So I think that sometimes we'll get focused on actual uses of artificial intelligence, right? What happens in my context when a student actually just goes on ChatGPT and uses it to write an essay? Or should I in this instance use ChatGPT to send a nice thank you to Eric after we've had this conversation? And those are good questions, really crucial questions to ask, but I think in some ways the bigger question, and the one that gets asked less often, is: even if no one involved is using it, how does it all of a sudden insert this question mark into my follow-up emails, into relationships I have, into relationships with my students? So yeah, I totally agree. And I think that it's the lack of transparency, both in how the thing is working, but also in who's using it and when they're using it, that is part of what's going on here and part of my big concern, and part of why I want to get out in front of these issues and get people to start talking about how we are going to build relationships and engage with each other when there is that question mark that wasn't there before. Yeah. I don't want to spoil anything for anyone who might read your book, or who should, who's currently in the process of buying your book now as they listen to our conversation. But the main villain of your book, at least in my mind, is Gnosticism. 
I was really intrigued by your example of Zoom fatigue as one that shows the error in this way of thinking. So I'm wondering if you can tell us what Gnosticism is and how we do or do not experience its insights via Zoom. Yeah. So what Gnosticism is, is really a cluster of theories. It's a historical position; it's a current way of thinking. So people use it in a lot of different ways. The way I like to think about it is that what Gnostics are unified by is a flight from, or a rejection of, or a de-emphasis on our bodily aspect. What almost all Gnostics have in common is in some way saying that our body is not the most important part about us, or maybe that our body isn't even essential to who we are. So maybe we're an immaterial spirit, maybe we're disembodied, but whatever it is we are, these bodies are things that are not us and in fact should maybe be gotten rid of or distanced from as much as possible. Where Zoom fatigue comes in, and this is an experience that I use in the book because it's one that I think all of us have had by this time, post-COVID lockdowns and doing everything via Zoom, is that if Gnosticism were true, and if really the most important aspect of ourselves was disembodied, a disembodied existence of the kind that we had for several years over COVID, and that we continue to have on Zoom, wouldn't pose much of a problem, right? Because the body part of us would be irrelevant, so hopping on Zoom to interact with each other would be kind of just as good as the real thing, right? We're still able to talk, we're able to connect with each other; sure, we're not bodily in the same room, but that's not that important to who we are anyways. But of course that's not what we experience, right? A day spent on Zoom, an hour spent on Zoom, can be hugely draining, and we have all sorts of evidence for this; you know, all-Zoom learning is just not going to be as effective. 
People experience Zoom fatigue. We were all just so excited to get back in person once we were able to after COVID. So all that suggests, and it's not like a knockdown argument against Gnosticism, but it suggests that our bodily lives are not just an afterthought; they really are an essential part of who we are. And I think that Zoom fatigue experience is a really nice everyday kind of experience that suggests that, no, the Gnostic view here can't be the whole picture. Our bodies have to be more important than just something that we can disregard or get away from. And so now help us make the connection between that argument, it's a very common experience, being on Zoom, right, so it's helpful, I think we can all kind of picture it and hold it in our minds. Make the connection now between that and AI and some of the arguments you're making about why it's important to know and understand Gnosticism to engage with AI in this current moment. Yeah, so I think a big part of the connection here is, as I mentioned already, that AI is maybe not totally unique, but I think it's distinctive in how much it asks us to reflect on who we are as human beings. Because when we're interacting with ChatGPT or seeing an AI create an image, it sort of looks to us like the sort of thing that a human would do. And because of that, we're sort of naturally led to ask these questions about what it is that makes a human different from a machine. What is it that an AI is doing that's similar to what I do, and what is fundamentally different? And I think that if you approach those questions with a Gnostic frame of mind, they become much more difficult to answer. Not impossible to answer necessarily, but I think it becomes confusing, because you look at what a machine's doing, what I'm doing, how I might write an essay, how ChatGPT might write an essay, and you think, well, the input and output here look about the same. 
So maybe it is approaching humanity, or, I think the even more problematic conclusion, maybe all I am is a computer algorithm, or basically, at bedrock, all I am is a computer algorithm. I think that argument, though, only has legs if you're going into it with this assumption that our bodily existence is not that relevant. Because as soon as you add back in the idea that our bodily lives actually are important to who we are, it becomes clear that whatever AI is doing, it's not the same thing as what I'm doing. Why? Well, because I'm an embodied being. I'm a rational animal. I'm body and soul together. And Catholics have a whole bunch of ways of talking about this. And once you add that back into the conversation, there are still interesting questions to ask about what an AI is doing, but I think some of the existential dread is taken out. Because we see, well, no, it can't be a human intelligence; it has to be something different from that. And then we can move on from those big-picture questions to maybe some more productive, smaller-scale questions about what exactly AI is doing. Yeah, and I'm always in favor of removing existential dread. So if we can do that. And so the other side of that pendulum, which again you walk us through really helpfully in your book, would be materialism, right? This idea that we're just mud and matter and that's it. And so the pendulum swings back and forth, and you say, well, in fact, actually there's a Catholic anthropology that can help us, and you've already begun to hint at this, help us to really sink into our own calling in this moment. And you write, quote, "Our bodies are essential to who we are, but human nature is not exhausted by our embodiment." I loved that phrasing. I think that, again, it's the both/and, right? Catholicism is always both/and. Exactly. So help us, you know, that's a great thing to think about in philosophy class. 
But for those of us who are sitting at our desks or driving in our cars or going about our daily lives, how does that reality play out in a way that is helpful and engaging? Yeah, I think there are sort of two steps to address that kind of question. And the first one is to remind ourselves of our embodied existence. I think we really are at this pivotal moment in the development of technology and the way we relate to each other, in which oftentimes our knowledge of ourselves as embodied beings comes apart from our practices, because a big chunk of our days is spent like this. We're sitting right now in different locations, corresponding through a screen. And a lot of our lives are like that now. We spend a lot of our days sending emails, on Zoom calls, things like that. So I think the first step, really, in this technological age, is to do things regularly that remind ourselves that that's not all we are. We aren't merely disembodied beings for whom it's good enough to show up together on a Zoom screen. There actually is more to human existence than that. So do embodied things. Take a walk with a friend, wrestle with your kids, really enjoy a meal together with family or friends. I think it's actually really important to remind ourselves of our embodiment. But like you said, it's also important to remind ourselves that we're not just mud. We're not just biological beings. And again, as you said, this is the great both/and of Catholicism. And this comes right from the catechism: what is the human being? We're body and soul together, right? So we're both. We are essentially body, but that's not the whole story. We're also soul. We're something more than that. And I think the trick here is a different move than the Gnostic trick. The Gnostic trick is to say the immaterial side of us is what matters, and the way we get to it is by way of flight from the body. 
The Catholic response to this is to say, no, actually the immaterial side of us is a perfection of our body. It's a deeper dive into our embodiment. So the easy example here is sacramental life, right? When we pray and we kneel, when we receive the sacraments, in all of these things we're engaging in bodily practices. And they're not ways of saying, I'm going to get away from my body in order to transcend it. It's rather, no, I'm going to transcend my body in some ways by becoming even more embodied. So this is always, I think, the move that the Catholic intellectual tradition makes. Think of the conversation between faith and reason, right? It's not that when we have faith, we depart from reason. It's that faith is where we go after we've exhausted the limits of our reason, and then faith brings us to a new plane of understanding. Same thing with going beyond our body: we're not going away from our body. We're fully embodied, and then we're going someplace with that embodiment. That's what I would say is sort of a very practical takeaway. Remind yourself you're a body, and then lean into your embodiment in ways that actually end up taking you beyond it. How do you think about, and I think you mentioned this briefly in your book, kind of the far extreme of embodiment, right? I'm at the gym all the time. I'm super obsessed with what I eat, what I look like, anxious probably about my bodily appearance. And that's certainly a way of being, I don't know, I won't say too embodied, but that's a temptation, right? And probably not a bad way of describing it. Yeah. Right. But as I was reading your book, I was also thinking a lot about, I mean, you know, there are folks who are bodily limited, right? 
Their bodies are limited, or, as life unfolds, you know, limbs are lost or eyesight is lost or things happen that deteriorate the body. And I wonder if there isn't a sense of, oh, I wish I was just spirit, because the body is so hard; it's so hard to be embodied sometimes. How do you think about that? I haven't quite phrased this question, but those are kind of the two poles I was thinking about: this kind of, I've got to go to the gym, I've got to bulk up, I've got to eat right or I'm failing, you know, half of my lived experience. And also this idea of, what about folks who really struggle in their bodies for very valid reasons? How do you think about those two kinds of issues? Yeah. I mean, I think it's something we all face at some point in time. And I think it's one reason for the continuing allure of Gnosticism: bodies are super inconvenient, right? They get sick, they start to break down for all of us eventually. And they limit us in important ways. You know, I wish I could do this thing, but I can't do this thing because my body's not going to let me. I think the answer here is seeing that as an important feature, and not necessarily a bug, of our humanity. And this is not to discredit people who have, like you said, very legitimate reasons for suffering bodily. You know, I would never want to say, well, that's a feature and not a bug. That's not what I'm saying. What I'm rather saying is that human beings are limited in important ways, and part of those limitations have to do with our embodied nature. I can't be just anywhere right now. I've got to be where my body is. And seeing that as a crucial part of what it means to say that we're human is seeing that we're limited in certain ways. I like to think about this in terms of the medieval great chain of being, right? 
So the great chain of being. And you don't have to accept the metaphysical picture to get some insights out of this. The idea is there's this chain of being with God at the top, then the angels, then humans, then animals, then plants, then rocks, and so on. You've got all the beings in existence organized along a hierarchy of kinds. One thing that picture does, again, even if you're not going to take all the metaphysics on board, is it says humans are really incredible, right? They're high up on this chain, but at the same time they're importantly limited. We're not God; we're not angels. Angels are not limited in the same way human beings are. And with both the person who wants to escape their body and the person who's leaning too much into their embodiment, trying to get this perfect body or perfect health or perfect diet in a way that, I think, really sidelines a lot of ways of being fully human, what we're doing is leaving out the crucial part of our humanity, which is that part of being human is being limited: limited bodily, limited intellectually, limited in all sorts of different ways. And I think realizing that can help correct for the pendulum swings on either side. No, we're not just spirits; to try to be just spirits is actually a deeply unhuman way of proceeding. But we're also not just bodies called to perfect embodiment; we're limited in important ways. That vision of ourselves as limited, I think, is a really good correction as we try to avoid both swings of that pendulum.

Yeah, let's dig into that more, because that's really helpful. A few episodes ago, listeners might remember, and if they don't, they should go back and listen to it, I spoke with Dr.
Jason Eberl about ethics and transhumanism. And I know you do some work in those areas too. Jason and I also spoke about what makes us human, right? This is kind of a perennial question, and it was particularly important when it came to this idea of physically altering the human body, which is, I would say, at the heart of transhumanism: altering the body to achieve some state of perfection or permanence. What I took from my conversation with Dr. Eberl, not unlike what you're hinting at here, was that foundational to what makes us human are these two things: we are necessarily vulnerable, and we are necessarily finite. So how do you think about that, and how do you think about it in terms of our engagement with AI?

Yeah, I think a big part of the picture I want to paint for people is this. A lot of the existential questions we have in the face of AI, whether it's, is this thing a human being, is it conscious, is it sentient, or, even if you're not asking that, it's disturbing how much it seems to be conscious and sentient or like a human being, lead also to questions about how this thing not only seems human, it actually seems a little better, right? That essay I just asked it to write actually was better than the one I would have written. Or the thank-you note to grandma: man, it's way more eloquent. So these questions about super...

I love the AI so much more than me. Yeah.

Exactly. And, you know, it's all-knowing in a way. It seems almost God-like. And I think coming to terms with our finiteness and vulnerability is, in some ways, the appropriate response here. It's saying, no, it's actually not the case that as an AI gets smarter, it becomes more human or could even eclipse our humanity.
Why? Because that is not ultimately what makes us human beings to begin with. And it's a tricky space to navigate here, because part of what we are as humans is rational animals. So intelligence is a really important part of our humanity. But there's also a really important part of the Catholic intellectual tradition that says any individual's intelligence is not what makes that individual morally worthy or gives them human dignity, right? Someone at the end of life still has human dignity, even if their intelligence is fading. Someone at the beginning of life has dignity, even if their intelligence isn't there yet. And I think this principle generalizes and really makes us ask, well, what is it that makes me human? It's not the sorts of things AI can replicate. It's rather the fact that I am limited, that I'm vulnerable in certain ways, that sometimes I do make mistakes, and I do write, you know, a poorly worded thank-you note to grandma.

One that wasn't that eloquent, or, you know, an essay. Yeah, exactly.

I know. The grandmas deserve our best, but even then, sometimes we make some mistakes. But I think realizing that that messiness and that finitude is a crucial part of our humanity, being humbled about that and being vulnerable about that, is a really great antidote and remedy to a lot of the questions people are having about AI right now.

You're making me think of, again, to bring in the Ignatian spirituality part here, the Two Standards, right? You're familiar with the Two Standards, that famous meditation in the second week of the Exercises.

Right.

And just, you know, the idea that the way of Christ is one of poverty, rejection, humility, right?
Which in some ways is what you're describing: this idea that to become fully human, the spiritual life invites us to this uncomfortable, unpleasant path of poverty, rejection, humility. But that means we're finite. That means we're vulnerable. As opposed to the other side of things, right, which is this path of inordinate wealth, inordinate honor, and bloated pride, which to me says, I can be anything but vulnerable, I don't want to engage with other people. And then there's a degree of isolation, which I think is the necessary trajectory of that path. But I think that even in this conversation about technology and anthropology, some of the wisdom in the Ignatian tradition plots some of the course. I don't know.

Yeah, absolutely. And I think you're exactly right to tie discussions of transhumanism to discussions about AI, because baked into what you could call technological utopian, or at least technological optimist, conversations is the idea that either AI is going to make human life significantly better or it's going to transcend our humanity. Same thing with transhumanism, right? We're going to engage in bodily or cognitive modifications that are going to make life better for us. I think there's a lot baked into what counts as better, and what counts as our goal for human beings in the first place. Faster, more efficient, more profitable, better looking: we could make this giant laundry list of all the assumptions baked into the whole project. Like, why are you pursuing these things? Well, that's a big part of why they're being pursued; that's the good being aimed at.
And I think as soon as you articulate that laundry list, right, more efficient, more profitable, better looking, and so on, and hold it up next to the Beatitudes, hold it up next to Ignatius' Two Standards thought experiment, it becomes crystal clear what the Christian should be saying about this. That's not the standard of Christ being held up there. The standard of Christ is the other one, the standard of humility. Not that we reject those things entirely, it's not that efficiency is always bad, but as the thing we pursue primarily? Of course that's not what Christ is calling us to. He's calling us to humility and all those other virtues we were just talking about.

Yeah, I think that's really helpful, and I liked how you reframed that conversation into the question, right? It's not about what could I do, but what is it for? That in some ways puts it in its place, but also reminds us, well, how are we engaging with this? This is a tool, you know, again, for the betterment, hopefully, of the world and humankind. I mean, any conversation about these kinds of technological wonders, as I had with Jason and have to have with you, evokes, in my mind at least, you know, the Starship Enterprise, maybe some lightsabers, things of that nature. And I can see, I mean, listeners can't see, but I can see The Lord of the Rings is on your shelf there.

You can likely see all my Star Wars stuff behind me. I think we have a very common reading list over the last decade or so.

I think you're probably not wrong. So, just to end our conversation, I wonder if you might talk a little bit about how some of these speculative stories, sci-fi, fantasy, horror, what have you, have influenced your own imagination, right? Which is so important in the Ignatian tradition.
Your own personal imagination, and your ability to engage with some of these things that seem like they're still science fiction.

Yeah. The short answer is: a lot. I actually teach a class with a friend of mine who's a biologist, where we read science fiction novels and then talk about the philosophy and the science and the theology baked into those novels. Science fiction is really the spine of that course, and we've been teaching it for years at this point.

So is that class available to audit?

There actually is some content. You can go to scienceforhumans.com, where we make some of the content available online. For you and your listeners: it's a public-facing website, so it obviously doesn't have all the ins and outs of the class, but a lot of the content we just post, available for anyone who wants to look at it. That class has been really foundational to how I approach these topics, both with students and in my own thinking. One thing in that class, and what I try to do in my own thought, is to really emphasize the idea that when we talk about future technology and when we use speculative fiction, we're talking about the future and about imagined worlds, but we're also, at the same time, and maybe even more centrally, talking about the present and about ourselves. What I find speculative fiction does really well is it takes an idea or a train of thought or a currently available technology and draws almost a caricature of it. It takes one aspect and blows it out of proportion so that we really focus on that one thing. And the resulting work, whether it's a whole fantasy world like The Lord of the Rings (I've been on a Brandon Sanderson kick lately; I just love Brandon Sanderson's stuff), or Star Wars, you name it.
All these things, what they do is take a way of thinking and have us look at it more closely. Or when we start talking about how AI might develop in 20 years, right? We're taking it: here's the thing it does now; what if you blow that up and it evolves in this way? So yes, we're talking about the future. Yes, we're talking about some other imagined reality. But what we're ultimately doing is looking at this blown-up version of that part of reality, this caricature, so that we can then go back and look at ourselves more closely. And if anyone's ever had a caricature done of themselves, you know this is what happens, right? Especially if it's a good caricature artist, you look and you're like, whoa, wait, is that really how my face looks? And then you look again and you're like, oh, it kind of is. I never even noticed that before.

Wow, my forehead is enormous. How did this happen?

Exactly. And I think speculative fiction does the same thing. So we're able to say, oh, wait, the dark side and light side of the Force, right? We look at that and think, yeah, this is an imagined world, but how do different ideologies and different ways of carving up our world reflect that, and how can I understand my own reality and my own thought processes in light of this imagined world I've just been interacting with? So that's really the way I think about it: it's important to realize that as we're talking about speculative worlds, whether they're fantasy or sci-fi, whether it's TV shows or movies or novels, we're always looking at those worlds in order to come back and understand ourselves in our present moment more fully.

Yeah, well said.
Last question, for real. I want to go back and underline something you said earlier. At the end of your book, you say one of the ways for us to exist in or deal with this moment is through little acts of resistance, something to that effect. And you mentioned earlier: wrestle with your kids, go for a walk, really enjoy your dinner, that kind of thing. Give us some more thoughts on that. How might listeners today who are thinking about AI engage in these little acts of resistance, and how might those acts be offered almost as a sort of prayer?

Yeah, I think what's crucial is to be really deliberate with our use of artificial intelligence. It's tricky here, because a lot of the little uses of AI are not going to be crossing any ethical red line in the sand. I'll use an email example. After this is done, maybe I send you a follow-up email, Eric, that I actually just wrote with an AI, because I've got a busy, busy day, right? I don't think that would...

It would hurt.

It would, really? Yeah, I know. I know we've really connected over this conversation here. I don't think that's necessarily crossing an ethical red line in the sand, right? It's not like I've done something morally atrocious. But what have I done? Let's even say that the email I write to you as a follow-up is better than the one I would have written by myself. What I've done is undermine just a little bit of the human connection we've started to develop. And I think when you put it in those terms, and you ask, well, why would I do this? Well, it would be more efficient; it would superficially be a better-written email. And then you ask, but what would I lose? Well, I would lose this relationship I've started to develop with Eric. Even if he never figures out that I used an AI, I've interiorly started to undermine some of the relationship I have with him.
So I think we should really deliberately ask ourselves: how does my use of AI in this specific instance either undermine or support my own humanity, and the humanity of others? When we ask that in a really thoughtful way, about a lot of the ways we're tempted to use AI, and new technology generally, we realize, oh no, this is not a human-centered use of the technology after all. That doesn't mean we'll never use it. Here's one really quick example. Let's say I'm prepping for a lesson I've got to teach tomorrow. It's 5 p.m. and I really need this slide with this image to show my students. At the same time, my kids are tugging on my arms saying, Dad, Dad, it's five o'clock, let's go out and toss a football around. In that instance, I might think, well, maybe I'll just use an AI to finish this last task of my day. Why? Well, I'm not really undermining my relationship with my students; it's just a PowerPoint slide. And in fact, I'll be able to spend more time with my kids. Now, you might disagree with me on that, but I think there could be instances in which we look at a particular application and think, no, actually, in this instance it's not that the efficiency or the shiny newness of the tech is undermining my humanity; it's actually helping support me and my relationships. So that's the question I would direct people to. Really ask yourself: is it supporting my humanity? Is it supporting my relationships? Or is it doing the opposite? I think that's a really clarifying question to ask.

Joe, I think that's a really helpful place to end: a really good insight, and again, a good thing we can all go and do and think about right now. But before we go, I wonder, where might we find your book, and where might we find you and your work across the internet?

Yeah.
So the book is available on Amazon or anywhere else you buy books. For my work, you can take a look at josephvukov.com, just my name, no caps, no periods. I try to keep that as up to date as possible with the books I'm writing, but also articles, classes I'm teaching, things like that.

Cool. And we'll put links to those things in the notes for today's podcast. Joe, this has been a ton of fun. I hope you'll come back and talk about more technology, speculative fiction, and maybe Star Trek in the future.

Absolutely. Yeah, it was great, Eric. Thanks a lot for having me.

AMDG is a production of the Jesuit Media Lab, a project of the Jesuit Conference of Canada and the United States in Washington, D.C. This episode was edited by me, Eric Clayton. Our theme music is by Kevin Laskey. The Jesuit Conference communications team is Marcus Bleech, Michael Jordan Laskey, MegAnne Liebsch, Becky Sindelar, and me, Eric Clayton. Connect with the Jesuits online at jesuits.org, on X at @jesuitnews, on Instagram at @wearethejesuits, and on Facebook at facebook.com/jesuits. You can also sign up for our weekly email series, Now Discern This, by visiting jesuits.org/weekly. The Jesuit Media Lab offers courses and resources at the intersection of Ignatian spirituality and creativity. If you're a writer, podcaster, filmmaker, visual artist, or other creator, check out our offerings at jesuitmedialab.org. If you or someone you know might be called to discern a vocation to the Jesuits, connect with a Jesuit vocation promoter at beajesuit.org. Drop us an email with questions or comments at media@jesuits.org. You can subscribe to the show on iTunes, Spotify, or wherever you listen to podcasts. And as St. Ignatius of Loyola may or may not have said, go and set the world on fire.