So one quick note before we begin this episode: with only a single exception to date, neither myself nor any of the guests on Data Skeptic have ever used any curse words. In this particular episode, both my guest and I are going to use the B word, but not that B word; the word, or phrase I guess, often abbreviated as BS. So if that's offensive to you, I don't know, maybe get over it, but if there are kids in the car or something like that, we're going to say BS in its long form a couple of times. We use it in a very academic way, we define its meaning in an empirical sense, but you know, you've been warned. Data Skeptic features interviews with experts on topics related to data science, all through the lens of scientific skepticism. Gordon Pennycook is currently working on a PhD in cognitive psychology at the University of Waterloo. His prior work has explored the differences in language used by climate change deniers and proponents, and covered topics such as how people offload their cognition to smartphone technology. His primary research interests center around why individuals think rationally and analytically or trust their gut feelings and instincts. He is the lead author of the recent paper on the reception and detection of pseudo-profound bullshit, which appeared last year in the journal Judgment and Decision Making. It's that particular work that will begin our conversation today. Gordon, welcome to Data Skeptic. Thank you very much. So maybe to kind of frame the conversation, could you define what pseudo-profound bullshit is? Well, bullshit has a more general definition. The first definition was from Harry Frankfurt, at least the first one that we're using in the paper. And what Frankfurt says is that bullshit is something that is constructed without any concern for the truth. So bullshit is different from lying, because the liar is very concerned with the truth; they're just kind of concerned with subverting it. Whereas the bullshit artist, or whatever you want to call them, is trying to impress rather than inform or disinform or whatever. And then pseudo-profound is just the label the editor made us give to the type of bullshit that we investigated. And the reason that it's pseudo-profound is because the actual items we used were just sentences put together with random words. So it seems like they are profound, but they aren't actually; they're just kind of random words, hence pseudo-profound. Is there a deep taxonomy of types of bullshit, or was this just an effort to be especially specific? Well, it was an effort to be especially specific, partly because this is actually the first empirical study on bullshit. When I first submitted the paper, it was just on the perception and detection of bullshit. But the editor compelled us, and we agreed with them, that if people are going to do more research in this field, they have to be kind of specific about the bullshit that they're investigating, because it comes in all different flavors. Like, you know, you could bullshit with somebody at a pub over beers, and that's clearly quite a bit different than the bullshit that we investigated in our study, right? Yeah, absolutely. I was really excited to see this study come out, because it's a true empirical study of a topic that really had, at least to the best of my knowledge, no previous such treatment. Perhaps you could give us an example of a pseudo-profound bullshit statement, so people get a sense of what one would be like.
One that we got from the website called wisdomofchopra.com is "Wholeness quiets infinite phenomena." Wow. Yeah, right? That sounds pretty good. Yeah, I'm not sure at all what it means. You know, when I hear these or read the ones that are in the paper, maybe I should be embarrassed to say this, but they do make me stop and think. I get this feeling like, oh, there's something I'm missing here, but I'm never able to get to the bottom of it. Do you think that's a common experience, or do people experience these in different ways? I think that would be the way that people typically come across them. And I think what happens is you see the item, and it has the structure of a sentence. So no one's going to assume that it's random words put together, obviously, right? You're going to assume that someone created the sentence, and they probably did it for some sort of reason. So this is kind of the intrinsic bias we have that makes us receptive to bullshit. We want to assume that there's probably some sort of meaning here. The way we approach information is that we assume that it's right first, and then maybe we falsify it later. What happens often is that people just don't get to the falsification stage. You know, they read it and they say, well, that sounds pretty good; maybe I don't know what it means, and they might even say, well, if I don't know what it means, then it must be something really important. When people create these types of statements, is there an intention behind it? I mean, I guess maybe I could argue that some of them are just poets and these are things that sort of sound nice. Or do you think there are other intentions that some of the authors of these types of statements have in mind? Well, I think there are various intentions. So, I mean, you bring up artists and poets. In their case, I wouldn't call it bullshit per se, because their stance is that they're not trying to give people information about something. Sure, they're not concerned with the truth in a direct way, but they might be concerned with trying to reveal new truths through the use of metaphor and stuff like that. It's not as direct as, you know, running a scientific study or whatever, but they're still sort of concerned with the truth in a different sort of sense. The thing that I would call bullshit is when people are acting as though they're concerned with the truth. You know, they're making truth claims about how the universe works, or they're talking about something to do with folk psychology or whatever, and instead of saying something specific and direct, they flower it up with vague and abstract words. So that's the kind of thing this is getting more towards. It's not really about how artists use flowery language. Is there anything grammatically that would separate a pseudo-profound statement from a regular factual utterance? I'm not really a linguist, so I wouldn't be able to answer that, but I would be surprised if there was. Intuitively, it's difficult. We know that if I give you those sentences, it's a really hard task to guess which ones were created by the generator and which ones I just kind of created myself. I mean, it's the use of vagueness to mask the lack of meaning. The one thing that you might want to ask yourself to determine whether something is bullshit is: if the person who wrote this statement were not a bullshitter, how would they have written it?
If they were really concerned with the truth, would they have been more clear, or would they have used these words that they're using, and so on and so forth? In reading through all the examples in your paper, and in some of the sources where you got a lot of the statements you guys used, to my eyes, and I'm not a linguist either, there was nothing grammatical there; they're all proper sentences. There's one argument a person could make here that says that you and I are just not wise enough about the universe to appreciate the depth of meaning in these statements. Is there any way we could distinguish a statement that's truly random from one that's produced by someone wise beyond the limits of our meager brains? There's a couple of different responses to that. The first is that it doesn't matter, in the sense that we know where the bullshit statements came from. They came from the websites that put words together randomly. One misconception about the research is that we're using the word bullshit in order to say that things are wrong, or to chastise people, or whatever, but we're using it as a technical term. The bullshit sentences were literally constructed without any concern for the truth, because they were just random, so therefore they are bullshit. That doesn't mean that I couldn't come up with some sort of interesting interpretation of them if I wanted to. It's just that they are literally bullshit based on our definition of it. Yeah, and if they were generated by a program, an algorithm whose sole purpose is to generate such statements, it's fairly safe to conclude that it's not going to accidentally generate something profound, at least with any frequency. That's the other thing: if you take the stance that, well, if I think something is profound, then it's profound, that's fine. Some people might actually find a sentence that was constructed by a computer algorithm to be profound. That doesn't mean that it's not bullshit, of course. And even those people wouldn't assume that the computer generator would be as efficient at creating profundity as a human would be, of course, right? Sure. So it might happen. It's kind of like the monkeys typing and coming across Shakespeare type of thing, right? Yeah, we actually covered that on the show a couple of times. So it's interesting that we have two generators, or two classes, I guess you could say. We have the algorithmic ones, and then we have human beings who generate a class of material that perhaps we might label as pseudo-profound. Could you talk to me a little bit about how you go about measuring the profundity of a statement? So what we did in the study was really simple. We just gave people the items that we created from the bullshit generator websites, and then we asked them to rate on a scale from one to five how profound they think they are; one being not at all profound, and five being very profound, or something like that. In terms of the human-generated ones, we didn't want people to say, okay, you have these items that you found from these websites, but this isn't something that you see in everyday life, because people aren't usually reading sentences constructed by websites, right? So what we did is we took actual tweets from Deepak Chopra's Twitter page. As a bit of background on who Deepak Chopra is: he's a spiritualist; people call him a new age guru, although he hates that term apparently.
And he does a lot of things; one of his books is called Quantum Healing, and there are a bunch of other ones. He's had like 20 bestsellers on the New York Times list, or probably more than that. One of the bullshit generators, called wisdomofchopra.com, actually takes the buzzwords from Deepak Chopra's real Twitter feed, and that's what it uses to construct the bullshit sentences. So the kind of natural progression for us was to find items from Deepak Chopra's Twitter feed that seemed to us to be particularly vague and bullshitty, and then give those to participants as well. Okay, so then we did the same thing; we just intermixed them with the ones from the random websites. Psychologically speaking, the Chopra items and the ones from the bullshit websites were indistinguishable. So that would be the only kind of empirical way that I could see to investigate bullshit. Of course we don't know what Deepak Chopra's intentions were when he wrote the statements, but it seems kind of obvious that they're written with flowery language to create the sense of profundity, and that the primary goal was not to communicate some sort of truth or whatever. So he's not first and foremost concerned with the truth, which is the definition of bullshit, right? If my reading is correct, your paper sets out to test the hypothesis that more analytical individuals should be more likely to detect the need for additional scrutiny when exposed to pseudo-profound bullshit. Could you start by talking a little bit about some of your prior work and what made this hypothesis plausible to you? The thing that binds a lot of my work is that we're interested in how people use analytic, more kind of logical, you might say rational processes to undermine their own intuitions. Our brains are constructed in such a way that if I were to, for example, ask your name, you can respond without having to think at all. The other side of the spectrum is that if I gave you a complex algebra problem, no answer is going to pop into your head. You have to actually go through the steps and figure out the answer to the problem. So we have these two kind of diametrically different sorts of thinking, and the problem, at least in my view, is that when you rely on your intuition, you don't know the source of the response. If I ask your name, the answer pops into mind; you don't have to think about it, but because you didn't think about it, you don't know where it came from. That's just something that you learned a long time ago, and whatever. What happens a lot of the time when we're making decisions about what car to buy, who to hang out with, any sort of everyday decision, is that we rely on our intuition, because that gives us an answer immediately, and we often aren't rational for that sort of reason. This is just another case study of the same kind of story that we see in other studies. When you give people these kinds of bullshit sentences, they're going to first assume, in a kind of intuitive way, that there's some sort of meaning there, and they have to kind of reflect in an analytic way on the problem in order to determine that the items are actually kind of vacuous.
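As an aside, the generator sites mentioned a moment ago, like wisdomofchopra.com, reportedly work by slotting buzzwords into grammatical sentence templates with no regard for meaning. Here is a toy sketch of that idea in Python; the word lists, template, and function name are invented for illustration and are not the site's actual code:

```python
import random

# Hypothetical buzzword lists in the spirit of wisdomofchopra.com;
# the real site reportedly pulls its vocabulary from Deepak Chopra's tweets.
NOUNS = ["wholeness", "consciousness", "intuition", "awareness", "phenomena", "potentiality"]
VERBS = ["quiets", "transcends", "embraces", "unfolds into", "gives rise to"]
ADJECTIVES = ["infinite", "hidden", "quantum", "boundless", "subtle"]

def pseudo_profound_sentence():
    """Assemble a grammatical but meaning-free sentence from random buzzwords."""
    return (f"{random.choice(NOUNS).capitalize()} {random.choice(VERBS)} "
            f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)}.")

if __name__ == "__main__":
    for _ in range(3):
        print(pseudo_profound_sentence())  # output in the style of "Wholeness quiets infinite phenomena."
```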
Would it be fair to say that perhaps part of the definition of rationality is in the heuristics people develop for when they go with their gut, like answering "my name is Kyle" without thinking, versus when they stop and think about it? So if I were someone deeply believing in cryptozoology and someone mentioned a new cryptozoological creature, and I just said, oh yeah, that's probably right, because Nessie's a real thing, versus stopping and actually thinking about it. Do those heuristics we develop, and our sense of which parts of our thinking to apply them to, count as components of rationality? Well, rationality is kind of a label that we put on outputs. Given an uncertain context, being analytic is typically more rational. So in the context of the bullshit study, thinking more about the items is, you could say, kind of the safer route. Philosophers will always give you trouble about using the word rationality, but in this case, I think it's pretty safe to say that not calling bullshit profound is probably more rational than calling it profound. In the case where, say, I ask your name and you respond immediately, that's perfectly rational, right? Because you know your name; there's no uncertainty there, and there's no reason for there to be uncertainty in any kind of objective sense. Our intuitions are often very rational too, and our heuristics are often created by thinking analytically about things first and then developing heuristics so that we can save time. For example, chess masters don't analyze every single move. They've done the thing so much that they have really, really good rational heuristics. If they started thinking it through, it would actually probably screw them up more than relying on their gut feeling. So it's not black and white. Yeah, I think that's an excellent comparison. I don't know if you're familiar with this study, but they'll show chess boards very quickly to chess masters, and they'll remember more than the average person; but if you randomly place the pieces, then the chess master has no advantage over the average person in recalling the board positions. Because the heuristics are very specific, right? And that's why they work, but it's also why they're problematic: your mind just automatically wants to find the easiest possible answer, and what you end up doing is answering the wrong problem. So I know in the paper you looked at a number of existing metrics for measuring different sorts of cognitive traits. I was familiar with some, but I've also enjoyed going through some of the references and learning about others. You leverage a lot of them in the four studies you guys ran. So I don't know that we have time to go through all of them, but could you touch on maybe some of the more interesting ones and how they were useful tools in your study? The thing that's most related to my area is the analytic thinking measures. The cognitive reflection test is one that's used very often; it's kind of like everyone's favorite set of three word problems. Basically, you can administer it to participants in no time, and it correlates with almost everything. So one example from that is: a bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost? Part of the problem is that these problems are actually getting a bit too famous, and now I'm contributing to that, but when you see the problem, the kind of intuitive answer that comes to mind is that the answer is $0.10. It's harder to get that when you're just listening to it, but if you wrote the thing out and then looked at it, that's what pops up. That's what I was biting my tongue to not say. You know, but within the context of the conversation, you know, that is correct.
It just comes to mind; you don't know how you got that $0.10, it just pops into your head. And then the key to the problem, of course, is that $0.10 is not the right answer. To get the right answer, you have to first question that intuition. You don't just put the $0.10 down and move on to the next question. You have to stop and think, maybe that was too easy, I don't know if that's the right answer. And if you double check it, it's quite simple. You know, if the ball was $0.10, the bat would be $1.10, and then together they'd be $1.20, which is not the right answer. The right answer is $0.05, by the way. And the reason why it's so predictive is that it captures not just your ability to solve math problems, which is kind of like an intelligence test, but also how willing you are, your disposition toward thinking in analytic ways and questioning your intuitions. And that's why we kind of focus on those types of questions. And there's a bunch of different questions of that sort that we used across the different studies. For, I guess, the sake of completeness, could you comment on the Cronbach's alpha metric you're leveraging? I was unfamiliar with that, but I've since read up on it. And to my understanding, it's a way that we measure consistency, which would give us some trust and reliability in these types of metrics. Is that a fair way of looking at it? Yeah, so any measure that you have is basically an amalgamation of a number of different items. If we're going to use a label, like the cognitive reflection test, or a label like, you know, say religious belief or something like that, for a set of items, then those items ought to correlate with each other reliably. Otherwise, your label is going to be wrong and you don't have a reliable measure. So Cronbach's alpha is basically computed from the intercorrelations between the items within a single measure. So if I had, like, a questionnaire that was about religious belief, but then I asked people questions about what they like for breakfast, I would not have a very reliable measure, because what you have for breakfast has nothing to do with your religious beliefs. So if my understanding is correct, one of the major innovations in the paper is the proposal of the BSR, the bullshit receptivity scale. Is that correct? That's right. And so one thing you might say is that if we find that it's highly correlated with some of the existing, well-studied measures, we'd have good support for it being a useful metric in future research. Am I looking at it the right way? Yeah, that's exactly the goal of the study. I mean, it was, in a certain sense, methodological. We said no one's studying bullshit, so we need to find a way in which we can study it, not just how it's constructed but how people approach it. Yeah. And that's why we created the scale. So one kind of response to it has been, oh, these results are obvious. But that's what we were looking for, right? Because we selected things to correlate with bullshit receptivity that are the most kind of obvious things you would expect to correlate with it, if our measure is, in fact, assessing bullshit receptivity. Right. Yeah. Once we can measure it, we can do really good science on it. And with that in mind, maybe we could get into the four experiments, whether we go through them individually or just at a high level. Could you describe what you guys set out to do and the tests you ran to measure the BSR? Yeah.
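As a quick aside on the Cronbach's alpha discussion above: the statistic compares the variance of the individual items to the variance of their sum, so items that move together push it toward one. A minimal sketch in Python, with made-up ratings rather than data from the paper:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: five respondents rating four items on a 1-5 scale.
ratings = [[4, 5, 4, 5],
           [2, 2, 3, 2],
           [5, 4, 5, 5],
           [3, 3, 2, 3],
           [1, 2, 1, 2]]
print(round(cronbach_alpha(ratings), 2))  # items that move together give alpha near 1
```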
So we can basically kind of just break it up into two sections. In the first two studies, what we do is we first try to validate our use of these random bullshit sentences, such as "wholeness quiets infinite phenomena", by asking people how profound they are. And basically, all we wanted to do there was take those ratings and see if they correlate with things that you would expect them to. So, the analytic thinking measures that I mentioned; we also had religious belief and paranormal belief measures. The idea there is that people who are more skeptical about supernatural claims are probably going to be more skeptical about bullshit too. We had a measure called ontological confusions, which is perhaps aptly named because it's kind of confusing. But the idea there is that what people often do is confuse two ontological categories. For example, ESP, you know, being able to read someone else's mind, is confusing the mental with the physical. There's a scale that's been used for that, and it's very related to supernatural and paranormal belief. We included that as well, basically as an additional thing that seemed related. And we also included standard and straightforward intelligence tests, you know, verbal intelligence, then how people reason with matrices, which is kind of like a spatial reasoning task, and numeracy, which is how people deal with numbers. The numeracy one doesn't correlate as well, because obviously people aren't using numbers when they're evaluating bullshit, but it's kind of a general cognitive ability that relates to other things, right? So that's the first two studies. Basically, we found that the bullshit receptivity scale correlates quite nicely with all these things. Everything we threw at it correlated with it, which is what you'd expect if we were, in fact, measuring bullshit receptivity with our bullshit receptivity scale. So we're just kind of justifying the label, in other words. And then in the second two studies we wanted to investigate further by asking: is it the case that the reason these things all correlate with our bullshit receptivity scale is just that people who are more analytic, more skeptical, less intuitive, and so on don't think anything is profound? That would produce the same effect, right? What we wanted to do is make sure that it was specific to bullshit. So we gave people these kind of common motivational quotations and asked them to rate their profundity. One example is "a wet person does not fear the rain", right? So when you think about that, it's using clear language. It's something that you would see on a motivational poster. Yeah. It has clear meaning, so it's not bullshit, right? Right. And if you take the difference between that and the bullshit items, that difference also correlates with analytic thinking and paranormal belief. So basically, if you control for a bias toward saying that everything is more or less profound, you still find that people's response to the bullshit is associated with analytic thinking. It's not just that people who are more analytic are saying that everything is less profound. It's actually specific to the bullshit items.
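To make that difference-score logic concrete, here is a small illustrative sketch; the ratings and CRT scores below are invented, and the variable names are mine, not the study's. Each participant's mean profundity rating of the bullshit items (their bullshit receptivity) and the bullshit-minus-mundane difference are each correlated with an analytic-thinking score, so a general bias toward calling everything profound is controlled for.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented example data: rows are participants (ratings on a 1-5 profundity scale).
bullshit_ratings = np.array([
    [4, 5, 4, 3], [2, 1, 2, 2], [5, 4, 5, 4], [3, 3, 2, 3], [1, 2, 1, 1],
])
mundane_ratings = np.array([
    [3, 3, 4, 3], [3, 2, 3, 3], [4, 3, 4, 4], [3, 4, 3, 3], [3, 3, 2, 3],
])
crt_scores = np.array([0, 3, 1, 2, 3])  # hypothetical Cognitive Reflection Test scores (0-3)

bsr = bullshit_ratings.mean(axis=1)              # bullshit receptivity: mean rating of bullshit items
difference = bsr - mundane_ratings.mean(axis=1)  # bullshit minus mundane: controls for rating bias

r_bsr, _ = pearsonr(bsr, crt_scores)
r_diff, _ = pearsonr(difference, crt_scores)
print(f"BSR vs CRT:        r = {r_bsr:.2f}")     # expected negative if analytic thinkers rate bullshit lower
print(f"Difference vs CRT: r = {r_diff:.2f}")    # still negative if the effect is specific to bullshit
```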
It was very interesting, and the first study really kind of sold me on the methodology: the CRT, the cognitive reflection test we talked about earlier, negatively correlates with the BSR, meaning that, let's see, the more receptive you are to bullshit, the lower score you get on the CRT test, which is very intuitive. And sort of the opposite correlation is there for ontological confusion, which also has a lot of intuitive appeal for me. And then I was a bit surprised in some of the later parts, and maybe my reading's incorrect, but it seems like some of the conspiracy ideation and belief in so-called alternative medicine doesn't necessarily have the same impact. Was that a surprise to you guys, or am I reading too much into that being the wrong sign on the correlation? Well, those were the right sign, in the sense that people who are higher on conspiracy ideation and agree more with complementary and alternative medicine are also more receptive to bullshit. Ah, okay, yeah, that makes sense. Yeah, so that is the way that it turned out. It's the way that you'd expect. There is a difference there, though, in the sense that those scales did not correlate nearly as strongly with bullshit receptivity as did paranormal belief, and certainly not as much as analytic thinking. So it's hard to say right now, because that's only based on one study. It might be because the scales for those two types of things, like conspiracist ideation and complementary and alternative medicine, just aren't as good as the scale we have for paranormal belief. So it could be a psychometric issue rather than an actual issue. More work needs to be done, but it might be that conspiracy ideation, for example, is a whole different sort of animal than paranormal belief, right? Because it's not really a failure of intuition so much as a failure of rationality or analytic thinking, in the sense that conspiracy theories are sort of elaborate; you know, it actually requires some thinking to get to a conspiracy theory. It's that people are focusing on the wrong elements, and then they're engaging in more motivated reasoning as opposed to kind of analytic reasoning. So it might be a whole different animal, and that might be why it doesn't correlate that strongly with the bullshit receptivity scale, but we don't know at this point. It's just another factoid from the study, I guess. I guess my question is around the power of the study. It would appear here that, as you point out, someone with paranormal beliefs, or receptive to them, is going to score higher on the BSR scale than someone with conspiracy beliefs. For me, when I hear that, I'm thinking, well, there have been things that meet the definition of a conspiracy that have turned out to be true. I mean, maybe not, you know, the ones they point to, like JFK or anything like that, but we could find historical events that qualify as conspiracies. Yet we have no evidence of ghosts or monsters or some of the typical paranormal claims. So in my take, the paranormal is the more outlandish of the two. Do you think I'm drawing too much from it in saying that explains the different magnitudes of correlation? That might be plausible. I mean, it's a similar thing I'd say: Watergate was a conspiracy; conspiracies are not always wrong. It's just that the sort of conspiracies that people usually think about when you say conspiracy theories are almost certainly wrong.
But it's a whole different sort of animal, which is why I wasn't overly surprised that it wasn't as strongly correlated as paranormal belief was. But, and I should be clear about this, we didn't do a statistical test to say that the correlation with paranormal belief is actually statistically larger than the correlation with conspiracy ideation. So I wouldn't want to make too much out of it, but there's certainly room there for future work, I think. Yeah. And maybe some of that gets into a discussion of sample size and predictive power. Could you comment a bit on how far you would take claims from the study, given that there's, you know, a finite sample size and a lot of variables being tested? I guess if I were going to come in and be a critic of the study, that perhaps could be one avenue by which I could attack it. I would say, well, you had so many people fail the attention check and whatnot. What would be kind of a general response to the structure of this first gamut of studies you've run? Well, given the effect sizes that we had for the BSR scale, like when it's correlated with the CRT and the heuristics-and-biases problems, the two analytic measures in the first study, the effect sizes were over point three, which is actually, you know, among the highest that you would see for those types of measures. So, I mean, we had adequate sample sizes in every study, and, you know, the later results were what you'd expect given what we got in the first one. I mean, you always want larger sample sizes, of course; it's not that doubling the number of people wouldn't be great, but I think we'd have, you know, pretty much the same results, I guess. The other thing, though, with the attention checks, which is actually really funny: in the first sample, there are a lot of people who fail the attention check, like 35%. And that's, I think, the only student sample; that's the University of Waterloo students. And the attention check that I had was, I guess, abnormally difficult. So what we did for that one was, I had a screen that said, okay, for the next part of the experiment, we're going to ask you about leisure activities. And then it went on to the next screen, which had a list of leisure activities that people could select, you know, like playing sports or reading books and stuff like that. But then at the top, it said: below is a list of leisure activities; if you're reading these instructions, instead click the button that says you read the instructions. So we gave them a prompt before they did the thing, meaning they had already read one set of instructions before they had to read the second set. It's actually a very difficult instruction check, so all the people who passed it were really paying attention to the instructions. So the way people interpret it is that if 35% of the sample failed the instructions, then hardly anybody's paying attention. But actually we had lots of people who were paying attention. Apart from that, which is just, you know, an aside: including or not including those people, who probably failed because of the difficult instruction check, had no effect on the results. And the same with the other studies, which had fewer people fail because we used a different sample. So it's irrelevant.
Like, I would be worried if the effect was only there among people who failed the instruction check. Yeah, makes sense. Yeah, if the effect gets stronger among the people who actually passed the instruction check, but it's still there even if you include everybody, that means you have a fairly robust effect. And that's what we had in the study. Yeah, interesting. So I'm curious to know how much, or to what degree, people such as yourself, or researchers doing similar work, consider the Bayesian perspective when thinking about human cognition. Oh, the Bayesian perspective on human cognition. That's funny, because in my area, which is reasoning, there are kind of two camps, more or less. There's a camp of people that are really interested in Bayesian stuff, and then the camp that does things like I do, which focuses more on, you know, intuition versus reflection and stuff like that. And the camps don't really communicate. I wouldn't say it's because they don't want to; it's more that they're speaking entirely different languages about what's going on. And I honestly don't understand what the Bayesian people are talking about, and I think a lot of that has to do with the different sorts of things that we're interested in. I do work on kind of high-level things like religious belief, bullshit, stuff like that. The Bayesian people are trying to figure out how people can predict how to catch something that falls off a table. Those are both, you know, predictions and stuff like that about how the world is, and so on and so forth, but the kind of cognitive mechanisms that produce those two, or any number of different effects, are quite different from each other. So, yeah. I don't know if I read everything perfectly correctly, but in my read, there was something Bayesian-ish about one of the surveys: you asked people if they were familiar with Deepak Chopra, I think after asking them all the profundity questions. And if I recall, people who were familiar with him scored lower on the BSR scale, is that right? Yeah, but I think that was only true for two out of the three studies, roughly speaking. Ah, yeah. So I could maybe build a case that that's a Bayesian sort of response: given the information about who Deepak Chopra is, many people are arriving at the conclusion that he's a generator of these types of statements, and that reduces their posterior belief in such statements at some level. I don't know, that would just be kind of my way of generalizing it, I suppose. But it does lead into an interesting question about these sorts of statements in our culture. Are these just a kind of transient phenomenon we have, or do you think there's a real social harm that can come from pseudo-profound bullshit statements? Well, I think often it doesn't matter. What Deepak Chopra says on Twitter probably often doesn't matter. It doesn't matter if people think it's profound or not; it's not of that much consequence. But the general principle matters, you know what I mean? You don't want people to not care about the truth if they're in a position where people assume they do care about the truth. So I'll give you an example. Actually, I wrote about this in Aeon magazine, or however it's pronounced, I don't know. I should know, given that I wrote an article for them, but they don't put that in the contract. Anyway.
So Dr. Oz, you know, he's a doctor on TV, a medical doctor, someone that is looked to for advice on health and so on, of course. He peddles things like the magic coffee bean thing that, you know, will cure all your problems, and lots of different pseudo-scientific remedies. He was put in front of Congress to kind of testify on his use of, let's say, non-evidence-based health practices and so on. And what he said, essentially, was that he's concerned with entertaining, of course, and that he's mostly there as a cheerleader for his audience. Okay, so if you dig through what he means by that, I think it kind of means his goal is to, you know, convince people that they can get better and that they can try all sorts of different things. And his goal is not to put things in front of people that, if they spend their money on them, they can be confident are going to work. So in other words, he's concerned with the feelings of his audience and not, you know, the truth of the things that he's putting on TV. Right, so that's bullshit, right? That's kind of what the definition of bullshit is. He's not concerned with the truth; he's concerned with some other thing that's not the truth, which just so happens to be the thing that will get him, you know, ratings and money in the bank and so on. So, I mean, that's important. There's a lot of misinformation about health, lots of bullshit about health, and, you know, not to be too dramatic, but people's lives are at stake, so we should care about bullshit. Yeah, absolutely. I don't know the quote's origin, but I know it from a guy named Matt Dillahunty, and I'm quite fond of it: "I want to believe as many true things and as few false things as possible." Yeah, and what people do is they have the wrong criterion, in that they believe lots of things, so they'll believe the true things, but they don't care nearly enough about not believing the false things. There's a delicate balance there, perhaps. So, yeah, I really enjoyed the study. I'm glad you guys put it out. Any study that quotes Carl Sagan and Michael Shermer is definitely on my to-read list. So, yeah, I'm curious about some of your future research and maybe current lines of inquiry since the paper's come out. Well, I have my fingers in lots of different things, but we're doing more work on the bullshit thing. We basically kind of replicated what we have using not just profundity ratings. We wanted to kind of get away from that, so what we did is we told people, from the outset, that some of the things they would be presented with are nonsense, constructed without any concern for the truth. We used "nonsense" instead of "bullshit" because you can't really say bullshit to participants; they'll be like, who the hell are these researchers? Sure, yeah. And then they have to decide whether each one is nonsense or meaningful, so it's a simple one-or-the-other judgment. Using that measure, the correlations with analytic thinking are actually very similar to what we got with the profundity ratings, so it's not like it requires that specific sort of scale; it's a general kind of thing. And then there are lots of different things that we're working on. I did, and this is kind of funny, you mentioned the climate change language analysis at the top of the show. Yeah, tell us a little bit about that.
So, you know, the Intergovernmental Panel on Climate Change, the IPCC, is the big global warming body; they put together the giant reports on the current status of global warming, essentially. They have thousands of authors, and the reports are just very long technical documents, right? One of the more interesting recent developments on the climate change denier side, some would say skeptic, but I say denier. I use denier too. Yeah. So what they did is they published the NIPCC, which is the Nongovernmental International Panel on Climate Change, or something like that. The not-IPCC, I guess, is the simpler way to say it. And that's got all the typical players, like Fred Singer and those guys. What they did, basically, is they copied a very similar format to the IPCC and created like an 800-page document that says the opposite, using very similar studies actually. So obviously, me and my collaborator, Srdan, we aren't climate scientists, so we're not going to go through all these documents and see which one is peddling the wrong story or whatever. So what we did is an actual language analysis: we put all of the words from all the documents, over a million words, into these kind of language analyzers. And what that does, basically, is tell you things about the language that they use. There's a typical sort of way that scientists write about things; they use kind of uncertain language. They tend to be very tentative, conservative in their language, of course, right? There's like a ton of differences, and some might have been because the documents are on somewhat different topics. But, you know, one of the key differences is that the IPCC, despite being the group that is warning the planet about impending, you know, climate change, was more conservative in its language than the NIPCC, right? Which is, I mean, actually very straightforward: the people of the NIPCC, the deniers, are trying to kind of combat the scientific consensus, and they're using much more certain, much more non-conservative language in doing so. Yeah, which is exactly as you'd expect, but it's a good kind of validation check of what you'd expect in those documents. Yeah, and a nice empirical way of measuring it, not just kind of expressing opinion, but actually taking an empirical data point, which I like a lot. Yeah, and one note for people: there are no significance tests, because we analyzed every single word in all of the documents. So, like, we have the entire population. N equals all, yeah. Every difference is literally a difference, which is a very different kind of thing for a psychologist to see. So I know this is pretty recent work that we've been discussing, so it may be too soon for this question, but have you seen other researchers starting to use the BSR in publications and in their work? Not in publications yet, but I've had emails from people asking for the scale to do this and that; nothing has come out yet. Well, it'll be really interesting to follow. I look forward to seeing some replication studies and some further expansion on this topic. I think it's a really interesting line of inquiry.
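For a flavor of what that kind of language analysis involves: such tools typically count how often words from categories like "tentative" and "certainty" appear per so many words of text. This is a toy Python sketch with tiny invented word lists, not the validated dictionaries that LIWC-style analyzers actually use:

```python
import re
from collections import Counter

# Tiny made-up category dictionaries for illustration; real analyzers use
# much larger, validated word lists.
TENTATIVE = {"may", "might", "could", "possibly", "suggests", "appears", "likely"}
CERTAINTY = {"always", "never", "undeniable", "proves", "certainly", "definitely"}

def category_rates(text):
    """Return the rate (per 1,000 words) of tentative and certainty words in text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    tentative = sum(counts[w] for w in TENTATIVE)
    certainty = sum(counts[w] for w in CERTAINTY)
    return {"tentative_per_1k": 1000 * tentative / total,
            "certainty_per_1k": 1000 * certainty / total}

doc_a = "Warming may continue and models suggest sea levels could possibly rise."
doc_b = "The data proves the models are always wrong; the conclusion is undeniable."
print(category_rates(doc_a))  # more tentative language
print(category_rates(doc_b))  # more certainty language
```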
Where can people find you online, a Twitter, a blog, a website, anything like that? I have a website, but it just tells people what my education is and what my publications are. But I am on Twitter, just at Gordon Pennycook. Excellent, I'll link to that, as well as your list of publications and this paper, in the show notes for anyone who wants to go do some additional reading. Good, yeah. Other than that, thank you so much for coming on the show, Gordon, this was really interesting. My pleasure. All right, take care. And until next time, I want to remind everyone to keep thinking skeptically of and with data. For more on this episode, visit dataskeptic.com. If you enjoyed the show, please give us a review on iTunes or Stitcher.