I recently gave a talk as part of the TEDx event at St. Lawrence University. In it, I discuss the ways in which we are susceptible to bullshitting ourselves. While we are careful and alert about the bullshit spread by others, the bullshit we tell ourselves is often even more effective, and more insidious.
In the wake of the white supremacist rally in Charlottesville, there has been a lot of discussion about free speech, and whether this right protects such groups. This debate is coming out of a larger one about harmful speech, particularly on college campuses. I struggle with this question (for reasons noted below), but find that the way this debate is popularly framed is misleading in two respects.
(1) It is not a debate between free speech advocates and opponents of free speech. Rather, it is a debate about what free speech means, and how this right is best implemented politically. Consider those who want to restrict the speech of hate groups. The argument for this position is that such hate speech limits the ability of other people in society to speak. Thus, limiting hate speech may actually *increase* the access to free speech in a society. The basic idea is that we should think about free speech as the ability to do things with words (like make assertions that others listen to), not to merely vocalize syllables. When others engage in hate speech, they help create an environment in which the oppressed are unable or unwilling to speak, because others do not listen, or out of fear for their safety. It is this latter idea which motivates the ACLU's recent decision not to defend armed protestors (their guns, it is argued, have a chilling effect on speech).
On the other side, defenders of greater legal protection for speech tend to argue that the legal right has to be absolute, otherwise it will be selectively applied in ways that favor the powerful. While this may mean permitting harmful speech, it would be more harmful, it is argued, to give those in power (who can do more to reinforce systems of oppression) the legal authority to silence the voices of the powerless.
It is a difficult issue, and one I struggle with myself. The important point, however, is that it is not a matter of supporting or opposing free speech. It is about what kind of society best promotes free speech.
(2) Following this point, it is not a matter of "offensive" speech. Taking offense is a fairly weak property. People can take offense at all sorts of things, including the innocuous and the true. There probably are white supremacists who marched at Charlottesville who would take offense at being called a "white supremacist." I hardly think that would be adequate grounds for denying my right to say so.
If, however, we understand the debate in terms of speech that silences or removes the voices of others from public discourse, we have firmer grounds for distinguishing truly harmful speech from that which merely offends someone. I'll be offended if you call me an idiot, but your doing so in no way reduces my ability to speak in our society (it is not systemic, and it is not part of any broader social structure that oppresses me; in fact, I am a beneficiary of social power structures).
As I said, I still struggle myself with these questions. What I think is crucial, however, is that we engage the actual points of disagreement, rather than a caricature of the positions.
My latest paper, entitled "Steering into the Skid: On the Norms of Critical Thinking," has just been published in Informal Logic. You can find it here, and on the research page.
In the paper I look at the implications of arguments from Gigerenzer and Mercier & Sperber that cognitive heuristics and biases are ecologically rational, that is, that they are rational when used under the right conditions. This might be taken as a challenge to teaching critical thinking skills, as such skills may not match the reasoning success of our informal heuristics. I argue that a sophisticated conception of the goals of critical thinking, namely, critical thinking as the metacognitive skill to recognize and implement the right cognitive strategies in the right circumstances, helps us to avoid this objection.
Comments and thoughts welcome!
Here's the cool thing about language: it is transparent in our everyday use, in that we use it so readily that we barely even recognize how amazing it is that we can express an infinite range of thoughts and be understood by others. We get a window into how interesting language really is when we encounter funny utterances, utterances which are funny in both the "interesting" and "amusing" senses of the word.
I just recently encountered two such funny utterances on some recent travels up to northern New Hampshire for a kayak trip. Looking more closely at each gives us insight into an aspect of the ways language works.
I heard the first on a podcast while driving home. On This American Life, when they are about to air a story that involves sex in some way (whether directly or obliquely), Ira Glass will provide a warning to the listener:
This story acknowledges the existence of sex.
This seems to me either a pragmatically paradoxical utterance, or an insufficient one. Let's suppose first that it is indeed the mere existence of sex that is problematic for some portion of the audience, and that some members of the audience might want to avoid knowing that sex exists, or more likely, they want to prevent someone else (e.g., children) from knowing sex exists.
The trouble, however, is that "acknowledges" is factive. A factive verb is one which presupposes the truth of its complement (the sentence that one acknowledges, in this case, "sex exists"). That is, to say that one "acknowledges that sex exists" presupposes that it is true that sex exists. To see this, compare the following sentences:
(1) This story acknowledges the existence of Santa Claus.
(2) This story acknowledges that the Mets won the 1986 World Series.
(3) This story acknowledges that the Mets lost the 1986 World Series.
Sentences (1) and (3) are unacceptable; they just don't sound right. They don't sound right because Santa Claus does not exist (sorry! content warning: this post acknowledges that Santa does not exist), and the Mets did win the 1986 World Series. You could say that someone "claimed that Santa exists," or that someone "discusses the existence of Santa Claus," but not that she "acknowledges" Santa's existence.
This is interesting, then, because the warning itself conveys the fact that sex exists. If the aim of the warning is to help the audience avoid this knowledge, then the warning is self-undermining!
Presumably, then, the intent of the warning is to help the audience avoid descriptions of sex that they might find inappropriate. In this case, however, the warning is insufficient - it does not properly warn you against what you might want to avoid, nor does it provide enough information to know whether the story is worth avoiding or not.
I suspect that this point is not lost on the producers of This American Life, and indeed may be a statement about the inanity of FCC requirements - but the reasons the warning is so funny make it an interesting case nonetheless.
The second case was a road sign that I encountered in a small northern NH town. It was a warning about the nearby presence of moose, with a sign under it:
Next 5500 feet.
Now, a mile is 5280 feet, and a mile would be a fairly standard unit of measurement on a sign like this. Since 5500 feet is so close to a mile, one might naturally expect the sign maker simply to round down to 1 mile, unless that additional 220 feet were significantly more likely to contain moose than the surrounding area. The sign struck me as funny precisely because it seems committed to this surprising claim about the significance of those 220 feet for moose-alertness.
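For the record, the arithmetic behind the oddity is quickly checked (the two distances come from the sign and the standard definition of a mile; the rounding is just illustrative):

```python
MILE_FT = 5280   # feet in one statute mile
SIGN_FT = 5500   # distance posted on the moose sign

# The surplus beyond a mile that the sign insists on.
extra_ft = SIGN_FT - MILE_FT

# How far off "1 mile" the posted distance really is.
miles = round(SIGN_FT / MILE_FT, 3)

print(extra_ft)  # 220
print(miles)     # 1.042
```

That is, the sign maker chose to flag roughly a 4% difference from a mile - a difference no driver could perceive.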
But why does it seem committed to this? The philosopher H.P. Grice has an answer. He was interested in content that we communicate beyond the literal content of what we say. Sarcasm provides a ready example - if I remark of some hideous shirt, "oh wow, that's an awesome shirt" I have literally said that the shirt is awesome. However, it might be clear from tone and context that I am communicating, or implicating, the opposite of what I literally said.
How does my audience know what I am implicating in a given context? Well, Grice thinks we enter into conversation with the mutual understanding that we will cooperate, and that we follow a few basic rules:
- Maxim of Quantity: make your contribution as informative as is required.
- Maxim of Quality: do not say what you believe to be false, or that for which you lack evidence.
- Maxim of Relation: make your contribution relevant.
- Maxim of Manner: make your contribution clear.
Suppose I tell you that the shirt is awesome. This, given the clear fact that the shirt is hideous, is false and so in violation of the Maxim of Quality. You, however, are assuming that I am cooperating. Grice's insight is that you will then assume I have violated the maxim on purpose, and that you should infer I actually mean that the shirt is awful.
Returning, then, to our road sign - it seems to violate the Maxim of Quantity. The use of a more fine-grained measurement scale (feet), when it seems like a broader one (miles) would do, seems to provide more information than is really required. Such information is not required for the simple reason that, when driving down the road, I cannot tell the difference between the passage of 5280 feet, and 5500 feet. So why, then, did the author of the sign choose feet? The answer, if I am following Grice, would seem to be that the author wants me to recognize that more specific information is actually required - that those extra 220 feet really do matter. This would bring the author's statement on the sign back in line with the Maxim of Quantity - exactly what I should want if I assume that s/he is being cooperative.
The actual explanation is probably not that the author wants me to pay extra close attention to those 220 feet. As with the first case, what I find interesting about the example is the way in which it illuminates a fascinating fact about language. In the first case, it was the way factive verbs work. In this second case, it is Grice's account of how we communicate content we do not literally say. These are features of language that we use like second nature, and we only become aware of them when they become funny enough to rise to our attention.
"Bullshit is a greater enemy to the truth than lies are." So writes Harry Frankfurt, in his essay "On Bullshit". I've had cause to think about this line in light of the Presidential campaign of Donald Trump, and the way that bullshit and lying can become deeply entwined for the serial bullshitter. Perhaps it is worth first taking a step back and looking at Frankfurt's distinction between bullshit and lying.
Lying is intentional deception. The liar needs to know the truth, and deliberately speaks falsely in order to bring about a belief in that falsehood. The bullshitter, by contrast, does not know or care whether what s/he says is true or false. Both the liar and the bullshitter care about the effects of their words, but the liar possesses knowledge of the truth, while the bullshitter ignores it entirely.
At first glance, it might seem that this makes lying the worse activity. After all, the liar knowingly disregards the truth; it is an intentional act, and it is at least possible that the bullshitter might speak the truth. Not so, argues Frankfurt.
Both in lying and in telling the truth people are guided by their beliefs concerning the way things are. These guide them as they endeavor either to describe the world correctly or to describe it deceitfully. For this reason, telling lies does not tend to unfit a person for telling the truth in the same way that bullshitting tends to. Through excessive indulgence in the latter activity, which involves making assertions without paying attention to anything except what it suits one to say, a person's normal habit of attending to the ways things are may become attenuated or lost.
While instances of lying may be more harmful than instances of bullshit, lying does not undermine the value of truth in the same way that bullshit does. The liar has to know the truth in order to lie, and so is required to remain in the habit of caring about the way the world really is. The bullshitter, however, consistently undermines the value of the truth. Not only are particular truths not important to the bullshitter, but the truth itself is not important.
I was thinking about the passage quoted above when listening to some recent remarks by Donald Trump. The first was his claim that he had never met Putin, which is clearly at odds with his remarks that he had. The second was his claim that the NFL sent him a letter complaining about the debate schedule, which the NFL immediately denied. In the latter case, Trump is clearly lying, and in the former, at least one of those remarks is a lie.
What's interesting about these lies is the way in which they differ from what you might call "normal political lies." They are not of great political importance, they are made by Trump directly (rather than through intermediaries or third parties), and they are easily checked factual assertions. The use of such lies is one of the factors that has led Ezra Klein to call this a campaign between a normal politician and an abnormal one.
Frankfurt gives us the diagnosis. Trump has shown an ease with bullshit, particularly in the promise of "deals" that will solve America's problems. Frankfurt himself looked at Trump's use of lies and bullshit, and a simple search for "Trump bullshit" brings up several examples of writers applying Frankfurt's analysis to Trump. What I think has gone underappreciated, however, is the way in which bullshitting makes lying all that much easier.
Trump's casual lying in the Putin or NFL cases, which has baffled political observers (particularly since avoiding the "gaffes" seems so easy), is symptomatic of a general disregard for the value of truth. While there is an important conceptual distinction between lying and bullshitting, the latter makes the former easier. The liar recognizes the value of truth, but values it less than what s/he will achieve through the lie. A politician, for example, might think that winning an election is more important than being truthful, and so, is willing to deceive others into believing a falsehood. As bullshit diminishes the value we place on truth, it is more easily outweighed by other values. Let's call it the "bullshit death spiral" - the more you bullshit, the easier it is to lie and to bullshit, ad infinitum.
This is not to say that we should demand "normal political lies" in place of the more brazen Trump-lies. It is to say, however, that we should be mindful everywhere of how bullshit undermines the value we place on the truth. While, in normal cases of political misrepresentation and lying, we value victory over the truth, the ubiquity of "fact checkers" suggests that we do still value the truth. The age of bullshit, of which Trump is the apotheosis, threatens a fundamental, though perhaps fragile, value.
A year ago, Christopher Long gave a talk at St. Lawrence, arguing in favor of integrating contemporary social media (and in particular, Twitter) into the classroom. During the talk, the audience was encouraged to tweet, and I hit my yearly twitter quota in an hour and a half. By the end, I tweeted out that I was convinced to tweet more by an argument based on Aristotle (what better illustration of what it is to be an academic could one ask for?). My recall of the details of the argument is fuzzy, but the crucial consideration was this: social media is becoming one of the primary platforms through which we engage with the world. If we care about being responsible citizens in the political, ethical, and cultural conversations of our age, then we should care about using social media.
I was moved by this argument; at least, I was moved enough to agree with it without doing anything differently. In looking back at my twitter feed, I can't help but notice a total of 11 original tweets since then. This is by far my most active social media account. What happened, and, should I be moved by arguments in favor of greater digital engagement?
Before turning to prescription, it's worth starting with the diagnosis. Part of my reluctance to engage with social media is simply habitual - I rarely have the thought "I should tweet about this!" Habits, however, are changeable if one cares to change them. Were there other considerations, ones which might weigh against arguments in favor of an increased digital presence? I suspect that there are two operating in my own case.
The first is a wariness of the placating effect of online posting (slacktivism). That is, social media posting may make one less likely to act on the expressed sentiments. The second is a worry about thinking in public in a medium which lasts essentially forever. I change my mind often, and if I have learned anything from spending my adult life among academics, it is that just about every topic is more complex than it first appears.
Neither of these concerns is compelling; upon reflection, they sound more like excuses than justifying reasons. Two considerations count against the first worry. The most obvious reply is that the possibility of this happening does not thereby diminish the value of doing both. More substantively, however, digital engagement is a type of action. What's more, as an academic, it is likely the type of public contribution to discourse that I am best suited to make. If anything, I should be more inclined to use these new platforms, for it is in working through ideas that I am best prepared to contribute to the world.
The second concern is more interesting. We are often encouraged not to share underdeveloped ideas for a few reasons. One is that we may be wrong, and be made to look foolish. Another is that, especially for an academic, our ideas might be "scooped" and developed by someone else. Or, one might be worried that expressing in-process ideas shows too much process, and the muddy work that goes into developing ideas might reflect poorly on you as a scholar.
There is something to this second concern. Ideas on the internet are long-lived, and never truly lost. Further, the rapidity and ease with which we castigate people for ideas speaks to a real discomfort with people changing their minds or growing in their views.
It is this attitude, however, that speaks in favor of developing our ideas in public. Showing our students, and showing the public, what it looks like to develop an idea has a range of benefits. It shows that ideas do not fall fully formed into the laps of geniuses, a pernicious conception of intellectual labor that suggests good thinking is the province of those with a gift, one that you either have or don't. It also shows the public that good ideas require false steps. Without recognizing this, it is too easy to see public discussion as a matter of choosing sides - a model where advocates of different positions try only to get the undecided onto their "team," without engaging each other. Further, it demonstrates that we come to our ideas by taking the evidence for and against them seriously. We are not born into our political positions and identities; we come to them through an honest and, at times, arduous search for the truth.
In reflecting on this, it strikes me as resting on a fairly obvious point, one Long made forcefully - the arguments in favor of public thinking through social media are the same arguments in favor of engaging the world as a public intellectual. It is the platforms that have changed, not the underlying arguments.
A year ago, I made the judgment that I should better engage using the digital tools of modern social media and the like. Today, I find these arguments, if anything, even more persuasive than I did then. This post then, is a public commitment to public writing.
In a recent paper, I explore the ways in which critical thinking pedagogy can address cognitive bias. You can find the paper here:
Comments are always welcome!
Today I was reading a recent paper by David Chalmers entitled "Intuitions in Philosophy: A Minimal Defense," which you can find here. It's a reply to a recent book by Herman Cappelen which argues that, contrary to a recent interpretation of philosophical methodology particularly prevalent in experimental philosophy, philosophers do not rely upon intuitions in their arguments. Cappelen's basic strategy is largely a negative one: he tries to show by analysis of the language philosophers use, and the arguments philosophers employ, in several case studies, that intuitions are not at play. Chalmers has a rather interesting reply to one of Cappelen's arguments that I want to briefly take up here.
Cappelen, when examining these case studies, classifies them as not appealing to intuition in cases where the author makes an argument for the claim purportedly supported by intuition. That is, if the author has an argument that p, then that p is not supported by appeal to intuition. I offered a similar argument in my dissertation by pointing to the arguments that Putnam offers in the Twin Earth thought experiment. This seems obvious enough at first glance, since one would think that the intuition would only be at play if no argument was possible. If an argument is available, why bother with an intuition?
On Chalmers' analysis of intuitions, an intuition has dialectical non-inferential justification. That is, we need not make any claims about the actual epistemic justification of the intuition, but rather we look at what justifications are accepted for that claim in argument. For intuitions, they are accepted without inferential justification. This, however, only means that they do not require inferential justification. It is entirely consistent that a claim might have both non-inferential justification and inferential justification. If so, then we have to deny the general principle that if an author argues that p, then that p is not supported by an intuition.
For example, the Gettier case includes both an argument and an intuition. We can see that they come apart in the further reporting of the Gettier case in secondary literature, where the argument is often dropped and only the intuition remains. Indeed, the same could be said of the Putnam case (it is why I thought it an important contribution to even point out the Putnam argument at all!). The later authors have taken the intuition as non-inferentially justified. If they did not, then they would have maintained the inferential justification. Perhaps they were wrong to do so (the question of epistemic justification), but from the point of view of methodological analysis, it is clear that they did accept the intuition.
I think Chalmers is making an important point here, and it reveals an aspect of Cappelen's analysis that didn't sit right with me either. Even if the arguments he looks at can be best understood as not requiring intuition, this is a different claim from the one that intuition has not played a central evidential role in the field as it has actually progressed. Indeed, I have argued that Kripke's arguments can be interpreted as not relying on intuition at all, but it does not follow that Kripke himself is not using his own intuitions in making the case.
However, I think caution is needed with Chalmers' position as well. One reason for endorsing Cappelen's principle is the Principle of Charity. If we can interpret an argument as not relying on intuition, when those intuitions are epistemically problematic, then we ought to do so. Perhaps others have made the dialectical move of taking the intuition on board, but this might not just be epistemically unjustified, but unjustified as a matter of interpretation as well. Interpreting involves balancing the competing demands of textual fidelity and charity. We can read Cappelen's principle as tacitly assuming that it is uncharitable to ascribe an appeal to intuition when not necessary.*
This is important because we should also be cautious with using the secondary literature as evidence here. Intuitions are evocative and easier to understand than arguments. The author might be using them for the purposes of illustrating the position she is considering, rather than as justifying it. At the least, care is required when looking at the secondary literature to see if the author is actually taking it on as a dialectical assumption rather than using it for more pragmatic purposes.
* This might be more problematic for Cappelen than I am giving credit for here. His intention is not to evaluate the methodology, but strictly to interpret it. He might want to avoid taking on the assumption that intuitions are epistemically suspect.