News

Inspirational Quotes and Intelligence Study

No, a study didn't determine people who post inspirational quotes on Facebook are dumber than you.

Published Dec. 4, 2015

[green-label]NEWS:[/green-label] In late 2015, multiple websites posted articles about a paper titled "On the Reception and Detection of Pseudo-Profound Bullshit" [PDF], published in the journal Judgment and Decision Making on 6 November 2015.

Roughly a month passed before the media took note of the paper. Jezebel, Vox, Mic, Gizmodo's Factually, and Forbes were among the outlets that packaged the paper's findings for social media users to share (and tacitly mock friends whose posting habits resembled those supposedly examined in the study). Jezebel's framing exhibited a trait common to many articles about the paper: titled "Do You Love 'Wise-Sounding' Quotes? Surprise! You're Probably Dumb," it held that folks who favored sharing images with quotes superimposed on them were quantifiably less intelligent than those who did not:

Is half of your social media presence comprised of inspirational quotes superimposed over images of babbling brooks or beautiful mountain ranges? Are you brought to tears when reading quotes like, “Hidden meaning transforms unparalleled abstract beauty.” Well, I hate to be the one who says this, but you might be a big ol’ dummy.

That implication was an exceptionally common one. Articles about the research featured titles like "People Who Post Inspirational Bullshit on Facebook More Likely to Have Lower Intelligence" (Mic), "People who share inspirational quotes on Facebook are less intelligent, believe in conspiracy theories: Study" (IBNLive), "People Who Post Inspirational Facebook Quotes Are Morons, According to Science" (Maxim), "People who upload inspirational quotes are dumb: study" (India.com), "People posting inspirational quotes on Facebook actually dumb: Study" (Deccan Herald), and "People who post inspirational quotes on Facebook are actually idiots, study says" (BGR).

But the original study had nothing to do with the liking and sharing of quotes superimposed on images, and a number of articles about the paper quoted secondary sources (not the original article) to describe the purported results. The abstract for "On the reception and detection of pseudo-profound bullshit" explained its scope in a manner that differed meaningfully from subsequent media interpretations of it:

Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.

As the abstract explained, the research focused on "pseudo-profound" remarks (not quotation-based memes). The original paper went on to define the term "bullshit" as it applied to the findings:

In On Bullshit, the philosopher Frankfurt (2005) defines bullshit as something that is designed to impress but that was constructed absent direct concern for the truth. This distinguishes bullshit from lying, which entails a deliberate manipulation and subversion of truth (as understood by the liar).

After laying out a working definition of bullshit, the paper continued by noting:

There is little question that bullshit is a real and consequential phenomenon. Indeed, given the rise of communication technology and the associated increase in the availability of information from a variety of sources, both expert and otherwise, bullshit may be more pervasive than ever before. Despite these seemingly commonplace observations, we know of no psychological research on bullshit. Are people able to detect blatant bullshit? Who is most likely to fall prey to bullshit and why? 

Even if the authors intended their definition to be novel or newly applied, it seemed unlikely that there was no psychological research on subjects' reactions to or interactions with hollow platitudes. The varying genres of bullshit (including the type specifically cited by the authors) were addressed in the second portion of the paper, which mentioned the future of type-specific bullshit research:

We focus on pseudo-profound bullshit because it represents a rather extreme point on what could be considered a spectrum of bullshit. We can say quite confidently that the above example (a) is bullshit, but one might also label an exaggerated story told over drinks to be bullshit. In future studies on bullshit, it will be important to define the type of bullshit under investigation (see Discussion for further comment on this issue).

In the discussion portion, the authors maintained their "findings [were] consistent with the idea that the tendency to rate vague, meaningless statements as profound (i.e., pseudoprofound bullshit receptivity) is a legitimate psychological phenomenon that is consistently related to at least some variables of theoretical interest." The research was broken into four studies involving a total of approximately 800 subjects; participants for the first study were recruited from among University of Waterloo undergraduates, and from Amazon's Mechanical Turk worker pool for the remaining three. The authors described how the latter group's responses were screened:

Twenty-three participants were removed because they responded affirmatively when asked if they responded randomly at any time during the study. Twelve participants failed an attention check question but were retained as removing them had no effect on the pattern of results.

In their conclusion, the authors asserted that they had uncovered correlations between participants' receptivity to bullshit and a number of cognition- and belief-related variables:

Most importantly, we have provided evidence that individuals vary in conceptually interpretable ways in their propensity to ascribe profundity to bullshit statements; a tendency we refer to as “bullshit receptivity”. Those more receptive to bullshit are less reflective, lower in cognitive ability (i.e., verbal and fluid intelligence, numeracy), are more prone to ontological confusions and conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine. Finally, we introduced a measure of pseudo-profound bullshit sensitivity by computing a difference score between profundity ratings for pseudo-profound bullshit and legitimately meaningful motivational quotations. This measure was related to analytic cognitive style and paranormal skepticism. However, there was no association between bullshit sensitivity and either conspiratorial ideation or acceptance of complementary and alternative medicine (CAM).

Unsurprisingly, the paper spread like wildfire once it was picked up by social media users. A large number of readers latched on to the "inspirational quotes are for dummies" angle, while skepticism and science pages banged the anti-woo drum. However, not all interpretations of the research were favorable (even among the latter camp). For example, a commenter on a forum for atheists observed that the authors' findings hinged on the flawed assumption that such platitudes were definitively meaningless:

I do not know what I find more disturbing: the paper itself or the uncritical receptivity to it[s] findings. The authors seem trapped within their own literalist worldview. Some of their examples of bullshit are either 1) pretentious ways of conveying more simple ideas or 2) propositions heavily-laden with connotations. Take for example the Chopra quote they consider representative of bullshit:

"Attention and inattention are the mechanics of manifestation."

The authors classify the above as bullshit. It is not. Even though I disagree with the quote, I can see how it captures an idea that fits within the Idealist worldview of Chopra. It's just another way of saying that the world is your mirror. In effect, we construct our personal reality from things we notice and care about. That is actually a well known psychological fact, one that Chopra, mistakenly in my opinion, raises to a metaphysical fact. That's strike one against the paper.

The ambiguity of the phrases actually allow them to point to a wider range of possible content. Interpretation of highly connotative phrases depend more on context than narrow denotative ones.

Objections to the authors' strict interpretation of ambiguous criteria were not uncommon. In a lengthy assessment, ScienceBlogs commenter Sadmar called the paper "particularly awful" and described a number of apparent flaws in the research structure:

The [authors] refer to the test statements in the first study as "meaningless" yet offer no theory of "meaning" and make no references to scholarly work on how "meaning" works. The ten test sentences are simply assumed to be "meaningless" because they "have syntactic structure but consist of a series of randomly selected vague buzzwords." However, with the Chopra generator, for example, the relatively small size of the vocabulary base, and the algorithm for valid syntax, will sometimes combine to assemble vague buzzwords into statements that aren’t meaningless at all, but rather 'polysemic' or 'open texts' — that is, depending on what readers bring to the table, they can mean several different things. A statement that lacks one precise meaning is hardly empty of meaning.

In the study, none of the statements are given any context whatsoever. Several of them strike me as the kind of claims that would actually make sense in some context that has defined the terms being used, and given examples for which the claims are some sort of summary conclusion, though not to be taken literally, but allowing for 'poetic license'.

There are no controls whatsoever applied in the research design. The subjects are led to believe that the statements were authored by human beings, and appeared on sites relevant to 'experiencing the profound.' They are directed to think about meaning — encouraged to 'read in' if you will. Given the institutional framework, the effect of assumed researcher authority, observation effects, etc., we don’t know whether the score any individual subject assigned to a statement reflected whether they personally found the statement profound, or whether it reflected a judgement of 'this is the kind of statement to which profundity is generally attributed by people who are smarter than I am.'

Over on Reddit's r/science, a commenter succinctly voiced skepticism of the findings:

Surprisingly enough, this paper, its authors, and the journal it was published in all appear to be real, but I'm reserving judgment on the question of whether it's an elaborate prank on the skeptic community.

There was no clear consensus on whether the authors were sincere or the findings truly credible, particularly among the communities to which the paper most directly spoke: science and skepticism. Commenters raised reasonable objections to the way "meaningful" and "meaningless" statements were operationalized as a basis for the research's conclusions, along with observations about how the design of the research may have influenced participants' responses.

"On the reception and detection of pseudo-profound bullshit" appeared to be a genuine paper, legitimately published in the journal Judgment and Decision Making in November 2015. The value of its assertions remains a matter of debate, but it did not suggest that those who share inspirational images and quotes are in any fashion less intelligent than their Candy Crush-ing peers.

[article-meta]

Kim LaCapria is a former writer for Snopes.