The Art of the Argument: Stefan Molyneux’s Book Reviewed on LGP

(Note: co-posted, with the necessary modifications, as a product review.)

Stefan Molyneux, The Art of the Argument: Western Civilization’s Last Stand (Kindle Edition, Amazon Digital Services LLC: August 27, 2017). 172 pp. / 299 KB.

This book was panned on Brian Leiter’s philosophy blog (which is best when he stays away from politics), and since both Stefan Molyneux and I are independent writers / scholars, I had to see for myself. I’m sad to have to report: the negative reviews are correct. Worse: the author is a known libertarian YouTuber and noted critic of all things left wing and politically correct. Mediocre white guy alerts are therefore going up all over the Web.

The problem is, Molyneux has embarrassed himself with this ebook.

From the first page: “The first thing to understand is that The Argument is everything. The Argument is civilization; The Argument is peace; The Argument is love; The Argument is truth and beauty; The Argument is, in fact, life itself.” Caps in the original, and every appearance of the phrase The Argument is emboldened.

Such writing marks its author as an amateur, possibly one grinding axes instead of communicating information or educating. The two aren’t necessarily mutually exclusive, but they are here.

What is an argument? What I used to tell students (working off several textbooks): “An argument is a set of statements, at least one of which, called the premise(s), is offered as evidence for another statement called the conclusion.” This opens the door to discussions of statements, terms, concepts, and definitions, all of which are good to have before we get to the purposes of argument.

Molyneux: “An argument is an attempt to convince another person of the truth or value of your position using only reason and evidence,” which conveys the right idea but is technically loose. As it turns out, technical looseness often descends into sloppiness and sometimes into incoherence.

Molyneux divides arguments into “truth arguments” and “value arguments” for reasons unclear to me, because if the argument is any good it will exhibit a logically correct structure in either case. I question, therefore, whether we need to say this: “A truth argument will tell us who killed someone. A value argument will tell us that murder is wrong. Truth arguments are the court; value arguments are the law…. A truth argument can establish whether arsenic is present in a drink. A value argument can convince you not to serve it to someone.” Such bizarre and sometimes demented illustrations permeate Molyneux’s tract.

More worrisome to me, given that Molyneux is likely to have a readership much larger than any professional logician with a textbook, is that The Art of the Argument is riddled with mistakes students who read it and then enroll in a logic class will have to “unlearn.”

He properly distinguishes deductive from inductive arguments, provides the standard example of the former ((1) “All men are mortal” (2) “Socrates is a man” (3) “Therefore Socrates is mortal”), and then delivers this cringeworthy explanation: “Given that premises one and two are valid, the conclusion – three – is inescapable.”


Molyneux has just confused truth and validity! Statements are true or false. They are never valid or invalid. Deductive arguments are valid or invalid; they are never true or false. Validity is a function of deductive structure, not content (the information in premises and conclusion). Getting students to grasp this difference is every logic instructor’s first challenge. If a deductive argument has a valid structure and true premises, moreover, it is called sound. Following a foray into premature attacks on relativism and socialism – premature because the groundwork for such arguments has not yet been laid – Molyneux botches soundness as well: “If I say (1) All men are immortal, (2) Socrates is a man, (3) Therefore Socrates is immortal; then the structure remains logically sound.” This is actually a good example of an unsound argument, because it has a valid structure but a false premise.

In other words, Molyneux does not appear to grasp the difference between validity and soundness. This is in a section entitled “The Difference between ‘Logical’ and ‘True.’”

He does seem to grasp, roughly, the outlines of the difference between deduction and induction when he states that deduction is about certainty and induction is about probability. What deductive arguments supply is logical closure: if the premises are true, the conclusion must be true. Hence the sense of “inescapability.” He says: “Getting most modern thinkers to accept the absolutism of deductive reasoning is like trying to use a nail-gun to attach electrified jell-O to a fog bank.” Huh? Who? Unfortunately his book has no account of what makes an argument valid, and he introduces myriad examples involving potentially confusing propositions with forms like “Only x are y” when he hasn’t even introduced the Aristotelian Square of Opposition (All S is P, No S is P, Some S is P, Some S is not P are the basic standard forms) — a staple of every book introducing logic.

Inductive arguments establish their conclusions only to some degree of probability (which may be very high). It is therefore true, as Molyneux says, that inductive reasoning “deals more with probability than with certainty.” It is not true that all inductive reasoning “attempts to draw general rules from specific instances.” Generalizations do this, but inferences to the next case proceed from a collection of known instances to, well, the predicted next instance. Arguments from analogy proceed from case to case: because a known case is similar to an undecided one in specific ways (can be compared to it), it is probably also similar to the undecided one in some additional respect.

Nor is it true that “deductive reasoning goes from the general to the specific.” Sometimes it does, sometimes not. It can go from universal premises to a universal conclusion ((1) All men are primates. (2) All primates are mammals. (3) Therefore all men are mammals.) Or it can go from a universal-particular combination of premises to a particular conclusion. ((1) All politicians are liars. (2) Some Democrats are politicians. (3) Therefore some Democrats are liars.) And so on.
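These patterns can even be checked mechanically. As a toy illustration (my own sketch in Python, not anything from the book), brute-forcing every interpretation of three predicates over a tiny domain confirms that the first syllogism’s form has no countermodel, while a superficially similar but invalid form does:

```python
from itertools import product

DOMAIN = range(3)  # a three-element domain is enough for these examples

def models():
    """Every way of assigning three predicates over the domain."""
    for bits in product([False, True], repeat=3 * len(DOMAIN)):
        yield (bits[0:3], bits[3:6], bits[6:9])

def all_sub(p, q):  # 'All P are Q'
    return all(qx for px, qx in zip(p, q) if px)

def some_sub(p, q):  # 'Some P are Q'
    return any(px and qx for px, qx in zip(p, q))

def is_valid(premises, conclusion):
    """Valid = no model makes all premises true and the conclusion false."""
    return not any(
        all(pr(a, b, c) for pr in premises) and not conclusion(a, b, c)
        for a, b, c in models()
    )

# 'All men are primates; all primates are mammals; therefore all men are mammals'
print(is_valid(
    [lambda m, p, ml: all_sub(m, p), lambda m, p, ml: all_sub(p, ml)],
    lambda m, p, ml: all_sub(m, ml),
))  # True: no countermodel exists

# 'All A are B; some C are B; therefore some C are A' -- an invalid form
print(is_valid(
    [lambda a, b, c: all_sub(a, b), lambda a, b, c: some_sub(c, b)],
    lambda a, b, c: some_sub(c, a),
))  # False: a countermodel exists
```

Validity here is defined semantically, as the absence of any countermodel; for these particular syllogistic forms, a three-element domain suffices to expose the invalid one.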

Don’t expect any of these specifics from Molyneux, nor any discussion of what happens when either a deductive or an inductive argument has gone awry. He doesn’t seem to understand the difference between formal fallacies and arguments with false premises.

Sometimes he is clear as mud, as when he states, “There is another category called abductive reasoning that draws a tentative hypothesis from disparate data, but which is related to some sort of testable hypothesis, rather than the reaching of a specific conclusion.”

For the record, here is my paraphrase of philosopher C.S. Peirce’s account of abduction (he coined the term): “Puzzling phenomenon P is observed. If H were true, then P would follow as a matter of course. Hence there is some reason for believing H to be true.” There’s work to be done, such as identifying what makes P “puzzling” and explaining “follow as a matter of course,” but it’s a start!
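Peirce’s schema lends itself to a computational caricature. The following sketch is my own (in Python, with invented hypotheses and made-up likelihood numbers): it ranks candidate hypotheses by how strongly each would make the puzzling observation “follow as a matter of course.”

```python
# Toy sketch of abduction: given a puzzling observation, prefer the hypotheses
# under which the observation would be unsurprising.
# All hypotheses and likelihoods below are invented for illustration.

def abduce(observation, hypotheses):
    """Return hypotheses that would make the observation expected,
    ranked by how strongly they predict it."""
    candidates = [
        (h, pred[observation])
        for h, pred in hypotheses.items()
        if pred.get(observation, 0.0) > 0.5  # H makes P 'a matter of course'
    ]
    return sorted(candidates, key=lambda hp: hp[1], reverse=True)

# Puzzling phenomenon P: 'the lawn is wet this morning'
hypotheses = {
    "it rained overnight": {"lawn wet": 0.95},
    "the sprinkler ran":   {"lawn wet": 0.90},
    "a meteor shower":     {"lawn wet": 0.01},
}
for h, strength in abduce("lawn wet", hypotheses):
    print(h, strength)
```

This is only “some reason for believing H,” as Peirce says: the sketch filters and ranks explanatory candidates, it does not prove any of them.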

Sadly, this book is riddled with so many errors and such imprecision that we might as well stop here. Professionals are unlikely to finish it, giving up well before they get to Molyneux’s libertarian arguments. This is unfortunate, because I think Molyneux means well (hence two stars instead of just one). He favors using reason to persuade instead of using threats of force to coerce or intimidate. So do I. He wants to save Western civilization from its enemies, some of whom hang out in academia. So do I. His book appeals to reason and inveighs against every form of leftist political correctness, every sort of irrationalist postmodernism and radical academic feminism, every blithe assumption that those in authority know what’s best for the individual, and other mistakes of the past half century.

But if we’re going to undermine these with solid logic, we have to lay its groundwork correctly. There are sections here on definitions — introduced too late to do any good. There is a reference near the end to the importance of identifying fallacies but no careful and systematic discussion of them: ad hominem attacks, appeals to authority, appeals to pity, ad baculum or threats including ostracism, red herrings, equivocations, arguments from ignorance, and so on, which are central to any tract on logic or critical thinking and would have been invaluable here.

Nor is there any discussion of statistics or statistical fallacies which would also have been invaluable in combating sloppy social policy. There is an abundance of discussion of philosophy vs sophistry: the philosopher wants to find and communicate truth, says Molyneux rightly; the sophist wants control and will use lies or BS to obtain it. The former studies reasoning. The latter studies people (presumably psychology). The former makes his/her case. The latter uses language to confuse and manipulate. There are plenty of appeals to the connection between proper thought and reality, which is unhelpful since in the areas Molyneux is most concerned about, social issues and matters of economic organization, there is massive disagreement about what the facts are and therefore what reality is.

I’ve the impression that Molyneux learned more logic from participating in debates than from actual study of texts and available literature written by competent logicians. His book is therefore far more a defense of free market economics and libertarian social philosophy than a treatise on logic; maybe that’s why it’s marketed here as political philosophy instead of logic. I’m sympathetic to many libertarian ideas, but I would not want to claim that they are panaceas, universally established by pure logic and empirical evidence (where, for example, do we actually see a functioning libertarian society of any size, enabling us to confirm that the libertarian self-regulating free market really does work in practice?). I don’t think one has to be a leftist or an egalitarian to deny that Western societies are in some sense meritocracies, or could be, and therefore I can wonder whether certain government actions on behalf of, e.g., those who are poor or infirm through no fault of their own are requirements of a moral social order. This has been the prevailing opinion in modern thought, long predating the present PC mindset, sometimes argued for at great length, and if one expects to be taken seriously one has to grapple with the arguments for it on their own terms. Also, does anyone truly believe that without legal mandates corporations would be forthcoming about the actual (not merely possible) dangers of their money-making products? (Think: cigarettes, or the less extreme case of mildly addictive additives in unhealthy processed foods.)

Is this nitpicking? I don’t believe so. My response is that if you’re going to undertake a project like this, don’t do it half-assed! I write as an independent writer/scholar myself, someone who walked away from academia because of the prevalence of many of the superstitions enumerated above and how they’ve corrupted the entire enterprise. One thing I’ve learned: because of the prejudice of those “on the inside,” we are held to higher standards than they often hold themselves to (and I’m not saying the professionals haven’t written some stuff that is absolutely awful), and so we must hold ourselves to high standards or we might as well not bother. Sometimes it gets frustrating, and there are temptations to cut corners with, e.g., appeals to “common sense” that might not be shared by others. These need to be resisted.

To the best of my knowledge, a book that starts with the basic foundations and principles of logic and critical thinking, proceeds through definitions and fallacies in a logical sequence, and arrives at the kind of groundwork that would successfully take down the above academic superstitions in the public square has yet to be written. I am not sure how many academics these days would be motivated to try. Be that as it may, this book is not it.


Analytic Philosophy: An Informal Defense (and a Modest Criticism)

This is a post dealing with a few basic issues in contemporary philosophy, issues easily lost sight of even by some trained professionals depending on their inclination.

History discloses four major traditions, or more precisely, methods, of doing philosophy. There is systematic or speculative philosophy — the tradition represented by major pivotal figures such as Plato, Aristotle, Aquinas, Descartes, Kant, Hegel, and the Whitehead of Process and Reality. It tries to provide an account of reality and everything in it, including where we fit in, what is of value in human life, and the moral rules or principles by which we should guide our conduct, all integrated into a single consistent system. Then there is analytic philosophy, which developed very slowly out of a sense, long predating, e.g., Frege, that there was need for logical clarification of the questions philosophers ask and the language used to answer them. You can find plenty of hints of such in Leibniz and Wolff on the Continent, predating Kant, and you’ll find similar moments in Locke and Hume in the English-speaking world. There are figures, such as the early Wittgenstein, who cross-pollinate the two traditions: his Tractatus Logico-Philosophicus is systematic but tries to draw limits to thought by exhibiting the limits of the logic of our language.

(The other two schools are, of course, the existentialist-phenomenological tradition: philosophy should describe the human condition, which may mean writing novels as Sartre and Camus did or mean producing detailed analyses of lived experience as phenomenologists beginning with Husserl did; and Marxism / Frankfurt School thought: philosophers have tried to describe the world; the point is to change it.)

My topic here is analytic philosophy. Some say most of it is trivial, or at least inconsequential. While some analysts assuredly go overboard with the logic-chopping, I beg to differ: if we give analytic philosophy a chance and consider its potential to make a difference in our understanding of language, we find that its approach to philosophical method is of the first importance.

What’s important about analytic philosophy is the sense, which one does not have to be a trained philosopher to appreciate, that it is frequently important to clarify what a question is asking, or what a conclusion is really asserting, as a condition for knowing whether one has gotten anywhere with an inquiry. This is surely true of the traditional problems of philosophy. The difference of methods is the difference between an assertion that God exists, perhaps backed up with a standard argument, versus asking: What does the term ‘God’ mean? Does it mean the same thing to everyone? Or between being asked “Are you free?” and realizing the need for clarity: “What sense of ‘free’ are you talking about?” Even asking the far more specific “Is your will free?” gets into trouble, absent a clear sense of what the question is asking and what a good answer to it looks like.

So analytic philosophers have seen their job as stepping back from the Big Questions and asking what they mean, through close attention paid to the language one uses to ask them and to defend specific answers to them. If we cannot get clear about what it is we are talking about, in these cases or many others, then we can hardly expect to achieve results that anyone will agree on or find useful. If there is insufficient attention paid to language, then interlocutors who disagree will continue to talk right past one another … which, of course, they might do anyway, but for other and less savory reasons than mere linguistic or intellectual confusion.

So by ‘God’ most of us mean the God of Christianity, or of the Bible: a Unique Being, uncreated, existing outside of space and time as we experience and understand them (which may be Western constructs in any event), all-powerful and able to suspend causality to perform miracles, all-knowing, His nature manifesting both perfect logicality and perfect moral goodness, and perhaps more. All these concepts may (do) stand in need of further analysis, but with this sense of what we are talking about, we have material to work with (which may, of course, be old hat … or not). For there is also, some will note at once, the Creator invoked by deists, who doesn’t necessarily have all the above characteristics, as he doesn’t intervene supernaturally in the world. Analytic philosophy of religion can clarify such differences and perhaps contribute to the discussion of why they might matter for such policies as the separation between church and state.

With freedom of the will, we find ourselves exploring issues dating back at least to Hume’s and Kant’s time, such as whether it means action taken outside the causal structure of the universe: free actions being the set of actions for which what I want to do is the sole determinant. This, of course, raises a number of questions, such as: if what I want to do is determined by nothing outside of itself, then how does it avoid being completely arbitrary? One answer is that there are influences but not determinants on what I want to do, which can only be identified in the case of specific, concrete actions. Influences, unlike determinants, can conflict with one another … as one knows if one has had a decision to make and been caught between conflicting impulses. There are many other answers as well; although it’s a separate discussion because there are many senses of free, I would argue that it is confusing and misleading to bifurcate freedom of the will and influences (even determinants) on our actions as if the difference were absolute.

Analytic philosophy did more than explore conundrums like these, of course. While there continued to be healthy explorations of the traditional questions of philosophy and an insistence that they be given answers that made logical sense (and, for positivist and logical empiricist sub-schools of analytic philosophy, that they be fully consistent with the pronouncements of the specialized sciences), there were also explorations of philosophical method itself … including whether philosophy could have a method of its own, resolving special logical-mathematical-set-theoretical conundrums such as the Liar’s Paradox (“I am at this moment lying to you; do you believe me or not?”) and the paradoxes of set theory with which Russell wrestled. Soon we saw close analyses of the language and justification of the findings of physical science (the origin of modern philosophy of science) which were deemed important as physics had just seen the most significant revolution since the scientific revolution itself with the fall of the Newtonian edifice.

A division appeared among analysts, however, over whether they should approach their subject matter from the standpoint of an ideal language, the preferred ideal language being the formal logic developed during the 1800s and refined by Russell, Whitehead, and the early Wittgenstein, or whether analysts should pay more attention to language in its “ordinary,” unrefined and unreconstructed usages. The later Wittgenstein, and Strawson, were among the first proponents of the latter, and the term ordinary language philosophy was coined as more and more British philosophers came on board, much to the chagrin of ideal language philosophers such as Russell. Ordinary language philosophy seemed to have as its advantage that it could take account of the many uses to which language was put, uses not captured by truth-functional accounts. We use language not merely to assert true statements about things but to request information (ask questions), provide instructions, give orders, express emotions, tell stories, tell jokes … and there are other uses we’ll encounter presently.

There can be no single “ordinary” language, of course; there are many “ordinary” languages. Every natural spoken language, in its raw, unanalyzed form used by a community of native speakers, could be viewed as an “ordinary” language; and natural languages, responding not to the severe requirements of an abstract formal logic but to a multitude of practical, workaday demands placed on them by users, grade into the more formal languages as they became specialized in various endeavors, be they scientific, technological, commercial, entertainment-focused, or some combination of these. The difference between a formal language and a natural one is, therefore, a continuum and not a dichotomy. (Regrettably, Western philosophy is full of untenable dichotomies, but that, too, is a post for another day.) Thus the term I prefer over ordinary language philosophy is natural language philosophy. It is more versatile, able to cover more territory in the human world. Charles Morris, who sought to integrate the insights of both pragmatism and behavioral psychology into analytic philosophy (and who was instrumental in enabling many logical positivists to emigrate to the U.S.) distinguished three areas of inquiry: syntax (or syntactics): purely formal relations between signs; semantics, or relations between signs and objects or categories of objects; and pragmatics, relations between signs and sign-users. This trichotomous study has sometimes been called semiotics, the study of signs.

The third of these, pragmatics, is by far the most important. If we wish analytic philosophy to be relevant to the world outside of academic cubicles and seminar rooms, it is where we should end up. The later Wittgenstein and his philosophical progeny enlightened us about the many uses to which language can be put, depending on what a speaker wants to accomplish. Examination of the many motives human beings bring to language use can enhance any such account.

Propaganda is, of course, one such use — the one I personally find to be both the most interesting and the most useful to analyze. Language can be used to utter true statements, or statements that are true to the best of the speaker’s knowledge which is invariably somewhat fallible. It can have other innocuous uses such as those listed above. Or it can be used to utter statements the speaker knows to be false, or is unsure of but seeks to conceal his (her) uncertainty. It may be fair to call many such statements lies. Or, of course, language can be used in assertions the truth value of which the speaker does not care about one way or the other, in which case the term bullshit has almost become part of the standard lexicon following Harry Frankfurt’s ingenious short analysis of it (On Bullshit, 2005; cf. also his On Truth, 2006). There are philosophers paying attention to such issues (e.g., Jason Stanley; cf. his How Propaganda Works, 2015, which I hope to discuss in a future post at some point). The fact that I can virtually count them on my fingers is part of today’s problem.

Language is used propagandistically when it is used to mislead, so that its intended audience goes away believing something to be true that is really false, or at least insufficiently supported by publicly available evidence. We can all think of examples. Most of us have probably fallen into the trap of contributing a few of our own. An example Stanley conceded he hadn’t thought of (because his slant is, on the whole, typically academically left of center) is homophobia. It is an example I have mentioned before, as just one of a family of such examples all characterized by the use of the suffix phobia against critics of a belief or lifestyle or some combination thereof: Islamophobia, transphobia, and so on. A phobia is, of course, a recognized mental illness. Examples of real phobias include claustrophobia and agoraphobia. One does not respond to arguments made by sufferers of a phobia, e.g., that one really is in danger of suffocating because the place is enclosed. One never assumes their view of a situation is veridical. One therefore tries to cure them with therapy. Apply this to our examples above. What it means is that those accused of homophobia, Islamophobia, and now transphobia, etc., are falsely if indirectly accused of suffering from mental illness because they criticized specific assertions, activities, and policy decisions; their conclusions are therefore not seen as worth arguing against. Maybe they can be “cured” with the “therapy” of “sensitivity training.” (It is very interesting that although the word Christophobia has been used in a similar context by Christians, this word has never caught on and remains generally unknown.)

This is how propaganda actually works both inside and outside academia, and analytic philosophers (especially those with the protection of tenure) can be criticized for their unwillingness to go anywhere near such examples. This is the “modest” criticism of my title. I keep it modest because the majority of professional philosophers are introverts. Even if not, few have any taste for the rough and tumble world of political discussion, where it is easy to conclude that arguments and evidence aren’t what matter. (Most who do are contributing the wrong things! Or so I would argue.)

Distinguishing between the use of a word or phrase and a mention of it is also useful in analytic philosophy; consciousness of such could prevent many mishaps, including some that have sabotaged careers. A formal account will be helpful. To use a word in a sentence is, again, to say something about the word’s referent or reference class in the natural language where it is most at home. If I assert, “The cat is on the mat,” I am obviously saying something about a particular cat (“The cat …”), a nonlinguistic entity in the world. Or with the entire reference class: “Cats tend to be nocturnal animals.” If I assert, on the other hand, “’Cat’ is an English word with three letters,” or perhaps, “‘Cat’ is an easy word for children to learn with an appropriate picture,” I am not using the word to refer. I am mentioning it, to say something about a linguistic entity. The bulk of our discourse about language(s) consists, obviously, of mentions and not uses.
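Programmers may recognize the distinction: it is roughly the difference between an unquoted identifier, which is used to reach its referent, and a quoted string, which mentions the word as an object in its own right. A toy sketch in Python (all names invented for illustration):

```python
# Use vs. mention, by rough analogy with quotation in code (illustrative only).

cat = "Felix"  # the name 'cat' is bound to a particular referent

# Use: the unquoted name reaches through to its referent.
print(cat + " is on the mat")   # says something about the referent, Felix

# Mention: the quoted form treats the word itself as the object of discussion.
print(len("cat"))               # 'cat' is a three-letter word
print("cat".upper())            # the word itself, manipulated as an entity
```

The analogy is imperfect (a Python string is still a linguistic object), but it captures the key point: a use is about the referent, a mention is about the sign.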

If this distinction can be made clear and public, then mentions of words or phrases such as illegal immigrant, or of other words and phrases deemed derogatory of politically protected groups (or deemed hate speech, again imputing a mental condition to the user or mentioner absent any evidence of such) and therefore verboten, or held to create a hostile work or academic environment given today’s prevailing hypersensitivities, should be seen as a legitimate part of intellectual inquiry, even by those who reject using them. Mentioning is already done by the groups themselves on occasion. If asked, some would be able to recount how they reclaimed a word or phrase so that their uses of it are no longer negative or hateful, on the condition that the use is limited to their own speakers, as when blacks use the word nigger among themselves to refer to one of their own, or when homosexuals use the word queer for themselves, or as part of phrases such as queer theory that are now part of their academic lexicon. I trust it is clear that all of these are mentions and not uses (mentions of mentions, if you will). A use is always about something nonlinguistic. A mention is always of something linguistic.

In sum, analytic philosophy serves important purposes within philosophy, and if one knows where to look, it can serve important purposes in analyzing the natural language in which trends and tendencies that affect the “public mind” are expressed. That includes “hot button” issues that, sadly, fill the public discourse with more heat than light. Be this as it may, one should be able to see from this short discussion that analytic philosophy is potentially far from trivial, and many of its most powerful techniques are sadly underutilized. Analyzing the language of op-eds, policy statements, political speeches, and related pronouncements of all kinds is part of a broader service analytically-trained philosophers are capable of performing, if only more would rise to the occasion, and if they could further train themselves to reflect on their own ideological biases where these exist, seeing them for what they are, and not avoid analyses of words or phrases favorable to an ideology they prefer.


Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money each day.  

Telling the truth can have negative consequences. Around this time last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, and the device gradually became unusable — a reason I haven’t been around much lately — and I’ve had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101 (a globalist speaks in a voice filled with irony and dripping with cynicism). Promoting a book means, in my case, the necessity of international travel which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.


Official Narratives

Note: the post below is a brief excerpt from the central section of a much longer work in progress, tentatively entitled Confessions of an (Ex) Academic Dissident, which may or may not see the light of print someday. The topic, though, seems important enough given our present situation that it merits separate posting. In fact, I am kicking myself for not having done this long ago. Official narratives should be of interest to philosophers concerned with propagandistic barriers to important truths of various sorts. They are, after all, a form of propaganda that is not usually recognized as such, because they are literally everywhere …

For our purposes let’s define an official narrative as a government-approved and media-sanitized account of some dramatic event, such as an assassination or war or terrorist attack or mass shooting, or perhaps any major event which went contrary to official expectations, such as the outcome of a national election. An official narrative can often be identified as such by appearing in relatively complete form very quickly after the event that prompted it, and then being reiterated endlessly in all major media, its essentials never again questioned by “responsible” commentators — frequently despite the absence of actual evidence (witnesses, physical evidence, a “smoking gun,” etc.). A convenient enemy is named whose motivations explain the event. Patriotism may be invoked (or possibly its opposite, as the case may be), as this also suspends judgment and helps manufacture public consent around the narrative.(1)

Two obvious examples are the immediate arrest of Lee Harvey Oswald for the assassination of President John F. Kennedy, as Oswald had lived in the Soviet Union, recently been to Cuba, and could easily be associated in the public’s mind with Communism; and the announcement of Osama bin Laden as the mastermind behind the 9/11 attacks within hours of the attacks themselves — bin Laden being easily demonized as an Islamic jihadist. Mass media regaled viewers endlessly with televised images of the burning towers, creating a climate of fear as the name Osama bin Laden and phrases like axis of evil were heard over and over, like mantras.

Language is indeed used — repeatedly — to bewitch the public’s intelligence and take all thought deemed legitimate in a desired direction.

A third and brilliant exemplar of an official narrative was falling into place even as I was putting the final version of this essay together: the idea that Russian hackers and other agents of the Russian government, directed by Russia’s president Vladimir Putin, influenced the outcome of the November 8, 2016 election, which culminated in the Electoral College victory of Donald Trump over Hillary Clinton, possibly colluding with members of the Trump campaign, including by delivering hacked John Podesta emails to Julian Assange’s WikiLeaks, badly damaging the already-troubled Clinton campaign. During the December days leading up to the actual Electoral College vote on December 19, this claim, originating within the intelligence community including the CIA, was repeated continually despite the absence of evidence that it was true. We were clearly expected to accept the word of the CIA and media reportage on faith. This despite the CIA’s having “inaccurately” claimed, back in 2002, that Iraq’s Saddam Hussein had weapons of mass destruction and was able and willing to use them against Americans, leading to the most disastrous war since Vietnam, a war the U.S. started!

That this is not labeled a conspiracy theory is telling!

Another key sign of an official narrative is that its targeted audience (usually the general public) is expected to accept it on faith, which means simply not seeing or hearing what does not fit the narrative. Ignored in the case of the Kennedy assassination is the fact that a single bullet inflicting the documented damage, which included wounding Gov. John B. Connally, seated in front of Kennedy in the motorcade, would have violated the laws of physics: hence use of the phrase magic bullet, which should have been a dead giveaway that something wasn’t right. Also ignored is that with the vehicle in which Kennedy was riding in motion, however slowly, and with no injuries to either of the wives seated beside the two men, the shooting was clearly the work of a trained professional, which Oswald was not.(2)

In the case of 9/11, surely it isn’t crazy to wonder how 19 Saudis were able to hijack four planes, presumably without trial runs of any sort, fly them (or force their hapless pilots to fly them, as there was no documented evidence of the hijackers’ ability to fly them) for lengthy periods of time across multiple states, crash two of them into the Twin Towers, and fly a third into the Pentagon after executing a tight maneuver experienced pilots are on record as saying they could not duplicate. Where, precisely, was the most expansive (and expensive) multilayered air defense system in the world that morning? This is only a smattering of what is left unexplained by the 9/11 official narrative. Also ignored are the claims of those who escaped the Twin Towers that they heard explosions in the buildings, coming from below rather than above and therefore not caused by burning jet fuel, as well as the claims of scientists that the laws of physics preclude the specific kind of collapse witnessed not twice but three times that day: there was also the mysterious collapse of a third tower, WTC-7 (ignored by the much-touted 9/11 Commission Report), which had not been struck by anything substantial.(3)

In the case of the supposed Russian hackers and agents (soon to include Sergey Kislyak, Russian ambassador to the U.S.!), it is reasonably clear that something unusual occurred to tilt the election in Trump’s favor at the eleventh hour. WikiLeaks had done one of its infamous data dumps just days before, and it contained some potentially damning information about Hillary Clinton. WikiLeaks founder Julian Assange insisted that his source was not the Russians. One can believe him or not. Or perhaps one can believe, or not, the account that Seth Rich, a disgruntled Bernie Sanders supporter, was murdered because he was the leaker of the information intended to hurt Clinton. There is, of course, no “smoking gun” evidence proving this to be true, just as there is no “smoking gun” evidence linking anyone in Russia to the outcome of the 2016 election — evidence made available for public inspection, anyway. But it is interesting that despite this structural similarity between the two claims, the Seth Rich allegation has been repeatedly labeled a conspiracy theory by all major media.

(1)  Cf. Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (Pantheon, 1988, 2002).

(2)  See James H. Fetzer, ed., Assassination Science: Experts Speak Out on the Death of JFK (Open Court, 1998). More recent work on the Kennedy assassination focuses not just on the hows but on the whys, and draws conclusions that should be considerably more disturbing to anyone who believes the U.S. is really a representative democracy (cf. the next section). Cf. also James W. Douglass, JFK and the Unspeakable: Why He Died and Why It Matters (Touchstone, 2010) or David Talbot, The Devil’s Chessboard: Allen Dulles, the CIA, and the Rise of America’s Secret Government (Harper, 2015).

(3)  Before dismissing the allegations implied in this paragraph out of hand, readers ought to sit down with materials some of us have spent years with. A good place to begin is Jesse Richard, “You only believe the official 9/11 story because you don’t know the official 9/11 story,” (accessed 2 Sept., 2011). Follow it up by actually reading works such as David Ray Griffin, The New Pearl Harbor (Olive Branch Press, 2004); Steven E. Jones, “Why indeed did the WTC buildings collapse?” (accessed 23 December, 2005); Rowland Morgan and Ian Henshall, 9/11 Revealed (Avalon Press, 2005); James H. Fetzer, ed., The 9/11 Conspiracy: the Scamming of America (Open Court, 2007); or Judy Wood, Where Did the Towers Go? Evidence of Directed Free-Energy Technology on 9/11 (The New Investigation, 6th, 2010). All are perhaps worth reading in light of Michael C. Ruppert’s revealing Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil (New Society Publishers, 2004). There are time-stamped videos, finally, of reporters announcing the collapse of the third World Trade Center tower, WTC-7, before it fell. Indeed, the building is visible in the background if you know which one it is: another dead giveaway to anyone paying attention that something is seriously amiss with the official narrative of what happened that day! Did someone miscount the number of time zones?

Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money each day.  

Telling the truth can have negative consequences. Around this time last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, and the device gradually became unusable — a reason I haven’t been around much lately — and I’ve had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101 (a globalist speaks in a voice filled with irony and dripping with cynicism). Promoting a book means, in my case, the necessity of international travel, which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.

Posted in Culture, Election 2016 and Aftermath, Language, Political Economy

Wittgenstein’s Two Greatest Insights About Language

We’re back, after another unfortunate hiatus caused by a lingering illness and furthered by a computer meltdown. Might as well accept it: I will never be a technology person. But anyway….

This post is one I’ve been planning for some time. One could argue that Ludwig Wittgenstein was the twentieth century’s most important philosopher. He made a substantial contribution to the ideal language analytic philosophy that began with Frege and Russell and emphasized formal systems, and basically pioneered the natural language analytic philosophy that rose during the 1940s and even more during the 1950s, emphasizing the varieties of uses or purposes natural language serves. Wittgenstein was one of those rare thinkers who developed two quite different philosophies, the second of which was a devastating criticism and rejection of the first. He also greatly influenced the historicists in philosophy of science (Toulmin, Hanson, Kuhn, Feyerabend, etc.) and allowed the building of bridges to other philosophical traditions such as French poststructuralism (Foucault) and outside of philosophy to cultural anthropology (Geertz).

To my mind, two of Wittgenstein’s statements stand out as singularly profound, and important in ways going far beyond the antiseptic groves of academia.

Despite the popularity of logical positivism at the time, a close reading of the Tractatus shows that Wittgenstein was no positivist: the influences on his philosophy of language and its relationship to thought (to what can be said versus what cannot be said) range well beyond Russell and Frege. Think Tolstoy, for example, or possibly Kierkegaard, or even Kafka. But all of them aside, one statement late in the Tractatus suggests that Wittgenstein was already trying to think outside the set of conceptual boxes that constrained ideal language analytic philosophy.

The first statement, in this case: “In philosophy the question, ‘What do we actually use this word or this proposition for?’ repeatedly leads to valuable insights.” Final paragraph of 6.211, Pears / McGuinness translation.

The early Wittgenstein is observing here that whatever supremacy one gives logical form alone, use matters. Paying attention to it leads to “valuable insights.”

Language is more than form (logical, grammatical). It has purposes: communication, a storehouse of information, instruction, and so on. Wittgenstein lists a variety of the uses of language near the beginning of Philosophical Investigations. A language is a system. More precisely, it is a concatenation of systems: its own set of sounds or phonemes used in often specific combinations (it is interesting to observe that nonnative speakers of a given language will typically have difficulty pronouncing sounds or combinations their native language does not use, especially if they are trying to learn the language as adults); their combination into words; rules attaching words and phrases to objects or classes of objects or attributes; a grammar allowing for discussion of present continuous, past simple, future, and other tenses; words allowing for relations in space and time, movements within space and time (prepositions); and so on.

These, the careful student of language eventually must realize, are conventions only. There is no logical or any other necessity about them. They are what they are because they allowed for solving problems of communication, etc., and are taken for granted within a community of speakers.

This all becomes evident to learners of a second language if they spend some time reflecting. It’s tempting for a native English speaker such as myself to ask of Spanish, for example, Why do they say it that way? There’s no good answer to such a question beyond the fact that these are the linguistic conventions speakers accept and use, having inherited them from generations past. One may make the point further by looking back at English and retorting, Well, why do we English speakers do it this way and not some other way? It’s actually easier to ask the question of English, given that Spanish is a pure Romance language following the streamlined internal conventions common to Romance languages, while English is messier, having drawn from both Romance and Germanic roots, resulting in greater complexity. In any event, one embraces and uses the conventions of the target language if one hopes to be understood by its speakers.

The point, though, is that a language consists of conventions “all the way down,” one might say. There is no “metaphysical” necessity between a word or phrase and a given object or class of objects, not in English or in any other language. These conventions include usages which can be observed; one can learn a language such as French or Spanish by noting what words and phrases in those languages are being used to do: the situations in which those words or phrases occur, the responses they generate from others, etc. Sometimes these usages involve human motivations which change over time. To the extent that the aggregate motivations of a human community affect the uses of language, its conventions are somewhat flexible, although this flexibility is always limited. New words and phrases are coined; others drop into disuse; a few are changed beyond all recognition. An example of the latter is the word gay. One may review the attributes this word was used for in, say, the 1950s, versus its conventional uses today. Conventions change in response to pressures placed on them, and these can come from a variety of directions, including political ones.

Summing up this part of the post: the primary insight here is that we learn the conventional nature of language by observing what speakers use words and phrases to do, and by noting how these uses change over time.

In Philosophical Investigations the later Wittgenstein makes many observations worth pondering, but the one I would single out is: “Philosophy is a battle against the bewitchment of our intelligence by means of language.” End of paragraph 109, Anscombe translation.

Unfortunately, Wittgenstein’s primary concern, very likely his working premise, was that departures from the conventions governing “ordinary usage” had given rise to the traditional philosophical problems about knowledge and certainty, perception, free will versus determinism, and so on. Again, what do the standard conventions of the English language permit users of terms like knowledge, certainty, free, etc., to do? Has a philosopher’s language “gone on holiday” (another memorable Wittgenstein phrase from Philosophical Investigations) when he asserts “I am certain that p” as opposed to just asserting “p”? I do not believe Wittgenstein ever dwells on the third of the above, free will versus determinism, but other philosophers did in his wake, and their results helped to sort out the often ambiguous and therefore confusing ways in which we describe ourselves as “free.” I used to ask students, “Are you free?” The proper response was to ask, “What do you mean by free?” That is to ask for a list of the standard conventions governing our use of that word.

Calling something a convention, however, does not mean we should never question it.

Given the range of uses of language, if philosophy is indeed a battle against the bewitchment of our intelligence by means of language, then we neither can nor should confine ourselves to the ‘problems of philosophy’ that preoccupy nearly all analytic philosophers, past and present. Human motivation is a factor here. Not all uses of language aim at uttering true or even useful propositions. Lies, conventionally, are propositions the speaker knows to be false but wants you, the listener, to believe are true. And if Harry Frankfurt is right, bullshit consists of propositions uttered by a speaker who does not much care about truth or falsity, only about achieving some other effect, such as muddying the linguistic waters. (See his On Bullshit, 2005.)

It is necessary, in this light, for philosophers (and any other interested parties) to consider propaganda, which uses language according to, or to establish, conventions less concerned with truth or falsity than with leading an audience in a desired direction. This direction may favor or disfavor that which is being described propagandistically.

Propagandistic language may be used to discourage independent investigation of an idea by discrediting it through what may be called weaponized language. An example is homophobia, a term now applied almost automatically to anyone critical of the homosexual lifestyle or of homosexual unions. The term has become conventional; one sees it everywhere. A phobia is, of course, an irrational fear. People suffering from irrational fears are not answered with logic; they are offered cures, or at least attempts to control their fear (“sensitivity training”?). There are legitimate phobias (agoraphobia, claustrophobia, etc.). Is homophobia one of these? What makes it such? The word was not in common use prior to the 1990s, which alone makes the matter suspicious, if not decisive. We can raise the matter because of those worldviews, religious or otherwise, that reject the mainstreaming of homosexual conduct, or that produce claims, backed by scientific evidence independent of anyone’s theology, that the lifestyle is physically damaging and actually shortens the lifespans of those who practice it. I cannot claim to know, sitting here, that such claims are true (I suspect they are). What I am sure of, though, is that they cannot be dismissed prior to investigation, and should not be circumvented by the application of a term designed to portray anyone raising the issue as subject to an irrational fear.

A similar situation exists with regard to the phrase conspiracy theory. As conventionally used in major media, calling something a conspiracy theory equates to dismissing it as irrational, and is usually sufficient to discourage closer looks. But is this the same as having supplied evidence that the idea to be rejected really is irrational, or unsupported by any sort of fact? Given the history of the convention (its creation by the Central Intelligence Agency back in 1967 to circumvent the arguments brought forth for questioning the Warren Commission Report on the Kennedy assassination), we have every reason to be suspicious of it. Why, in that case, is the claim that Seth Rich was murdered because he supplied Democratic Party emails to WikiLeaks (a claim for which we have no smoking-gun evidence) a conspiracy theory, while the claim that members of the Trump campaign might have colluded with agents of the Russian government to affect the outcome of the 2016 election (a claim also without smoking-gun evidence) is not?

Calling a usage conventional does not exempt it from criticism if we can draw attention to what a word or phrase is being used to do (Wittgenstein’s first observation above), and especially if we can see that what it is being used to do is to bewitch our intelligence (Wittgenstein’s second observation).

It would be a great use of analytic philosophy of natural language, especially that which takes Wittgenstein as its point of departure, if philosophers were to begin questioning usages of terms and phrases, even those that have become conventions, that are clearly propagandistic, designed to sway audiences and move them in desired directions, despite their newfound conventional status. I used the two examples I thought of first. There are dozens of others. I am sure readers can think of some of them without trying too hard.

Posted in Language, Philosophy

Academia Embarrasses Itself Again: the Hypatia Affair

The last time I wrote a piece of this sort, an exposé of academic philosophers embarrassing themselves, it caused me some problems. I try to learn from my mistakes, and what I learned from that occasion could be set down as a general rule: other things being equal, before evaluating someone based on what they’ve said on social media, first be sure to investigate their major statements (articles, books if available, etc.) rather than respond to something hastily-scribbled on Facebook or Twitter.

But that doesn’t apply in what I’m about to discuss, and I mention it only as background, for completeness’ sake. For at issue here isn’t an ill-considered Facebook post or, God help us, a 140-character tweet, but an article deemed to have passed muster for, and which appeared in, a refereed academic journal. The journal is Hypatia, one of the major mouthpieces of radical academic feminism for over 30 years now. It’s the sort of journal that would not have been conceived back in the happy and carefree days of, say, logical positivism, whatever that school’s drawbacks, but which fits perfectly into our unhappy era. After all, major spokespersons for what is now considered important scholarship would denounce logical positivism not for its drawbacks (logical, methodological, epistemological) but because nearly all its practitioners were too pale of flesh, too “cisgendered,” and because their methods failed to take into account the anguish and horrific hurt that correspondence rules, theoretical reduction, or even a mere insistence on clarity and exactitude inflict on others’ feelings. The preceding might seem a caricature. I’m not sure it is!

The article in question was penned by one Rebecca Tuvel, an untenured assistant professor of philosophy at Rhodes College in Memphis, Tenn. The article’s title was “In Defense of Transracialism.” I will admit up front: I haven’t read it and have no plans to do so. I did read the abstract, though (available most easily here in case the link to the original no longer works). I gather that the attempted neologism transracialism refers to someone such as Rachel Dolezal who, though born white, has attempted to portray herself as black, and even perhaps, at least by implication, to transition to blackness. Her story gained some notoriety as it appeared at roughly the same time as the now-infamous celebrity gender transition of Bruce Jenner to “Caitlyn” Jenner.

Tuvel’s article appears to have employed a familiar and fairly standard argument form, that of arguing from analogy. (If I am wrong about this and the article does something else, someone will have to comment and inform me.) Thus the considerations that belong to one category of transitioning person might apply to those of another and perhaps more hypothetical category of transitioning person, based on any identifiable relevant similarities. There is a review and analysis of the article here, and Tuvel’s article looks to be as well-reasoned as anything to be found in academic philosophy today despite the trendiness of its subject matter (and, if we’re honest, we must acknowledge: the happy and carefree days of logical positivism were hardly free of trendiness). Going off the abstract, the article seems cautious and conservative in the sense that Tuvel apparently never actually averred that Dolezal is a “transracial” person.

What she appears not to have considered, though, is the singular trait of our unhappy era: the degree to which hurt feelings and a sense of offense would trump even the best arguments and inspire an explosion of unbridled rage. The rage against the most likely innocent author would spill onto the journal that published her work. Let it never be said that politicized academic radicals will hesitate for a minute to savage one or even many of their own at the slightest indication of independent thought stepping across a constantly shifting boundary into heresy!

Sorting out the exact sequence of events by referring to the originals is problematic, as some of the links have gone dead (I would probably have removed them, too). But, assuming I can rely on this, some 150 “new scholars” and their tenured-class supporters (including a couple of people on Tuvel’s dissertation committee!) put together and signed an “Open Letter to Hypatia” denouncing the article (read it here). They demanded that Tuvel’s article be retracted, claiming that it “caused … harm” (whatever that means, as no evidence of “harm” is presented or cited), and even demanded, taking presumption and arrogance to never-before-seen heights, that Hypatia revise its editorial standards to ensure that no article of this sort ever again gets through its review process.

This comes in place of what would most likely have happened back in those happy and carefree days. An article with a novel and perhaps controversial idea would be refereed and published. Some ambitious person, probably young and untenured, would find elements of its argument problematic and craft a carefully nuanced discussion piece, possibly eliciting a response from the first author. Further discussion would be generated, possibly leading to a round table type debate during the next available time slot at the next APA convention. There would be no (or very little) hostility. People’s feelings would not be at issue; their motives would not be impugned; everyone would be assumed to be attempting to advance a common conversation to greater understanding of the original article’s area of reference.

Would this happen now, with today’s race-obsessed, gender-drenched, etc., academic culture?

You’re kidding, right?

In our unhappy, post-Enlightenment era, emotions have largely overwhelmed reason, especially among the noisy, politicized subdisciplinary “new scholars,” and the sort of procedure I just described would be, if anything, far too slow. Far easier instead, especially given the near-instant communications now available to everyone, to satiate the passions of the moment and dash off a nasty list of superficial criticisms and allegations. These boil down to the author’s failure to accommodate the latest fashions of, e.g., the “critical theory” that dominates these subdisciplines. Suffice it to say, some of what you’ll read in the “Open Letter” (assuming the link continues to work) goes beyond standard academic criticism of an author’s work to the borderline defamatory; this is Professor Leiter’s judgment, not mine. He goes on to express “hope” that Tuvel consult a lawyer to discuss her options, even offering, in one of his several posts, to help her obtain contacts and to assist with legal expenses!

Whatever one’s negative judgment of logical positivism, its practitioners did not defame one another in print! Ah, those happy and carefree days …

The situation is worse than the above makes things appear.

Hypatia’s editorial board instantly caved!

Their statement stands as an exhibition of all that is wrong with American academia today, especially the humanities! I will reproduce the first paragraph of their “apology” (hardly an apologia in the ancient Greek sense of Plato’s Apology!):

We, the members of Hypatia’s Board of Associate Editors, extend our profound apology to our friends and colleagues in feminist philosophy, especially transfeminists, queer feminists, and feminists of color, for the harms that the publication of the article on transracialism has caused. The sources of those harms are multiple, and include: descriptions of trans lives that perpetuate harmful assumptions and (not coincidentally) ignore important scholarship by trans philosophers; the practice of deadnaming, in which a trans person’s name is accompanied by a reference to the name they were assigned at birth; the use of methodologies which take up important social and political phenomena in dehistoricized and decontextualized ways, thus neglecting to address and take seriously the ways in which those phenomena marginalize and commit acts of violence upon actual persons; and an insufficient engagement with the field of critical race theory. Perhaps most fundamentally, to compare ethically the lived experience of trans people (from a distinctly external perspective) primarily to a single example of a white person claiming to have adopted a black identity creates an equivalency that fails to recognize the history of racial appropriation, while also associating trans people with racial appropriation. We recognize and mourn that these harms will disproportionately fall upon those members of our community who continue to experience marginalization and discrimination due to racism and cisnormativity.

Anyone so inclined can read the whole thing here where Professor Leiter reproduced it. His self-description as a New Yorker notwithstanding (referencing a low tolerance for self-evident bullshit), he clearly has a stronger stomach for this sort of thing than I do. Maybe that’s a necessary rite of passage for entrance to tenured-class status in our unhappy era. Suffice it to say, the remainder of Hypatia’s longwinded and neologism-laden “apology” caved to the demands on every point, even the one to revise editorial policy. We even get to learn from the above one of the newest neologisms: deadnaming: the speech crime of referring to the name a “trans” person used prior to their “transing” (or whatever the hell we’re supposed to call it). Critics of Tuvel’s article went further than what the above block quote implies, impugning her abilities as a writer and a scholar; it was this that Leiter, with his legal acuity, picked up on. Others have as well. Several other responses to the “Open Letter” and the Hypatia cave-in have appeared as the word has spread.

Leiter’s overall views are left-of-center on most issues of public policy, as with the majority of academics of his generation, but not batshit-insane radical leftist. He calls the entire affair a “fiasco.” It’s hard to disagree with that, but easy to enhance the description. Leiter would not agree with my overall take on this, though, for I do not think it suffices to treat the affair in isolation. It belongs alongside numerous other recent events, such as the obvious suppression of conservative speech on campuses reflected in, e.g., the cancellation of Ann Coulter’s scheduled appearance at Berkeley in the face of threats of violence, and in the still-broader historical context of what has happened to academia since the present unhappy era began: the 1970s.

What happened was the emergence of the “argument” (it was always far more an exercise in propagandizing and then bullying, at which the academic left has always excelled) that minorities, women, and now apparently “transgenders” (but — gasp! — not “transracials”) are “underrepresented” in academia, and that all departments should make efforts not just at outreach but to hire more such people — for after all, “diversity is our strength,” is it not?

We arguably started down this troubled road with the Supreme Court’s catastrophic Griggs v. Duke Power decision in 1971. This decision changed the fundamental meaning of discrimination from an action taken by individuals to a mere lack of politically acceptable outcomes, and of timetables for achieving them. The meaning of affirmative action, ambiguous from the get-go, changed from well-intended outreach based on calls for an end to racial and sexual discrimination to an insistence on measurable results as the test of “nondiscrimination.” The gold standard became proportional representation. Hence the appearance of the “underrepresented” group in all official policy recommendations relevant to hiring and promotions. Organizations with insufficient numbers of blacks and women in positions of responsibility could expect warnings if not actual EEOC lawsuits (Auburn University, a former affiliation of mine, received a warning regarding admissions of black students while I was teaching there).

A process was set in motion. I described this process in some detail, along with its effects on occupations ranging from the construction industry to academia, in my book Civil Wrongs: What Went Wrong With Affirmative Action (1994), a work never once discussed or argued with by academic philosophers but instead basically blacklisted in academia. I learned in 1996 from a sociology professor at Bowling Green State University with whom I’d begun corresponding that the book had been placed on an actual “index of banned books” there — an “index” of how medieval academia was becoming even then!

I’d committed one of the ultimate heresies by providing a political explanation of the rise of the “new scholarship”: so-called “critical theory” (which borrowed freely from French philosophy, e.g., Foucault and Derrida), radical feminism, critical race theory, and a rising homosexualism that at the time was barely on the public radar were really forms of pseudo-scholarship. They were not advancing a common conversation but launching platforms for political activism. Their primary method of protecting themselves from criticism was political correctness, rooted in Frankfurt School-educated Marxist philosopher Herbert Marcuse’s “Repressive Tolerance”: allowing the same free-speech standards for “nonrepressed” as for “repressed” groups maintains systemic repression! By the end of the 1980s affirmative action had clearly evolved into race and gender preferences never called for back in the 1960s, and the aim became to protect these from legitimate criticism (in many cases from scholars far better situated than I was), while cowardly Republican politicians such as the first George Bush caved and signed the Civil Rights Act of 1991, a law that reversed decisions such as Croson (1989), by which the Supreme Court had threatened to drain the affirmative action swamp.

I opined further in my book that the particular attacks on such notions as rationality and objectivity coming from “new scholarship” quarters, though originating independently, had been incorporated into and even enhanced by this effort: a rational and objective approach to race and gender in American public policy did not yield the results activists wanted. There was no reason whatsoever why nondiscrimination should yield proportional representation of all ethnicities and both genders in all or most organizations (schools, workplaces, etc.). Nowhere in the world did such representation exist. Arguments based on experience seemed to show that efforts to bring it about were counterproductive and ought to be discontinued. (There was no automatic reason, moreover, to equate discrimination with repression. Jews faced discrimination and even segregation for centuries in Europe, and still sometimes ended up wealthy from running businesses.)

Such approaches based on fact and logical argument had to go. The result was that any sort of genuinely enlightened discussion of such subjects as race and gender was clearly dead in the water by the time my book appeared. I just hadn’t realized it. As Hobbes says somewhere, “When reason goeth against a man, a man goeth against reason.” A woman, too. Or any other gender you like!

Feelings thus reign supreme in the subdisciplines of the “new scholarship”! And they are getting increasingly unpredictable: I am sure Rebecca Tuvel, a recent Ph.D., is as much a product as a victim of this academic culture, and probably never dreamed this kind of fracas would erupt over her attempt to add something new to the conversation, trendy and sordid though the conversation is. I rather hope she is polishing her CV in addition to whatever legal maneuvers she might be considering. Being female, after all, is doubtless an asset today, but isn’t a guarantee if one has become controversial. Ask any well-educated woman who rejects the assumptions and methods of radical academic feminism.

Summing up: it is small wonder appeals to “expertise” no longer cut much ice with significant and possibly growing segments of the public, including those who voted for Donald Trump last year. Expertise, obviously, is keyed to academia, which once trained the experts. Politicians and commentators invariably turn to academics when they want to back up their claims with evidence, to the extent this still occurs. While issues differ, academic “experts” are increasingly perceived, rightly or wrongly, by members of the voting public as existing in an elitist echo chamber: their motives often suspect, their forests long ago rendered invisible by the trees, and whatever doesn’t fit into their official narratives often not seen at all.

While admittedly only a tiny segment of the public is likely to encounter the specifics of a case like this, the larger group almost automatically associates much of academia with the elitist, insular, big-city “blue” culture mentality described above. The antics they have seen, which include women marching in pink “hats” obviously designed to represent their vaginas, are hardly encouraging. The “red” culture that had just put Trump in the White House saw vindication, in its own eyes at least, the very next day after Trump’s inauguration, at the much-touted March for Women. To be blunt about it, the “red” culture instinctively saw “pussy costumes” as sick and degenerate; its members regard someone wearing one as having something wrong with her mentally. How do I know this? Because I know such people personally, some through years of correspondence, and they told me as much.

Nothing that has since come from media left-liberals or from academics is likely to change this.

The “red” culture, moreover, does not regard “trans”-whatevers as some kind of an intellectual and cultural avant garde but as symptoms of the worsening sexual confusion and depravity of a society in rapid decline.

Why does this matter? Because these people vote! They have, at least for the moment, thwarted the efforts of the multicultural and trans-whatever “blues” to dominate the country — or, at the very least, to dominate them. Even if violently attacked, as some were outside Trump rallies last year, they will continue to vote. At least one columnist has voiced the suspicion that, given the disruptive behavior the left has engaged in since the election, were the election held today, Trump might actually win the popular vote!

The more obnoxious and violent the left gets, the more it loses. One has to wonder if and when radical leftists will set aside their feelings long enough to figure this out.

Or, to bring things back to academia — and to academic philosophy — will its saner members, even representatives of a left-of-center that wants to hang onto sanity, as Professor Leiter clearly does, decide they have had enough of this radical nonsense? What can they do about it? First they must recognize the need to abandon as irrational and destructive the obsession, now over a quarter-century old, with “getting more women into philosophy,” itself a species of the further obsession with proportional representation. (When I checked his blog more recently, Leiter had posted a partial list of signatories of the “Open Letter,” calling for What-were-you-thinking? type explanations. Seven looked to be men; 27 were women; one gave only initials; with the occasional first name like ‘Tempest,’ who knows? In fairness, not all the signatories are academic philosophers. But some are. Too many are.)

Unless the presumption that academic philosophy needs more women is subjected to critical scrutiny (and silly allegations of the ‘cisnormativity’ of the critics are simply ignored), the sane will be able to do nothing, and enough radical crazies will continue to obtain the tiny handful of available tenure-track jobs to make eruptions like this one inevitable. They will continue to corrupt obviously floundering humanities disciplines, now facing defunding in many places, until many departments are forced to close and there is next to nothing left.

What is bad is that whatever visibility incidents such as the Tuvel-Hypatia fiasco achieve, their association not just with academic culture but with intellectualism generally will continue to damage the latter, so that it will be even harder to get serious ideas discussed in our unhappy age … an age in which “diversity,” far from being “our strength,” has accomplished little besides destroying careers and has served up little besides division and hostility.

SPECIAL NOTE:  If you like this article or value my writing and wish to see more of what’s presently in the planning stages, please consider going to my site and making a pledge/donation.

UPDATE May 5, 2017:  The philosophy department at Rhodes College stands fully behind Rebecca Tuvel … for now, at least. Maybe she doesn’t need to polish up her CV after all. I would anyway, though, just to be on the safe side. (Here.)

Posted in Academia, Academic Politics, Where Is Philosophy Going? | 4 Comments

May 1 – International Workers’ Holiday or International Diversion?

It’s May 1. Here in Chile, it’s a national holiday, the official name for which is Día Internacional de los Trabajadores (International Workers’ Day). The holiday isn’t celebrated in the U.S., of course, or in Canada, because of its association with far-left groups, especially communist ones. This day has an interesting history, to say the least. It was on May 1, 1776, that the Jesuit-trained law professor Adam Weishaupt founded the notorious secret society known as the Illuminati, which infiltrated European Freemasonry until, accused of conspiring to subvert and destroy all the governments of Europe, its members were suppressed and driven underground. That was the late 1780s. In 1789, of course, the French Revolution erupted. Its causes are well known, or so we are told. Weishaupt himself continued to live in relative obscurity until 1830, interestingly the same year the secretive Skull & Bones was founded at Yale University in the U.S. Both George W. Bush and John Kerry were members, and it was “so secret they couldn’t talk about it.” Some make more of this entire trajectory of events than others. It may be noted as one of those intriguing sequences of factoids about which we may never know the whole story, if there is a whole story to be known.

May 1 is also the day on which, in 1884, a proclamation demanding an eight-hour working day was issued; on May 1 two years later a general strike erupted over the issue, leading to the violence of the Haymarket affair, which resulted in a number of deaths when someone lobbed a bomb at police. The police fired back, killing four demonstrators; the next day, disruptions resulted in bystanders being shot to death by militiamen. The Second International called for an International Workers’ Day, to be observed on May 1. “May Day” was formally recognized for the first time in 1891 and has remained a focal point for various leftist groups claiming to represent the interests of the proletariat. It remains one of the most important national holidays in mainland China, North Korea, and Cuba, and is still recognized in the countries of the former Soviet Union. Many noncommunist nations throughout Africa, South America, and elsewhere celebrate International Workers’ Day as well. It became a recognized holiday here in Chile in 1931.

Does celebrating a holiday accomplish anything for those not in power, however? I would surmise not.

Here in Santiago, Chile — in our neighborhood, anyway — the holiday was an excuse to stay up most of the night partying. I doubt the revelers gave the day’s history more than a moment’s passing thought, if that. I don’t have to know the personalities involved to be able to surmise this. I’ve lived here almost five years.

As an ex-academic, I’ve observed both how those seeking power operate, and how those with real power carry it forward.

The former are noisy and often obnoxious. The latter are generally as quiet as the dead. They don’t have to use noise to get what they want.

Many of us are heirs, in one way or another, of the so-called culture wars that began to erupt, little by little, during the Reagan-Bush years. Those years witnessed the meteoric rise of the so-called new scholarship focused on race, gender, eventually sexual orientation, and identity politics generally, which was incompatible with the ideals of objectivity and rationality that imposed essentially the same norms on everyone. Traditionalist responses that tried to articulate and reaffirm those norms had begun to appear. I was one of those who saw policies such as affirmative action as propelling identity-scholarship; noting that several highly visible Supreme Court decisions had begun to push back against race and gender preferences in, e.g., university hiring and admissions practices, I advanced the thesis that political correctness erupted to protect preferences of various kinds from legitimate criticism, using propagandistic ruses (allegations of “racism” being the most obvious) to circumvent that criticism and intimidate the critics. The ruse worked more often than not; those who pressed such criticisms generally saw their academic careers lying in ruins, in some cases even if they had tenure.

It was all very noisy and visible. What nearly all of us missed was that outside the academic disciplines they had hijacked, and outside the media, none of these groups truly had power, which is not merely organizational but political-economic, and which has as its source those with very little interest in how many women the philosophy department has hired this past year.

Neoliberalism had begun its own top-down transformation of the universities as far back as the early 1970s following the Powell Memo, which specifically referenced figures such as Herbert Marcuse and Ralph Nader as having inculcated an anti-business mindset in the universities. Indeed, the late 1960s had seen the rise of an entire generation that hijacked the national conversation. As a result, a war those with economic power wanted was rendered so unpopular that it was no longer beneficial to said powers-that-be to bankroll it. U.S. efforts in Southeast Asia were curtailed and discontinued.

To be sure, members of that generation stayed in academia, eventually won tenure, and began to transform their disciplines from the inside. This would gradually discredit them on the outside. What, after all, were we to make of the claim that a “female-friendly” science would be different from “traditional” science because women see the world in more relational terms than men? Would such a science be able to erect skyscrapers and fly airplanes? Would it surpass quantum mechanics? And how seriously, later, were we to take the prevailing feminist allegations that one in four girls and women on campuses would be raped while they are in college? (Or is the official number one in six? I honestly can’t remember, and can’t see that it much matters.) Today, the humanities are struggling to survive. The neoliberal university, having vastly expanded administration and focused on the furtherance of corporate interests, inculcating students into that mindset by having redesigned campuses to look more and more corporate (using corporate logos, etc.), has little more use for academic philosophy than it has for “gender studies.” At its best, after all, academic philosophy still at least pays lip service to questioning authority.

While the visible debates surround such matters as the role of free speech on campus, the less visible ones involve such matters as the relative absence of academic jobs that pay livable wages, and whether several academic disciplines, once seen as at the center of intellectual inquiry, will be able to survive the present scourge of academic-corporatism intact. Or whether the average brick-and-mortar college with a full range of educational offerings (as opposed to well-endowed Ivy League institutions) will survive the rise of an online world which supplies the same offerings for nothing, or nearly nothing!

There is, in all this, little concern for those currently doing the real work of a college or university: educating its students, who have usually gone into once-unheard-of levels of debt in order to attend. Fewer than 30% of academics today have tenure or any hope of obtaining it outside of very, very good personal contacts. They are struggling to keep a roof over their heads at the same time they juggle stacks of papers to grade for the five or six courses they teach, spread over two or more campuses or even institutions (at one time I taught courses at two institutions spread across three campuses). And if students’ would-be employers ever discover that online educational world and tailor it to their advantage … ?

This environment is, of course, perfect for the intended political economy of the future, which if present tendencies continue will be global rulership by an economic superelite that will dominate the governments of the world through their central banks and financial systems, and through the latter, will dominate national portions of economies and educational systems. I believe STEM subjects are being encouraged because they fit this future model so well. Trained technicians will survive, and some may do reasonably well. Scholars trained in the art of questioning power systems will not do so well.

Not a pretty sight, but I am sure that what comes to pass will continue to allow holidays recognizing “workers’ rights,” which the workers themselves will use to escape into parties and such. “Where’s the revolution?” asks the British group Depeche Mode in their newest single. Answer: in the revolutionists’ dreams, where it always was.

Posted in Academic Politics, Culture, Political Economy, Where is Civilization Going? | Leave a comment

April Book Potpourri: Kipnis, Stanley, Jorjani, More …

Over recent months and weeks any number of items have come to my attention that could have been blog entries, had I complete information about them.

For example, there is the just-released book by Laura Kipnis, Unwanted Advances: Sexual Paranoia Comes to Campus (Harper, 2017), as of this writing listed as the #1 bestseller in feminist theory on Amazon. I don’t know how much “theory” it can contain, feminist or otherwise, but based on reports I’ve read there can be no doubt of its relevance. The book looks to be a scathing response to what its subtitle indicates: the sweeping sexual paranoia that has overwhelmed campuses over the past several years and is destroying people’s careers. In the case of her own campus, Northwestern University, the target was the philosopher Peter Ludlow, who was publicly excoriated; in Kipnis’s own words, it “was like watching someone being burned at the stake in slow motion …”  All following what appear to have been colossally ill-advised off-campus dalliances with a student.

Ludlow’s response to the proceedings seems to have been — well — philosophical. The publicly-available reports indicate that he resigned prior to actually being removed, packed his belongings, and was planning a move to Mexico.

Kipnis’s own point of departure was being attacked and nearly made into a pariah herself, for warning that such “witch hunts” (her term again), in which the punishment vastly exceeds the proven extent of the crime, were hurting the cause of women’s rights on campuses (assuming, for the sake of discussion, that women ought to have special “rights” that men don’t have). Ah, how the campus thought police of today have no problem eating their own if they step outside increasingly narrow orthodoxies, especially where sex and gender are concerned.

A cursory review of the “gang rape on campus” fiasco that occurred at the University of Virginia a few short years ago should have been sufficient to indicate that what can only be described as anti-male paranoia has gone completely off the rails.

I haven’t read Kipnis’s book, so I won’t attempt to comment further; a few revealing quotations from it can be found on Brian Leiter’s blog, and a longer excerpt can be found here (there is relevant commentary here as well, but it lies behind a paywall). My impression from a distance, however, is of someone stunned at how quickly those she considered political allies turned on her when she deviated from academic orthodoxy about sexual harassment and assault on campus. This is the problem with academic orthodoxies generally.

A quick time out, though, if you will.

I am based outside the U.S., and have been since 2012, when I walked away from a ridiculously underpaid adjunct position at a branch campus in the Southeast. One of the drawbacks of living in a foreign country, especially in South America, is getting North American books in a timely fashion, which is why I don’t post about them more. I did manage to get a trove of books shipped here a couple of months ago (they arrived from Amazon in around six days, then sat in customs for over three weeks). Among them was Jason Stanley’s How Propaganda Works (Princeton University Press, 2015). I’ve not found the time to read it in any detail, though, and this time have no excerpts, so I will defer any comments for later this year. Stanley and I had an exchange (ping to here) which was initially acrimonious but over time grew more cordial.

A private email went unanswered, however, which I thought unfortunate, as we would likely have had a meeting of minds on something Stanley indicated he cared deeply about: the mass incarceration industry in the U.S., where people are not simply locked up but thrown into solitary confinement by sociopathic prison personnel and can actually die of thirst, or of insulin deprivation in the case of diabetic prisoners, if they aren’t beaten to death or don’t “commit suicide” (cases too numerous to link to). It is widely known overseas that the U.S. imprisons a larger percentage of its population than any other advanced nation in the world, including Communist China. “Private” prisons (i.e., prisons operating for profit), moreover, have a perverse incentive to imprison more people.

In any event, the Stanley volume is one I hope to return to later this year, and comment on in light of earlier tracts related to its subject including those of Edward Bernays, George Orwell, Aldous Huxley, Jacques Ellul, and others.

Also in that trove of books was an unusual work which came to my attention by virtue of the denunciations of its author as some kind of neo-Nazi: the book is Jason Reza Jorjani’s Prometheus and Atlas (Arktos Media, 2016). Again I’ve only found time to read a little of it, but based on what I’ve read so far (the Introduction and Chapter One), permit me to assure anyone who cares: while this time I’ve had no interactions with the author, so far the book is nothing of the sort! On the contrary, instead of a typical exercise in micro-specialization it is a sweeping, systematic work of a kind one almost never sees in professionalized academic philosophy. It exposes the errors of pivotal historical thinkers such as Descartes, whose bifurcation of the world gave us the roots of the mechanized world-picture that evolved into modern materialism. Jorjani draws on modern and contemporary figures as diverse as Leibniz, Kant, Schelling, James, Heidegger, Kuhn, Feyerabend, Foucault, and Derrida. How the various contributions of these philosophers are integrated, and how the ancient heroic images of Prometheus and Atlas fit into the schema Jorjani gradually assembles over the course of 12 chapters, might fill several blog entries when the time is right.

Key to Jorjani’s work, however, is an attack on the above-mentioned materialism and a defense of the idea of the spectral: a new and original take on what are routinely dismissed as “paranormal” phenomena, along with the idea that such phenomena might actually be more common than anyone realizes. They go unnoticed because, to draw on a notion Kuhn famously supplied in Structure, what does not fit into the conceptual boxes supplied by our dominant paradigms, whether in science or in life more broadly, often isn’t even seen, except in those perhaps relatively rare cases which intrude upon our consciousness to a degree sufficient to disrupt our daily doings. What has long fascinated me (the fascination goes back to my undergraduate days, in fact, and is among the things that drew me to philosophy in the first place) is how committed materialists react to reports of such phenomena, generally made by people who have neither the imaginative power nor the motivation to make something up. The reaction is usually ridicule, not analysis or anything else indicating a desire to get to the bottom of what really happened.

What Jorjani’s book has to do with the “alt-right” I’ve not discovered yet (he does have other writings on the subject), but maybe I will; or maybe someone will enlighten me. The publisher (Arktos Media) has resurrected a few European writers with views most likely derived from the right-wing Hegelianism that preceded the alt-right. Jorjani seems off on a different (ad)venture, however. But time will tell, as well as reveal whether Jorjani can survive in the long term in academic philosophy, having written a tract such as Prometheus and Atlas.  Its antimaterialism alone will alienate it from the present-day philosophical mainstream, quite independently of anything its author has to do with the alt-right.

The last book I will say a few words about, from the same imported trove, is Socrates Tenured, by Robert Frodeman and Adam Briggle (Rowman & Littlefield, 2016). This work might not have come to my attention had I not run across this. That article observed what historians know: that prior to philosophy’s migration into the modern university, philosophers worked in a wide variety of occupations: Locke was a physician and then a diplomat; Berkeley was a cleric; Spinoza was a lens grinder; and so on.

But the situating of philosophy in the modern research university came alongside the rise of what I call “third stage” (after Auguste Comte) civilization, which assigns to science a monopolistic status in knowledge-seeking, and to technology and commerce the favored status they enjoy because they bring in cash. Philosophy — never much understood outside the circles of those who directly engaged it — does not set out to do this, of course. It had a home, but the price tag was inhabiting back-room educational-administrative cubicles, teaching students about Socrates’s “the unexamined life is not worth living” in a fashion carefully designed not to rock the boat. And it meant increasing specialization and micro-specialization. Even for those for whom “the personal is the political”: what does that mean, after all, outside a specific range of disciplinary matrices in the contemporary academic humanities? (Would Laura Kipnis concur? I don’t know, but it would sure be interesting to find out!)

Could Socrates have won tenure today? Frankly, the answer to this seems self-evident. Isn’t it far more likely that, assuming him to be the same character we encounter in Plato’s dialogues, he would suffer a fate not even equivalent to drinking hemlock — something dramatic enough to win attention — but as someone who asked too many of the wrong questions, simply being refused job interviews until he faded into nothingness among the rest of the quietly excluded? At least he would be allowed to live. Today’s corporate-administrative consensus need not kill its dissidents when it is easier to allow them to disappear. Maybe Socrates could get “gigs” driving for Uber.

SPECIAL NOTE:  If you like this article or value my writing and wish to see more of what’s presently in the planning stages, please consider going to my site and making a pledge/donation.

Posted in Academia, Books, Philosophy, Where Is Philosophy Going? | Leave a comment