Of Climate Change, Science, and Experts: A Meditation

[Author's note: co-posted on NewsWithViews.com, though it has yet to appear there. I have added and deleted a number of lines here and there, and in general have tried to increase clarity wherever possible.]

A few months ago, a friend of mine, his son (who had swung left), and a few others debated man-made climate change (MCC) over email. Was it the existential threat to civilization some made it out to be, a complete hoax, or something in between? Being in this group, I was copied on each installment but did not participate. I was asked why, and I have been asked on other occasions whether I had anything to say about MCC.

I tend to reply that I've not researched the topic extensively and can't speak to it with any confidence. There's abundant information online, of course; what's missing are hours in the day sufficient to research everything out there. The topic has come up again, as MCC proponents have a field day in the wake of two destructive hurricanes, Harvey and Irma. A third, Maria, has devastated Puerto Rico as I complete this piece. All of us (I hope) are praying for those who lost loved ones in these storms, for rebuilding efforts that may take years in some cases, and that tragedy and hardship not be turned into an opportunity to score political points (for a change).

What research I've done on climate matters was mostly to inform students in contemporary moral issues and critical thinking classes, taught years ago during my adjunct days. In those classes I isolated three perspectives:

(1) Global warming is not real. For whatever reason, scientists are misreading their data, seeing something that isn’t there, perhaps generalizing falsely from local events such as glaciers in retreat after a few years of unusual warmth.

(2) Warming is indeed happening on a long-term, global scale, but we’re not the cause. Earth’s climate has warmed and cooled many times over planetary history, from various causes including fluctuations in solar energy; the climate, in any event, is far too vast for our paltry activities to affect it significantly. Volcanoes affect it more than we do.

The third perspective — (3) — holds that global warming or climate change is happening, and that human activity, especially burning fossil fuels for energy and expelling the byproducts into the atmosphere for well over a century now, is causing the planet to heat up. (3), as I understand it, does not say that every single year will be hotter than its predecessor, or that every year will manifest hurricanes as violent as this year's, just that over a long period of time average temperatures will rise, sea levels will rise as polar ice fields melt, and, on average, weather phenomena will increase in destructive force, be they hurricanes, severe winter storms, or droughts leading to forest fires.

So will it be door #(1), door #(2), or door #(3)?

Here is where I cannot speak with the confidence I have when speaking about, e.g., elite directedness of modern political economy, or philosophical critiques of secular ethics.

What I can say is that #(3) appears to be the one chosen by the majority of scientists and scientific organizations, something dissent alone can’t negate. Unfortunately, #(3) also has immense globalist appeal, given the adage that “global problems call for global solutions.”

If (3) is by some chance true, then claims like those of Naomi Klein in her This Changes Everything (2014) have to be looked at. Whether you agree or disagree with Klein's view that "the free market" is at fault in creating the present situation (I don't, as I don't think we've had anything remotely resembling actual free markets in decades), the conclusion remains the same: either we find other ways of powering our civilization or we face the consequences: a hotter, more hostile world; what James Howard Kunstler calls The Long Emergency (2005), marked by dislocations that will make the present ones look tame by comparison as millions of people abandon flooded coastal cities and others migrate en masse from regions no longer habitable.

Alarmist? Perhaps, but many scientists will tell you that MCC is an established fact. Major scientific organizations, including the American Association for the Advancement of Science, have endorsed it. At least one online course I ran across earlier this year, dispensed for free, presents information intended to debunk (1) and (2) above. The course's main architect, John Cook of the George Mason University Center for Climate Change Communication, had earlier created this site, organizing information he maintains refutes "climate change denialism."

Cook and his associates have assembled some interesting information. But they packaged it within an image of science I found rather naïve and dated. (Cook’s views on the “scientific consensus” are criticized here.)

Again, a brief disclaimer: I am not a scientist, climate or otherwise. I am a trained philosopher who for a number of years specialized in history and philosophy of science — especially the physical sciences — turning to moral philosophy and political economy only later.

This I can certify: what is found in most science texts is an image of a neat, disciplined, pristine method of formulating hypotheses to explain neutral data, testing them step by step whether by further observations or by experiment, then pronouncing them confirmed or disconfirmed — almost as if done by robots instead of human beings subject to all the biases and frailties human beings are subject to, including being forced to work in organizations that do not fund themselves.

So MCC aside for the moment, how well-confirmed are most scientific results, really?

One can point to "studies" in various disciplines that clearly reflect the biases of those who put up the money: the researchers wanted or needed further grant money, and one of the conditions for that money was obtaining "acceptable" outcomes. Such "studies" (not uncommon in the world of, say, pharmaceuticals: legal drugs) overstate what the evidence validly permits, and may bury contrary findings. How much of science works this way?

Please allow me to digress …

As a bored public high school student in search of real intellectual stimuli, I chanced to run across a curious volume in a local library: The Book of the Damned (1919) by one Charles Fort (1874 – 1932). Fort had a curious hobby, which became his career when he received an inheritance. A voracious reader, he'd mastered several scientific disciplines just by reading the leading texts. He combed scientific journals and periodicals, antiquarian newsletters, and newspapers. Whenever he found something that did not fit the prevailing theories, he made a note of it. Soon he had thousands of notes, organized by subject matter: astronomical curiosities, unexplained weather and aerial phenomena, out-of-place artifacts, medical mysteries, etc. "Anomalism" was born: assemblages of "facts that don't fit," with wry commentary on the "scientific" manner of dealing with them: shoving them into the cognitive equivalents of windowless museum basements and forgetting about them.

Fort used his notes as the basis for four books: the above-mentioned The Book of the Damned, New Lands (1925), Lo! (1931), and Wild Talents (1932). He commented drily on "dogmatic Science" (cap S) as a surrogate for God. Fort was more a provocateur than a serious theorist. He formulated intentionally ridiculous notions, which left whole ranges of obvious facts unexplained, and claimed they were as well supported as the dogmas he saw imprisoning the minds of scientists.

The history of ideas manifests what one might call system-builders and system-smashers. Among the system-builders: Plato and Aristotle, Aquinas, Newton, Lavoisier, Adam Smith, Kant, Darwin, and Einstein, who left their respective disciplines large, logically structured edifices of thought (systems). Among the system-smashers: the old Sophists who taunted Socrates in Plato's dialogues, modern "outsiders" such as Kierkegaard and Nietzsche, aggravated skeptics such as Fort, and a couple of folks we'll encounter below.

Modernity was a system-building endeavor. Postmodernity has been a system-smashing one.

It is not clear why some thinkers are drawn to one and not the other. Fort’s biographers state that his father was an abusive tyrant, from whom he fled as a teenager. His hostility to the authority of Science was then a projection. How very Freudian.

System-builders are confident of human reason’s capacity to grasp reality (or some part of it) as it is. System-smashers are just as convinced that the effort is delusional. They point to the smorgasbord of conflicting and competing systems in every domain, this being a problem even if we’ve mastered a certain instrumental rationality by manipulating objects into technology.

System-building takes itself seriously, is carefully argued, etc. Much system-smashing is literary provocation. Its purveyors use irony and rhetoric. They play mind games with their audience. Postmodernists, whatever else one says about them, are good at this.

Fort's books sold reasonably well. At the end of his life, his health and eyesight failing, he was said to have laughed aloud upon learning that his writings had a cult following, organized as the Fortean Society and dedicated to continuing to poke holes in the pretenses of "scientistic" positivism. The Society published Fort's unused notes and went on collecting the anomalies that seemed to surround every major theory in every field of science. Fort's books have stayed in print, and though for obvious reasons he was roundly dismissed as a crank, his work continues to fascinate those who have followed in his footsteps compiling anthologies of "misfit" facts, such as physicist William R. Corliss (1926 – 2011), founder of The Sourcebook Project and editor of anthologies such as Ancient Man: A Handbook of Puzzling Artifacts (1978) and Unknown Earth: A Handbook of Geological Enigmas (1980); or more recent writers with substantive alternative hypotheses about ancient and unknown civilizations, such as Graham Hancock (b. 1950), author of Underworld: The Mysterious Origins of Civilization (2002), Magicians of the Gods (2015), and other works.

As a university student (still bored), I encountered the far more orthodox The Structure of Scientific Revolutions (1962, 1970, 2012) by Thomas S. Kuhn (1922 – 1996). My first exposure to Kuhn’s ideas was in a world history class. The professor discussed them with all the calm and neutrality of a leftist professor going off on conservatism. My curiosity was piqued, and I tracked the book down.

Kuhn's thesis was that a mature, "normal" science is always governed by a conceptual system, embodied in concrete problem solutions, that he called a paradigm. Paradigms — exemplified in works such as Newton's Principia, Lavoisier's Chemistry, or Darwin's Origin — guided research in the science, their first premises neither tested nor challenged. Paradigms dictated the use of the language of the discipline, and they guided the authors of the textbooks used to train the next generation, who "stood on the shoulders of giants," as it were. Invariably, however, a paradigm could not solve every problem it faced. The failures became anomalies — defined more precisely as violations of expectation. Eventually enough anomalies would accumulate to jeopardize allegiance to the paradigm (particularly among the young!). The science would enter a "revolutionary" crisis that ended with its embrace of a new paradigm able to solve the problems, often with new terms or old ones used in new ways. A new period of "normal" science would begin.

Physicist and early quantum theorist Max Planck (1858 – 1947) famously observed: “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” That’s the basic idea.

Kuhn denied that scientific practice could be shoehorned into the formal-logical methods the positivists taught. He experienced the wrath of colleagues who had put Science on a pedestal, and he was accused of "irrationalism" for saying the decision to embrace a new paradigm was a matter of "faith." Despite a couple of careless uses of that word, his overall message was nothing of the sort, and he spent the rest of his life trying to clarify the complex rationality of an enterprise conducted by fallible humans working in organizations.

More extreme was the unabashed system-smashing of Paul Feyerabend (1924 – 1994), who authored the controversial Against Method: Outline of an Anarchistic Theory of Knowledge (1975, 1988, 1993, 2010). Although Kuhn’s and Feyerabend’s names are often linked, both classified as “historicists” (i.e., those who see science as a historical phenomenon operating within institutions, and not a formulaic, frozen-in-time abstraction), Feyerabend’s views differed greatly from Kuhn’s. For one thing, he rejected the idea that a “mature” science should embrace a single paradigm. He advocated pluralism: multiple paradigms. Conformity of thought, he argued, might fit the needs of a church but is totally inappropriate for science.

He argued extensively that the most important scientific advances had not proceeded according to a single, identifiably rational method. Scientists had opportunistically used whatever ideas and methods, sometimes incompatible ones, lay at hand, so that early modern physics and astronomy incorporated ideas and methods from Christianity, Platonism, astrology (Newtonian "action at a distance"), mysticism, and so on. Some of their claims, moreover, seemed contrary to "plain fact," as when Copernicus removed the Earth from the center of the universe in the absence of a physics able to make sense of such an idea (he had been dead well over a century before Newton came along). Positivism's naïve just-the-facts-ma'am view of science would have stopped the physics and astronomy of 1543 – 1686 in their tracks! With "plain fact" not on their side, early astronomers advanced their heliocentric view of the solar system not just through argument but with storytelling and propaganda (Galileo wrote dialogues; some of his "experiments," such as dropping objects from the Leaning Tower of Pisa, probably never took place).

Feyerabend's point was that if science was more "anarchic" than "rational," "anarchism" might help us in the present! It might free us from the "tyranny" of a "dogmatic Science" that was stifling our creativity within the cubicles of industrial civilization and robbing us of the potential richness life might have. According to him, the only abstract "rule" that could be guaranteed to work independently of situation was "anything goes": not a rule or method but a jocular, system-smashing rejection of such abstractions. The idea: "proper scientific method" is always situation-specific. Feyerabend (unlike Kuhn) did not suffer fools gladly. He ridiculed critics who misread "anything goes" as some kind of new and avant-garde abstract rule. He mocked them by openly defending "relativism": a position resulting from comparing the richness supplied by history and anthropology to the desiccated requirements of positivist abstraction. One of his favorite targets was George Soros's hero Karl Popper and his "conjectures and refutations" critical rationalism, which Feyerabend believed was hopelessly naïve. Feyerabend has been called "the worst enemy of science" by those who haven't read him, but who believe "scientific" minds should get the last word on all things human, including designing (or redesigning) societies.

Arguably, Feyerabend put an end to a certain way of viewing science — at least, if we look at the enterprise as it is, a human-all-too-human endeavor, instead of accepting the mythology that has surrounded it (touted by positivists, atheistic materialists, and technocrats).

End of long digression.

Why this dissertation?

Because there are abundant reasons for rejecting the presumptions of those who believe MCC on the mere authority of a naïve empiricism: who see science as mere data aggregation and integration, using a "method" frozen in time, and who have occasionally been caught seeming to "cheat": fudging data so that MCC seems better supported than it really is (e.g., "Climategate"; for contrasting views see here and here). As critics of MCC have pointed out, the scientists behind it receive government grants as well as lavish funding from elite foundations. (In fairness, MCC "deniers" also receive substantial support from private sources, e.g., the Koch Brothers and Exxon.)

Scientists are supposed to be the experts on such matters. But can we trust the objectivity and neutrality of the experts? Among the most noticeable phenomena of the Trump era is a profound skepticism towards "expertism" as a repository of biases: of being unable, in general, to see the forest for the trees. The experts predicted Trump would lose in a landslide. Their major pronouncements about the economy going back well over two decades were wrong. They had not foreseen the end of the tech bubble in 2000. In 2008, Federal Reserve Chair Ben Bernanke failed to anticipate the worst financial crisis since the Great Depression, embarrassing himself in January of that year by saying that the Fed "is not currently forecasting a recession." The experts fail to see the role of top-down financialization in consolidating wealth and power at the (globalist) "commanding heights" via a system that erodes labor's share of national income. With the authors who did see these things consigned to "alternative media," frustration was inevitable.

Skepticism about experts isn’t limited to political economy, obviously. These days it crosses over a wide range of topics: so-called scientific medicine based on invasive procedures and the use of (expensive!) pharmaceuticals, which rejects alternative practices such as nutrition-based “holistic” or “integrative” healing, the use of dietary supplements, acupuncture, chiropractic, etc.; whether GMO foods pioneered by powerful global corporations such as Monsanto are proven safe for human consumption and for the ecosystem; whether other artificial substances such as aspartame have been proven safe; whether there is a causal relationship between vaccines (e.g., the MMR vaccine) and autism; whether the theory of evolution is as well-established as the scientific community maintains, well enough established to exclude intelligent design, and whether it is truly empirical or the product of a (materialist) worldview; whether there is a correlation between race/ethnicity and measurable average intelligence; and whether it is true that men and women have the same innate cognitive predispositions, so that workplace “imbalances” can be attributed to sexism/misogyny. There are doubtless others I haven’t thought of.

Again, a few of these I've looked at. Most I have not, at least not at length. But there is a discernible pattern running through nearly all of them, the same pattern often employed to circumvent careful consideration of the idea that history is being directed by a globalist superelite or super-oligarchy. The pattern begins with dogmatism and just-the-facts-ma'am appeals: "It's true (or false) because we say so or because our studies say so" (the right rejoinder to any such study being, "Who funded it?"). It proceeds to ridicule ("that's a conspiracy theory!") or similar linguistic gambits that circumvent having to deal with the specifics offered. It ends with an authoritarian gesture and a return to the official narrative.

In the case of MCC, this progression now sometimes ends with a threat: that “climate change denial” be criminalized, “denialists” prosecuted and jailed, just as those who deny that Hitler and his minions killed 6 million Jews in the Holocaust (as opposed to some smaller number) are jailed for the thought crime in some countries. This, in fact, is the origin of the term denialism in the context of MCC: a propagandistic term intended to invoke the subconscious thought of Holocaust denial in the reader’s mind.

When ideas, questioning authority, and independent thought generally are criminalized, watch out! Just recall the line attributed to Voltaire (1694 – 1778) (he probably didn’t say it, but it’s true nevertheless):

“To learn who rules over you, simply find out who you are not allowed to criticize.”

Applying this: if you want to know whether specific ideas or theories or policies have been afforded a special, unmerited status in institutions (academic, governmental, or corporate), find out if you can question them without the roof caving in — without, that is, being fired from your job, having your reputation trashed by social media trolls, etc.

Skepticism toward expertise has caused sufficient alarm that there is now pushback. Authors speak, often at great length, of "how we lost our minds" and of "American stupidity," not just in articles (here, here, and here) but in books (e.g., this one and this one). What these authors are dead set against is the possibility of epistemic equivalence: the suggestion that what we have is a diffuse, poorly understood clash of worldviews, not just a resentful rebellion of "the stupid" against "the informed," or of "uneducated bigots" against "educated cosmopolitans." Very similar is the authoritarianism of those who reject moral equivalence between conservatives and historical preservationists (currently demonized as white supremacists and neo-Nazis) on the one hand, and leftists who self-identify with "progress" (which Trumpism has so rudely interrupted!) on the other.

You're probably wondering: where does all this leave MCC? What should we conclude about it? Especially given that if we conclude wrongly, either way, we could end up paying a steep price!

I will say — reminding readers of my disclaimers! — I don’t see MCC as crazy, or crackbrained, or false just because globalists like it and can make use of it! Another topic I studied was systems thinking, and one of the things I noticed is how sensitive complex systems are to what can perturb them. It also became clear: complex systems adjust themselves to perturbations. The largest complex system in our civilization’s proximate environment, the ecosphere, could adjust our civilization out of the picture! I therefore dissent from many of my fellow alternative writers. No need to take my word for anything. I recommend readers go to the sites linked to above and see if they have refutations for what they find there. Was “Climategate” real, a dead giveaway, or was it blown out of proportion?

I cannot decide for you! I don’t have that kind of authority!

What I believe we do have is a new knowledge problem of some magnitude. What was the "old" problem? The philosophical question of how we acquire knowledge (through the senses, pure reason, or some other means, including revelation). Its presumptions are problematic, but I will not dwell on them here, as this discourse is already too long. The "new" problem: our own institutions and their hierarchical structures, which enable epistemic authoritarianism to pass for truth, stand in the truth-seeker's way, made worse by the fact that the circumstances necessary for deciding complicated problems like MCC cannot pay for themselves in a fast-paced society devoted to instant gratification and mass entertainment. Nuanced debate and discussion, based on a careful but slow weighing of many opinions and considerations, is not "marketable" in a culture of WhatsAppers and Twitter addicts.

This is a problem, because few have the time, skills, or inclination to do their own research. We need institutions we can trust. I have extensive notes on this problem, in the context of the general breakdown of academia in our time, which I hope to incorporate into a future slim book (a story in itself!). Suffice it to say for now that I am not a postmodernist like Fort or Feyerabend, however much I sympathize with their crusades against epistemic authoritarianism. Truth exists; and we must not do what the postmodernists do in the face of the difficulty of finding it, which is to conflate institution-bound authority with what is true and proven, cry foul when it turns out we were bamboozled, and then throw up our hands in gestures of despair.

What we could use is support for the smaller, parallel institutions that have been growing for years in the face of the insufferable political correctness that has ruined academia and is now trying to erase everything in Western civilization that might offend some minority. In every dominant institution, feelings have trumped truth. If we had institutions of knowledge-seekers free from the need for money, and therefore from potential outside control (if we had, that is, real philanthropy on a large enough scale), there might be hope for (among other things) a trustworthy answer to the MCC question before it's too late, before our so-called leaders, whoever they might be, make decisions we will live to regret. Since we do not have such institutions on a scale large enough to matter, and since any real philanthropists who might once have existed have been replaced by corporate donor types who typically fund politicians and political agendas, I am not all that optimistic.

Author's Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donated, my goal of just $500/mo. would be reached in no time! And if we're honest about it, we all waste that much money every day.

Telling the truth can have negative consequences. Last year my computer was hacked — it wasn't the Russians, either! Repeated attempts to repair the OS failed, the device became unusable, and I had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101, 98% finished as of this writing. In it, a globalist technocrat speaks in a voice filled with irony and dripping with cynicism — contrasted with the possibility of freedom outside the world as he sees it.

Promoting a book means, in my case, the necessity of international travel which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution to the world of political-economic ideas, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. EDIT: thus far this effort has garnered just $62/mo. If it does not reach $250/mo. by the end of September, it will be time to write my farewell-and-good-luck piece.

To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.


The Art of the Argument: Stefan Molyneux’s Book Reviewed on LGP

(Note: co-posted as a product review on Amazon.com with the necessary modifications.)

Stefan Molyneux, The Art of the Argument: Western Civilization’s Last Stand (Kindle Edition, Amazon Digital Services LLC: August 27, 2017). Pp. 172 / kb 299. 

This book was panned on Brian Leiter's philosophy blog (which is best when Leiter stays away from politics), and since both Stefan Molyneux and I are independent writers / scholars, I had to see for myself. I'm sad to have to report that the negative reviews are correct. Worse: the author is a known libertarian YouTuber and noted critic of all things left-wing and politically correct. Mediocre-white-guy alerts are therefore going up all over the Web (cf., e.g., this).

The problem is, Molyneux has embarrassed himself with this ebook.

From the first page: "The first thing to understand is that The Argument is everything. The Argument is civilization; The Argument is peace; The Argument is love; The Argument is truth and beauty; The Argument is, in fact, life itself." Caps in the original, and always in boldface. Every appearance of the phrase The Argument, in fact, is boldfaced.

Such writing marks its author as an amateur, possibly one grinding axes instead of communicating information or educating. The two aren’t necessarily mutually exclusive, but they are here.

What is an argument? What I used to tell students (working off several textbooks): "An argument is a set of statements, at least one of which, called the premise(s), is offered as evidence for another statement, called the conclusion." This opens the door to discussions of statements, terms, concepts, and definitions, all of which are good to have before we get to the purposes of argument.

Molyneux: "An argument is an attempt to convince another person of the truth or value of your position using only reason and evidence." This conveys the right idea but is technically loose. As it turns out, technical looseness often descends into sloppiness, and sometimes into incoherence.

Molyneux divides arguments into "truth arguments" and "value arguments" for reasons unclear to me, because if an argument is any good it will exhibit a logically correct structure in either case. I question, therefore, whether we need to say this: "A truth argument will tell us who killed someone. A value argument will tell us that murder is wrong. Truth arguments are the court; value arguments are the law…. A truth argument can establish whether arsenic is present in a drink. A value argument can convince you not to serve it to someone." Such bizarre and sometimes demented illustrations permeate Molyneux's tract.

More worrisome to me, given that Molyneux is likely to have a readership much larger than any professional logician with a textbook, is that The Art of the Argument is riddled with mistakes that students who read it and then enroll in a logic class will have to "unlearn."

He properly distinguishes deductive from inductive arguments, provides the standard example of the former ((1) “All men are mortal” (2) “Socrates is a man” (3) “Therefore Socrates is mortal”), and then delivers this cringeworthy explanation: “Given that premises one and two are valid, the conclusion – three – is inescapable.”

Ouch!

Molyneux has just confused truth and validity! Statements are true or false. They are never valid or invalid. Deductive arguments are valid or invalid; they are never true or false. Validity is a function of deductive structure, not content (the information in premises and conclusion). Getting students to grasp this difference is every logic instructor’s first challenge. If a deductive argument has a valid structure and true premises, moreover, it is called sound. Following a foray into premature attacks on relativism and socialism – premature because the groundwork for such arguments has not yet been laid – Molyneux botches soundness as well: “If I say (1) All men are immortal, (2) Socrates is a man, (3) Therefore Socrates is immortal; then the structure remains logically sound.” This is actually a good example of an unsound argument, because it has a valid structure but a false premise.

In other words, Molyneux does not appear to grasp the difference between validity and soundness. This is in a section entitled “The Difference between ‘Logical’ and ‘True.’”
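
To make the distinctions concrete, here is a minimal sketch in Python (my own illustration, not anything from Molyneux's book): truth attaches to statements, validity attaches to an argument's structure, and soundness requires both a valid structure and true premises.

```python
# Truth belongs to statements; validity to structure; soundness to both.
from dataclasses import dataclass

@dataclass
class Argument:
    premises: list        # statements, each true or false
    conclusion: str
    valid_form: bool      # a property of the structure, not the content

def is_sound(arg, truth_values):
    """Sound = valid structure AND all premises actually true."""
    return arg.valid_form and all(truth_values[p] for p in arg.premises)

# The same valid form (All M are P; s is M; therefore s is P) used twice:
mortal = Argument(["All men are mortal", "Socrates is a man"],
                  "Socrates is mortal", valid_form=True)
immortal = Argument(["All men are immortal", "Socrates is a man"],
                    "Socrates is immortal", valid_form=True)  # still valid!

truth_values = {"All men are mortal": True, "Socrates is a man": True,
                "All men are immortal": False}

print(is_sound(mortal, truth_values))    # True: valid, and premises true
print(is_sound(immortal, truth_values))  # False: valid, but a premise is false
```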

He does seem to grasp, roughly, the outlines of the difference between deduction and induction when he states that deduction is about certainty and induction is about probability. What deductive arguments supply is logical closure: if the premises are true, the conclusion must be true. Hence the sense of "inescapability." He says: "Getting most modern thinkers to accept the absolutism of deductive reasoning is like trying to use a nail-gun to attach electrified jell-O to a fog bank." Huh? Who? Unfortunately, his book has no account of what makes an argument valid, and he introduces myriad examples involving potentially confusing propositions with forms like "Only x are y" when he hasn't even introduced the Aristotelian Square of Opposition (All S is P; No S is P; Some S is P; Some S is not P: the basic standard forms), a staple of every book introducing logic.

Inductive arguments establish their conclusions only to some degree of probability (which may be very high). It is therefore true that, as Molyneux says, inductive reasoning "deals more with probability than with certainty." It is not true that all inductive reasoning "attempts to draw general rules from specific instances." Generalizations do this, but inferences to the next case proceed from a collection of known instances to, well, the predicted next instance. Arguments from analogy proceed from case to case: because a known case is similar to an undecided one in specific ways (can be compared to it), it is probably also similar in some additional respect.

Nor is it true that “deductive reasoning goes from the general to the specific.” Sometimes it does, sometimes not. It can go from universal premises to a universal conclusion ((1) All men are primates. (2) All primates are mammals. (3) Therefore all men are mammals.) Or it can go from a universal-particular combination of premises to a particular conclusion. ((1) All politicians are liars. (2) Some Democrats are politicians. (3) Therefore some Democrats are liars.) And so on.
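
Since, as noted, the book offers no account of what makes a form valid, here is one way to see that validity is purely structural. The following small Python sketch is mine, not Molyneux's, and it assumes the modern Boolean reading, on which "All S are P" counts as true when S is empty: a form is valid exactly when no assignment of sets to its terms makes every premise true and the conclusion false, and tiny domains suffice to expose the classic invalid forms.

```python
from itertools import combinations, product

DOMAIN = (0, 1, 2)
SUBSETS = [frozenset(c) for r in range(len(DOMAIN) + 1)
           for c in combinations(DOMAIN, r)]

def all_are(s, p):   # "All S are P": S is contained in P
    return s <= p

def some_are(s, p):  # "Some S are P": S and P overlap
    return bool(s & p)

def is_valid(premises, conclusion):
    """Search every assignment of sets to the terms S, M, P for a
    counterexample: premises all true, conclusion false."""
    for S, M, P in product(SUBSETS, repeat=3):
        if all(f(S, M, P) for f in premises) and not conclusion(S, M, P):
            return False  # counterexample found: the form is invalid
    return True

# "All politicians are liars; some Democrats are politicians; therefore
# some Democrats are liars."  (S=Democrats, M=politicians, P=liars)
print(is_valid([lambda S, M, P: all_are(M, P),
                lambda S, M, P: some_are(S, M)],
               lambda S, M, P: some_are(S, P)))  # True: a valid form

# Undistributed middle: "All P are M; all S are M; therefore all S are P."
print(is_valid([lambda S, M, P: all_are(P, M),
                lambda S, M, P: all_are(S, M)],
               lambda S, M, P: all_are(S, P)))   # False: invalid form
```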

Don’t expect any of these specifics from Molyneux, nor a discussion of the specifics of what happens when either a deductive or an inductive argument has gone awry. He doesn’t seem to understand the difference between formal fallacies and arguments with false premises.

Sometimes he is clear as mud, as when he states, “There is another category called abductive reasoning that draws a tentative hypothesis from disparate data, but which is related to some sort of testable hypothesis, rather than the reaching of a specific conclusion.”

For the record, here is my paraphrase of philosopher C.S. Peirce’s account of abduction (he coined the term): “Puzzling phenomenon P is observed. If H were true, then P would follow as a matter of course. Hence there is some reason for believing H to be true.” There’s work to be done, such as identifying what makes P “puzzling” and explaining “follow as a matter of course,” but it’s a start!
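
As a toy rendering (my own, and crude; Peirce offered no formalism like this), one can model "would follow as a matter of course" as a subjective likelihood and let abduction rank the candidate hypotheses. The winner earns "some reason for believing," not proof:

```python
def abduce(likelihoods):
    """likelihoods: {hypothesis: rough P(observation | hypothesis)}.
    Rank hypotheses by how unsurprising each would make the observation."""
    return sorted(likelihoods, key=likelihoods.get, reverse=True)

# Puzzling phenomenon P: the lawn is wet at dawn. (Invented numbers.)
candidates = {
    "it rained overnight":       0.95,  # would make a wet lawn routine
    "the sprinkler ran":         0.90,
    "a neighbor hosed the lawn": 0.20,
}

print(abduce(candidates))
# ['it rained overnight', 'the sprinkler ran', 'a neighbor hosed the lawn']
# The front-runner is a hypothesis worth testing, not a proven conclusion.
```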

Sadly, this book is riddled with so many errors and so much imprecision that we might as well stop here. Professionals are unlikely to finish it, giving up well before they get to Molyneux's libertarian arguments. This is unfortunate, because I think Molyneux means well (hence two stars instead of just one). He favors using reason to persuade instead of using threats of force to coerce or intimidate. So do I. He wants to save Western civilization from its enemies, some of whom hang out in academia. So do I. His book appeals to reason and inveighs against every form of leftist political correctness, every sort of irrationalist postmodernism and radical academic feminism, every blithe assumption that those in authority know what's best for the individual, and other mistakes of the past half century.

But if we’re going to undermine these with solid logic, we have to lay its groundwork correctly. There are sections here on definitions — introduced too late to do any good. There is a reference near the end to the importance of identifying fallacies but no careful and systematic discussion of them: ad hominem attacks, appeals to authority, appeals to pity, ad baculum or threats including ostracism, red herrings, equivocations, arguments from ignorance, and so on, which are central to any tract on logic or critical thinking and would have been invaluable here.

Nor is there any discussion of statistics or statistical fallacies, which would also have been invaluable in combating sloppy social policy. There is an abundance of discussion of philosophy vs. sophistry: the philosopher wants to find and communicate truth, says Molyneux rightly; the sophist wants control and will use lies or BS to obtain it. The former studies reasoning. The latter studies people (presumably psychology). The former makes his or her case. The latter uses language to confuse and manipulate. There are plenty of appeals to the connection between proper thought and reality, which is unhelpful, since in the areas Molyneux is most concerned about, social issues and matters of economic organization, there is massive disagreement about what the facts are and therefore about what reality is.

I've the impression that Molyneux learned more logic from participating in debates than from actual study of texts and of the available literature written by competent logicians. His book is therefore far more a defense of free-market economics and libertarian social philosophy than a treatise on argument; maybe that's why it's marketed here as political philosophy instead of logic. I'm sympathetic to many libertarian ideas, but I would not want to claim that they are panaceas, universally established by pure logic and empirical evidence (where, for example, do we actually see a functioning libertarian society of any size, enabling us to confirm that the libertarian self-regulating free market really does work in practice?). I don't think one has to be a leftist or an egalitarian to deny that Western societies are in some sense meritocracies, or could be, and therefore I can wonder whether certain government actions on behalf of, e.g., those who are poor or infirm through no fault of their own are requirements of a moral social order. This has been the prevailing opinion in modern thought, long predating the present PC mindset, and it has sometimes been argued for at great length; if one expects to be taken seriously, one has to grapple with those arguments on their own terms. Also, does anyone truly believe that, without legal mandates, corporations would be forthcoming about the actual (not merely possible) dangers of their products if those products make money? (Think: cigarettes, or the less extreme case of mildly addictive additives in unhealthy processed foods.)

Is this nitpicking? I don’t believe so. My response is that if you’re going to undertake a project like this, don’t do it half-assed! I write as an independent writer/scholar myself, someone who walked away from academia because of the prevalence of many of the superstitions enumerated above and how they’ve corrupted the entire enterprise. One thing I’ve learned: because of the prejudice of those “on the inside,” we are held to higher standards than they often hold themselves to (and I’m not saying the professionals haven’t written some stuff that is absolutely awful), and so we must hold ourselves to high standards or we might as well not bother. Sometimes it gets frustrating, and there are temptations to cut corners with, e.g., appeals to “common sense” that might not be shared by others. These need to be resisted.

To the best of my knowledge, a book that starts with the basic foundations and principles of logic and critical thinking, proceeds through definitions and fallacies in a logical sequence, and arrives at the kind of groundwork that would successfully take down the above academic superstitions in the public square has yet to be written. I am not sure how many academics these days would be motivated to try. Be that as it may, this book is not it.


Analytic Philosophy: An Informal Defense (and a Modest Criticism)

This post deals with a few basic issues in contemporary philosophy, issues easily lost sight of, depending on their inclinations, even by some trained professionals.

History discloses four major traditions, or more precisely methods, of doing philosophy. There is systematic or speculative philosophy: the tradition represented by pivotal figures such as Plato, Aristotle, Aquinas, Descartes, Kant, Hegel, and the Whitehead of Process and Reality. It tries to provide an account of reality and everything in it, including where we fit in, what is of value in human life, and the moral rules or principles by which we should guide our conduct, all integrated into a single consistent system. Then there is analytic philosophy, which developed very slowly out of a sense, long predating Frege, that the questions philosophers ask, and the language used to answer them, needed logical clarification. You can find plenty of hints of this in Leibniz and Wolff on the Continent, predating Kant, and you'll find similar moments in Locke and Hume in the English-speaking world. There are also figures who cross-pollinate the two traditions, such as the early Wittgenstein, whose Tractatus Logico-Philosophicus is systematic but tries to draw the limits of thought by exhibiting the limits of the logic of our language.

(The other two schools are, of course, the existentialist-phenomenological tradition: philosophy should describe the human condition, which may mean writing novels as Sartre and Camus did or mean producing detailed analyses of lived experience as phenomenologists beginning with Husserl did; and Marxism / Frankfurt School thought: philosophers have tried to describe the world; the point is to change it.)

My topic here is analytic philosophy. Some say most of it is trivial, or at least inconsequential. I beg to differ: while some analysts assuredly go overboard with the logic-chopping, the method has real potential to make a difference in our understanding of language. If we give analytic philosophy a chance, we find that its approach to philosophical method is of the first importance.

What's important about analytic philosophy is the sense, which one does not have to be a trained philosopher to appreciate, that it is frequently important to clarify what a question is asking, or what a conclusion is really asserting, as a condition for knowing whether one has gotten anywhere with an inquiry. This is surely true of the traditional problems of philosophy. The difference of methods is the difference between asserting that God exists, perhaps backed up with a standard argument, and asking: What does the term 'God' mean? Does it mean the same thing to everyone? Or between being asked "Are you free?" and realizing the need for clarity: "What sense of 'free' are you talking about?" Even the far more specific question, "Is your will free?", gets into trouble absent a clear sense of what it is asking and of what a good answer to it looks like.

So analytic philosophers have seen their job as stepping back from the Big Questions and asking what they mean, through close attention to the language one uses to ask them and to defend specific answers to them. If we cannot get clear about what we are talking about, in these cases or many others, then we can hardly expect to achieve results that anyone will agree on or find useful. If insufficient attention is paid to language, then interlocutors who disagree will continue to talk right past one another … which, of course, they might do anyway, but for other and less savory reasons than mere linguistic or intellectual confusion.

So by God most of us mean the God of Christianity, or of the Bible: a Unique Being, uncreated, existing outside of space and time as we experience and understand them (which may be Western constructs in any event), all-powerful and able to suspend causality to perform miracles, all-knowing, His nature manifesting both perfect logicality and perfect moral goodness, and perhaps more. All these concepts may (do) stand in need of further analysis, but with this sense of what we are talking about, we have material to work with (which may, of course, be old hat … or not). For there is also, some will note at once, the Creator invoked by deists, who doesn't necessarily have all the above characteristics, since he doesn't intervene supernaturally in the world. Analytic philosophy of religion can clarify such differences and perhaps contribute to the discussion of why they might matter for such policies as the separation of church and state.

With freedom of the will, we find ourselves exploring issues dating back at least to Hume's and Kant's time, such as whether free will means action taken outside the causal structure of the universe: free actions being those for which what I want to do is the sole determinant. This, of course, raises further questions, such as: if what I want to do is determined by nothing outside itself, how does it avoid being completely arbitrary? One answer is that there are influences, but not determinants, on what I want to do, and these can be identified only in the case of specific, concrete actions. Influences, unlike determinants, can conflict with one another … as anyone knows who has had a decision to make and been caught between conflicting impulses. There are many other answers as well; and although it is a separate discussion, because there are many senses of free, I would argue that it is confusing and misleading to bifurcate freedom of the will and influences (even determinants) on our actions as if the difference were absolute.

Analytic philosophy did more than explore conundrums like these, of course. While there continued to be healthy explorations of the traditional questions of philosophy, and an insistence that they be given answers that made logical sense (and, for the positivist and logical empiricist sub-schools of analytic philosophy, answers fully consistent with the pronouncements of the specialized sciences), there were also explorations of philosophical method itself, including whether philosophy could have a method of its own, resolving special logical-mathematical-set-theoretical conundrums such as the Liar's Paradox ("I am at this moment lying to you; do you believe me or not?") and the paradoxes of set theory with which Russell wrestled. Soon came close analyses of the language and justification of the findings of physical science (the origin of modern philosophy of science), deemed important because physics had just undergone its most significant revolution since the scientific revolution itself, with the fall of the Newtonian edifice.

A division appeared among analysts, however, over whether they should approach their subject matter from the standpoint of an ideal language (the preferred candidate being the formal logic developed during the 1800s and refined by Russell, Whitehead, and the early Wittgenstein), or whether they should pay more attention to language in its "ordinary," unrefined and unreconstructed usages. The later Wittgenstein and Strawson were among the first proponents of the latter approach, and the term ordinary language philosophy was coined as more and more British philosophers came on board, much to the chagrin of ideal language philosophers such as Russell. Ordinary language philosophy's apparent advantage was that it could take account of the many uses to which language is put, uses not captured by truth-functional accounts. We use language not merely to assert true statements about things but to request information (ask questions), provide instructions, give orders, express emotions, tell stories, tell jokes … and there are other uses we'll encounter presently.

There can be no single "ordinary" language, of course; there are many "ordinary" languages. Every natural spoken language, in the raw, unanalyzed form used by a community of native speakers, can be viewed as an "ordinary" language; and natural languages, responding not to the severe requirements of an abstract formal logic but to a multitude of practical, workaday demands placed on them by users, grade into more formal languages as they become specialized in various endeavors, be they scientific, technological, commercial, entertainment-focused, or some combination of these. The difference between a formal language and a natural one is therefore a continuum, not a dichotomy. (Regrettably, Western philosophy is full of untenable dichotomies, but that, too, is a post for another day.) Thus the term I prefer over ordinary language philosophy is natural language philosophy; it is more versatile, able to cover more territory in the human world. Charles Morris, who sought to integrate the insights of both pragmatism and behavioral psychology into analytic philosophy (and who was instrumental in enabling many logical positivists to emigrate to the U.S.), distinguished three areas of inquiry: syntax (or syntactics), the purely formal relations between signs; semantics, the relations between signs and objects or categories of objects; and pragmatics, the relations between signs and sign-users. This trichotomous study has sometimes been called semiotics, the study of signs.
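
To fix the trichotomy in mind, here is an illustrative caricature (entirely my own; the checks are toys, not serious linguistics) asking Morris's three different questions about one and the same sentence:

```python
sentence = "The cat is on the mat."

# Syntactics: sign-to-sign relations. Is the string well-formed?
def syntactically_ok(s):
    return s[0].isupper() and s.endswith(".")

# Semantics: sign-to-object relations. What in the world makes it true?
world = {"cat_on_mat": True}
def semantic_value(w):
    return w["cat_on_mat"]

# Pragmatics: sign-to-user relations. What is the speaker doing with it?
def pragmatic_force(speaker_goal):
    return {"inform": "report",
            "warn allergy sufferer": "warning",
            "nag about the cat": "complaint"}.get(speaker_goal, "unknown")

print(syntactically_ok(sentence))               # True
print(semantic_value(world))                    # True
print(pragmatic_force("warn allergy sufferer")) # 'warning'
```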

The third of these, pragmatics, is by far the most important. If we wish analytic philosophy to be relevant to the world outside of academic cubicles and seminar rooms, it is where we should end up. The later Wittgenstein and his philosophical progeny enlightened us about the many uses to which language can be put, depending on what a speaker wants to accomplish. Examination of the many motives human beings bring to language use can enhance any such account.

Propaganda is, of course, one such use — the one I personally find both the most interesting and the most useful to analyze. Language can be used to utter true statements, or statements that are true to the best of the speaker's knowledge, which is invariably somewhat fallible. It can have other innocuous uses, such as those listed above. Or it can be used to utter statements the speaker knows to be false, or is unsure of while seeking to conceal his or her uncertainty. It may be fair to call many such statements lies. Or, of course, language can be used in assertions whose truth value the speaker does not care about one way or the other, in which case the term bullshit has almost become part of the standard lexicon, following Harry Frankfurt's ingenious short analysis of it (On Bullshit, 2005; cf. also his On Truth, 2006). There are philosophers paying attention to such issues (e.g., Jason Stanley; cf. his How Propaganda Works, 2015, which I hope to discuss in a future post). The fact that I can virtually count them on my fingers is part of today's problem.

Language is used propagandistically when it is used to mislead, so that its intended audience goes away believing something to be true that is really false, or at least insufficiently supported by publicly available evidence. We can all think of examples. Most of us have probably fallen into the trap of contributing a few of our own. An example Stanley conceded he hadn't thought of (his slant being, on the whole, academically left of center) is homophobia. It is an example I have mentioned before, just one of a family of such examples characterized by the use of the suffix phobia against critics of a belief or lifestyle or some combination thereof: Islamophobia, transphobia, and so on. A phobia is, of course, a recognized mental illness. Examples of real phobias include claustrophobia and agoraphobia. One does not respond to the arguments made by sufferers of a phobia, e.g., that one really is in danger of suffocating because the place is enclosed. One never assumes their view of a situation is veridical. One therefore tries to cure them with therapy. Apply this to the examples above. It means that those accused of homophobia, Islamophobia, and now transphobia, etc., are falsely if indirectly accused of suffering from mental illness because they criticized specific assertions, activities, and policy decisions; their conclusions are therefore not seen as worth arguing against. Maybe they can be "cured" with the "therapy" of "sensitivity training." (It is very interesting that although the word Christophobia has been used in a similar context by Christians, it has never caught on and remains generally unknown.)

This is how propaganda actually works, both inside and outside academia, and analytic philosophers (especially those with the protection of tenure) can be criticized for their unwillingness to go anywhere near such examples. This is the "modest" criticism of my title. I keep it modest because the majority of professional philosophers are introverts; even those who are not have little taste for the rough-and-tumble world of political discussion, where it is easy to conclude that arguments and evidence aren't what matter. (Most who do enter it are contributing the wrong things! Or so I would argue.)

Distinguishing between the use of a word or phrase and the mention of it is also useful in analytic philosophy; consciousness of the distinction could prevent many mishaps, including some that have sabotaged careers. A formal account will be helpful. To use a word in a sentence is to say something about the word's referent, or its reference class, in the natural language where it is most at home. If I assert, "The cat is on the mat," I am obviously saying something about a particular cat ("The cat …"), a nonlinguistic entity in the world. Or about the entire reference class: "Cats tend to be nocturnal animals." If I assert, on the other hand, "'Cat' is an English word with three letters," or perhaps, "'Cat' is an easy word for children to learn with an appropriate picture," I am not using the word to refer. I am mentioning it, to say something about a linguistic entity. The bulk of our discourse about language(s) consists, obviously, of mentions and not uses.

If this distinction can be made clear and public, then mentions of words or phrases such as illegal immigrant, or of other words and phrases deemed derogatory of politically protected groups (or said to constitute hate speech, again imputing a mental condition to the user or mentioner absent any evidence of such, or to create a hostile work or academic environment given today's prevailing hypersensitivities) and therefore declared verboten, should be seen as a legitimate part of intellectual inquiry, even by those who reject using them. Mentioning is already done by the groups themselves on occasion. If asked, some of their members could recount how they reclaimed a word or phrase so that their uses of it are no longer negative or hateful, on the condition that its use be limited to their own speakers: as when blacks use the word nigger among themselves to refer to one of their own, or when homosexuals use the word queer for themselves, or as part of phrases such as queer theory that are now part of the academic lexicon. I trust it is clear that all of these, as they appear in this paragraph, are mentions and not uses (mentions of mentions, if you will). A use is always about something nonlinguistic. A mention is always of something linguistic.
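
Programming languages happen to enforce an exact analogue of this distinction, which makes for a convenient illustration (mine, not a standard one): an unquoted name operates on its referent, while quotation marks make the word itself the object of discussion.

```python
class Cat:
    """Stand-in for the nonlinguistic animal."""
    nocturnal = True

felix = Cat()

# USE: the name points through to its referent; this is a claim about a cat.
print(felix.nocturnal)   # True

# MENTION: quotation makes the word itself the subject of the claim.
print(len("cat"))        # 3: 'cat' is an English word with three letters
print("cat".upper())     # 'CAT': still a fact about the word, not the animal
```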

In sum, analytic philosophy serves important purposes within philosophy, and, if one knows where to look, it can serve important purposes in analyzing the natural language in which trends and tendencies that affect the "public mind" are expressed. That includes "hot button" issues that, sadly, fill the public discourse with more heat than light. Be this as it may, one should be able to see from this short discussion that analytic philosophy is potentially far from trivial, and that many of its most powerful techniques are sadly underutilized. Analyzing the language of op-eds, policy statements, political speeches, and related pronouncements of all kinds is part of a broader service analytically trained philosophers are capable of performing, if only more would rise to the occasion, and if they could further train themselves to reflect on their own ideological biases where these exist, seeing them for what they are, and not avoid analyses of words or phrases favorable to an ideology they prefer.

 

Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money each day.  

Telling the truth can have negative consequences. Around this time last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, and the device gradually became unusable — a reason I haven’t been around much lately — and I’ve had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101 (a globalist speaks in a voice filled with irony and dripping with cynicism). Promoting a book means, in my case, international travel, which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.

Posted in Academic Politics, analytic philosophy, Language, Philosophy

Official Narratives

Note:  the post below is a brief excerpt from the central section of a much longer work in progress, tentatively entitled Confessions of an (Ex) Academic Dissident, which may or may not see the light of print someday. The topic, though, seems important enough given our present situation that it merits separate posting. In fact, I am kicking myself for not having done this long ago. Official narratives should be of interest to philosophers concerned with propagandistic barriers to important truths of various sorts. They are, after all, a form of propaganda that is not usually recognized as such, because they are literally everywhere …  

For our purposes, let’s define an official narrative as a government-approved and media-sanitized account of some dramatic event, such as an assassination or war or terrorist attack or mass shooting, or perhaps any major event that went contrary to official expectations, such as the outcome of a national election. An official narrative can often be identified as such by appearing in relatively complete form very quickly after the event that prompted it, and then being reiterated endlessly in all major media, its essentials never again questioned by “responsible” commentators — frequently despite the absence of actual evidence (witnesses, physical evidence, a “smoking gun,” etc.). A convenient enemy is named whose motivations explain the event. Patriotism may be invoked (or possibly its opposite, as the case may be), as this also suspends judgment and helps manufacture public consent around the narrative.(1)

Two obvious examples are the immediate arrest of Lee Harvey Oswald for the assassination of President John F. Kennedy, as Oswald had lived in the Soviet Union, had recently sought to travel to Cuba, and could easily be associated in the public’s mind with Communism; and the announcement of Osama bin Laden as the mastermind behind the 9/11 attacks within hours of the attacks themselves — bin Laden being easily demonized as an Islamic jihadist. Mass media regaled viewers endlessly with televised images of the burning towers, creating a climate of fear as the name Osama bin Laden and, later, phrases like axis of evil were heard over and over, like mantras.

Language is indeed used — repeatedly — to bewitch the public’s intelligence and take all thought deemed legitimate in a desired direction.

A third and brilliant exemplar of an official narrative was falling into place even as I was putting the final version of this essay together: the idea that Russian hackers and other agents of the Russian government, directed by Russia’s president Vladimir Putin, influenced the November 8, 2016 election that produced Donald Trump’s Electoral College victory over Hillary Clinton, possibly colluding with members of the Trump campaign, including by delivering hacked John Podesta emails to Julian Assange’s WikiLeaks, badly damaging the already-troubled Clinton campaign. During the December days leading up to the actual Electoral College vote on December 19, this claim, originating within the intelligence community including the CIA, was repeated continually despite the absence of evidence that it was true. We were clearly expected to accept the word of the CIA and media reportage on faith. This despite the CIA’s having “inaccurately” claimed, back in 2002, that Iraq’s Saddam Hussein had weapons of mass destruction and was able and willing to use them against Americans, leading to the most disastrous war since Vietnam — a war the U.S. started!

That this is not labeled a conspiracy theory is telling!

Another key sign of an official narrative is that its targeted audience (usually the general public) is expected to accept it on faith, which means simply not seeing or hearing what does not fit the narrative. Ignored in the case of the Kennedy assassination is the fact that a single bullet inflicting the documented damage, which included wounding Texas Gov. John B. Connally, seated in front of Kennedy in the motorcade, would have violated the laws of physics: hence the phrase magic bullet, which should have been a dead giveaway that something wasn’t right. Also ignored is that with the vehicle in which Kennedy was riding in motion, however slowly, and with no injuries to either spouse seated beside the two men, the shooting was clearly the work of a trained professional, which Oswald was not.(2)

In the case of 9/11, surely it isn’t crazy to wonder how 19 hijackers, 15 of them Saudis, were able to hijack four planes, presumably without trial runs of any sort, fly them (or force their hapless pilots to fly them, as there was no documented evidence of the hijackers’ own ability to do so) for lengthy periods of time across multiple states, crash two of them into the Twin Towers, and fly a third into the Pentagon after executing a tight maneuver that experienced pilots are on record as saying they could not duplicate. Where, precisely, was the most expansive (and expensive) multilayered air defense system in the world that morning? This is only a smattering of what is left unexplained by the 9/11 official narrative. Also ignored are the claims of those escaping the Twin Towers to have heard explosions in the buildings that could not have been caused by burning jet fuel, as they came from below, not above, and the scientists who claim that the laws of physics preclude the specific kind of collapse that was witnessed not twice but three times that day: there was also the mysterious collapse of a third tower, WTC-7 (ignored by the much-touted 9/11 Commission Report), which had not been struck by anything substantial.(3)

In the case of the supposed Russian hackers and agents (soon to include Sergey Kislyak, Russian ambassador to the U.S.!), it is reasonably clear that something unusual occurred to tilt the election in Trump’s favor at the eleventh hour. WikiLeaks had released one of its infamous data dumps just days before, and it contained some potentially damning information about Hillary Clinton. WikiLeaks founder Julian Assange insisted that his source was not the Russians. One can believe him or not. Or one can believe, or not, the account that Seth Rich, a disgruntled Bernie Sanders supporter, was murdered because he was the leaker of the information intended to hurt Clinton. There is, of course, no “smoking gun” evidence proving this to be true, just as there is no “smoking gun” evidence linking anyone in Russia to the outcome of the 2016 election — none made available for public inspection, anyway. But it is interesting that despite this structural similarity between the two claims, the Seth Rich allegation alone has been repeatedly labeled a conspiracy theory by all major media.

(1)  Cf. Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (Pantheon, 1988, 2002).

(2)  See James H. Fetzer, ed., Assassination Science: Experts Speak Out on the Death of JFK (Open Court, 1998). More recent work on the Kennedy assassination focuses not just on the hows but on the whys, and draws conclusions that should be considerably more disturbing to anyone who believes the U.S. is really a representative democracy (cf. the next section). Cf. also James W. Douglass, JFK and the Unspeakable: Why He Died and Why It Matters (Touchstone, 2010) or David Talbot, The Devil’s Chessboard: Allen Dulles, the CIA, and the Rise of America’s Secret Government (Harper, 2015).

(3)  Before dismissing the allegations implied in this paragraph out of hand, readers ought to sit down with materials some of us have spent years with. A good place to begin is Jesse Richard, “You only believe the official 9/11 story because you don’t know the official 9/11 story,” http://www.globalresearch.ca/index.php?content=va&aid=26340 (accessed 2 September 2011). Follow it up by actually reading works such as David Ray Griffin, The New Pearl Harbor (Olive Branch Press, 2004); Steven E. Jones, “Why indeed did the WTC buildings collapse?” http://www.physics.byu.edu/research/energy/htm7.html (accessed 23 December 2005); Rowland Morgan and Ian Henshall, 9/11 Revealed (Avalon Press, 2005); James H. Fetzer, ed., The 9/11 Conspiracy: The Scamming of America (Open Court, 2007); or Judy Wood, Where Did the Towers Go? Evidence of Directed Free-Energy Technology on 9/11 (The New Investigation, 6th, 2010). All are perhaps worth reading in light of Michael C. Ruppert’s revealing Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil (New Society Publishers, 2004). There are time-stamped videos, finally, of reporters announcing the collapse of the third World Trade Center tower, WTC-7, before it fell. Indeed, the building is visible in the background if you know which one it is: another dead giveaway to anyone paying attention that something is seriously amiss with the official narrative of what happened that day! Did someone miscount the number of time zones?


Posted in Culture, Election 2016 and Aftermath, Language, Political Economy

Wittgenstein’s Two Greatest Insights About Language

We’re back, after another unfortunate hiatus caused by a lingering illness and furthered by a computer meltdown. Might as well accept it: I will never be a technology person. But anywize….

This post is one I’ve been planning for some time. One could argue that Ludwig Wittgenstein was the twentieth century’s most important philosopher. He made a substantial contribution to the ideal language analytic philosophy that began with Frege and Russell and emphasized formal systems, and he basically pioneered the natural language analytic philosophy that arose during the 1940s and even more during the 1950s, emphasizing the varieties of uses or purposes natural language serves. Wittgenstein was one of those rare thinkers who developed two quite different philosophies, the second of which was a devastating criticism and rejection of the first. He also greatly influenced the historicists in philosophy of science (Toulmin, Hanson, Kuhn, Feyerabend, etc.), allowed bridges to be built to other philosophical traditions such as French poststructuralism (Foucault) and, outside of philosophy, to cultural anthropology (Geertz).

To my mind, two of Wittgenstein’s statements stand out as singularly profound, and important in ways going far beyond the antiseptic groves of academia.

Despite the popularity of logical positivism at the time, a close reading of the Tractatus shows that Wittgenstein was no positivist: the influences on his philosophy of language and its relationship to thought — to what can be said versus what cannot be said — range well outside Russell and Frege. Think Tolstoy, for example, or possibly Kierkegaard, or even Kafka. But all of them aside, one statement late in the Tractatus suggests that Wittgenstein was already trying to think outside the set of conceptual boxes that were constraining ideal language analytic philosophy.

That is the first of the two statements: “In philosophy the question, ‘What do we actually use this word or this proposition for?’ repeatedly leads to valuable insights.” (Final paragraph of 6.211, Pears/McGuinness translation.)

The early Wittgenstein is observing here that whatever supremacy one grants to logical form, use matters. Paying attention to it leads to “valuable insights.”

Language is more than form (logical, grammatical). It has purposes: communication, the storing of information, instruction, and so on. Wittgenstein lists a variety of the uses of language near the beginning of Philosophical Investigations. A language is a system. More precisely, it is a concatenation of systems: its own set of sounds or phonemes used in often specific combinations (it is interesting to observe that nonnative speakers of a given language will typically have difficulty pronouncing sounds or combinations their native language does not use, especially if they are trying to learn the language as adults); the combination of these into words; rules attaching words and phrases to objects or classes of objects or attributes; a grammar allowing for present continuous, past simple, future, and other tenses; words allowing for relations in space and time and movements within them (prepositions); and so on.

These, the careful student of language eventually must realize, are conventions only. There is no logical or any other necessity about them. They are what they are because they allowed for solving problems of communication, etc., and are taken for granted within a community of speakers.

This all becomes evident to learners of a second language if they spend some time reflecting. It’s tempting for a native English speaker such as myself to ask of Spanish, for example, Why do they say it that way? There’s no good answer to such a question beyond the fact that these are the linguistic conventions speakers accept and use, having inherited them from generations past. One may make the point further by looking back at English and retorting, Well, why do we English speakers do it this way and not some other way? It’s actually easier to ask the question of English, given that Spanish is a pure Romance language following the streamlined internal conventions common to Romance languages, while English is messier, having drawn from both Romance and Germanic roots, resulting in greater complexity. In any event, one embraces and uses the conventions of the target language if one hopes to be understood by its speakers.

The point, though, is that a language consists of conventions “all the way down,” one might say. There is no “metaphysical” necessity linking a word or phrase to a given object or class of objects, not in English or in any other language. These conventions include usages which can be observed; one can learn a language such as French or Spanish by noting what words and phrases in those languages are being used to do: the situations in which those words or phrases occur, the responses they generate from others, etc. Sometimes these usages involve human motivations which change over time. To the extent that the aggregate motivations of a human community affect the uses of language, its conventions are somewhat flexible, although this flexibility is always limited. New words and phrases are coined; others fall into disuse; a few are changed beyond all recognition. An example of the latter is the word gay. One may review the attributes this word was used for in, say, the 1950s, versus its conventional uses today. Conventions change in response to pressures placed on them, and these can come from a variety of directions, including political ones.
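This kind of drift is, at least in principle, empirically observable: one can compare the words that keep company with gay in texts from different decades. Here is a toy sketch of the idea in Python (entirely my own illustration; the two miniature “corpora” are hypothetical, and a real study would use millions of words):

```python
# Toy sketch: observing a word's changing conventions by comparing its
# neighboring words ("collocates") in two text samples. Illustrative
# only; the samples below are invented.
from collections import Counter

def collocates(text, target, window=2):
    """Count words appearing within `window` positions of `target`."""
    tokens = text.lower().split()
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

sample_1950s = "a gay tune and a gay evening of bright gay laughter"
sample_today = "a gay couple at the gay rights march spoke about gay marriage"

print(collocates(sample_1950s, "gay").most_common(3))
print(collocates(sample_today, "gay").most_common(3))
```

The shifting collocates are the observable face of a shifting convention.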

Summing up this part of the post: the primary insight is that we learn the conventional nature of language by observing what words and phrases are used by speakers to do, and by noting how these uses change over time.

In Philosophical Investigations the later Wittgenstein makes many observations worth pondering, but the one I would single out is: “Philosophy is a battle against the bewitchment of our intelligence by means of language.” End of paragraph 109, Anscombe translation.

Unfortunately, Wittgenstein’s primary concern, very likely his working premise, was that departures from the conventions that govern “ordinary usage” had given rise to the traditional philosophical problems about knowledge and certainty, perception, free will versus determinism, and so on. Again, what do the standard conventions of the English language permit users of terms like knowledge, certainty, free, etc., to do? Has a philosopher’s language “gone on holiday” (another memorable Wittgenstein phrase from Philosophical Investigations) when he asserts, “I am certain that p” as opposed to just asserting “p”? I do not believe Wittgenstein ever dwells on the third of the above, free will versus determinism, but other philosophers did in his wake, and their results helped sort out the often-ambiguous and therefore confusing ways in which we describe ourselves as “free.” I used to ask students, “Are you free?” The proper response was to ask, “What do you mean by free?” That is, to ask for a list of the standard conventions governing our uses of that word.

Calling something a convention, however, does not mean we should never question it.

Given the range of uses of language, if philosophy is indeed a battle against the bewitchment of our intelligence by means of language, we neither can nor should confine ourselves to the ‘problems of philosophy,’ including the preoccupations of nearly all analytic philosophers, past and present. Human motivation enters here. Not all uses of language aim at uttering true or even useful propositions. Lies, conventionally, are propositions the speaker knows are false but wants you, the listener, to believe are true. And if Harry Frankfurt is right, bullshit consists of propositions uttered by a speaker who doesn’t much care about truth or falsity, only about achieving some other effect, such as muddying the linguistic waters. (See his On Bullshit, 2005.)

It is necessary, in this light, for philosophers (and any other interested parties) to consider propaganda, which uses language according to, or to establish, conventions less concerned with truth or falsity and more concerned with leading an audience in a desired direction. This direction may favor or disfavor whatever is being described propagandistically.

Propagandistic language may be used to discourage independent investigation of an idea by discrediting it through what may be called weaponized language. An example is homophobia … a term now almost automatically applied across the board to anyone critical of the homosexual lifestyle, or of homosexual unions. This term has become conventional. One sees it everywhere. A phobia is, of course, an irrational fear. People suffering from irrational fears are not answered with logic; they are offered cures, or at least attempts to control their fear (“sensitivity training”?). There are legitimate phobias (agoraphobia, claustrophobia, etc.). Is homophobia one of these? What makes it such? The word was not in wide use prior to the 1990s, which alone makes the matter suspicious, if not decisive. We can raise the matter because of those worldviews, religious or otherwise, that reject the mainstreaming of homosexual conduct, or produce claims, backed by scientific evidence independent of anyone’s theology, that the lifestyle is physically damaging and actually shortens the lifespans of those who practice it. I cannot claim to know, sitting here, that such claims are true (I suspect they are). What I am sure of, though, is that they cannot be dismissed prior to investigation, and should not be circumvented by the application of a term designed to portray anyone raising the issue as subject to an irrational fear.

A similar situation exists with regard to the phrase conspiracy theory. As conventionally used in major media, calling something a conspiracy theory equates to dismissing it as irrational, and is usually sufficient to discourage closer looks. But is this the same as having supplied evidence that the idea to be rejected really is irrational, or unsupported by any sort of fact? Given the history of the convention, including its deployment by the Central Intelligence Agency back in 1967 to circumvent the arguments brought forth for questioning the Warren Commission Report on the Kennedy assassination, we have every reason to be suspicious of it. Why, in that case, is the claim that Seth Rich was murdered because he supplied Democratic Party emails to WikiLeaks (a claim for which we have no smoking-gun evidence) a conspiracy theory, while the claim that members of the Trump campaign might have colluded with agents of the Russian government to affect the outcome of the 2016 election (a claim also without smoking-gun evidence) is not?

Calling a usage conventional does not exempt it from criticism if we can draw attention to what a word or phrase is being used to do (Wittgenstein’s first observation above), and especially if we can see that what it is being used to do is to bewitch our intelligence (Wittgenstein’s second observation).

It would be a great use of analytic philosophy of natural language, especially that which takes Wittgenstein as its point of departure, if philosophers were to begin questioning usages of terms and phrases that are clearly propagandistic, designed to sway audiences and move them in desired directions, despite their newfound conventional status. I used the two examples I thought of first. There are dozens of others. I am sure readers can think of some without trying too hard.

Posted in Language, Philosophy

Academia Embarrasses Itself Again: the Hypatia Affair

The last time I wrote a piece of this sort, an exposé of academic philosophers embarrassing themselves, it caused me some problems. I try to learn from my mistakes, and what I learned from that occasion could be set down as a general rule: other things being equal, before evaluating someone based on what they’ve said on social media, first be sure to investigate their major statements (articles, books if available, etc.) rather than respond to something hastily-scribbled on Facebook or Twitter.

But that doesn’t apply to what I’m about to discuss, and I mention it only as background, for completeness’ sake. For at issue here isn’t an ill-considered Facebook post or, God help us, a 140-character tweet, but an article deemed to have passed muster for, and which appeared in, a refereed academic journal. The journal is Hypatia, one of the major mouthpieces of radical academic feminism for over 30 years now. It’s the sort of journal that would not have been conceived back in the happy and carefree days of, say, logical positivism, whatever that school’s drawbacks, but which fits perfectly into our unhappy era. After all, major spokespersons for what is now considered important scholarship would denounce logical positivism not for its actual drawbacks (logical, methodological, epistemological) but because nearly all its practitioners were too pale of flesh, too “cisgendered,” and because their methods didn’t take into account the anguish and the horrific hurt done to others’ feelings by correspondence rules or theoretical reduction or even just an insistence on clarity and exactitude. The preceding might seem a caricature. I’m not sure it is!

The article in question was penned by Rebecca Tuvel, an untenured assistant professor of philosophy at Rhodes College in Memphis, Tenn. The article’s title was “In Defense of Transracialism.” I will admit up front: I haven’t read it and have no plans to do so. I did read the abstract, though (available most easily here in case the link to the original no longer works). I gather that the attempted neologism transracialism refers to someone such as Rachel Dolezal who, though born white, has attempted to portray herself as black and even, at least by implication, to transition to blackness. Her story gained some notoriety as it appeared at roughly the same time as the now-infamous celebrity gender transition of Bruce Jenner to “Caitlyn” Jenner.

Tuvel’s article appears to have employed a familiar and fairly standard argument form, that of arguing from analogy. (If I am wrong about this and the article does something else, someone will have to comment and inform me.) Thus considerations that apply to one category of transitioning person might apply to another, perhaps more hypothetical, category of transitioning person, based on identifiable relevant similarities. There is a review and analysis of the article here, and Tuvel’s article looks to be as well-reasoned as anything to be found in academic philosophy today, despite the trendiness of its subject matter (and, if we’re honest, we must acknowledge that the happy and carefree days of logical positivism were hardly free of trendiness). Going off the abstract, the article seems cautious and conservative in the sense that Tuvel apparently never actually avers that Dolezal is a “transracial” person.

What she appears not to have considered, though, is the singular trait of our unhappy era: the degree to which hurt feelings and a sense of offense trump even the best arguments and inspire explosions of unbridled rage. The rage against this most likely innocent author would spill onto the journal that published her work. Let it never be said that politicized academic radicals will hesitate for a minute to savage one or even many of their own at the slightest indication of independent thought stepping across a constantly-shifting boundary into heresy!

Sorting out the exact sequence of events by referring to the originals is problematic, as some of the links have gone dead (I would probably have removed them, too). But, assuming I can rely on this, some 150 “new scholars” and their tenured-class supporters (including a couple of people on her dissertation committee!) put together and signed an “Open Letter to Hypatia” denouncing the article (read it here). They demanded that Tuvel’s article be retracted, alleging that it “caused … harm” (whatever that means, as no evidence of “harm” is presented or cited), and even demanded — taking presumption and arrogance to never-before-seen heights — that Hypatia revise its editorial standards to ensure that no article of this sort ever again gets through its review process.

This comes in place of what would most likely have happened back in those happy and carefree days. An article with a novel and perhaps controversial idea would be refereed and published. Some ambitious person, probably young and untenured, would find elements of its argument problematic and craft a carefully nuanced discussion piece, possibly eliciting a response from the first author. Further discussion would be generated, possibly leading to a round table type debate during the next available time slot at the next APA convention. There would be no (or very little) hostility. People’s feelings would not be at issue; their motives would not be impugned; everyone would be assumed to be attempting to advance a common conversation to greater understanding of the original article’s area of reference.

Would this happen now, with today’s race-obsessed, gender-drenched, etc., academic culture?

You’re kidding, right?

In our unhappy, post-Enlightenment era, emotions have largely overwhelmed reason, especially among the noisy, politicized subdisciplinary “new scholars,” and the sort of procedure I just described would be, if anything, far too slow. Far easier instead — especially given the near-instant communications now available to everyone — to satiate the passions of the moment and dash off a nasty list of superficial criticisms and allegations. These boil down to the author’s failure to accommodate the latest fashions of, e.g., the “critical theory” that dominates these subdisciplines. Suffice it to say, some of what you’ll read in the “Open Letter” (assuming the link continues to work) goes beyond standard academic criticism of an author’s work to the borderline defamatory — this is the judgment of Professor Brian Leiter, who covered the affair on his widely read Leiter Reports blog, not mine. He goes on to express “hope” that Tuvel consults a lawyer to discuss her options, even offering in one of his several posts to help her obtain contacts and deal with legal expenses!

Whatever one’s negative judgment of logical positivism, its practitioners did not defame one another in print! Ah, those happy and carefree days …

The situation is worse than the above makes it appear.

Hypatia’s editorial board instantly caved!

Their statement stands as an exhibition of all that is wrong with American academia today, especially the humanities! I will reproduce the first paragraph of their “apology” (hardly an apologetic in the ancient Greek sense of Plato’s “Apology”!):

We, the members of Hypatia’s Board of Associate Editors, extend our profound apology to our friends and colleagues in feminist philosophy, especially transfeminists, queer feminists, and feminists of color, for the harms that the publication of the article on transracialism has caused. The sources of those harms are multiple, and include: descriptions of trans lives that perpetuate harmful assumptions and (not coincidentally) ignore important scholarship by trans philosophers; the practice of deadnaming, in which a trans person’s name is accompanied by a reference to the name they were assigned at birth; the use of methodologies which take up important social and political phenomena in dehistoricized and decontextualized ways, thus neglecting to address and take seriously the ways in which those phenomena marginalize and commit acts of violence upon actual persons; and an insufficient engagement with the field of critical race theory. Perhaps most fundamentally, to compare ethically the lived experience of trans people (from a distinctly external perspective) primarily to a single example of a white person claiming to have adopted a black identity creates an equivalency that fails to recognize the history of racial appropriation, while also associating trans people with racial appropriation. We recognize and mourn that these harms will disproportionately fall upon those members of our community who continue to experience marginalization and discrimination due to racism and cisnormativity.

Anyone so inclined can read the whole thing here, where Professor Leiter reproduced it. His self-description as a New Yorker notwithstanding (referencing a low tolerance for self-evident bullshit), he clearly has a stronger stomach for this sort of thing than I do. Maybe that’s a necessary rite of passage for entrance to tenured-class status in our unhappy era. Suffice it to say, the remainder of Hypatia’s longwinded and neologism-laden “apology” caved to the demands on every point, even the one to revise editorial policy. We even get to learn from the above one of the newest neologisms, deadnaming: the speech crime of referring to the name a “trans” person used prior to their “transing” (or whatever the hell we’re supposed to call it). Critics of Tuvel’s article went further than the above block quote implies, impugning her abilities as a writer and a scholar; it was this that Leiter, with his legal acuity, picked up on. Others have as well. Several other responses to the “Open Letter” and the Hypatia cave-in have appeared as word has spread.

Leiter’s overall views are left-of-center on most issues of public policy, as with the majority of academics of his generation, but not batshit-insane radical leftist. He calls the entire affair a “fiasco.” It’s hard to disagree with that, but easy to strengthen the description. Leiter would not agree with my overall take, however, for I do not think it suffices to treat this episode in isolation. It belongs alongside numerous other recent events, such as the obvious suppression of conservative speech on campuses reflected in, e.g., the cancellation of Ann Coulter’s scheduled appearance at Berkeley in the face of threats of violence, and within the still-broader historical context of what has happened to academia since the present unhappy era began: the 1970s.

What happened was the emergence of the “argument” (it was always far more an exercise in propagandizing and then bullying, at which the academic left has always excelled) that minorities, women, and now apparently “transgenders” (but — gasp! — not “transracials”) are “underrepresented” in academia, and that all departments should make efforts not just at outreach but to hire more such people — for after all, “diversity is our strength,” is it not?

We arguably started down this troubled road with the Supreme Court’s catastrophic Griggs v. Duke Power decision in 1971. This decision changed the fundamental meaning of discrimination from an action taken by individuals to a mere lack of politically acceptable outcomes and timetables for achieving them. The meaning of affirmative action, ambiguous from the get-go, changed from well-intended outreach based on calls for an end to racial and sexual discrimination to an insistence on measurable results as the test of “nondiscrimination.” The gold standard became proportional representation. Hence the creation of the “underrepresented” group in all official policy recommendations relevant to hiring and promotions. Organizations with insufficient numbers of blacks and women in positions of responsibility could expect warnings if not actual EEOC lawsuits (Auburn University, a former affiliation of mine, received a warning regarding admissions of black students while I was teaching there).

A process was set in motion. I described this process in some detail, along with its effects on occupations ranging from the construction industry to academia, in my book Civil Wrongs: What Went Wrong With Affirmative Action (1994), a work not once discussed or argued with by academic philosophers but instead basically blacklisted in academia. I learned in 1996, from a sociology professor at Bowling Green State University with whom I’d begun corresponding, that the book had been placed on an actual “index of banned books” there — an “index” of how medieval academia was becoming even then!

I’d committed one of the ultimate heresies: providing a political explanation of the rise of the “new scholarship.” So-called “critical theory” (which borrowed freely from French philosophers such as Foucault and Derrida), radical feminism, critical race theory, and a rising homosexualism then barely on the public radar were, I argued, really forms of pseudo-scholarship. They were not advancing a common conversation but launching platforms for political activism. Their primary method of protecting themselves from criticism was political correctness, rooted in Frankfurt School Marxist philosopher Herbert Marcuse’s “Repressive Tolerance”: allowing the same free speech standards for “nonrepressed” as for “repressed” groups maintains systemic repression! By the end of the 1980s, affirmative action had clearly evolved into race and gender preferences never called for back in the 1960s, and the idea became to protect these from legitimate criticism (in many cases from scholars far better situated than I was), while cowardly Republican politicians such as the first George Bush caved and signed the Civil Rights Act of 1991, a law that pushed back against Supreme Court decisions such as Croson (1989) which had threatened to drain the affirmative action swamp.

I opined further in my book that the particular attacks on such notions as rationality and objectivity coming from “new scholarship” quarters, though originating independently, had been incorporated into and even enhanced by this effort: a rational and objective approach to race and gender in American public policy did not yield the results activists wanted. There was no reason whatsoever why nondiscrimination should yield proportional representation of all ethnicities and both genders in all or most organizations (schools, workplaces, etc.). Nowhere in the world did such representation exist. Arguments based on experience seemed to show that efforts to bring it about were counterproductive and ought to be discontinued. (There was no automatic reason, moreover, to equate discrimination with repression. Jews faced discrimination and even segregation for centuries in Europe, and still sometimes ended up wealthy from running businesses.)

Such approaches based on fact and logical argument had to go. The result was that any sort of genuinely enlightened discussion of such subjects as race and gender was clearly dead in the water by the time my book appeared. I just hadn’t realized it. As Hobbes says somewhere, “When reason goeth against a man, a man goeth against reason.” A woman, too. Or any other gender you like!

Feelings thus reign supreme in the subdisciplines of the “new scholarship”! And they are getting increasingly unpredictable: I am sure Rebecca Tuvel, a recent Ph.D., is as much a product as a victim of this academic culture, and probably never dreamed this kind of fracas would erupt over her attempt to add something new to the conversation, trendy and sordid though the conversation is. I rather hope she is polishing her CV in addition to whatever legal maneuvers she might be considering. Being female, after all, is doubtless an asset today, but isn’t a guarantee if one has become controversial. Ask any well-educated woman who rejects the assumptions and methods of radical academic feminism.

Summing up: is it any wonder that appeals to “expertise” no longer cut much ice with significant and possibly growing segments of the public, including those who voted for Donald Trump last year? Expertise, obviously, is keyed to academia, which once trained the experts. Politicians and commentators invariably turn to academics when they want to back up their claims with evidence — to the extent this still occurs. While different issues are different, academic “experts” increasingly are perceived, whether rightly or wrongly, by members of the voting public as existing in an elitist echo chamber, their motives often suspect, their forests long ago rendered invisible by the trees, a chamber in which what doesn’t fit the official narratives is often not seen at all.

While admittedly only a tiny segment of the public is likely to encounter the specifics of a case like this, the larger group almost automatically associates much of academia with the elitist, insular, big-city, “blue” culture mentality just described. The antics they have seen, which include women marching in pink “hats” obviously designed to represent their vaginas, are hardly encouraging. The “red” culture that had just put Trump in the White House saw vindication, in its own eyes at least, the very day after his inauguration, at the much-touted Women’s March; to be blunt about it, the “red” culture instinctively saw “pussy costumes” as sick and degenerate. They regard someone wearing one as having something wrong with her mentally. How do I know this? Because I know such people personally, sometimes through years of correspondence, and they told me as much.

Nothing that has since come from media left-liberals or from academics is likely to change this.

The “red” culture, moreover, does not regard “trans”-whatevers as some kind of intellectual and cultural avant garde, but as symptoms of the worsening sexual confusion and depravity of a society in rapid decline.

Why does this matter? Because these people vote! They have, at least for the moment, thwarted the efforts of the multicultural and trans-whatever “blues” to dominate the country — or, at the very least, to dominate them. Even if violently attacked, as some were outside Trump rallies last year, they will continue to vote. At least one columnist has voiced the suspicion that, given the disruptive behavior the left has engaged in since the election, were it held today, Trump might actually win the popular vote!

The more obnoxious and violent the left gets, the more it loses. One has to wonder if and when radical leftists will set aside their feelings long enough to figure this out.

Or, to bring things back to academia — and to academic philosophy — will its saner members, even representatives of a left-of-center that wants to hang onto sanity, as Professor Leiter clearly does, decide they have had enough of this radical nonsense? What can they do about it? First they have to ask: at what point will they realize the need to abandon, as irrational and destructive, the now quarter-century-old obsession with “getting more women into philosophy,” itself a species of the broader obsession with proportional representation? (When I checked his blog more recently, Leiter had posted a partial list of signees of the “Open Letter,” calling for What-were-you-thinking? type explanations. Seven looked to be men; 27 were women; one gave only initials; with the occasional first name like ‘Tempest,’ who knows? In fairness, not all the signees are academic philosophers. But some are. Too many are.)

Unless the presumption that academic philosophy needs more women is subjected to critical scrutiny (and silly allegations of the ‘cisnormativity’ of the critics are simply ignored), the sane will be able to do nothing. Meanwhile, enough radical crazies will continue to obtain the tiny handful of available tenure-track jobs to cause problems like this to erupt again. They will continue to corrupt the obviously floundering humanities disciplines, now facing defunding in many places, until many departments are forced to close and there is next to nothing left.

What is bad is that whatever visibility incidents like the Tuvel-Hypatia fiasco attain, their association not just with academic culture but with intellectualism generally will continue to damage the latter, making it even harder to get serious ideas discussed in our unhappy age … an age in which “diversity,” far from being “our strength,” has accomplished little besides destroying careers and has served up little besides division and hostility.

SPECIAL NOTE:  If you like this article or value my writing and wish to see more of what’s presently in the planning stages, please consider going to my Patreon.com site and making a pledge/donation.

UPDATE May 5, 2017:  The philosophy department at Rhodes College stands fully behind Rebecca Tuvel … for now, at least. Maybe she doesn’t need to polish up her CV after all. I would anyway, though, just to be on the safe side. (Here.)

Posted in Academia, Academic Politics, Where Is Philosophy Going?

May 1 – International Workers’ Holiday or International Diversion?

It’s May 1. Here in Chile, it’s a national holiday, officially the Día Internacional de los Trabajadores (International Workers’ Day). The holiday isn’t celebrated in the U.S., of course, or in Canada, because of its association with far left groups, especially communist ones. This day has an interesting history, to say the least. It was on May 1, 1776, that Jesuit-trained law professor Adam Weishaupt founded the notorious secret society known as the Illuminati, which infiltrated European Freemasonry until, accused of conspiring to subvert and destroy all the governments of Europe, its members were suppressed and driven underground. That was the late 1780s. In 1789, of course, the French Revolution erupted. Its causes are well known, or so we are told. Weishaupt himself lived on in relative obscurity until his death in 1830, interestingly just two years before the secretive Skull & Bones was founded at Yale University in the U.S. (1832). Both George W. Bush and John Kerry were members, and it was “so secret they couldn’t talk about it.” Some make more of this entire trajectory of events than others. It may be noted as one of those intriguing sequences of factoids about which we may never know the whole story, if there is a whole story to be known.

May 1 is also the day on which, in 1884, a proclamation demanding an eight-hour working day was issued; on May 1 two years later, a general strike erupted over the issue, leading to the violence of the Haymarket affair, which resulted in a number of deaths when someone lobbed a bomb at police. The police fired back, killing four demonstrators; the next day, further disruptions resulted in bystanders being shot to death by militiamen. The Second International subsequently called for an International Workers’ Day, to be observed on May 1. “May Day” was formally recognized for the first time in 1891, and has remained a focal point for various leftist groups claiming to represent the interests of the proletariat. It remains one of the most important national holidays in mainland China, North Korea, and Cuba, and is still recognized in the countries of the former Soviet Union. Many noncommunist nations throughout Africa, South America, and elsewhere celebrate International Workers’ Day as well. It became a recognized holiday here in Chile in 1931.

Does celebrating a holiday accomplish anything for those not in power, however? I would surmise not.

Here in Santiago, Chile — in our neighborhood, anyway — the holiday was an excuse to stay up most of the night partying. I doubt the revelers gave the day’s history more than a moment’s passing thought, if that. I don’t have to know the personalities involved to be able to surmise this. I’ve lived here almost five years.

As an ex-academic, I’ve observed both how those seeking power operate, and how those with real power carry it forward.

The former are noisy and often obnoxious. The latter are generally as quiet as the dead. They don’t have to use noise to get what they want.

Many of us are heirs, in one way or another, of the so-called culture wars that began to erupt, little by little, during the Reagan-Bush years. Those years witnessed the meteoric rise of so-called new scholarship focused on race, gender, eventually sexual orientation, and identity politics generally, which was incompatible with the ideals of objectivity and rationality that imposed essentially the same norms on everyone. Traditionalist responses that tried to articulate and reaffirm those norms had begun to appear. I was one of those who saw policies such as affirmative action as propelling identity-scholarship; noting that several highly visible Supreme Court decisions had begun pushing back against race and gender preferences in, e.g., university hiring and admissions practices, I advanced the thesis that political correctness erupted to protect preferences of various kinds from legitimate criticism, using propagandistic ruses (allegations of “racism” being the most obvious) to circumvent as well as intimidate the critics. The ruses worked more often than not; those who brought criticisms forward generally saw their academic careers end in ruins, in some cases even if they had tenure.

It was all very noisy and visible. What nearly all of us missed was the fact that outside the academic disciplines they had hijacked, and outside the media, none of these groups truly had power, which is not merely organizational but political-economic and has as its sources those with very little interest in how many women the philosophy department has hired this past year.

Neoliberalism had begun its own top-down transformation of the universities as far back as the early 1970s following the Powell Memo, which specifically referenced figures such as Herbert Marcuse and Ralph Nader as having inculcated an anti-business mindset in the universities. Indeed, the late 1960s had seen the rise of an entire generation that hijacked the national conversation. As a result, a war those with economic power wanted was rendered so unpopular that it was no longer beneficial to said powers-that-be to bankroll it. U.S. efforts in Southeast Asia were curtailed and discontinued.

To be sure, members of that generation stayed in academia, eventually won tenure, and began to transform their disciplines from the inside. This would gradually discredit them on the outside. What, after all, were we to make of the claim that a “female-friendly” science would differ from “traditional” science because women see the world in more relational terms than men? Would such a science be able to erect skyscrapers and fly airplanes? Would it surpass quantum mechanics? And how seriously, later, were we to take the prevailing feminist allegation that one in four girls and women on campuses would be raped while in college? (Or is the official number one in six? I honestly can’t remember, and can’t see that it much matters.) Today, the humanities are struggling to survive. The neoliberal university has vastly expanded administration, focused on the furtherance of corporate interests, and inculcated students into that mindset by redesigning campuses to look more and more corporate (corporate logos, etc.); it has little more use for academic philosophy than it has for “gender studies.” At its best, after all, academic philosophy still at least pays lip service to questioning authority.

While the visible debates surround such matters as the role of free speech on campus, the less visible ones involve such matters as the relative absence of academic jobs that pay livable wages, and whether several academic disciplines, once seen as at the center of intellectual inquiry, will be able to survive the present scourge of academic-corporatism intact. Or whether the average brick-and-mortar college with a full range of educational offerings (as opposed to well-endowed Ivy League institutions) will survive the rise of an online world which supplies the same offerings for nothing, or nearly nothing!

There is, in all this, little concern for those currently doing the real work of a college or university: educating its students, who have usually gone into once-unheard-of levels of debt in order to attend. Fewer than 30% of academics today have tenure or any hope of obtaining it outside of very, very good personal contacts. They are struggling to keep a roof over their heads at the same time they juggle stacks of papers to grade for the five or six courses they teach, spread over two or more campuses or even institutions (at one time I taught courses at two institutions spread across three campuses). And if students’ would-be employers ever discover that online educational world and tailor it to their advantage … ?

This environment is, of course, perfect for the intended political economy of the future which, if present tendencies continue, will be global rulership by an economic superelite dominating the governments of the world through central banks and financial systems, and, through these, dominating national economies and educational systems. I believe STEM subjects are being encouraged because they fit this future model so well. Trained technicians will survive, and some may do reasonably well. Scholars trained in the art of questioning power systems will not do so well.

Not a pretty sight, but I am sure that what comes to pass will continue to allow holidays recognizing “worker’s rights,” which the workers themselves will use to escape into parties and such. “Where’s the revolution?” asks the British group Depeche Mode in their newest single. Answer: in the revolutionists’ dreams, where it always was.

Posted in Academic Politics, Culture, Political Economy, Where is Civilization Going?