On Euthyphro’s Frustration (An Occasional Philosophical Note #1)

Note: with this post I am beginning a series of shorter posts to lay out specific foundational issues illustrated in basic philosophical texts which, for my purposes, I will take at face value (i.e., I am not “deconstructing” them or some such, but using them to ponder basic problems about the prevailing premises, preoccupations, and direction of Western philosophy). 

In Plato’s Euthyphro, the title character struggles to supply what Socrates wants: an essential definition of piety. For Plato, this is a definition consistent with the Platonist metaphysics of universals (or forms) as the primary reality — abstractions grasped by intellect alone as “real,” while our immediate experience of concrete objects and events delivers, at best, an unreliable concatenation of particulars.

Euthyphro’s efforts fail in increasingly interesting ways, of course, and finally, out of frustration, he abandons the conversation. The dialogue drips with Socratic irony: Euthyphro (as every introductory philosophy student learns) is prosecuting his father for the negligent death of a laborer, an action that bespeaks a man well advanced in wisdom. Socrates, who alone knows that he is not wise, has implored Euthyphro to teach him. We are to see Socrates as the wise one, of course, and Euthyphro as a pretender.

But why should we attribute Euthyphro’s frustration and ultimate abandonment of the project at the end of the dialogue to his lack of wisdom, or insight, or ability? If we make the contrary supposition that Plato, and Aristotle in a somewhat different way, were wrong, because there are no “universals” (only universal ideas, or universal statements), then Euthyphro’s frustration is explained and perhaps justified.

Suppose, furthermore, that all knowledge obtained by human means really is local (as an anthropologist like Clifford Geertz or the philosopher Paul Feyerabend would insist): tied to specific concrete practices, problem-solutions, events, personalities. In that case, the situation in Western philosophy is far more serious. For it follows that the Platonist premise of free abstractions as primary metaphysical realities, arranged in some kind of hierarchy with the Good at the very top, and of abstract certainty achieved through pure intellection as the primary criterion for knowing, threw the entire philosophical enterprise off track around 2,400 years ago! In that case, all the “system-builders” turn out to be wrong; the “system-smashers” turn out to be right. That the “quest for certainty” seems always to have ended in skepticism if pushed with sufficient ruthlessness, or to have devolved into some kind of anthropological cultural relativism or compromising pragmatism which (possibly sensibly) has relaxed the requirements for knowing, is a telling further consideration.

 


“Reality 101”

[Note: cross-posted on the news and commentary site NewsWithViews.com as https://newswithviews.com/reality-101/ with a few minor changes.]

Time for something different. I’ve written a novel. As I write this, it’s 98% finished (all but massaging and embellishing). It will be marketed as the “first novel of the Trump era.” Well, one can hope. I’d been planning to try my hand at fiction if Hillary Clinton had won last year. Even though she didn’t, the idea had taken root, and since I needed no precognitive abilities to know how the Establishment would react to the Trump victory, I decided to run with it anyway.

I’ve been directing my own campaign to raise money for an international promotion effort. So far, it hasn’t met with as much success as I would prefer. Without promotion there is little point, though. So whether this will actually be published, if it does not find its way into the hands of a major publisher, is iffy. I am not a wealthy person.

Why write fiction?

The late philosopher of science and historian of ideas Paul Feyerabend (discussed briefly in my last article) once penned a short essay entitled “Let’s Make More Movies!” (1975). Despite the playful title it isn’t light reading. The basic idea: there are ways of getting a point across other than didactic argument. Authors, playwrights, and writers for cinema have all used them. So — and these are the cases that interested Feyerabend — have scientific geniuses such as Galileo, who presented his ideas in dialogues (as the philosopher Plato had done some 2,000 years before). Feyerabend himself briefly studied theater in his youth under the tutelage of German playwright and theater director Bertolt Brecht.

Storytelling involves showing and not merely telling: presenting how things might look, or how events might play out, given a situation, instead of arguing for this or that abstract point. Instead of an author arguing a thesis, characters speak, act, and interact. Properly drawn characters have histories of their own, including crucial events which shaped them, just as our backgrounds and the events in our lives shaped us. An author wants to create a kind of movie in the reader’s mind. He or she sets the conditions, then gets out of the way as the characters assume center stage. Often, they turn out to have experienced things the author did not anticipate, have complicated and sometimes conflicted motives, and do things he or she did not plan for—all required by the story’s own dynamic. This is how creativity sometimes works.

So without wanting to give away the whole thing….

Imagine that a convinced globalist — convinced, that is, of the necessity of a global state given the globalization of the economy, because his education and line of work brought him into continuous contact with globalist actors and instruments, year after year — has decided that it is time to tell the truth, or at least as much of it as he knows. He believes a world state is inevitable, the next stage in the evolution of modernity. All we peons can do is prepare for it, “retooling” ourselves to be innovative and competitive in the coming global mass-consumption marketplace. Retired and with plenty of money, our globalist has written a tell-all book of his own and gone on tour to promote it. His tour brings him into our story’s purview.

There are such people in the actual world, of course. Georgetown University School of Foreign Service macrohistorian Carroll Quigley wrote such a work, but his Tragedy & Hope: A History of the World in Our Time (orig. 1966) is an intimidating tome of over 1,300 pages, and while he discusses globalism and its emergence in international finance and central banking, his revelations are more part of the backdrop of his sweeping modern history of civilization. Only here and there do they assume a central place in his discussion.

Quigley’s is just the first major work I became aware of that writes history against the background idea that the most important directions modern civilization has taken were not accidents. My fictional globalist shares with Quigley the conviction, contrary to those he will call “conspiracy writers,” that the emerging world state will be a good thing. He regards those he calls the global oligarchy as “benign philosopher-kings” who invented capitalism by originally investing in, i.e., putting up the money for, capitalist endeavors (e.g., factories in England, Germany, and eventually the U.S.). Capitalism’s early apologists, in their private correspondence (my fictional globalist observes), encouraged forcing independent farmers from their land and into the new factories in the cities because, in those days, capitalists needed laborers. And then Adam Smith in particular wrote about laissez-faire, hiding the truth (whether intentionally or not).

In other words, my fictional globalist has written a poor man’s Tragedy & Hope. He is appearing by invitation at the local university in a county ravaged by the effects of globalization, proclaiming something major business publications are no longer even bothering to hide, but placing it in a larger context.

Now imagine him stating that the most dangerous result of the modernist capitalist consensus was the financially independent middle class it built up in the 1950s and 1960s: with leisure and time on their hands, the children of that class began to challenge elements of the system in the 1960s. And how it was decided, within the oligarchy, that the American middle class was dangerous to their goals for the world and so had to be destroyed. Imagine him laying out, step by step, exactly how this was accomplished.

The young man who narrates this story, a millennial and native to the county, has suffered directly from the results, and again without giving away too many details, he does not take kindly to being told all this. I did not set out, initially, to create a central character whose father committed suicide after losing his career with the county’s largest employer when it shuttered and moved south of the border, followed by a string of professional and business failures; it just happened (that’s that creativity thing I mentioned, with characters taking on lives of their own). I can do this both because studies have shown that suicide in such communities has grown by leaps and bounds over the past 20 years or so, and because I have known people who tragically lost a parent to suicide, in one case seeing the emotional devastation up close. It isn’t pretty! The point in this context: few ordinary people can simply “reinvent themselves for the New Economy.” That’s more a fantasy than anything in a novel.

By the way, lest I forget: my globalist character has no use for Donald Trump. Well, surprise, surprise.

He comes under verbal attack. A complex character and not a sociopath, he stands his ground — not out of a desire to be cruel and indifferent but out of a sense that the truth must be faced. He does not believe that the “global marketplace” can regulate itself, and does not think “free trade” deals are enough. Not to mention the dangers of war in a world of peoples who are very different from one another, some with nuclear weapons; and, of course, there is human-caused climate change which he endorses as real based on the authority of science: a problem calling for a top-down coordinated global solution.

Is such a character credible? For some time now, some writers have been declaring the nation state outdated and arguing for some kind of global federation if not an out-and-out global state. Some such statements are quite eloquent (one current example here).

The location of this story is an imaginary Oklahoma county not too far from where I lived for a time, so I know the history and lay of the land at least somewhat. This place has its own political economy, stemming originally from the actions of its own aristocratic family, who built the county but could not keep one scion from helping to destroy it. Invented long ago to tell a different story which did not pan out, this imaginary county (town, university, their histories) just sat in my mind for a long, long time. It seemed logical to use it now for this different purpose.

Incidentally, this being Oklahoma, an indigenous population lives there. Through them, we become conscious of the possibility of a localist alternative based on separation, with a touch of something non-Western to give us a perspective on the possibility of economic arrangements absent an obsession with growth and change.

In other words, anyone thinking this novel will somehow defend “white supremacy,” assuming this means anything these days other than disagreement with the cultural hard left, is mistaken. I am not “alt-right” (I explain why not here). And although I’ve barely written on the subject as I’ve never been able to make it a priority, I’ve long believed the minority group with the greatest claim to have been harmed by the “white man” and his modernity is the one that has been the most silent: Native Americans. Their land was taken from them; every treaty made with them by the U.S. federal government was broken; many died from diseases brought from Europe to which they had no natural resistance; and those who survived the wars and attempts at extermination typically sank into poverty even when not herded onto “reservations.” Although many Europeans dismissed them as savages, some Native Americans built civilizations on a par with those of the ancient Mediterranean world (the Toltecs, the Maya, and the Inca are examples). A few invented writing, and one group (the Iroquois, with their League or Confederacy) actually had a form of representative government.

Not being an anthropologist I don’t know, but I have often wondered what we could still learn from the remnants of cultures which modernity has largely erased. These cultures surely merit attention and study. In addition to physical architecture including pyramids, they developed rich mythological narratives designed to do what worldviews always do: give them a sense of place in the universe, something modernity has taken from us all.

Returning to my story line, which draws on such a narrative when the time is right, the Christian Gospel puts in a strategic appearance. So does the Austrian school of economics, which portrays free market capitalism as the “unknown ideal” — a self-regulating system able to operate completely free of government interference, whether through regulation or subsidy. Also appearing, as I was unable to resist, is a Marxist-style critique of capitalism in its current globalized form, whose defender contends that the “pure” capitalism of the Austrians is an impossible fiction, that the “crony capitalism” they criticize just is capitalism; there is no other. Incidentally, while not opposing it, this character has little to say about cultural Marxism (otherwise known as the identity politics that has swept through academia over the past three decades).

My speaker is not a Christian, not an Austrian, not a Marxist. He considers himself a realist, a rare animal in today’s world. Hence the title. He’s also a transhumanist, who believes we will eventually use technology to transform not just the world but ourselves. So he’s an optimist who believes we can save ourselves by trusting in the benign nature of our betters, the philosopher-kings of modernity, the movers and shakers who make things happen behind the scenes, who will deal with problems like war and climate change in their own way. This despite how the county he is visiting to promote his own book has become a wasteland since NAFTA, and even more so since the Meltdown of 2008. Like many such places.

My narrator is a damaged soul, a seeker still trying to find his way. He knows he wants nothing to do with any of the above! What he comes to realize is that modernity in its current form offers him (us) no future. Not really.

There’s no sex or violence; readers interested only in cheap entertainment had best look elsewhere. There is, however, a unique love interest between my narrator and his girlfriend, as one cannot have compelling characters without that. She is a member of the indigenous population. This opens some interesting doors. Through the narratives of her people there are intimations of the world beyond our familiar one, perhaps in light of Hamlet’s ever-intriguing remark that “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.” Some of these suggest that in the long run, evil indeed meets with an appropriate fate.

What matters most is the warning, about a view of the world and our place in it: an economics-über-alles view of human beings as infinitely malleable, like lumps of clay; of common people as little more than cattle to be used to enrich their self-anointed betters, then discarded when no longer of use; and especially of our arrogant belief that we can save ourselves from our own many follies. Where can this view lead, except to a technocratic de facto totalitarianism in which not just freedom but privacy is a thing of the past, not even missed if generations grow up without it? Present-day globalism is not the end, just the most important stepping stone. (Incidentally, you don’t have to be a Christian to believe all this — but it helps!)

Is such a warning credible?

I submit that slightly over 25 years ago, I began warning anyone who would listen about what political correctness would do to the body politic if allowed to spread from the universities through the rest of society’s institutions almost unimpeded, defended with brain-paralyzing phrases like “social justice.” Guys like me weren’t listened to, and just look at campuses today, with their “safe spaces” and “trigger warnings,” and now the open assault by students themselves on Constitutionally protected free speech (they’ve grown up with the cults of diversity and social justice).

Twenty-two years ago I merely lost a teaching job from having spoken against race and gender preferences (“affirmative action”). Today I would fear for my safety.

This book is another warning. Will it, too, be ignored? Will it even be published? Assuming it is, the question readers are invited to confront: how much of what my speaker says of the near future is absolutely true? Biblical and other prophecies speak of a coming totalitarian world state, or an equivalent, in which you will be forced to adopt “the mark of the beast” to be able to buy or sell (Rev. 13:16-17). Not just global geopolitics but technology itself is opening such doors. If you’re ordered to go through them — or, what is more likely, find your life becoming increasingly inconvenient if you avoid going through them — what will be your Plan B?

Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money every day.

Telling the truth can have negative consequences. Last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, the device became unusable, and I had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101, 99% finished as of this writing. In it, a globalist technocrat speaks in a voice filled with irony and dripping with cynicism — contrasted with the possibility of freedom outside the world as he sees it.

Promoting a book means, in my case, the necessity of international travel which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution to the world of political-economic ideas, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. EDIT: thus far this effort has garnered just $62/mo. If it does not reach $250/mo. by the end of this month, it will be time to write my farewell-and-good-luck piece.

To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.

 


Of Climate Change, Science, and Experts: A Meditation

[Author’s note: co-posted on NewsWithViews.com but has yet to appear there. I have added and deleted a number of lines here and there and in general tried to increase clarity wherever possible.]

A few months ago, a friend of mine, his son (who had swung left), and a few others debated man-made climate change (MCC) over email. Was it the existential threat to civilization some made it out to be, a complete hoax, or something in between? Being in this group, I was copied on each installment, but did not participate. I was asked why, and have been asked on other occasions whether I had anything to say about MCC.

I tend to reply that I’ve not researched the topic extensively, and can’t speak to it with any confidence. There’s abundant information online, of course; what’s missing are hours in the day sufficient to research everything out there. The topic has come up again, as MCC proponents have a field day in the wake of two destructive hurricanes, Harvey and Irma. A third, Maria, has devastated Puerto Rico as this is completed. All of us (I hope) are praying for those who lost loved ones in these storms, for rebuilding efforts which may take years in some cases, and that tragedy and hardship not be turned into an opportunity to score political points (for a change).

What research I’ve done on climate matters was mostly to inform students in contemporary moral issues and critical thinking classes, taught in years past during my adjunct days, where I isolated three perspectives:

(1) Global warming is not real. For whatever reason, scientists are misreading their data, seeing something that isn’t there, perhaps generalizing falsely from local events such as glaciers in retreat after a few years of unusual warmth.

(2) Warming is indeed happening on a long-term, global scale, but we’re not the cause. Earth’s climate has warmed and cooled many times over planetary history, from various causes including fluctuations in solar energy; the climate, in any event, is far too vast for our paltry activities to affect it significantly. Volcanoes affect it more than we do.

The third perspective — (3) — holds that global warming or climate change is happening, and that human activity, especially burning fossil fuels for energy and expelling the byproducts into the atmosphere for well over a century now, is causing the planet to heat up. (3), as I understand it, does not say every single year will be hotter than its predecessor, or will manifest hurricanes as violent as this year’s, just that over a long period of time, average temperatures will rise, sea levels will rise as polar ice fields melt, and on average, weather phenomena will increase in destructive force, be it hurricanes, severe winter storms, or droughts leading to forest fires.

So will it be door #(1), door #(2), or door #(3)?

Here is where I cannot speak with the confidence I have when speaking about, e.g., elite directedness of modern political economy, or philosophical critiques of secular ethics.

What I can say is that #(3) appears to be the one chosen by the majority of scientists and scientific organizations, something dissent alone can’t negate. Unfortunately, #(3) also has immense globalist appeal, given the adage that “global problems call for global solutions.”

If (3) is by some chance true, then claims like those of Naomi Klein in her This Changes Everything (2014) have to be looked at. Whether you agree or disagree with Klein’s view that “the free market” is at fault in creating the present situation (I don’t, as I don’t think we’ve had anything remotely resembling actual free markets in decades), the conclusion remains: either we find other ways of powering our civilization, or we face the consequences of a hotter, more hostile world: what James Howard Kunstler calls The Long Emergency (2005), highlighted by dislocations that will make the present ones look tame by comparison, as millions of people abandon flooded coastal cities and others migrate en masse from regions no longer habitable.

Alarmist? Perhaps, but many scientists will tell you that MCC is an established fact. Major scientific organizations including the American Association for the Advancement of Science have endorsed it. At least one online course I ran across earlier this year, dispensed for free, presents information intended to debunk (1) and (2) above. The course’s main architect, John Cook of the George Mason University Center for Climate Change Communication, had earlier created this site, organizing information he maintains refutes “climate change denialism.”

Cook and his associates have assembled some interesting information. But they packaged it within an image of science I found rather naïve and dated. (Cook’s views on the “scientific consensus” are criticized here.)

Again, a brief disclaimer: I am not a scientist, climate or otherwise. I am a trained philosopher who for a number of years specialized in history and philosophy of science — especially the physical sciences — turning to moral philosophy and political economy only later.

This I can certify: what is found in most science texts is an image of a neat, disciplined, pristine method of formulating hypotheses to explain neutral data, testing them step by step whether by further observations or by experiment, then pronouncing them confirmed or disconfirmed — almost as if done by robots instead of human beings subject to all the biases and frailties human beings are subject to, including being forced to work in organizations that do not fund themselves.

So MCC aside for the moment, how well-confirmed are most scientific results, really?

One can point to “studies” in various disciplines that clearly reflect the biases of those who put up the money, because the researchers wanted or needed further grant money, and one of its conditions was obtaining “acceptable” outcomes. Such “studies” (not uncommon in the world of, say, pharmaceuticals, i.e., legal drugs) overstate what the evidence validly permits, and may bury contrary findings. How much of science works this way?

Please allow me to digress …

As a bored public high school student in search of real intellectual stimuli I chanced to run across a curious volume in a local library: The Book of the Damned (1919) by one Charles Fort (1874 – 1932). Fort had a curious hobby. Upon receiving an inheritance, it became his career. A voracious reader, he’d mastered several scientific disciplines just by reading leading texts. He combed scientific journals and periodicals, antiquarian newsletters, and newspapers. Whenever he found something that did not fit the prevailing theories, he made a note of it. Soon he had thousands of notes, organized by subject matter: astronomical curiosities, unexplained weather and aerial phenomena, out-of-place artifacts, medical mysteries, etc. “Anomalism” was born: assemblages of “facts that don’t fit,” with wry commentary on the “scientific” manner of dealing with them: shoving them into the cognitive equivalents of windowless museum basements and forgetting about them.

Fort used his notes as the basis for four books: the above-mentioned The Book of the Damned, New Lands (1925), Lo! (1931) and Wild Talents (1932). He commented drily on “dogmatic Science” (cap S) as surrogate for God. Fort was more a provocateur than a serious theorist. He formulated intentionally ridiculous notions which left whole ranges of obvious facts unexplained and claimed them to be as well supported as the dogmas he saw imprisoning the minds of scientists.

The history of ideas manifests what one might call system-builders and system-smashers. Among the system-builders: Plato and Aristotle, Aquinas, Newton, Lavoisier, Adam Smith, Kant, Darwin, Einstein, who left their respective disciplines large, logically-structured edifices of thought (systems). Among the system-smashers: the old Sophists who taunted Socrates in Plato’s dialogues, modern “outsiders” such as Kierkegaard and Nietzsche, aggravated skeptics such as Fort, and a couple of folks we’ll encounter below.

Modernity was a system-building endeavor. Postmodernity has been a system-smashing one.

It is not clear why some thinkers are drawn to one and not the other. Fort’s biographers state that his father was an abusive tyrant, from whom he fled as a teenager. His hostility to the authority of Science was then a projection. How very Freudian.

System-builders are confident of human reason’s capacity to grasp reality (or some part of it) as it is. System-smashers are just as convinced that the effort is delusional. They point to the smorgasbord of conflicting and competing systems in every domain, this being a problem even if we’ve mastered a certain instrumental rationality by manipulating objects into technology.

System-building takes itself seriously, is carefully argued, etc. Much system-smashing is literary provocation. Its purveyors use irony and rhetoric. They play mind games with their audience. Postmodernists, whatever else one says about them, are good at this.

Fort’s books sold reasonably well. At the end of his life, his health and eyesight failing, he was said to have laughed aloud upon learning that his writings had a cult following, organized as the Fortean Society, dedicated to continuing to poke holes in the pretenses of “scientistic” positivism. The Society published Fort’s unused notes and continued collecting anomalies that seemed to surround every major theory in every field of science. Fort’s books have stayed in print, and though for obvious reasons he was roundly dismissed as a crank, his work continues to fascinate those who have followed in his footsteps compiling anthologies of “misfit” facts, such as physicist William R. Corliss (1926 – 2011), founder of The Sourcebook Project and editor of anthologies such as Ancient Man: A Handbook of Puzzling Artifacts (1978) and Unknown Earth: A Handbook of Geological Enigmas (1980); or more recent writers with substantive alternative hypotheses on ancient and unknown civilizations such as Graham Hancock (1950 – ), author of Underworld: The Mysterious Origins of Civilization (2002), Magicians of the Gods (2015), and other works.

As a university student (still bored), I encountered the far more orthodox The Structure of Scientific Revolutions (1962, 1970, 2012) by Thomas S. Kuhn (1922 – 1996). My first exposure to Kuhn’s ideas was in a world history class. The professor discussed them with all the calm and neutrality of a leftist professor going off on conservatism. My curiosity was piqued, and I tracked the book down.

Kuhn’s thesis was that a mature “normal” science is always governed by a conceptual system embodied in concrete problem solutions he called a paradigm. Paradigms — exemplified in works such as Newton’s Principia or Lavoisier’s Chemistry or Darwin’s Origin — guided research in the science, its first premises not tested or challenged. Paradigms dictated use of the language of the discipline, as well as guiding authors of textbooks used to train the next generation who “stood on the shoulders of giants” as it were. Invariably a paradigm could not solve every problem it faced, however. These became anomalies — defined more precisely as violations of expectation. Eventually enough would accumulate to jeopardize allegiance to the paradigm (particularly among the young!). The science would enter a “revolutionary” crisis that ended with its embrace of a new paradigm able to solve the problems, often with new terms or old ones used in new ways. A new period of “normal” science would begin.

Physicist and early quantum theorist Max Planck (1858 – 1947) famously observed: “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” That’s the basic idea.

Kuhn denied that scientific practice could be shoehorned into the formal-logical methods positivists taught. He experienced the wrath of colleagues who had placed Science on a pedestal, and was accused of “irrationalism” for saying the decision to embrace a new paradigm was a matter of “faith.” Despite a couple of careless uses of that word, his overall message was nothing of the sort, and he spent the rest of his life trying to clarify the complex rationality of an enterprise conducted by fallible humans working in organizations.

More extreme was the unabashed system-smashing of Paul Feyerabend (1924 – 1994), who authored the controversial Against Method: Outline of an Anarchistic Theory of Knowledge (1975, 1988, 1993, 2010). Although Kuhn’s and Feyerabend’s names are often linked, both classified as “historicists” (i.e., those who see science as a historical phenomenon operating within institutions, and not a formulaic, frozen-in-time abstraction), Feyerabend’s views differed greatly from Kuhn’s. For one thing, he rejected the idea that a “mature” science should embrace a single paradigm. He advocated pluralism: multiple paradigms. Conformity of thought, he argued, might fit the needs of a church but is totally inappropriate for science.

He argued extensively that the most important scientific advances had not proceeded according to a single, identifiably rational method. Scientists had opportunistically used a variety of sometimes incompatible ideas and methods at hand, so that early modern physics and astronomy incorporated ideas and methods from Christianity, Platonism, astrology (Newtonian “action at a distance”), mysticism, and so on. Some of their claims, moreover, seemed contrary to “plain fact,” as when Copernicus removed the Earth from the center of the universe in the absence of a physics able to make sense of such an idea (he was dead well over a century before Newton came along). Positivism’s naïve just-the-facts-ma’am view of science would have stopped the physics and astronomy of 1543 – 1687 in their tracks! With “plain fact” not on their side, early astronomers advanced their heliocentric view of the solar system not just through argument but with storytelling and propaganda (Galileo wrote dialogues; some of his “experiments,” such as dropping objects from the Leaning Tower of Pisa, probably never took place).

Feyerabend’s point was that if science was more “anarchic” than “rational,” “anarchism” might help us in the present! It might free us from the “tyranny” of a “dogmatic Science” that was stifling our creativity within the cubicles of industrial civilization and robbing us of the potential richness life might have. According to him, the only abstract “rule” that could be guaranteed to work independent of situation was “anything goes”: not a rule or method but a jocular, system-smashing rejection of such abstractions. The idea: “proper scientific method” is always situation-specific. Feyerabend (unlike Kuhn) did not suffer fools gladly. He ridiculed critics who misread “anything goes” as some kind of new and avant-garde abstract rule. He mocked them by openly defending “relativism”: a position resulting from comparing the richness supplied by history and anthropology to the desiccated requirements of positivist abstraction. One of his favorite targets was George Soros’s hero Karl Popper and his “conjectures and refutations” critical rationalism, which he believed was hopelessly naïve. Feyerabend has been called “the worst enemy of science” by those who haven’t read him but who believe “scientific” minds should get the last word on all things human, including designing (or redesigning) societies.

Arguably, Feyerabend put an end to a certain way of viewing science — at least, if we look at the enterprise as it is, a human-all-too-human endeavor, instead of accepting the mythology that has surrounded it (touted by positivists, atheistic materialists, and technocrats).

End of long digression.

Why this dissertation?

Because there are abundant reasons for rejecting the presumptions of those who believe MCC on the mere authority of a naïve empiricism: who see science as mere data aggregation and integration, using a “method” frozen in time; and who have occasionally been caught seeming to “cheat”: fudging data so that MCC seems better supported than it really is (e.g., “Climategate”: for contrasting views see here and here). As critics of MCC have pointed out, the scientists behind it receive government grants as well as lavish funding from elite foundations. (In fairness, MCC “deniers” also receive substantial support from private sources, e.g., the Koch Brothers and Exxon.)

Scientists are supposed to be the experts on such matters. But can we trust the objectivity and neutrality of the experts? Among the most noticeable phenomena of the Trump era is a profound skepticism towards “expertism” as a repository of biases: in general, of being unable to see the forest for the trees. The experts predicted Trump would lose in a landslide. Their major pronouncements about the economy going back well over two decades were wrong. They had not foreseen the end of the tech bubble in 2000. In 2008, Federal Reserve Chair Ben Bernanke failed to anticipate the worst financial crisis since the Great Depression, embarrassing himself in January of that year by saying that the Fed “is not currently forecasting a recession.” The experts failed to see the role of top-down financialization in consolidating wealth and power at the (globalist) “commanding heights” via a system that erodes labor’s share of national income. With authors who did see these things consigned to “alternative media,” frustration was inevitable.

Skepticism about experts isn’t limited to political economy, obviously. These days it crosses over a wide range of topics: so-called scientific medicine based on invasive procedures and the use of (expensive!) pharmaceuticals, which rejects alternative practices such as nutrition-based “holistic” or “integrative” healing, the use of dietary supplements, acupuncture, chiropractic, etc.; whether GMO foods pioneered by powerful global corporations such as Monsanto are proven safe for human consumption and for the ecosystem; whether other artificial substances such as aspartame have been proven safe; whether there is a causal relationship between vaccines (e.g., the MMR vaccine) and autism; whether the theory of evolution is as well-established as the scientific community maintains, well enough established to exclude intelligent design, and whether it is truly empirical or the product of a (materialist) worldview; whether there is a correlation between race/ethnicity and measurable average intelligence; and whether it is true that men and women have the same innate cognitive predispositions, so that workplace “imbalances” can be attributed to sexism/misogyny. There are doubtless others I haven’t thought of.

Again, a few of these I’ve looked at. Most I have not, at least not at length. But there is a discernible pattern running through nearly all of them, which is the same as the pattern often employed to circumvent careful consideration of the idea of history being directed by a globalist superelite or super-oligarchy. The pattern includes dogmatism and just-the-facts-ma’am appeals: “It’s true (or false) because we say so or because our studies say so” (the right rejoinder to any such study is, “Who funded it?”), followed by ridicule (“that’s a conspiracy theory!”), or the use of similar linguistic gambits to circumvent having to deal with the specifics offered, finally ending with an authoritarian gesture and a return to the official narrative.

In the case of MCC, this progression now sometimes ends with a threat: that “climate change denial” be criminalized, “denialists” prosecuted and jailed, just as those who deny that Hitler and his minions killed 6 million Jews in the Holocaust (as opposed to some smaller number) are jailed for the thought crime in some countries. This, in fact, is the origin of the term denialism in the context of MCC: a propagandistic term intended to invoke the subconscious thought of Holocaust denial in the reader’s mind.

When ideas, questioning authority, and independent thought generally are criminalized, watch out! Just recall the line attributed to Voltaire (1694 – 1778) (he probably didn’t say it, but it’s true nevertheless):

“To learn who rules over you, simply find out who you are not allowed to criticize.”

Applying: if you want to know if specific ideas or theories or policies have been afforded a special, unmerited status in institutions (academic, governmental, or corporate), find out if you can question them without the roof caving in — without, that is, being fired from your job, having your reputation trashed by social media trolls, etc.

Skepticism toward expertise has caused sufficient alarm that there is now pushback. Authors speak, often at great length, of “how we lost our minds” and of “American stupidity,” not just in articles (here, here, and here) but books (e.g., this one and this one). What these authors are dead set against is the possibility of epistemic equivalence suggested by the idea that what we have is a diffuse, poorly understood clash of worldviews — not just a resentful rebellion of “the stupid” against “the informed,” or “uneducated bigots” versus “educated cosmopolitans,” etc. Very similar is the authoritarianism of those who reject moral equivalence between conservatives and historical preservationists currently demonized as white supremacists and neo-Nazis versus leftists who self-identify with “progress” (and which Trumpism has so rudely interrupted!).

You’re probably wondering: where does all this leave MCC? What should we conclude about it? Especially given that if we conclude wrongly, either way, we could end up paying a steep price!

I will say — reminding readers of my disclaimers! — I don’t see MCC as crazy, or crackbrained, or false just because globalists like it and can make use of it! Another topic I studied was systems thinking, and one of the things I noticed is how sensitive complex systems are to what can perturb them. It also became clear: complex systems adjust themselves to perturbations. The largest complex system in our civilization’s proximate environment, the ecosphere, could adjust our civilization out of the picture! I therefore dissent from many of my fellow alternative writers. No need to take my word for anything. I recommend readers go to the sites linked to above and see if they have refutations for what they find there. Was “Climategate” real, a dead giveaway, or was it blown out of proportion?

I cannot decide for you! I don’t have that kind of authority!

What I believe we do have is a new knowledge problem of some magnitude. What was the “old” problem? Just the philosophical question of how we acquire knowledge (through the senses, pure reason, or some other means including revelation). Its presumptions are problematic. I will not dwell on them here, as this discourse is already too long. The “new” problem: our own institutions and their hierarchical structures, enabling epistemic authoritarianism to pass for truth, are in the truth-seeker’s way, made worse by the fact that the circumstances necessary to decide complicated problems like MCC cannot pay for themselves in a fast-paced society devoted to instant gratification and mass entertainment. Nuanced debate and discussion, based on a careful but slow weighing of many opinions and considerations, is not “marketable” in a culture of WhatsAppers and Twitter addicts.

This is a problem because few have the time, skills, or inclination to do their own research. We need institutions we can trust. I have extensive notes on this problem, in the context of the general breakdown of academia in our time, which I hope to incorporate into a future slim book — a story in itself! Suffice it to say for now, I am not a postmodernist like Fort or Feyerabend, however much I sympathize with their crusades against epistemic authoritarianism. Truth exists; and we must not do what the postmodernists do in the face of the difficulty of finding it, which is to conflate institution-bound authority with what is true and proven, cry foul when it turns out we were bamboozled, and then throw up our hands in gestures of despair.

What we could use is support for smaller, parallel institutions that have been growing for years in the face of the insufferable political correctness that has ruined academia and is now trying to erase everything in Western civilization that might offend some minority. In every dominant institution, feelings have trumped truth. If we had institutions of knowledge-seekers free from the need for money, and therefore from potential outside control (if we had, that is, real philanthropy on a large enough scale), there might be hope for (among other things) a trustworthy answer to the MCC question before it’s too late, before our so-called leaders, whoever they might be, make decisions we will live to regret. Since we do not have such institutions on a scale large enough to matter, and any real philanthropists who might once have existed have been replaced by corporate donor types who typically fund politicians and political agendas, I am not all that optimistic.

Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money every day.

Telling the truth can have negative consequences. Last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, the device became unusable, and I had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101, 98% finished as of this writing. In it, a globalist technocrat speaks in a voice filled with irony and dripping with cynicism — contrasted with the possibility of freedom outside the world as he sees it.

Promoting a book means, in my case, the necessity of international travel which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution to the world of political-economic ideas, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. EDIT: thus far this effort has garnered just $62/mo. If it does not reach $250/mo. by the end of September, it will be time to write my farewell-and-good-luck piece.

To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.

Posted in Academia, anarchism, Books, Donald Trump, Election 2016 and Aftermath, Higher Education Generally, Philosophy of Science, Political Economy | 2 Comments

The Art of the Argument: Stefan Molyneux’s Book Reviewed on LGP

(Note: co-posted as a product review on Amazon.com with the necessary modifications.)

Stefan Molyneux, The Art of the Argument: Western Civilization’s Last Stand (Kindle Edition, Amazon Digital Services LLC: August 27, 2017). Pp. 172 / kb 299. 

This book was panned on Brian Leiter’s philosophy blog (which is best when he stays away from politics), and since both Stefan Molyneux and I are independent writers / scholars, I had to see for myself. I’m sad to have to report: the negative reviews are correct. Worse: the author is a known libertarian YouTuber and noted critic of all things left wing and politically correct. Mediocre-white-guy alerts are therefore going up all over the Web (cf., e.g., this).

The problem is, Molyneux has embarrassed himself with this ebook.

From the first page: “The first thing to understand is that The Argument is everything. The Argument is civilization; The Argument is peace; The Argument is love; The Argument is truth and beauty; The Argument is, in fact, life itself.” Caps in the original; always emboldened. Every appearance, in fact, of the phrase The Argument is emboldened.

Such writing marks its author as an amateur, possibly one grinding axes instead of communicating information or educating. The two aren’t necessarily mutually exclusive, but they are here.

What is an argument? What I used to tell students (working off several textbooks): “An argument is a set of statements, at least one of which, called the premise(s), is offered as evidence for another statement called the conclusion.” This opens the door to discussions of statements, terms, concepts, and definitions, all of which are good to have before we get to the purposes of argument.

Molyneux: “An argument is an attempt to convince another person of the truth or value of your position using only reason and evidence,” which conveys the right idea but is technically loose. As it turns out, technical looseness often descends into sloppiness and sometimes into incoherence.

Molyneux divides arguments into “truth arguments” and “value arguments” for reasons unclear to me, because if the argument is any good it will exhibit a logically correct structure in either case. I question, therefore, whether we need to say this: “A truth argument will tell us who killed someone. A value argument will tell us that murder is wrong. Truth arguments are the court; value arguments are the law…. A truth argument can establish whether arsenic is present in a drink. A value argument can convince you not to serve it to someone.” Such bizarre and sometimes demented illustrations permeate Molyneux’s tract.

More worrisome to me, given that Molyneux is likely to have a readership much larger than any professional logician with a textbook, is that The Art of the Argument is riddled with mistakes that students who read it and then enroll in a logic class will have to “unlearn.”

He properly distinguishes deductive from inductive arguments, provides the standard example of the former ((1) “All men are mortal” (2) “Socrates is a man” (3) “Therefore Socrates is mortal”), and then delivers this cringeworthy explanation: “Given that premises one and two are valid, the conclusion – three – is inescapable.”

Ouch!

Molyneux has just confused truth and validity! Statements are true or false. They are never valid or invalid. Deductive arguments are valid or invalid; they are never true or false. Validity is a function of deductive structure, not content (the information in premises and conclusion). Getting students to grasp this difference is every logic instructor’s first challenge. If a deductive argument has a valid structure and true premises, moreover, it is called sound. Following a foray into premature attacks on relativism and socialism – premature because the groundwork for such arguments has not yet been laid – Molyneux botches soundness as well: “If I say (1) All men are immortal, (2) Socrates is a man, (3) Therefore Socrates is immortal; then the structure remains logically sound.” This is actually a good example of an unsound argument, because it has a valid structure but a false premise.

In other words, Molyneux does not appear to grasp the difference between validity and soundness. This is in a section entitled “The Difference between ‘Logical’ and ‘True.’”
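The distinction Molyneux muddles can be made concrete with a small Python sketch — my own toy illustration, not anything from his book, with the singular premise “Socrates is a man” treated, for illustration’s sake, as “All things identical to Socrates are men.” Validity is checked against the argument’s form alone; soundness adds the requirement that the premises actually be true.

```python
from itertools import combinations

def subsets(domain):
    """All subsets of a finite domain, as frozensets."""
    return [frozenset(c) for r in range(len(domain) + 1)
            for c in combinations(sorted(domain), r)]

def barbara_valid(domain=frozenset({0, 1, 2})):
    """Validity is structural: Barbara ('All A are B; all B are C;
    therefore all A are C') is valid iff NO interpretation of A, B, C
    makes both premises true and the conclusion false.  Here we brute-
    force every interpretation over a tiny domain."""
    for A in subsets(domain):
        for B in subsets(domain):
            for C in subsets(domain):
                if A <= B and B <= C and not A <= C:
                    return False  # a counterexample would refute validity
    return True

def sound(valid, premises_true):
    """A deductive argument is SOUND iff it is valid AND all of its
    premises are actually true: soundness = validity + true premises."""
    return valid and all(premises_true)

# "All men are mortal; Socrates is a man; so Socrates is mortal":
# valid form, true premises -> sound.
print(sound(barbara_valid(), [True, True]))   # True
# "All men are immortal; Socrates is a man; so Socrates is immortal":
# the SAME valid form, but a false first premise -> valid yet unsound.
print(sound(barbara_valid(), [False, True]))  # False
```

Brute-forcing every interpretation over a tiny domain is obviously not how logicians prove validity in general, but it captures the point: no assignment of content to A, B, and C can make Barbara’s premises true and its conclusion false, and truth of premises is a separate question entirely.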

He does seem to grasp, roughly, the outlines of the difference between deduction and induction when he states that deduction is about certainty and induction is about probability. What deductive arguments supply is logical closure: if the premises are true, the conclusion must be true. Hence the sense of “inescapability.” He says: “Getting most modern thinkers to accept the absolutism of deductive reasoning is like trying to use a nail-gun to attach electrified jell-O to a fog bank.” Huh? Who? Unfortunately his book has no account of what makes an argument valid, and he introduces myriad examples involving potentially confusing propositions with forms like “Only x are y” when he hasn’t even introduced the Aristotelian Square of Opposition (the basic standard forms being All S is P, No S is P, Some S is P, and Some S is not P) — a staple of every book introducing logic.

Inductive arguments establish their conclusions only to some degree of probability (which may be very high). It is therefore true, as Molyneux says, that inductive reasoning “deals more with probability than with certainty.” It is not true that all inductive reasoning “attempts to draw general rules from specific instances.” Generalizations do this, but inferences to the next case proceed from a collection of known instances to, well, the predicted next instance. Arguments from analogy proceed from case to case: because an undecided case is similar to a known case in specific ways (can be compared to it), it is probably also similar to it in some additional respect.

Nor is it true that “deductive reasoning goes from the general to the specific.” Sometimes it does, sometimes not. It can go from universal premises to a universal conclusion ((1) All men are primates. (2) All primates are mammals. (3) Therefore all men are mammals.) Or it can go from a universal-particular combination of premises to a particular conclusion. ((1) All politicians are liars. (2) Some Democrats are politicians. (3) Therefore some Democrats are liars.) And so on.

Don’t expect any of these specifics from Molyneux, nor a discussion of the specifics of what happens when either a deductive or an inductive argument has gone awry. He doesn’t seem to understand the difference between formal fallacies and arguments with false premises.

Sometimes he is clear as mud, as when he states, “There is another category called abductive reasoning that draws a tentative hypothesis from disparate data, but which is related to some sort of testable hypothesis, rather than the reaching of a specific conclusion.”

For the record, here is my paraphrase of philosopher C.S. Peirce’s account of abduction (he coined the term): “Puzzling phenomenon P is observed. If H were true, then P would follow as a matter of course. Hence there is some reason for believing H to be true.” There’s work to be done, such as identifying what makes P “puzzling” and explaining “follow as a matter of course,” but it’s a start!
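Peirce’s schema lends itself to a toy rendering (all hypotheses and facts below are hypothetical, invented purely for illustration — this is a sketch of the schema, not Peirce’s own formalism): abduction selects the hypotheses under which the puzzling phenomenon would follow as a matter of course, conferring tentative credence rather than proof.

```python
def abduce(puzzling_fact, hypotheses):
    """Peirce-style abduction as hypothesis SELECTION, not proof:
    keep each hypothesis H such that, were H true, the puzzling fact
    would 'follow as a matter of course' -- i.e., H predicts it."""
    return [h for h, predictions in hypotheses.items()
            if puzzling_fact in predictions]

# Puzzling phenomenon P: the lawn is wet at dawn.
candidates = {
    "it rained overnight": {"wet lawn", "wet street"},
    "the sprinkler ran":   {"wet lawn"},
    "a meteor struck":     {"crater"},
}
# Both rain and the sprinkler would make the wet lawn unsurprising;
# each thereby earns "some reason for believing" it -- no more.
print(abduce("wet lawn", candidates))
# ['it rained overnight', 'the sprinkler ran']
```

Note that abduction here narrows the field rather than settling anything: deciding between the surviving hypotheses is further work, exactly the sort of work Peirce’s schema leaves open.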

Sadly, this book is riddled with so many errors and such imprecision that we might as well stop here. Professionals are unlikely to finish it, and well before they get to Molyneux’s libertarian arguments. This is unfortunate because I think Molyneux means well (hence two stars instead of just one). He favors using reason to persuade instead of using threats of force to coerce or intimidate. So do I. He wants to save Western civilization from its enemies, some of whom hang out in academia. So do I. His book appeals to reason and inveighs against every form of leftist political correctness, every sort of irrationalist postmodernism and radical academic feminism, every blithe assumption that those in authority know what’s best for the individual, and other mistakes of the past half century.

But if we’re going to undermine these with solid logic, we have to lay its groundwork correctly. There are sections here on definitions — introduced too late to do any good. There is a reference near the end to the importance of identifying fallacies but no careful and systematic discussion of them: ad hominem attacks, appeals to authority, appeals to pity, ad baculum or threats including ostracism, red herrings, equivocations, arguments from ignorance, and so on, which are central to any tract on logic or critical thinking and would have been invaluable here.

Nor is there any discussion of statistics or statistical fallacies which would also have been invaluable in combating sloppy social policy. There is an abundance of discussion of philosophy vs sophistry: the philosopher wants to find and communicate truth, says Molyneux rightly; the sophist wants control and will use lies or BS to obtain it. The former studies reasoning. The latter studies people (presumably psychology). The former makes his/her case. The latter uses language to confuse and manipulate. There are plenty of appeals to the connection between proper thought and reality, which is unhelpful since in the areas Molyneux is most concerned about, social issues and matters of economic organization, there is massive disagreement about what the facts are and therefore what reality is.

I’ve the impression that Molyneux learned more logic from participating in debates than from actual study of texts and available literature written by competent logicians. His book is therefore far more a defense of free market economics and libertarian social philosophy; maybe that’s why it’s marketed here as political philosophy instead of logic. I’m sympathetic to many libertarian ideas, but I would not want to claim that they are panaceas, universally established by pure logic and empirical evidence (where, for example, do we actually see a functioning libertarian society of any size, enabling us to confirm that the libertarian self-regulating free market really does work in practice?). I don’t think one has to be a leftist or an egalitarian to deny that Western societies are in some sense meritocracies, or could be, and therefore I can wonder if certain government actions on behalf of, e.g., those who are poor or infirm through no fault of their own are requirements of a moral social order. This has been the prevailing opinion in modern thought, long predating the present PC mindset, sometimes argued for at great length, and if one expects to be taken seriously one has to grapple with it on its own terms. Also, does anyone truly believe that without legal mandates corporations would be forthcoming about the actual (not merely possible) dangers of products that make them money? (Think: cigarettes, or the less extreme case of mildly addictive additives in unhealthy processed foods.)

Is this nitpicking? I don’t believe so. My response is that if you’re going to undertake a project like this, don’t do it half-assed! I write as an independent writer/scholar myself, someone who walked away from academia because of the prevalence of many of the superstitions enumerated above and how they’ve corrupted the entire enterprise. One thing I’ve learned: because of the prejudice of those “on the inside,” we are held to higher standards than they often hold themselves to (and I’m not saying the professionals haven’t written some stuff that is absolutely awful), and so we must hold ourselves to high standards or we might as well not bother. Sometimes it gets frustrating, and there are temptations to cut corners with, e.g., appeals to “common sense” that might not be shared by others. These need to be resisted.

To the best of my knowledge, a book that starts with the basic foundations and principles of logic and critical thinking, proceeds through definitions and fallacies in a logical sequence, arriving at the kind of groundwork that would successfully take down the above academic superstitions in the public square has yet to be written. I am not sure how many academics these days would be motivated to try. Be that as it may, this book is not it.

Posted in Books, E-Philosophy, Higher Education Generally, Libertarianism, Logic, Political Philosophy | 3 Comments

Analytic Philosophy: An Informal Defense (and a Modest Criticism)

This is a post dealing with a few basic issues in contemporary philosophy, issues easily lost sight of even by some trained professionals depending on their inclination.

History discloses four major traditions, or more precisely, methods, of doing philosophy. There is systematic or speculative philosophy — the tradition represented by major pivotal figures such as Plato, Aristotle, Aquinas, Descartes, Kant, Hegel, and the Whitehead of Process and Reality. It tries to provide an account of reality and everything in it, including where we fit in, what is of value in human life, and the moral rules or principles by which we should guide our conduct, among other things integrated into a single consistent system. Then there is analytic philosophy, which developed very slowly out of a sense long predating, e.g., Frege, that there was need for logical clarification of the questions philosophers ask and the language used to answer them. You can find plenty of hints of such in Leibniz and Wolff on the Continent, predating Kant, and you’ll find similar moments in Locke and Hume in the English-speaking world. There are figures, such as the early Wittgenstein, who cross-pollinate the two traditions in specific ways: his Tractatus Logico-Philosophicus is systematic but tries to draw limits to thought by exhibiting the limits of the logic of our language.

(The other two schools are, of course, the existentialist-phenomenological tradition: philosophy should describe the human condition, which may mean writing novels as Sartre and Camus did or mean producing detailed analyses of lived experience as phenomenologists beginning with Husserl did; and Marxism / Frankfurt School thought: philosophers have tried to describe the world; the point is to change it.)

My topic here is analytic philosophy. Some say most of it is trivial, or at least inconsequential. While some analysts assuredly go overboard with the logic-chopping, if one considers this method’s potential to make a difference in our understanding of language, I beg to differ. If we give analytic philosophy a chance, we find that its approach to philosophical method is of the first importance.

What’s important about analytic philosophy is the sense (which one does not have to be a trained philosopher to appreciate) that it is frequently important to clarify what a question is asking, or what a conclusion is really asserting, as a condition for knowing whether one has gotten anywhere with an inquiry. This is surely true enough in the traditional problems of philosophy. The difference of methods is the difference between an assertion that God exists, perhaps backed up with a standard argument, versus asking: What does the term ‘God’ mean? Does it mean the same thing to everyone? Or between being asked “Are you free?” and realizing the need for clarity: “What sense of ‘free’ are you talking about?” Even asking the far more specific “Is your will free?” gets into trouble, absent a clear sense of what the question is asking and what a good answer to it looks like.

So analytic philosophers have seen their job as stepping back from the Big Questions and asking what they mean, through close attention paid to the language one uses to ask them, and to defend specific answers to them. If we cannot get clear what it is we are talking about, in these cases or many others, then we can hardly expect to achieve results that anyone will agree on or are useful. If there is insufficient attention paid to language, then interlocutors who disagree will continue to talk right past one another … which, of course, they might do anyway, but for other and less savory reasons than mere linguistic or intellectual confusion.

So by God most of us mean the God of Christianity, or of the Bible, a Unique Being, uncreated, existing outside of space and time as we experience and understand them (which may be Western constructs in any event), all-powerful and able to suspend causality to perform miracles, all-knowing, His nature manifesting both perfect logicality and perfect moral goodness, and perhaps more. All these concepts may (do) stand in need of further analysis, but with this sense of what we are talking about, we have material to work with (which may, of course, be old hat … or not). For there is also, some will note at once, the Creator invoked by deists, who doesn’t necessarily have all the above characteristics as he doesn’t intervene supernaturally in the world. Analytic philosophy of religion can clarify such differences and perhaps contribute to the discussion of why they might matter for such policies as the separation between church and state.

With freedom of the will, we find ourselves exploring issues dating back at least to Hume’s and Kant’s time, such as whether freedom means action taken outside the causal structure of the universe: free actions being the set of actions for which what I want to do is the sole determinant. This, of course, raises further questions, such as: if what I want to do is determined by nothing outside of itself, how does it avoid being completely arbitrary? One answer is that there are influences, but not determinants, on what I want to do, identifiable only in the case of specific, concrete actions. Influences, unlike determinants, can conflict with one another … as anyone knows who has had a decision to make and been caught between conflicting impulses. There are many other answers as well; although it is a separate discussion, because there are many senses of free, I would argue that it is confusing and misleading to bifurcate freedom of the will and influences (even determinants) on our actions as if the difference were absolute.

Analytic philosophy did more than explore conundrums like these, of course. While there continued to be healthy explorations of the traditional questions of philosophy and an insistence that they be given answers that made logical sense (and, for the positivist and logical empiricist sub-schools of analytic philosophy, that they be fully consistent with the pronouncements of the specialized sciences), there were also explorations of philosophical method itself … including whether philosophy could have a method of its own, one capable of resolving special logical-mathematical-set-theoretical conundrums such as the Liar Paradox (“I am at this moment lying to you; do you believe me or not?”) and the paradoxes of set theory with which Russell wrestled. Soon we saw close analyses of the language and justification of the findings of physical science (the origin of modern philosophy of science), analyses deemed important because physics had just undergone its most significant revolution since the scientific revolution itself with the fall of the Newtonian edifice.

A division appeared among analysts, however, over whether they should approach their subject matter from the standpoint of an ideal language (the preferred ideal language being the formal logic developed during the 1800s and refined by Russell, Whitehead, and the early Wittgenstein), or whether analysts should pay more attention to language in its “ordinary,” unrefined and unreconstructed usages. The later Wittgenstein, and then Strawson, were among the leading proponents of the latter, and the term ordinary language philosophy was coined as more and more British philosophers came on board, much to the chagrin of ideal language philosophers such as Russell. Ordinary language philosophy had the apparent advantage that it could take account of the many uses to which language is put, uses not captured by truth-functional accounts. We use language not merely to assert true statements about things but to request information (ask questions), provide instructions, give orders, express emotions, tell stories, tell jokes … and there are other uses we’ll encounter presently.

There can be no single “ordinary” language, of course; there are many “ordinary” languages. Every natural spoken language, in its raw, unanalyzed form used by a community of native speakers, could be viewed as an “ordinary” language; and natural languages, responding not to the severe requirements of an abstract formal logic but to a multitude of practical, workaday demands placed on them by users, grade into more formal languages as they become specialized in various endeavors, be they scientific, technological, commercial, entertainment-focused, or some combination of these. The difference between a formal language and a natural one is, therefore, a continuum and not a dichotomy. (Regrettably, Western philosophy is full of untenable dichotomies, but that, too, is a post for another day.) Thus the term I prefer over ordinary language philosophy is natural language philosophy. It is more versatile, able to cover more territory in the human world. Charles Morris, who sought to integrate the insights of both pragmatism and behavioral psychology into analytic philosophy (and who was instrumental in enabling many logical positivists to emigrate to the U.S.), distinguished three areas of inquiry: syntax (or syntactics), the purely formal relations between signs; semantics, the relations between signs and objects or categories of objects; and pragmatics, the relations between signs and sign-users. This trichotomous study has sometimes been called semiotics, the study of signs.

The third of these, pragmatics, is by far the most important. If we wish analytic philosophy to be relevant to the world outside of academic cubicles and seminar rooms, it is where we should end up. The later Wittgenstein and his philosophical progeny enlightened us about the many uses to which language can be put, depending on what a speaker wants to accomplish. Examination of the many motives human beings bring to language use can enhance any such account.

Propaganda is, of course, one such use — the one I personally find both the most interesting and the most useful to analyze. Language can be used to utter true statements, or statements that are true to the best of the speaker’s knowledge, which is invariably somewhat fallible. It can have other innocuous uses, such as those listed above. Or it can be used to utter statements the speaker knows to be false, or is unsure of while seeking to conceal his (or her) uncertainty. It may be fair to call many such statements lies. Or, of course, language can be used in assertions whose truth value the speaker does not care about one way or the other, in which case the term bullshit has almost become part of the standard lexicon following Harry Frankfurt’s ingenious short analysis of it (On Bullshit, 2005; cf. also his On Truth, 2006). There are philosophers paying attention to such issues (e.g., Jason Stanley; cf. his How Propaganda Works, 2015, which I hope to discuss in a future post). The fact that I can virtually count them on my fingers is part of today’s problem.

Language is used propagandistically when it is used to mislead, so that its intended audience goes away believing something to be true that is really false, or at least insufficiently supported by publicly available evidence. We can all think of examples. Most of us have probably fallen into the trap of contributing a few of our own. An example Stanley conceded he hadn’t thought of (because his slant, like that of most academics, is left of center) is homophobia. It is an example I have mentioned before, as just one of a family of such examples all characterized by the use of the suffix phobia against critics of a belief or lifestyle or some combination thereof: Islamophobia, transphobia, and so on. A phobia is, of course, a recognized mental illness. Examples of real phobias include claustrophobia and agoraphobia. One does not respond to the arguments made by sufferers of a phobia, e.g., the claustrophobe’s argument that he really is in danger of suffocating because the place is enclosed. One never assumes their view of a situation is veridical. One therefore tries to cure them with therapy. Apply this to our examples above. What it means is that those accused of homophobia, Islamophobia, and now transphobia, etc., are falsely if indirectly accused of suffering from mental illness because they criticized specific assertions, activities, or policy decisions; their conclusions are therefore not seen as worth arguing against. Maybe they can be “cured” with the “therapy” of “sensitivity training.” (It is very interesting that although the word Christophobia has been used in a similar context by Christians, this word has never caught on and remains generally unknown.)

This is how propaganda actually works, both inside and outside academia, and analytic philosophers (especially those with the protection of tenure) can be criticized for their unwillingness to go anywhere near such examples. This is the “modest” criticism of my title. I keep it modest because the majority of professional philosophers are introverts; even those who are not have little taste for the rough-and-tumble world of political discussion, where it is easy to conclude that arguments and evidence aren’t what matter. (Most who do are contributing the wrong things! Or so I would argue.)

Distinguishing between the use of a word or phrase and a mention of it is also useful in analytic philosophy; consciousness of it could prevent many mishaps, including some that have sabotaged careers. A formal account will be helpful. To use a word in a sentence is, again, to say something about the word’s referent or reference class in the natural language where it is most at home. If I assert, “The cat is on the mat,” I am obviously saying something about a particular cat (“The cat …”), a nonlinguistic entity in the world. Or with the entire reference class: “Cats tend to be nocturnal animals.” If I assert, on the other hand, “‘Cat’ is an English word with three letters,” or perhaps, “‘Cat’ is an easy word for children to learn with an appropriate picture,” I am not using the word to refer. I am mentioning it, to say something about a linguistic entity. The bulk of our discourse about language(s) consists, obviously, of mentions and not uses.
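The use/mention distinction has a rough analogue in programming, where quotation makes the difference explicit: a variable stands in for its referent, while a string lets us talk about the word itself. A minimal sketch in Python (the names and the toy “referent” are mine, purely illustrative):

```python
# A toy stand-in for the nonlinguistic referent: some facts about cats.
cat = {"nocturnal": True, "legs": 4}

# The word itself, a linguistic entity, represented as a string.
word = "cat"

# "Use": a claim about the referent (the animal), not about the word.
assert cat["legs"] == 4

# "Mention": claims about the word as a word.
assert len(word) == 3   # 'Cat' is an English word with three letters
assert word.isalpha()   # it is composed entirely of letters
```

The quotation marks play the role the inverted commas play in “‘Cat’ is an English word with three letters”: they shift the subject matter from cats to the word cat.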

If this distinction can be made clear and public, then mentions of words or phrases such as illegal immigrant, or of other words and phrases deemed derogatory toward politically protected groups (or deemed hate speech, again imputing a mental condition to the user or mentioner absent any evidence of it), or deemed to create a hostile work or academic environment given today’s prevailing hypersensitivities, and therefore verboten, should be seen as a legitimate part of intellectual inquiry, even by those who reject using them. Mentioning is already done by the groups themselves on occasion. If asked, some could recount how they reclaimed a word or phrase so that their uses of it are no longer negative or hateful, on the condition that its use is limited to their own speakers: as when blacks use the word nigger among themselves to refer to one of their own, or when homosexuals use the word queer of themselves, or as part of phrases such as queer theory that are now part of the academic lexicon. I trust it is clear that all of the occurrences in this paragraph are mentions and not uses (mentions of mentions, if you will). A use is always for something nonlinguistic. A mention is always of something linguistic.

In sum, analytic philosophy serves important purposes within philosophy, and, if one knows where to look, important purposes in analyzing the natural language in which trends and tendencies that affect the “public mind” are expressed. That includes “hot button” issues that, sadly, fill public discourse with more heat than light. Be this as it may, one should be able to see from this short discussion that analytic philosophy is potentially far from trivial, and that many of its most powerful techniques are sadly underutilized. Analyzing the language of op-eds, policy statements, political speeches, and related pronouncements of all kinds is part of a broader service analytically trained philosophers are capable of performing, if only more would rise to the occasion, further train themselves to reflect on their own ideological biases where these exist, seeing them for what they are, and not avoid analyses of words or phrases favorable to an ideology they prefer.

 

Author’s Note: if you believe this article and others like it were worth your time, please consider making a $5/mo. pledge on my Patreon site. If the first 100 people who read this all donate, my goal of just $500/mo. would be reached in no time! And if we’re honest about it, we all waste that much money each day.  

Telling the truth can have negative consequences. Around this time last year my computer was hacked — it wasn’t the Russians, either! Repeated attempted repairs of the OS failed, and the device gradually became unusable — a reason I haven’t been around much lately — and I’ve had to replace it off-budget.

This is also an attempt to raise money to publish and promote a novel, Reality 101 (a globalist speaks in a voice filled with irony and dripping with cynicism). Promoting a book means, in my case, the necessity of international travel which is not cheap.

I do not write for an audience of one. I write for you, readers of this site. If you believe this work makes a worthwhile contribution, please consider supporting it financially. I am not a wealthy person, and unlike the leftist groups I criticize, I do not have a George Soros funneling a bottomless well of cash my way.

If I reach the above goal of $500/mo., I may be able to speak at an event in your area (contact info below). On the other hand, if this effort fails, I am considering taking an indefinite “leave of absence” beginning later this year to pursue other goals. To sum up, these are your articles (and books). I don’t write to please myself. No one is forcing me to do it, as sometimes it brings me grief instead of satisfaction. So if others do not value the results enough to support them, I might as well go into retirement while I am still able to enjoy it.


Official Narratives

Note:  the post below is a brief excerpt from the central section of a much longer work in progress, tentatively entitled Confessions of an (Ex) Academic Dissident, which may or may not see the light of print someday. The topic, though, seems important enough given our present situation that it merits separate posting. In fact, I am kicking myself for not having done this long ago. Official narratives should be of interest to philosophers concerned with propagandistic barriers to important truths of various sorts. They are, after all, a form of propaganda that is not usually recognized as such, because they are literally everywhere …  

For our purposes let’s define an official narrative as a government-approved and media-sanitized account of some dramatic event, such as an assassination, war, terrorist attack, or mass shooting, or perhaps any major event which went contrary to official expectations, such as the outcome of a national election. An official narrative can often be identified as such by its appearing in relatively complete form very quickly after the event that prompted it, and then being reiterated endlessly in all major media, its essentials never again questioned by “responsible” commentators — frequently despite the absence of actual evidence (witnesses, physical evidence, a “smoking gun,” etc.). A convenient enemy is named whose motivations explain the event. Patriotism may be invoked (or possibly its opposite, as the case may be), as this also suspends judgment and helps manufacture public consent around the narrative.(1)

Two obvious examples are the immediate arrest of Lee Harvey Oswald for the assassination of President John F. Kennedy, as Oswald had lived in the Soviet Union, had recently sought to travel to Cuba, and could easily be associated in the public’s mind with Communism; and the announcement of Osama bin Laden as the mastermind behind the 9/11 attacks within hours of the attacks themselves — bin Laden being easily demonized as an Islamic jihadist. Mass media regaled viewers endlessly with televised images of the burning towers, creating a climate of fear as the name Osama bin Laden and phrases like axis of evil were heard over and over, like mantras.

Language is indeed used — repeatedly — to bewitch the public’s intelligence and take all thought deemed legitimate in a desired direction.

A third and brilliant exemplar of an official narrative was falling into place even as I was putting the final version of this essay together: the idea that Russian hackers and other agents of the Russian government, directed by Russia’s president Vladimir Putin, were able to influence the outcome of the November 8, 2016 election, in which Donald Trump defeated Hillary Clinton in the Electoral College, possibly having colluded with members of the Trump campaign, including by delivering hacked John Podesta emails to Julian Assange’s WikiLeaks, badly damaging the already-troubled Clinton campaign. During the December days leading up to the actual Electoral College vote on December 19, this claim, originating within the intelligence community including the CIA, was repeated continually despite the absence of evidence that it was true. We were clearly expected to accept the word of the CIA and media reportage on faith. This despite the CIA’s having “inaccurately” claimed, back in 2002, that Iraq’s Saddam Hussein had weapons of mass destruction and was able and willing to use them against Americans, leading to the most disastrous war since Vietnam — a war the U.S. started!

That this is not labeled a conspiracy theory is telling!

Another key sign of an official narrative is that its targeted audience (usually the general public) is expected to accept it on faith, which means simply not seeing or hearing what does not fit the narrative. Ignored in the case of the Kennedy assassination is the fact that a single bullet inflicting the documented damage, which included injuring Texas Gov. John B. Connally, seated near Kennedy in the motorcade, would have violated the laws of physics: hence the phrase magic bullet, which should have been a dead giveaway that something wasn’t right. Also ignored is that with the vehicle in which Kennedy was riding in motion, however slowly, and neither of the wives seated beside the two men injured, the shooting was clearly the work of a trained professional, which Oswald was not.(2)

In the case of 9/11, surely it isn’t crazy to wonder how 19 hijackers were able to commandeer four planes, presumably without trial runs of any sort, fly them (or force their hapless pilots to fly them, as there was no documented evidence of the hijackers’ ability to fly them) for lengthy periods of time across multiple states, crash two of them into the Twin Towers, and fly a third into the Pentagon after executing a tight maneuver experienced pilots are on record as saying they couldn’t duplicate. Where, precisely, was the most expansive (and expensive) multilayered air defense system in the world that morning? This is only a smattering of what is left unexplained by the 9/11 official narrative. Also ignored are the claims of those escaping the Twin Towers to have heard explosions in the buildings that could not have been caused by burning jet fuel, as they came from below, not above, and of scientists claiming that the laws of physics preclude the specific kind of collapse that was witnessed not twice but three times that day: for there was also the mysterious collapse of a third building, WTC-7 (ignored by the much-touted 9/11 Commission Report), which had not been struck by anything substantial.(3)

In the case of the supposed Russian hackers and agents (soon to include Sergey Kislyak, Russian ambassador to the U.S.!), it is reasonably clear that something unusual occurred to tilt the election in Trump’s favor at the eleventh hour. WikiLeaks had done one of its infamous data dumps just days before, and it contained potentially damning information about Hillary Clinton. WikiLeaks founder Julian Assange insisted that his source was not the Russians. One can believe him or not. Or one can believe, or not, the account that Seth Rich, a DNC staffer and disgruntled Bernie Sanders supporter, was murdered because he was the leaker of the information intended to hurt Clinton. There is, of course, no “smoking gun” evidence proving this to be true, just as there is no “smoking gun” evidence linking anyone in Russia to the outcome of the 2016 election — evidence made available for public inspection, anyway. But it is interesting that despite this structural similarity between the two claims, the Seth Rich allegation has been repeatedly labeled a conspiracy theory by all major media.

(1)  Cf. Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (Pantheon, 1988, 2002).

(2)  See James H. Fetzer, ed., Assassination Science: Experts Speak Out on the Death of JFK (Open Court, 1998). More recent work on the Kennedy assassination focuses not just on the hows but on the whys, and draws conclusions that should be considerably more disturbing to anyone who believes the U.S. is really a representative democracy (cf. the next section). Cf. also James W. Douglass, JFK and the Unspeakable: Why He Died and Why It Matters (Touchstone, 2010) or David Talbot, The Devil’s Chessboard: Allen Dulles, the CIA, and the Rise of America’s Secret Government (Harper, 2015).

(3)  Before dismissing the allegations implied in this paragraph out of hand, readers ought to sit down with materials some of us have spent years with. A good place to begin is Jesse Richard, “You only believe the official 9/11 story because you don’t know the official 9/11 story,” http://www.globalresearch.ca/index.php?content=va&aid=26340 (accessed 2 Sept., 2011). Follow it up by actually reading works such as David Ray Griffin, The New Pearl Harbor (Olive Branch Press, 2004); Steven E. Jones, “Why indeed did the WTC buildings collapse?” http://www.physics.byu.edu/research/energy/htm7.html (accessed 23 December, 2005); Rowland Morgan and Ian Henshall, 9/11 Revealed (Avalon Press, 2005); James H. Fetzer, ed., The 9/11 Conspiracy: the Scamming of America (Open Court, 2007); or Judy Wood, Where Did the Towers Go? Evidence of Directed Free-Energy Technology on 9/11 (The New Investigation, 6th, 2010). All are perhaps worth reading in light of Michael C. Ruppert’s revealing Crossing the Rubicon: The Decline of the American Empire at the End of the Age of Oil (New Society Publishers, 2004). There are time-stamped videos, finally, of reporters announcing the collapse of the third World Trade Center tower, WTC-7, before it fell. Indeed, the building is visible in the background if you know which one it is: another dead giveaway to anyone paying attention that something is seriously amiss with the official narrative of what happened that day! Did someone miscount the number of time zones?



Wittgenstein’s Two Greatest Insights About Language

We’re back, after another unfortunate hiatus caused by a lingering illness and furthered by a computer meltdown. Might as well accept it: I will never be a technology person. But anyway….

This post is one I’ve been planning for some time. One could argue that Ludwig Wittgenstein was the twentieth century’s most important philosopher. He made a substantial contribution to the ideal language analytic philosophy that began with Frege and Russell and emphasized formal systems, and basically pioneered the natural language analytic philosophy that rose during the 1940s and even more during the 1950s, emphasizing the varieties of uses or purposes natural language serves. Wittgenstein was one of those rare thinkers who developed two quite different philosophies, the second of which was a devastating criticism and rejection of the first. He also greatly influenced the historicists in philosophy of science (Toulmin, Hanson, Kuhn, Feyerabend, etc.) and allowed the building of bridges to other philosophical traditions such as French poststructuralism (Foucault) and outside of philosophy to cultural anthropology (Geertz).

To my mind, two of Wittgenstein’s statements stand out as singularly profound, and important in ways going far beyond the antiseptic groves of academia.

Despite the popularity of logical positivism at the time, a close reading of the Tractatus shows that Wittgenstein was no positivist: the influences on his philosophy of language and its relationship to thought — to what can be said versus what cannot be said — range well beyond Russell and Frege. Think Tolstoy, for example, or possibly Kierkegaard, or even Kafka. But all of them aside, one statement late in the Tractatus suggests that Wittgenstein was already trying to think outside the set of conceptual boxes that were constraining ideal language analytic philosophy.

The first statement, in this case: “In philosophy the question, ‘What do we actually use this word or this proposition for?’ repeatedly leads to valuable insights.” Final paragraph of 6.211, Pears / McGuinness translation.

The early Wittgenstein is observing here that whatever supremacy one accords to logical form alone, use matters. Paying attention to it leads to “valuable insights.”

Language is more than form (logical, grammatical). It has purposes: communication, a storehouse of information, instruction, and so on. Wittgenstein lists a variety of the uses of language near the beginning of Philosophical Investigations. A language is a system. More precisely, it is a concatenation of systems: its own set of sounds or phonemes used in often specific combinations (it is interesting to observe that nonnative speakers of a given language will typically have difficulty pronouncing sounds or combinations their native language does not use, especially if they are trying to learn the language as adults); their combination into words; rules attaching words and phrases to objects or classes of objects or attributes; a grammar allowing for discussion of present continuous, past simple, future, and other tenses; words allowing for relations in space and time, movements within space and time (prepositions); and so on.

These, the careful student of language eventually must realize, are conventions only. There is no logical or any other necessity about them. They are what they are because they allowed for solving problems of communication, etc., and are taken for granted within a community of speakers.

This all becomes evident to learners of a second language if they spend some time reflecting. It’s tempting for a native English speaker such as myself to ask of Spanish, for example, Why do they say it that way? There’s no good answer to such a question beyond the fact that these are the linguistic conventions speakers accept and use, having inherited them from generations past. One may make the point further by looking back at English and retorting, Well, why do we English speakers do it this way and not some other way? It’s actually easier to ask the question of English, given that Spanish is a relatively pure Romance language following the streamlined internal conventions common to Romance languages, while English is messier, having drawn from both Romance and Germanic roots, resulting in greater complexity. In any event, one embraces and uses the conventions of the target language if one hopes to be understood by its speakers.

The point, though, is that a language consists of conventions “all the way down,” one might say. There is no “metaphysical” necessity between a word or phrase and a given object or class of objects, not in English or in any other language. These conventions include usages which can be observed; one can learn a language such as French or Spanish by noting what words and phrases in those languages are being used to do: the situations in which those words or phrases occur, the responses they generate from others, etc. Sometimes these usages involve human motivations which change over time. To the extent that the aggregate motivations of a human community affect the uses of language, its conventions are somewhat flexible, although this flexibility is always limited. New words and phrases are coined; others fall into disuse; a few are changed beyond all recognition. An example of the latter is the word gay. One may compare the attributes this word was used for in, say, the 1950s, with its conventional uses today. Conventions change in response to pressures placed on them, and these can come from a variety of directions, including political ones.

Summing up this part of the post, the primary insight here is that we learn the conventional nature of language from observing what words and phrases are used by speakers to do, and note how these change over time.

In Philosophical Investigations the later Wittgenstein makes many observations worth pondering, but the one I would single out is: “Philosophy is a battle against the bewitchment of our intelligence by means of language.” End of paragraph 109, Anscombe translation.

Unfortunately, Wittgenstein’s primary concern, very likely his working premise, was that departures from the conventions that govern “ordinary usage” had given rise to the traditional philosophical problems about knowledge and certainty, perception, free will versus determinism, and so on. Again, what do the standard conventions of the English language permit users of terms like knowledge, certainty, free, etc., to do? Has a philosopher’s language “gone on holiday” (another memorable Wittgenstein phrase from Philosophical Investigations) when he asserts, “I am certain that p” as opposed to just asserting “p”? I do not believe Wittgenstein ever dwells on the third of the above, free will versus determinism, but other philosophers did in his wake, and their results helped to sort out the often ambiguous and therefore confusing fashion in which we describe ourselves as “free.” I used to ask students, “Are you free?” The proper response was to ask, “What do you mean by free?” That would have been to ask for a list of the standard conventions under which we use that word.

Calling something a convention, however, does not mean we should never question it.

Given the range of uses of language, if philosophy is indeed a battle against the bewitchment of our intelligence by means of language, we neither can nor should confine ourselves to the ‘problems of philosophy,’ including the preoccupations of nearly all analytic philosophers, past and present. Human motivation is a factor here. Not all uses of language aim at uttering true or even useful propositions. Lies, conventionally, are propositions the speaker knows to be false but wants you, the listener, to believe are true. And if Harry Frankfurt is right, bullshit consists of propositions uttered by a speaker who doesn’t much care about their truth or falsity, only about achieving some other effect, such as muddying the linguistic waters. (See his On Bullshit, 2005.)

It is necessary, in this light, for philosophers (and any other interested parties) to consider propaganda, which uses language according to, or to establish, conventions less concerned with truth or falsity than with leading an audience in a desired direction. This direction may favor or disfavor that which is being described propagandistically.

Propagandistic language may be used to discourage independent investigation of an idea by discrediting it through what may be called weaponized language. An example is homophobia — a term almost automatically applied across the board today to anyone critical of the homosexual lifestyle, or of homosexual unions. This term has become conventional. One sees it everywhere. A phobia is, of course, an irrational fear. People suffering from irrational fears are not answered with logic; they are offered cures, or at least attempts to control their fear (“sensitivity training”?). There are legitimate phobias (agoraphobia, claustrophobia, etc.). Is homophobia one of these? What makes it such? The word was rarely heard prior to the 1990s, which alone makes the matter suspicious, if not decisive. We can raise the matter because of those worldviews, religious or otherwise, that reject the mainstreaming of homosexual conduct, or produce claims backed by scientific evidence, independent of anyone’s theology, that the lifestyle is physically damaging and actually shortens the lifespans of those who practice it. I cannot claim to know, sitting here, that such claims are true (I suspect they are). What I am sure of, though, is that they cannot be dismissed prior to investigation, and should not be circumvented by the application of a term designed to portray anyone raising the issue as subject to an irrational fear.

A similar situation exists with regard to the phrase conspiracy theory. As conventionally used in major media, calling something a conspiracy theory equates to dismissing it as irrational, and is usually sufficient to discourage closer looks. But is this the same as having supplied evidence that the idea to be rejected really is irrational, or unsupported by any sort of fact? Given the history of the convention — its creation by the Central Intelligence Agency back in 1967 to circumvent the arguments brought forth for questioning the Warren Commission Report on the Kennedy assassination — we have every reason to be suspicious of it. Why, in that case, is the claim that Seth Rich was murdered because he supplied Democratic Party emails to Wikileaks (a claim for which we have no smoking-gun evidence) a conspiracy theory, while the claim that members of the Trump campaign might have colluded with agents of the Russian government to affect the outcome of the 2016 election (a claim also without smoking-gun evidence) is not?

Calling a usage conventional does not exempt it from criticism if we can draw attention to what a word or phrase is being used to do (Wittgenstein’s first observation above), and especially if we can see that what it is being used to do is to bewitch our intelligence (Wittgenstein’s second observation).

It would be a great use of analytic philosophy of natural language, especially that which takes Wittgenstein as its point of departure, if philosophers were to begin questioning usages of terms and phrases that are clearly propagandistic — designed to sway audiences and move them in desired directions — even those that have attained conventional status. I used the two examples I thought of first. There are dozens of others. I am sure readers can think of some of them without trying too hard.
