Academia Embarrasses Itself Again: the Hypatia Affair

The last time I wrote a piece of this sort, an exposé of academic philosophers embarrassing themselves, it caused me some problems. I try to learn from my mistakes, and what I learned from that occasion could be set down as a general rule: other things being equal, before evaluating someone based on what they’ve said on social media, first be sure to investigate their major statements (articles, books if available, etc.) rather than respond to something hastily scribbled on Facebook or Twitter.

But that doesn’t apply in what I’m about to discuss, and I mention it only as background, for completeness’ sake. For at issue here isn’t an ill-considered Facebook post or, God help us, a 140-character tweet, but an article deemed to have passed muster for, and which appeared in, a refereed academic journal. The journal is Hypatia, for over 30 years now one of the major mouthpieces of radical academic feminism. It’s the sort of journal that would not have been conceived back in the happy and carefree days of, say, logical positivism, whatever that school’s drawbacks, but which fits perfectly into our unhappy era. After all, major spokespersons for what now passes as important scholarship would denounce logical positivism not for its drawbacks (logical, methodological, epistemological) but because nearly all its practitioners were too pale of flesh, too “cisgendered,” and because their methods took no account of the anguish and horrific hurt to others’ feelings caused by correspondence rules, theoretical reduction, or even just an insistence on clarity and exactitude. The preceding might seem a caricature. I’m not sure it is!

The article in question was penned by one Rebecca Tuvel, an untenured assistant professor of philosophy at Rhodes College in Memphis, Tenn. The article’s title was “In Defense of Transracialism.” I will admit up front: I haven’t read it and have no plans to do so. I did read the abstract, though (available most easily here, in case the link to the original no longer works). I gather that the attempted neologism transracialism refers to someone such as Rachel Dolezal who, though born white, has portrayed herself as black and even, at least by implication, attempted to transition to blackness. Her story gained some notoriety as it appeared at roughly the same time as the now-infamous celebrity gender transition of Bruce Jenner to “Caitlyn” Jenner.

Tuvel’s article appears to have employed a familiar and fairly standard argument form: argument from analogy. (If I am wrong about this and the article does something else, someone will have to comment and inform me.) Thus considerations that apply to one category of transitioning person might apply to another, perhaps more hypothetical, category of transitioning person, based on any identifiable relevant similarities. There is a review and analysis of the article here, and Tuvel’s article looks to be as well-reasoned as anything to be found in academic philosophy today despite the trendiness of its subject matter (and, if we’re honest, we must acknowledge that the happy and carefree days of logical positivism were hardly free of trendiness). Going off the abstract, the article seems cautious and conservative in the sense that Tuvel apparently never actually averred that Dolezal is a “transracial” person.

What she appears not to have considered, though, is the singular trait of our unhappy era: the degree to which those hurt feelings and sense of offense would trump even the best arguments and inspire an explosion of unbridled rage. The rage against the most likely innocent author would spill onto the journal that published her work. Let it never be said that politicized academic radicals will hesitate for a minute to savage one or even many of their own at the slightest indication of independent thought stepping across a constantly shifting boundary into heresy!

Sorting out the exact sequence of events by referring to the originals is problematic, as some of the links have gone dead (I would probably have removed them, too). But, assuming I can rely on this, some 150 “new scholars” and their tenured-class supporters (including a couple of people on her dissertation committee!) put together and signed an “Open Letter to Hypatia” denouncing the article (read it here). They demanded that Tuvel’s article be retracted, claimed that it “caused … harm” (whatever that means, as no evidence of “harm” is presented or cited), and even demanded — taking presumption and arrogance to never-before-seen heights — that Hypatia revise its editorial standards to ensure that no article of this sort ever again gets through its review process.

This comes in place of what would most likely have happened back in those happy and carefree days. An article with a novel and perhaps controversial idea would be refereed and published. Some ambitious person, probably young and untenured, would find elements of its argument problematic and craft a carefully nuanced discussion piece, possibly eliciting a response from the first author. Further discussion would be generated, possibly leading to a round-table debate during the next available time slot at the next APA convention. There would be no (or very little) hostility. People’s feelings would not be at issue; their motives would not be impugned; everyone would be assumed to be attempting to advance a common conversation toward greater understanding of the original article’s area of reference.

Would this happen now, with today’s race-obsessed, gender-drenched, etc., academic culture?

You’re kidding, right?

In our unhappy, post-Enlightenment era, emotions have largely overwhelmed reason, especially among the noisy, politicized subdisciplinary “new scholars,” and the sort of procedure I just described would be, if anything, far too slow. Far easier instead — especially given the near-instant communications now available to everyone — to satiate the passions of the moment and dash off a nasty list of superficial criticisms and allegations. These boil down to the author’s failure to accommodate the latest fashions of, e.g., the “critical theory” that dominates these subdisciplines. Suffice it to say, some of what you’ll read in the “Open Letter” (assuming the link continues to work) goes beyond standard academic criticism of an author’s work to the borderline defamatory — this is Professor Leiter’s judgment, not mine. He goes on to express “hope” that she consults a lawyer to discuss her options, even offering in one of his several posts to assist her in obtaining contacts and to help with legal expenses!

Whatever one’s negative judgment of logical positivism, its practitioners did not defame one another in print! Ah, those happy and carefree days …

The situation is worse than the above makes it appear.

Hypatia’s editorial board instantly caved!

Their statement stands as an exhibition of all that is wrong with American academia today, especially the humanities! I will reproduce the first paragraph of their “apology” (hardly an apologia in the ancient Greek sense of Plato’s “Apology”!):

We, the members of Hypatia’s Board of Associate Editors, extend our profound apology to our friends and colleagues in feminist philosophy, especially transfeminists, queer feminists, and feminists of color, for the harms that the publication of the article on transracialism has caused. The sources of those harms are multiple, and include: descriptions of trans lives that perpetuate harmful assumptions and (not coincidentally) ignore important scholarship by trans philosophers; the practice of deadnaming, in which a trans person’s name is accompanied by a reference to the name they were assigned at birth; the use of methodologies which take up important social and political phenomena in dehistoricized and decontextualized ways, thus neglecting to address and take seriously the ways in which those phenomena marginalize and commit acts of violence upon actual persons; and an insufficient engagement with the field of critical race theory. Perhaps most fundamentally, to compare ethically the lived experience of trans people (from a distinctly external perspective) primarily to a single example of a white person claiming to have adopted a black identity creates an equivalency that fails to recognize the history of racial appropriation, while also associating trans people with racial appropriation. We recognize and mourn that these harms will disproportionately fall upon those members of our community who continue to experience marginalization and discrimination due to racism and cisnormativity.

Anyone so inclined can read the whole thing here, where Professor Leiter reproduced it. His self-description as a New Yorker notwithstanding (referencing a low tolerance for self-evident bullshit), he clearly has a stronger stomach for this sort of thing than I do. Maybe that’s a necessary rite of passage for entrance to tenured-class status in our unhappy era. Suffice it to say, the remainder of Hypatia’s longwinded and neologism-laden “apology” caved to the demands on every point, even the one to revise editorial policy. From the above we even learn one of the newest neologisms, deadnaming: the speech crime of referring to the name a “trans” person used prior to their “transing” (or whatever the hell we’re supposed to call it). Critics of Tuvel’s article went further than the above block quote implies, impugning her abilities as a writer and a scholar; it was this that Leiter, with his legal acuity, picked up on. Others have as well. Several other responses to the “Open Letter” and the Hypatia cave-in have appeared as word has spread.

Leiter’s overall views are left-of-center on most issues of public policy, as with the majority of academics of his generation, but not batshit-insane radical leftist. He calls the entire affair a “fiasco.” It’s hard to disagree with that, but easy to enhance such a description. Leiter would probably not agree with my overall take on this, for I do not think it suffices to treat the affair in isolation. It belongs alongside numerous other recent events, such as the obvious suppression of conservative speech on campuses reflected in, e.g., the cancellation of Ann Coulter’s scheduled appearance at Berkeley in the face of threats of violence, and in the still-broader historical context of what has happened to academia since the present unhappy era began: the 1970s.

What happened was the emergence of the “argument” (it was always far more an exercise in propagandizing and then bullying, at which the academic left has always excelled) that minorities, women, and now apparently “transgenders” (but — gasp! — not “transracials”) are “underrepresented” in academia, and that all departments should make efforts not just at outreach but to hire more such people — for after all, “diversity is our strength,” is it not?

We arguably started down this troubled road with the Supreme Court’s catastrophic Griggs v. Duke Power decision in 1971. This decision changed the fundamental meaning of discrimination from an action taken by individuals to a mere lack of politically acceptable outcomes and timetables for such. The meaning of affirmative action, ambiguous from the get-go, changed from that of well-intended outreach based on calls for an end to racial and sexual discrimination to an insistence on measurable results as a test of “nondiscrimination.” The gold standard became proportional representation. Hence the creation of the “underrepresented” group in all official policy recommendations relevant to hiring and promotions. Organizations with insufficient numbers of blacks and women in positions of responsibility could expect warnings if not actual EEOC lawsuits (Auburn University, a former affiliation of mine, received a warning regarding admissions of black students while I was teaching there).

A process was set in motion. I described this process in some detail, along with its effects on occupations ranging from the construction industry to academia, in my book Civil Wrongs: What Went Wrong With Affirmative Action (1994), a work not once discussed or argued with by academic philosophers but instead basically blacklisted in academia. I learned in 1996 from a sociology professor at Bowling Green State University with whom I’d begun corresponding that the book had been placed on an actual “index of banned books” there — an “index” of how medieval academia was becoming even then!

I’d committed one of the ultimate heresies: providing a political explanation of the rise of the “new scholarship,” arguing that so-called “critical theory” (which borrowed freely from French philosophers such as Foucault and Derrida), radical feminism, critical race theory, and a rising homosexualism that at the time was barely on the public radar were really forms of pseudo-scholarship. They were not advancing a common conversation but serving as launching platforms for political activism. Their primary method of protecting themselves from criticism was political correctness, rooted in Frankfurt School-educated Marxist philosopher Herbert Marcuse’s “Repressive Tolerance”: allowing the same free speech standards for “nonrepressed” as for “repressed” groups maintains systemic repression! By the end of the 1980s affirmative action had clearly evolved into race and gender preferences never called for back in the 1960s, and the idea became to protect these from legitimate criticism (in many cases from scholars far better situated than I was), while cowardly Republican politicians such as the first George Bush caved and signed the Civil Rights Act of 1991. That law reversed decisions such as Croson (1989), in which the Supreme Court had upheld lower court rulings threatening to drain the affirmative action swamp.

I opined further in my book that the particular attacks on such notions as rationality and objectivity coming from “new scholarship” quarters, though originating independently, had been incorporated into and even enhanced by this effort: a rational and objective approach to race and gender in American public policy did not yield the results activists wanted. There was no reason whatsoever why nondiscrimination should yield proportional representation of all ethnicities and both genders in all or most organizations (schools, workplaces, etc.). Nowhere in the world did such representation exist. Arguments based on experience seemed to show that efforts to bring it about were counterproductive and ought to be discontinued. (There was no automatic reason, moreover, to equate discrimination with repression. Jews faced discrimination and even segregation for centuries in Europe, and still sometimes ended up wealthy from running businesses.)

Such approaches based on fact and logical argument had to go. The result was that any sort of genuinely enlightened discussion of such subjects as race and gender was clearly dead in the water by the time my book appeared. I just hadn’t realized it. As Hobbes says somewhere, “When reason goeth against a man, a man goeth against reason.” A woman, too. Or any other gender you like!

Feelings thus reign supreme in the subdisciplines of the “new scholarship”! And they are getting increasingly unpredictable: I am sure Rebecca Tuvel, a recent Ph.D., is as much a product as a victim of this academic culture, and probably never dreamed this kind of fracas would erupt over her attempt to add something new to the conversation, trendy and sordid though the conversation is. I rather hope she is polishing her CV in addition to whatever legal maneuvers she might be considering. Being female, after all, is doubtless an asset today, but isn’t a guarantee if one has become controversial. Ask any well-educated woman who rejects the assumptions and methods of radical academic feminism.

Summing up: it is small wonder that appeals to “expertise” no longer cut much ice with significant and possibly growing segments of the public, including those who voted for Donald Trump last year. Expertise, obviously, is keyed to academia, which once trained the experts. Politicians and commentators invariably turn to academics when they want to back up their claims with evidence — to the extent this still occurs. While issues differ, academic “experts” increasingly are perceived, rightly or wrongly, by members of the voting public as living in an elitist echo chamber, their motives often suspect, their forests long ago rendered invisible by the trees, where what doesn’t fit official narratives often isn’t seen at all.

While admittedly only a tiny segment of the public is likely to encounter the specifics of a case like this, the larger group almost automatically associates much of academia with the elitist, insular, big-city, “blue”-culture mentality described above. The antics they have seen, which include women marching in pink “hats” obviously designed to represent their vaginas, are hardly encouraging. The “red” culture that had just put Trump in the White House saw vindication, in its own eyes at least, the very next day after Trump’s inauguration. This was the much-touted Women’s March, and to be blunt about it, the “red” culture instinctively saw “pussy costumes” as sick and degenerate. They regard someone wearing one as having something wrong with her mentally. How do I know this? Because I know such people personally, sometimes through years of correspondence, and they told me as much.

Nothing that has since come from media left-liberals or from academics is likely to change this.

The “red” culture, moreover, does not regard “trans”-whatevers as some kind of an intellectual and cultural avant garde but as symptoms of the worsening sexual confusion and depravity of a society in rapid decline.

Why does this matter? Because these people vote! They have, at least for the moment, thwarted the efforts of the multicultural and trans-whatever “blues” to dominate the country — or, at the very least, to dominate them. Even if violently attacked, as some were outside of Trump rallies last year, they will continue to vote. At least one columnist just voiced the suspicion that, given the disruptive behavior the left has engaged in since the election, were the election held today, Trump might actually win the popular vote!

The more obnoxious and violent the left gets, the more it loses. One has to wonder if and when radical leftists will set aside their feelings long enough to figure this out.

Or, to bring things back to academia — and to academic philosophy — will its saner members, even representatives of a left-of-center that wants to hang onto sanity, as Professor Leiter clearly does, decide they have had enough of this radical nonsense? What can they do about it? First they have to ask at what point they will recognize the need to abandon, as irrational and destructive, the now quarter-century-old obsession with “getting more women into philosophy,” itself a species of the further obsession with proportional representation. (When I checked his blog more recently, Leiter had posted a partial list of signees of the “Open Letter,” calling for What-were-you-thinking? type explanations. Seven looked to be men; 27 were women; one gave only initials; with the occasional first name like ‘Tempest,’ who knows? In fairness, not all the signees are academic philosophers. But some are. Too many are.)

Unless the presumption that academic philosophy needs more women is subjected to critical scrutiny (and silly allegations of the ‘cisnormativity’ of the critics are simply ignored), the sane will be able to do nothing. Meanwhile, enough radical crazies will continue to obtain the tiny handful of available tenure-track jobs to cause problems like this to erupt. They will continue to corrupt obviously floundering humanities disciplines, now facing defunding in many places, until many departments are forced to close and there is next to nothing left.

What is bad is that whatever visibility incidents such as the Tuvel-Hypatia fiasco achieve, their association not just with academic culture but with intellectualism generally will continue to damage the latter, making it even harder to get serious ideas discussed in our unhappy age … an age in which “diversity,” far from being “our strength,” has accomplished little besides destroying careers and has served up little besides division and hostility.

SPECIAL NOTE:  If you like this article or value my writing and wish to see more of what’s presently in the planning stages, please consider going to my Patreon.com site and making a pledge/donation.

UPDATE May 5, 2017:  The philosophy department at Rhodes College stands fully behind Rebecca Tuvel … for now, at least. Maybe she doesn’t need to polish up her CV after all. I would anyway, though, just to be on the safe side. (Here.)


May 1 – International Workers’ Holiday or International Diversion?

It’s May 1. Here in Chile, it’s a national holiday, the official name for which is Día Internacional de los Trabajadores (International Workers’ Day). The holiday isn’t celebrated in the U.S., of course, or in Canada, because of its association with far left groups, especially communist ones. This day has an interesting history, to say the least. It was on May 1, 1776, that Jesuit-trained law professor Adam Weishaupt founded the notorious secret society known as the Illuminati, which infiltrated European Freemasonry until, accused of conspiring to subvert and destroy all the governments of Europe, its members were suppressed and driven underground. That was the late 1780s. In 1789, of course, the French Revolution erupted. Its causes are well known, or so we are told. Weishaupt himself continued to live in relative obscurity until 1830, interestingly the same year the secretive Skull & Bones was founded at Yale University in the U.S. Both George W. Bush and John Kerry were members, and it was “so secret they couldn’t talk about it.” Some make more of this entire trajectory of events than others. The trajectory may be noted as one of those intriguing sequences of factoids about which we may never know the whole story, if there is a whole story to be known.

May 1 is also the day on which, in 1884, a proclamation demanding an 8-hour working day was issued; on May 1 two years later a general strike erupted over the issue, leading to the violence of the Haymarket affair, which resulted in a number of deaths when someone lobbed a bomb at police. The police fired back, killing four demonstrators; the next day, further disruptions resulted in bystanders being shot to death by militiamen. The Second International subsequently called for an International Workers’ Day, to be observed on May 1. “May Day” was formally recognized for the first time in 1891, and has remained a focal point for various leftist groups claiming to represent the interests of the proletariat. It remains one of the most important national holidays in mainland China, North Korea, and Cuba, and is still observed in the states of the former Soviet Union. Many noncommunist nations throughout Africa, South America, and elsewhere celebrate International Workers’ Day as well. It became a recognized holiday here in 1931.

Does celebrating a holiday accomplish anything for those not in power, however? I would surmise not.

Here in Santiago, Chile — in our neighborhood, anyway — the holiday was an excuse to stay up most of the night partying. I doubt the revelers gave the day’s history more than a moment’s passing thought, if that. I don’t have to know the personalities involved to be able to surmise this. I’ve lived here almost five years.

As an ex-academic, I’ve observed both how those seeking power operate, and how those with real power carry it forward.

The former are noisy and often obnoxious. The latter are generally as quiet as the dead. They don’t have to use noise to get what they want.

Many of us are heirs, in one way or another, of the so-called culture wars that began to erupt, little by little, during the Reagan-Bush years. Those years witnessed the meteoric rise of so-called new scholarship focused on race, gender, eventually sexual orientation, and identity politics generally, which was incompatible with the ideals of objectivity and rationality that imposed essentially the same norms on everyone. Traditionalist responses that tried to articulate and reaffirm those norms had begun to appear. I was one of those who saw policies such as affirmative action as propelling identity-scholarship. Noting that several highly visible Supreme Court decisions had begun pushing back against race and gender preferences in, e.g., university hiring and admissions practices, I advanced the thesis that political correctness erupted to protect preferences of various kinds from legitimate criticism, using propagandistic ruses (allegations of “racism” being the most obvious) to circumvent such criticism as well as intimidate the critics. The ruses worked more often than not; those who brought criticisms forward generally saw their academic careers lying in ruins — even if they had tenure, in some cases.

It was all very noisy and visible. What nearly all of us missed was the fact that outside the academic disciplines they had hijacked, and outside the media, none of these groups truly had power, which is not merely organizational but political-economic and has as its sources those with very little interest in how many women the philosophy department has hired this past year.

Neoliberalism had begun its own top-down transformation of the universities as far back as the early 1970s following the Powell Memo, which specifically referenced figures such as Herbert Marcuse and Ralph Nader as having inculcated an anti-business mindset in the universities. Indeed, the late 1960s had seen the rise of an entire generation that hijacked the national conversation. As a result, a war those with economic power wanted was rendered so unpopular that it was no longer beneficial to said powers-that-be to bankroll it. U.S. efforts in Southeast Asia were curtailed and discontinued.

To be sure, members of that generation stayed in academia, eventually won tenure, and began to transform their disciplines from the inside. This would gradually discredit those disciplines on the outside. What, after all, were we to make of the claim that a “female-friendly” science would differ from “traditional” science because women see the world in more relational terms than men? Would such a science be able to erect skyscrapers and fly airplanes? Would it surpass quantum mechanics? And how seriously, later, were we to take the prevailing feminist allegation that one in four women on campus would be raped while in college? (Or is the official number one in six? I honestly can’t remember, and can’t see that it much matters.) Today, the humanities are struggling to survive. The neoliberal university, having vastly expanded administration and focused on the furtherance of corporate interests, inculcating students into that mindset by redesigning campuses to look more and more corporate (using corporate logos, etc.), has little more use for academic philosophy than it has for “gender studies.” At its best, after all, academic philosophy still at least pays lip service to questioning authority.

While the visible debates surround such matters as the role of free speech on campus, the less visible ones involve such matters as the relative absence of academic jobs that pay livable wages, and whether several academic disciplines, once seen as at the center of intellectual inquiry, will be able to survive the present scourge of academic-corporatism intact. Or whether the average brick-and-mortar college with a full range of educational offerings (as opposed to well-endowed Ivy League institutions) will survive the rise of an online world which supplies the same offerings for nothing, or nearly nothing!

There is, in all this, little concern for those currently doing the real work of a college or university: educating its students, who have usually gone into once-unheard-of levels of debt in order to attend. Fewer than 30% of academics today have tenure or any hope of obtaining it outside of very, very good personal contacts. They are struggling to keep a roof over their heads at the same time they juggle stacks of papers to grade for the five or six courses they teach, spread over two or more campuses or even institutions (at one time I taught courses at two institutions spread across three campuses). And if students’ would-be employers ever discover that online educational world and tailor it to their advantage … ?

This environment is, of course, perfect for the intended political economy of the future, which if present tendencies continue will be global rulership by an economic superelite that will dominate the governments of the world through their central banks and financial systems, and through the latter, will dominate national portions of economies and educational systems. I believe STEM subjects are being encouraged because they fit this future model so well. Trained technicians will survive, and some may do reasonably well. Scholars trained in the art of questioning power systems will not do so well.

Not a pretty sight, but I am sure that what comes to pass will continue to allow holidays recognizing “worker’s rights,” which the workers themselves will use to escape into parties and such. “Where’s the revolution?” asks the British group Depeche Mode in their newest single. Answer: in the revolutionists’ dreams, where it always was.


April Book Potpourri: Kipnis, Stanley, Jorjani, More …

Over recent months and weeks any number of items have come to my attention that could have been blog entries, had I complete information about them.

For example, there is the just-released book by Laura Kipnis, Unwanted Advances: Sexual Paranoia Comes to Campus (Harper, 2017), as of this writing listed as the #1 bestseller in feminist theory on Amazon.com. I don’t know how much “theory” it can contain, feminist or otherwise, but based on reports I’ve read there can be no doubt it is relevant to the category. The book looks to be a scathing response to what its subtitle indicates: the sweeping sexual paranoia that has overwhelmed campuses over the past several years and is destroying people’s careers. In the case of her campus, Northwestern University, the target was philosopher Peter Ludlow, who was publicly excoriated, and — these are Kipnis’s own words — it “was like watching someone being burned at the stake in slow motion …”  All following what appears to have been a colossally ill-advised off-campus dalliance with a student.

Ludlow’s response to the proceedings seems to have been — well — philosophical. The publicly-available reports indicate that he resigned prior to actually being removed, packed his belongings, and was planning a move to Mexico.

Kipnis’s own point of departure was being attacked and nearly made a pariah herself for warning that such “witch hunts” (her term again), in which the punishment vastly exceeds the proven extent of the crime, were hurting the cause of women’s rights on campuses (assuming, for the sake of discussion, that women ought to have special “rights” that men don’t have). Ah, how the campus thought police of today have no problem eating their own when they step outside increasingly narrow orthodoxies, especially where sex and gender are concerned.

A cursory review of the “gang rape on campus” fiasco that occurred at the University of Virginia a few short years ago should have been sufficient to indicate that what can only be described as anti-male paranoia has gone completely off the rails.

I haven’t read Kipnis’s book, so I won’t attempt to comment further; a few revealing quotations from it can be found on Brian Leiter’s blog, and a longer excerpt can be found here (there is relevant commentary here, but it lies behind a paywall). My impression from a distance, however, is of someone who was stunned at how quickly those she considered political allies turned on her when she deviated from academic orthodoxy about sexual harassment and assault on campus. This is the problem with academic orthodoxies generally.

A quick time out, though, if you will.

I am based outside the U.S., and have been since 2012, following my walking away from a ridiculously underpaid adjunct position at a branch campus in the Southeast. One of the drawbacks of living in a foreign country, especially in South America, is getting North American books in a timely fashion. Which is why I don’t post about them more. I did manage to get a trove of books shipped here a couple of months ago (they arrived from Amazon in around six days, then sat in customs for over three weeks). Among them was Jason Stanley’s How Propaganda Works (Princeton University Press, 2015). I’ve not found the time to read it in any detail, though, and this time have no excerpts, so I will defer any comments for later this year. Stanley and I had an exchange (ping to here) which was initially acrimonious but over time grew more cordial.

A private email went unanswered, however, which I thought unfortunate, as we would likely have had a meeting of minds on something Stanley indicated he cared deeply about: the mass incarceration industry in the U.S., where people are not simply locked up but thrown into solitary confinement by sociopathic prison personnel and can actually die of thirst, or of insulin deprivation in the case of diabetic prisoners, if they aren’t beaten to death or don’t “commit suicide” (cases too numerous to link to). It is widely known overseas that the U.S. imprisons a larger percentage of its population than any other advanced nation in the world, including Communist China. “Private” prisons (i.e., prisons operating for profit), moreover, have a perverse incentive to imprison more people.

In any event, the Stanley volume is one I hope to return to later this year, and comment on in light of earlier tracts related to its subject including those of Edward Bernays, George Orwell, Aldous Huxley, Jacques Ellul, and others.

Also in that trove of books was an unusual work which came to my attention by virtue of the denunciations of its author as some kind of neo-Nazi: the book is Jason Reza Jorjani’s Prometheus and Atlas (Arktos Media, 2016). Again I’ve only found the time to read a little of it, but based on what I’ve read so far (Introduction and Chapter One), permit me to assure anyone who cares: while this time I’ve had no interactions with the author, so far the book is nothing of the sort! On the contrary, instead of a typical exercise in micro-specialization it is a sweeping, systematic work of a kind one almost never sees in professionalized academic philosophy! It exposes the errors of pivotal historical thinkers such as Descartes, whose bifurcation of the world gave us the roots of the mechanized world picture that evolved into modern materialism. Jorjani draws on modern and contemporary figures as diverse as Leibniz, Kant, Schelling, James, Heidegger, Kuhn, Feyerabend, Foucault, and Derrida. How the various contributions of these philosophers are integrated, and how the ancient heroic images of Prometheus and Atlas fit into the schema Jorjani gradually assembles over the course of 12 chapters, might fill several blog entries when the time is right.

Key to Jorjani’s work, however, is an attack on the above-mentioned materialism and a defense of the idea of the spectral — a new and original take on what are routinely dismissed as “paranormal” phenomena, along with the idea that such phenomena might actually be more common than anyone realizes: not noticed because, to draw on a notion Kuhn famously supplied in Structure, what does not fit into the conceptual boxes supplied by our dominant paradigms, whether in science or in life more broadly, often isn’t even seen: except in those cases, perhaps relatively rare, which intrude upon our consciousness to a degree sufficient to disrupt our daily doings. What has long fascinated me — the fascination goes back to my undergraduate days, in fact, and is among the things that drew me to philosophy in the first place — is how those who turn out to be committed materialists react to reports of such phenomena, generally made by people who have neither the imaginative power nor the motivation to make something up. The reaction is generally one of ridicule, not analysis or anything else indicating a desire to get to the bottom of what really happened.

What Jorjani’s book has to do with the “alt-right” I’ve not discovered yet (he does have other writings on the subject), but maybe I will; or maybe someone will enlighten me. The publisher (Arktos Media) has resurrected a few European writers with views most likely derived from the right-wing Hegelianism that preceded the alt-right. Jorjani seems off on a different (ad)venture, however. But time will tell, as well as reveal whether Jorjani can survive in the long term in academic philosophy, having written a tract such as Prometheus and Atlas.  Its antimaterialism alone will alienate it from the present-day philosophical mainstream, quite independently of anything its author has to do with the alt-right.

The last book I will say a few words about, from the same imported trove, is Socrates Tenured, by Robert Frodeman and Adam Briggle (Rowman & Littlefield, 2016). This work might not have come to my attention had I not run across this. That article observed what historians know: that prior to philosophy’s migration into the modern university, philosophers worked in a wide variety of occupations: Locke was a physician and then a diplomat; Berkeley was a cleric; Spinoza was a lens grinder; and so on.

But the situating of philosophy in the modern research university came alongside the rise of what I call “third stage” civilization (after Auguste Comte), which assigns to science a monopolistic status in knowledge-seeking, and to technology and commerce the favored status they have because they bring in cash. Philosophy — never much understood outside the circles of those who directly engaged it — does not set out to do this, of course. It had a home, but the price tag was inhabiting back-room educational-administrative cubicles, teaching students about Socrates’s “the unexamined life is not worth living” in a fashion carefully designed not to rock the boat. And it meant increasing specialization and micro-specialization. Even for those for whom “the personal is the political”: what does that mean, after all, outside a specific range of disciplinary matrices in the contemporary academic humanities? (Would Laura Kipnis concur? I don’t know, but it would sure be interesting to find out!)

Could Socrates have won tenure today? Frankly, the answer seems self-evident. Isn’t it far more likely that, assuming him to be the same character we encounter in Plato’s dialogues, he would suffer a fate not even equivalent to drinking hemlock — something dramatic enough to win attention — but, as someone who asked too many of the wrong questions, simply be refused job interviews until he faded into nothingness among the rest of the quietly excluded? At least he would be allowed to live. Today’s corporate-administrative consensus need not kill its dissidents when it is easier to let them disappear. Maybe Socrates could get “gigs” driving for Uber.

SPECIAL NOTE:  If you like this article or value my writing and wish to see more of what’s presently in the planning stages, please consider going to my Patreon.com site and making a pledge/donation.


Announcement: I Have Joined Patreon / Patreon.com

I’m back after another vacation, lol. Actually, I have been working very hard on completing a novel in six months (begun in early November with its projected completion date in early May).

To help with the costs of publication and promotion, I’ve joined Patreon, or Patreon.com to raise money. (Just getting an ISBN # of the right kind will cost $99.)

For full details, and potential rewards which include free books but possibly donations to worthy causes when I have the money, please go to my page on Patreon:  https://www.patreon.com/stevenyates.

Thank you, and I’ll be back with more philosophical writing and commentary as soon as time permits!


What Should Philosophy Do? (Part 3)

In the first two installments of this trilogy, our point of departure being John Horgan’s series at Scientific American, we offered a tentative response to the questions posed both by his title (“What Is Philosophy’s Point?”) and by ours. Before continuing, I am proud to note that the first installment on this humble blog was actually noticed (I am the seventh philosopher quoted). Even if what was quoted didn’t capture the full context of my discussion, any notice is better than no notice at all.

Earlier, we considered four aims for philosophy. (1) Philosophy aims to build grand systems of thought, attempting to account for everything, integrated into a single conceptual structure. (2) Philosophy aims at logical and linguistic clarification, because as Wittgenstein wisely said, “philosophy is the battle against the bewitchment of our intelligence by means of language” (Philosophical Investigations). (3) Philosophy aims to describe the human condition, especially under conditions imposed by modernity. (4) Philosophy aims at large scale social change, or at least laying the groundwork for such, by drawing on diagnoses (e.g., those of someone such as Marx) and outlining goals (e.g., democracy and equality).

We also distinguished between system-builders and system-smashers, although the dichotomy is somewhat loose (as are most dichotomies, actually). System-builders include Plato, Aristotle, Aquinas, Descartes, Locke, Kant, Hegel, the early Wittgenstein, and Whitehead. System-smashers include Thrasymachus and other Sophists (nemeses of Socrates, Plato and Aristotle), Montaigne the pre-Cartesian skeptic, Hume when he looked at theology (not as much otherwise), Kierkegaard, Nietzsche, the later Wittgenstein, and later, someone such as Paul Feyerabend.

The system-builders (who are not limited to philosophy, by the way) all had one strong psychological trait: a great deal of certainty about their stance. In some cases, this certainty manifested itself in their attempting to ground their system on propositions they believed could not be false, as their falsehood would involve something self-contradictory (e.g., Descartes denying that he could simultaneously not exist and be in a state of doubt; hence, “I am thinking; therefore I exist”) or unintelligible (e.g., the results, in Aristotle’s view, of doubting or setting aside the venerable principle of non-contradiction).

Yet there are dozens of things that can go wrong with such reasoning, either before it reaches such a point or afterwards — meaning that even if the certainty were justified, the philosophy goes nowhere, or at least nowhere interesting or worthwhile. Maybe there are principles that demand adherence as epistemically certain. But do such propositions have any content? Do they solve any actual problems for us?

Absolute certainty can be dangerous. As Horgan observes, if combined with political or institutional power, convictions of certainty can serve as a basis for theocracies, secular dictatorships, wars, campaigns of terror, or worse. They can lead to the suppression of alternative points of view, carried out with varying degrees of ruthlessness ranging from mere public ridicule and ostracism to house arrest (Galileo) to legal incarceration or even the cold-blooded murder of dissidents. One thinks here of the Spanish Inquisition, but there have been many “inquisitions” throughout history, including those with no link to religion but still motivated by the absolute conviction of those ordering them and those carrying them out that they are on the “right side” of history.

Horgan’s grand finale thus provides his most concrete suggestion of a “point” to doing philosophy, and it is a good one — worth building into any proposal developed here or in the future. He calls his idea negative philosophy and offers this description: “Philosophy … is, or should be, primarily an instrument of doubt, which counters our terrible tendency toward certitude….  [P]hilosophers should … embrace their role as wrecking balls. Demolition is a noble calling, given all the harm caused by know-it-all-ness. And by harm I mean everything from over-prescription of antidepressants to genocide.

“Let’s call this critical pursuit ‘negative philosophy.’ The allusion to negative theology is deliberate. Just as negative theology exalts God by rejecting all descriptions of Him, so negative philosophy honors Truth by skewering all expressions of It.”

Socrates, continues Horgan, became the first practitioner of negative philosophy when he defined wisdom as knowing that one is not wise and does not have knowledge. Socrates’s forays around his native Athens, buttonholing prominent citizens of the city whose actions were suggestive of wisdom, provided the basis for many a colorful Platonic dialogue in which Socrates shows them up as poseurs. Plato’s parable of the cave was intended to show how the majority of us humans are prisoners of mass delusions, some of them our own. The problem is, those who claim they’ve escaped from the cave have often just escaped into another delusion — and some of these delusions confer great power on those they possess. The great temptation is always to confuse reality with our beliefs about it, and then to stop questioning those beliefs.

For example, the delusions of what I’ve called third stage thinking (in Auguste Comte’s sense of the third of his Law of Three Stages, the “scientific and positive”). I’ll quote Horgan on this point: “Today, God is still kicking, but science is the dominant mode of knowledge, with good reason, because it has given us deep insights into and power over nature. Some scientists, intoxicated by success, claim that science is revealing the Truth about, well, everything.”

At this point, of course, science ceases to be science and becomes metaphysics — the very thing third stage thinking claimed to have jettisoned. So is its success real, i.e., universal, or is this just one more delusion?

We can’t ask the question in isolation, as there is no abstract entity, Science, existing unembedded, outside a variety of institutions (e.g., research universities) which merge seamlessly into other endeavors, some of them nonscientific, including technology, commerce, and government, the problem being that very little if any “pure” scientific research funds itself. Moreover, there are peoples who, prior to their first contact with the West, never so much as heard of any of these, and yet led contented lives, lives arguably happier than many of ours. The “scientific outlook,” that is, is neither a necessary nor a sufficient condition for human happiness, and might even have interfered with it on a scale not really appreciated outside those schools of philosophical thought that focus on describing the human condition, and/or that claim to find, e.g., dehumanizing elements in modern technology and in the “technological society” (think, e.g., Jacques Ellul).

There is a negative role for philosophy, that is, in forcefully articulating skeptical questions about the worldview that circumscribes much of Western civilization today. This worldview is complex, and is probably not fully captured by terms such as materialism, although that term surely describes its core features. Other aspects of this worldview are captured by phrases such as liberal democracy, market capitalism, the liberal international order, and so on.

If there’s a positive element here, it goes something like this: the fullness of reality is never captured or circumscribed by our concepts, our methods, or the vocabularies we use to express and communicate. Reality, however we characterize it, is bound to be vastly messier and more complex than our perceptions, conceptions, and images of it. It constantly surprises us. There is no reason to assume that it is static and unchanging. All too easily we fall into the trap of thinking that we’ve hit upon the Philosopher’s Stone — the One Right Way — and even if, as seems likely, some of our concepts and methods capture some of the truth of our proximate environment, we easily allow this to inflate our egos, as it were, and assume them to have captured all the truth for all of space and time (the Truth, cap-T).

Horgan quotes a colorful Feyerabend rejoinder to this mistake which simply cannot be passed over. Horgan observes: “Paul Feyerabend, when I interviewed him in 1992, ridiculed the idea that scientists can ‘figure out’ the world.” Feyerabend himself: “What they figured out … is one particular response to their actions, and this response gives this universe, and the result that is behind this is laughing! ‘Ha ha! They think they have found me out!’”

This remark reveals a key difference between the system-building (and many analytic) approaches to philosophy and the system-smashing approach: the former tend to be locked into an essentialism going back to Plato, and into the idea that we can supply logically necessary and sufficient conditions for finding the Truth. The system-smasher almost instinctively rejects essentialism, seeing it as the source of many a self-deception and delusion chaining us in that Platonic cave. He/she believes that when all is said and done, we inhabit a world (reality) of particulars, that our primary focus both is and must be on specifics, and that there is an important sense in which all genuine knowledge, based on successful problem solving, the conditions for which change from circumstance to circumstance, is therefore local. Perhaps this rejection of essentialism and universalism is the one thing that will survive answers to the sometimes ill-advised criticisms of metaphysics that the past century or so of both analytic and Continental philosophy has supplied in various forms.

Negative philosophy both can and should draw on the methods of analysts in showing that not just philosophers but scientists, politicians, and activists of various stripes have fallen captive to specific ways of speaking. The problem is that these will embody unexamined assumptions and valuations about which they feel great certainty, but which will not stand up to criticism if criticism is allowed. (Examples: the multitude of fake phobias, I will call them, that have put in appearances over the past couple of decades.)

Negative philosophy thus has important services to perform in a polarized world where propaganda is literally everywhere, where soundbites are confused with insights, in which all manner of individuals are quick to claim that Truth is on their side, and in which the other side has fallen prey to “alternative facts.” It can help sort out the difference between claims that really have the backing of specific lines of evidence, and those which have nothing behind them besides propaganda backed with social sanction.

In this sense, even though I am unsure it should be philosophy’s primary method (as I have no trouble imagining circumstances in which doubt should be set aside, where we should trust and have faith), I have no trouble affirming negative philosophy as an important and useful method — crucial to the critical examination of worldviews as well as more specific claims that this or that policy or idea will solve this or that problem.


What Should Philosophy Do? (Part 2)

Last week, we outlined four answers to this question, provided examples of each, and, following a brief discussion of Comte’s Law of Three Stages and the rise of materialism as a philosophical dogma, brought our discussion to a tentative conclusion: philosophy should attempt to identify, clarify, and critically evaluate worldviews as it finds them in society. Whether it should construct new ones is a different question. This suggestion is surely as reasonable as anything presently available. Among the factors that prompted Horgan’s discussion (cf. links in Part 1) is the fact that while philosophy may be institutionalized in academia (although there are institutions where it is in peril), as a cultural force it is arguably on life support. The idea, repeated by John Horgan and dating to early positivism (Comte), that philosophy shouldn’t presume itself able to compete with science to find truth is, in great part, what led to its current low standing. For when you give up the idea that philosophy ever finds, or should seek, truth, what happens?

Horgan’s ensuing discussion (Part 2, “Maybe It’s a Martial Art”) begins by noting the passionate mental combat which animates philosophers: how they go after each other, sometimes not merely vigorously but almost viciously, to defend their views as “the right ones.” He cites some revealing examples, if from outside philosophical literature (academic journals) per se, in, e.g., the exchanges between John Searle and Daniel Dennett over the existence of consciousness in The New York Review of Books (NYRB). If he’d wanted an exemplar in the academic literature, he could have cited the comments which followed Searle’s original presentation of his Chinese Room thought experiment in his infamous article “Minds, Brains, and Programs” (Behavioral and Brain Sciences, 1980), later expanded into the short book Minds, Brains and Science (1984). Although Searle’s argument was aimed at “Strong AI” (the idea that a properly programmed computer really would have a mind in any reasonable sense of that term), many commenters took strong exception, in a few cases becoming surprisingly, shall we say, unfriendly. Horgan somehow missed the acrimonious exchanges between Searle and French deconstructionist Jacques Derrida over speech acts during the 1970s and later. The two, obviously, had vastly different methods for doing philosophy — so different, in fact, that each questioned the legitimacy of the other. In the mid-1990s Searle described Derrida’s infamous “nothing exists outside texts” as “preposterous.” Finally, as a non-APA member Horgan was not in a position to see the fight over “gender feminism” that erupted near the end of the 1980s and continued into the early 1990s, that fight’s ground zero being a paper by “first wave” equity feminist Christina Hoff Sommers, “Feminism Against the Family,” which exposed the Marxist roots of “gender feminism” as well as its authoritarian tendencies, especially over women.

Someone arguing that philosophy is a form of mental combat, the intellectual equivalent of a martial art, thus has some material to work with. There are many exchanges in the philosophical literature, often as book discussions, that display such tendencies in much more modest forms. But this invites the question: what, in the larger scheme of things, is such mental combat good for? Its participants eventually walk away (sometimes, in the above cases, with a distinct sense of needing a bath), each thinking he/she is right and has “fought the good fight” or however else it gets rationalized; but (1) such events, even at the modest level, tend to be atypical; and (2) even were they not atypical, why should anyone doing this be paid for doing it, or receive tenure because he or she has proved very skilled at it?

With a somewhat strained argument, Horgan compares the mental combat inherent in philosophy to martial arts, where top practitioners have higher aims than merely winning contests. In winning an honorably fought contest, they seek to become better humans. Becoming better humans suggests ethics. It can suggest other things, including mere power-playing, but let’s concede this one. For the suggestion that philosophers should eschew truth claims in favor of ethical ones, as a key to improving their own or their students’ or others’ conduct in life, is surely a reasonable one.

My response has three parts.

(1) Elsewhere I’ve surveyed modern ethical theories and why I believe they do not merely fail but fail miserably, assuming their aim is to provide reasoned guidance in determining what to do, how to solve genuine ethical dilemmas, how to evaluate actual human conduct. My context was a critique of the materialism that underwrites much of the philosophy of the past century and a half, via efforts to stay out of the way of science, although humanistic ethical theories — theories, that is, that begin with some aspect of our nature as human beings — had put in their appearance well before.

(2) It is one of the dogmas of philosophy within modernity that the factual and the ethical are logically decoupled and therefore occupy separate domains: Hume’s fork (“ought” cannot be derived from “is”). But one’s worldview surely involves commitments to specific claims about reality, about what the world is like and what we are like, in ways that have implications for what is possible for us and therefore for what ethical system(s) we might adopt. A Christian will look to God’s commands. A “third stage” atheist, having made the metaphysical judgment that God does not exist, will look perhaps to Mill, or to Rawls, or to the Non-Aggression Principle: humanism in one form or another. My point is, he will start somewhere, with something akin to a factual claim (e.g., that as a matter of fact it is wrong to initiate physical coercion against others) and derive from it what constitutes morally acceptable conduct, Hume notwithstanding.

But (3): where can we look for evidence of genuine progress in ethics that has impacted the world of modernity–as opposed to a better understanding of how ethical language works (that we have the latter, I don’t think anyone will question)? Yes, we’ve abolished slavery. Or have we? We’ve abolished the chattel slavery of the Ante-Bellum South, but have we not reinvented it in other forms? There’s room for a conversation here! Yes, attitudes that were once acceptable, such as unabashed expressions of race-based hatred, no longer are. Oh, wait a minute. Are we sure about that? Just the other day, I encountered this. I do not mean to suggest that such contentions are common, but given that they happen at all, how much progress have we made?

The real world, however we define it, is full of horrendously cruel, inhumane practices, some of which we need not go outside Western civilization to see. The U.S. federal government supports a war machine that has displaced hundreds of thousands of people in the Middle East just since 2003, the year Bush the Younger began the disastrous Iraq War, most likely without consulting just war theory, much less principles of non-aggression. The U.S. “justice” system incarcerates a larger percentage of its population than that of any other advanced nation, including for non-violent offenses. Arguably, the practice of solitary confinement is a form of torture, as we now know what prolonged isolation from human contact does to a person’s mind. We know that physical tortures, ranging from brutal beatings to so-called waterboarding and sleep deprivation, have occurred in CIA “black sites” in foreign countries, out of reach of prying eyes. Government is not the sole agency of organized cruelty. International sex traffickers have raked in billions of dollars, and the practice does not occur only in foreign countries. Finally, the biggest money-maker on the Internet is–wait for it–pornography, in which men as well as women are physically degraded for money.

It is thus amazing how easy it is for philosophers–or other academics or intellectuals–to convince themselves that we’ve made moral progress! They should review the Milgram Experiment; or realize, as they read about an event such as the Holocaust, that Hitler himself never raised a hand against a single Jew. Nor did Josef Stalin kill anyone personally (that we know of). Nor did Mao. Such events would not have been possible absent the cooperation and collaboration of thousands of people in chains of command and obedience, people who would have rationalized their behavior by saying, “I was just following orders.”

All of which suggests a refinement of the suggestion for the aim of philosophy offered at the conclusion of Part 1, and at the outset above. In addition to whatever they have to say about worldviews and their influence, philosophers can surely provide a public service, and a service to humanity, if their subject can serve as a basis for criticizing authority and blind obedience to it: following, en masse, the commands of supposed leaders, political or otherwise. What kinds of worldviews encourage or enable blind, thoughtless, unreflective obedience to authority? How are such worldviews maintained? How do they use language? I don’t claim that such queries are new. But obviously they are needed. We’ll pursue them, and how they fit with Horgan’s summation and call for “negative philosophy,” in our third and final installment, (hopefully) early next week.


What Should Philosophy Do? (Part 1)

Inspiring this series of posts (I’m thinking there might again be three) is John Horgan’s series on “What Is Philosophy’s Point?” in Scientific American (five installments, here, here, here, here, and here). I should begin by saying that I am delighted to see Horgan’s writings, whatever specific agreements or disagreements I may have. Scientific American has a fairly wide readership, much of it outside the confines of academia. At least that readership will see that the subject still exists, that it hasn’t been defunded by misguided university administrators, absorbed into so-called cognitive science, or buried under an avalanche of identity-politics.

Modern philosophy as an endeavor, enterprise, discipline, or whatever we want to call it, has never ceased to agonize over its identity — especially after the sciences came to dominate intellectual culture. Even philosophers who maintain they have found the perfect identity for their field have not managed to convince even a majority of other philosophers, though they may have achieved a substantial following. Ask 20 different philosophers, What Should Philosophy Do?, and you might not get 20 different answers, but you will doubtless get several. This won’t happen, it goes without saying, with physics, or chemistry, or biology. It might happen with art, if Brian Eno can be believed.

When lecturing students at the very beginning of an Intro to Philosophy course, noting that many would find the subject rather mysterious, I used to identify four distinct answers to the question, yielding four distinct approaches to philosophical activity. This is not to imply that the four are hermetically isolated schools of thought, none touching the others:

(1) Philosophy seeks to develop, or articulate, a comprehensive theory or account of the world and everything in it, including our place in it, (often) some account of what the good life consists of, and (often) a diagnosis of the difference between the philosophical ideal and the socially real. Some might call this a worldview. Philosophy is theoretical system-building, in other words. Exemplars from the history of philosophy abound: Plato and Aristotle; St. Thomas Aquinas; Descartes, Locke, Berkeley, Hume, Kant, and Hegel; in the twentieth century, someone such as Whitehead. Before going on, we should note the immediate problem that “progress in philosophy” usually seemed to mean each philosopher crossing out most of what previous systems had to offer and substituting his own, validated as it were from the inside. We should also distinguish, among philosophers, the system-builders from the system-smashers. All of the above were system-builders. System-smashers included (indirectly) the Sophists whom Plato and Aristotle despised; early modern writers such as Montaigne (to whom Descartes was responding, at least in part); Hume when dealing with natural theology (though not in his broader epistemological and ethical views); Kierkegaard and Nietzsche; the later Wittgenstein in his anti-essentialism; and in the late twentieth century, Thomas S. Kuhn (however reluctantly), Paul Feyerabend (enthusiastically), and all those identified with postmodernism (Michel Foucault, Jacques Derrida; on our side of the Atlantic, Richard Rorty with his “neopragmatism”).

(2) The second answer: philosophy is analysis, not synthesis. The absence of agreement on whose philosophical system is the most defensible is a problem, as we mentioned. So philosophers turned in large numbers to efforts to clarify their own problems or questions. How can we expect to make progress if we’re not clear what we’re talking about? Sometimes logical-linguistic clarification concluded that a given problem, such as free will versus determinism, or the mind-body problem, is based on linguistic confusion and should be given up. Whereas a systematic philosopher of the first school may try to prove that the will is free, someone doing analytic philosophy wants to know what it means to say the will is free. Does it mean acting outside the causal structure of the universe? Whatever can this mean?! Or are we merely ignorant of the causes of our behavior (as behaviorists insisted)? Or does acting freely just mean the modest and commonsensical notion of acting without another person or institution compelling us to do so? Acting freely in this sense is compatible with determinism (hence the term compatibilism). Philosophical analysis began to develop in the 1800s. It was not, of course, invented in the 1800s: Socrates was doing very basic analysis when he asked Euthyphro to define piety, or Meno to define virtue as a precondition to answering whether it can be taught. Auguste Comte, the founder of sociology, questioned whether theoretical, system-building “second stage” philosophy had any place in a scientific world. He was not a system-smasher, though. His answer was “third stage” positivism, which hands the questions philosophy had hitherto dealt with over to the hierarchy of the sciences. Comte thereby set the stage for philosophy as the logical analysis and clarification of language. And with newly-developed and very powerful formal-logical techniques to work with, the line of thought that began with the nineteenth-century logician Gottlob Frege led, across the English Channel, to Bertrand Russell, whose articles and treatises (along with those of his colleague G.E. Moore) defined the early course of analytic philosophy — and also its “third stage” mindset. Despite its Continental roots, analytic philosophy soon became dominant in the English-speaking world. Every student who pursues academic philosophy soon realizes this. Ludwig Wittgenstein (both Wittgensteins!) and Viennese logical positivism led the way to A.J. Ayer, P.F. Strawson, and J.L. Austin in the U.K., while in the U.S. the leading thinkers (e.g., Carl Hempel, Ernest Nagel) called themselves logical empiricists. Their primary goal was clarifying explanation and justification in science, which included extensive explorations into inductive logic, Bayes’ theorem, and so on. Eventually analytic philosophy evolved under withering criticism of some of its own products, yielding major figures like W.V. Quine, Wilfrid Sellars, and later, John Searle, Saul Kripke, Donald Davidson, and Michael Dummett, among others (these being the names I typically think of first). I spend more time on this answer to, What Should Philosophy Do? because its techniques prove powerful when used outside of philosophy. I am unsure this power is truly appreciated — even by many of its own practitioners, who often leave themselves open to criticism for the insularity of their activity (although in fairness, insularity became a comfortable path to academic tenure and financial stability long ago).

(3) The third answer to, What Should Philosophy Do? was offered by the existentialist / phenomenological tradition that took root primarily in twentieth century Germany and France. Philosophy, in this view, should set out to describe the human condition—perhaps standing in isolation before a God none of us truly understands (Kierkegaard, who might be regarded as this tradition’s founder), or in a world without God (Heidegger, Sartre, etc.). This tradition engaged in its own form of analysis; one thinks of the phenomenology of Edmund Husserl and figures like Maurice Merleau-Ponty — even linguistic analysis (Ferdinand de Saussure). In my humble opinion, contributors to this tradition expressed their generally dark assessments far more effectively in fiction and plays than in formal philosophy. I find Heidegger’s and Sartre’s formal treatises to be unreadable, but the latter’s Nausea is guaranteed to disturb anyone who finds himself experiencing what Sartre was getting at, regarding the “for-itself” confronting the “in-itself.” Much the same can be said for Albert Camus’s works, especially The Stranger. Although existentialism was primarily a European phenomenon, I would argue that existentialist themes permeate American novelist Ernest Hemingway’s works. Meursault, the lead character in The Stranger, and Krebs, from Hemingway’s short story “Soldier’s Home,” are virtually interchangeable in a world each experiences as meaningless and vaguely hostile, absent an anchor-point of value and confirmation such as God.

(4) Finally, we get to the idea that philosophy should not merely describe but rather change the human condition. Paraphrasing Karl Marx (from his “Theses on Feuerbach”): philosophy has only described the world; the point is to change it. Philosophers, in this view, should see themselves as obligated to use whatever skills they have to expose power relationships and provide critiques of power while allowing marginalized voices to be heard. Relevant here is whether those of us who do philosophy do it as an end in itself or as a means to other ends. Marx would have disdained the former as a bourgeois luxury, contributing nothing to class analysis or the coming struggle between those who own the means of production and those with only their labor to sell. The idea, which has its roots in Hegel’s account of the differences in perception between the master and the slave, has animated Marxian philosophy and also various late twentieth century developments such as race-conscious philosophy, radical feminism, queer theory, and so on. However much identity-politics may be seen as a wrong direction (to be bluntly honest, I see it as such), the idea of producing critiques of power survives that criticism. This answer to, What Should Philosophy Do? can easily incorporate philosophical analysis by exposing how language can be weaponized and used as an instrument of thought control and domination in institutions, or in society at large, by whatever groups have agendas to advance. It can, for that matter, incorporate elements of the systematic approach by noting the kind of worldview or theory of reality in which exercising power is most at home, “legitimizing” itself by nothing more than a capacity to wield deadly force if challenged.

These, then, are the major answers to the question, What Should Philosophy Do? (I am not asserting that there are no others; I am only saying they are the prevailing ones. Anyone truly educated in philosophy is at least aware of all four even if he or she rejects all but one of them.)

Turning our attention, then, to Horgan’s series: his main title, “What Is Philosophy’s Point?” is surely a variation on our, What Should Philosophy Do?

At first glance, in light of the above, the series might not look exactly impressive. Part 1 (“Hint: It’s Not Discovering Truth”) repeats a refrain that has become familiar in the past few years, since Stephen Hawking declared philosophy to be dead in the introduction to his The Grand Design (2010). (For whatever it’s worth, I meditate on, and attempt an extended answer to, “philosophy is dead” claims here.) The refrain, though, as we see from the above, is hardly new. Above we mentioned Comte and referenced two of his three “stages.” His Law of Three Stages (or States, or Conditions; he used the terms interchangeably) runs as follows: the first is the “religious or fictitious” state; the second, the “metaphysical or abstract”; the third, the “scientific and positive.” The first, using a somewhat different metaphor, might be thought of as our intellectual childhood. The second, as our adolescence: somewhat reckless, its reach exceeding its grasp. The third, in that case, is our mental and cognitive adulthood, in which we stand on our own, without gods, myths about intellectual certainty, etc.

Comte believed inquiry was converging on this final third stage.

I am at work on an extended tract arguing at length that Comte had the germ of a sound idea in his “stages” view of inquiry, but that he was wrong both about his “third stage” being the final one and about the sort of progress made in reaching it. That, though, is a conversation for another time.

The point to make here: Horgan’s first installment is permeated with what I will call third stage thinking — as is the bulk of the philosophical work he has to draw on, which he tells us inspired him to write these pieces: the fascination with the mind-body problem, which has produced easily the largest literature of any single issue in modern and recent analytic philosophy. Is it really a problem? For whom? Some, such as Gilbert Ryle (of the post-Wittgenstein “ordinary language” school), believed it mired in linguistic confusion. Rorty, much more recently, argued that the sense of there being a mind-body problem was nothing more than an artifact of our use of a special vocabulary, one drawing on obsolete (seventeenth century) notions; what dissolves the mind-body problem is getting rid of the sense that this (or any other) vocabulary represents or “mirrors” reality, and that it is the job of philosophy to find and justify the vocabulary that does.

The sense of a real, live problem about mind and body has survived Rorty’s efforts, though. What has seemed to create a problem has come down to a single question, visible in the work of the brilliant Australian philosopher David J. Chalmers: how can we make sense of the existence of consciousness (i.e., our individual consciousness as a kind of private, inner stage, important to us, from which each of us views the world) in a material universe in which all configurations of “matter” are, in the final analysis, of equal “value”? Most readers will recognize this as one way of describing the “hard problem” of consciousness, which has loomed large in philosophical conversation since the 1990s.

One answer to that question is taken by the Daniel Dennetts of the philosophical world, who deny that a reifiable “private inner stage” really exists as such, i.e., that it is anything more than an illusion created by our brains and central nervous systems. Truth be known, I find this kind of denial very strange. Its counterintuitiveness does not refute it, of course, but it surely ought to give us pause, leaving us with a sense that something essential is being left out. This brand of hard materialism seems paradoxical at best and incoherent at worst. If consciousness really does not exist as such, then how can Dennett undertake the (presumably conscious) action of denying or affirming anything? How can you be reading this — if you are — and presumably processing mentally (in some sense of that term) what you are reading? What is it that makes our utterances — and the inscriptions we see on paper or online — more than strings of random-seeming noises or meaningless markings?

Consider: those of us exposed to a foreign language we don’t understand may listen as closely as we can to speakers of that language in conversation and hear nothing but unintelligible noises. Yet it is evident from direct observation that the speakers understand each other and are providing appropriate responses! Since the material constitution of language is just sounds emitted from one speaker’s mouth and received by another’s ears, each speaker is mentally adding something that I cannot add as a listener, because I do not understand or speak their language. The same is true if I try to read, e.g., written Arabic. I see nothing but curvy lines and dots. An Arabic speaker recognizes the words and sentences of the language he grew up with. This “understanding,” this “recognition,” I submit, is lost by materialism. Somewhere in here, obviously, is Searle’s absolutely brilliant “Chinese room” thought experiment from the 1980s, from which — frustratingly! — Searle himself never drew the most straightforward consequence: that materialism as a theory of reality and of the human person is simply wrong!

Dennett would doubtless call these responses question-begging and wrong-headed. But is there an alternative? More interestingly, is there some neutral vantage point from which to survey and judge, from which to construct a non-“question-begging” approach to consciousness? Are we not undertaking mental actions, in any reasonable sense of that phrase, when we even raise the question, much less try to answer it?

Eventually, our reasoning must reach the point of realizing that consciousness is, in some sense yet to be spelled out, very, very basic, and must be built into our understanding of how the world works, or we end up with incoherence!

Another way of saying this will doubtless offend every third stage thinker (Dennett being an example; not to single him out, as there are many others) who puts science / scientific method (as they see it) on a pedestal: do not be a materialist. This was the option chosen by Thomas Nagel, who has proven to be alone with his “teleological naturalism,” motivated by the apparent failures of materialism in combination with his own resistance to theism. But the failures of materialism open the door to a variety of other worldviews, Christian theism included. They open the door to the likelihood that there are other phenomena, likely to be dismissed as illusory but easily seen as real and incorporated into our account of our experience, if we stop trying to shoehorn everything into a worldview in which everything must be either reduced to laws or propensities discovered by physics and chemistry, or eliminated as an unreal product of our imaginations or of a “folk psychology” (“eliminative materialism”).

How good are the reasons for being a materialist? has for years now seemed to me a perfectly valid question. How well argued is the materialist stance? To use a Feyerabendian ploy: has materialism won out, not just in much of philosophy but in science itself, not because of the superiority of its arguments but because it was able to bully alternatives out of the way? Given the clear absence of a consensus on the status of such problems as mind-body (not that the presence of one would necessarily decide the matter for all time), it doesn’t seem to me that materialists are entitled to assume the superiority of their arguments without much more work. That a Colin McGinn, another of those rare leaders in the post-Rorty world of academic philosophy, finds the mind-body problem “intractable” (whether that is McGinn’s word or Horgan’s) is telling.

By reflecting on the seeming failures or lapses of materialists, and on their willingness to “eliminate” whatever doesn’t fit a worldview (one to be distinguished from the actual methods and findings of modern science), surely we have found a “point” to doing philosophy, and therefore an answer to, “What Is Philosophy’s Point?” Maybe there is an answer to, What Should Philosophy Do? which combines all four of the above answers: recognizing, clarifying, evaluating, and, if necessary, constructing alternatives to the worldview(s) that prevail in Western civilization.

To be continued …
