But, overall, knowledge does give us power?
That is true. It doesn’t, by itself, free us. It is a two-edged sword. You can use certain technologies to promote freedom but also to spy on people. One of the core thoughts of the book, in which I dissent from a strong philosophical and religious tradition, in philosophical terms from Socrates, is that the advancement of knowledge is not in itself liberating.
The general view today is, I think, that the growth of knowledge leads to a growth of human freedom. But the human world isn’t accretive in the way the sciences are. In human history what often happens is the destruction of whole civilizations.
There seems to be a certain monoculture in our thinking today, in our view of the world. Whatever side you’re on, most people would believe in inevitable ethical progress that is attached to the sciences.
There’d be different content, but still the general assumption is that we are moving to a better state. Now, my view is that politics and ethics aren’t like that. I take it that ethics and politics are more erratic and discontinuous. There are serious advances, but then they are regularly lost.
And, unfortunately, good things are lost. For example, in the ancient world, pre-Christian Europe, there wasn’t a persecution of gay people! That was then lost for 2,000 years. That’s quite a long regression. People who believe in progress must allow the question, “But what about those 2,000 years?”
There are good events in history—there are genuine advances—but they are inherently fragile. That’s my key message.
“In most of our versions of paradise, sacred or secular, heaven is a place where dreams come true,” Garrett writes. True enough, yet what he omits to note is that human dreams of perfection are essentially contradictory. We may dream of a cosmos governed by moral laws but we also want one in which our cherished personal attachments can sometimes be exempted from these laws. We would like ourselves and those we love to be spared ageing and death; but if our wishes were granted, whether by divine decree or by means of the new technologies that futurists in Silicon Valley are coming up with, we would cease to be the creatures we are and become unrecognisable to one another. Our inability to form any coherent view of the afterlife results from it being a projection of needs and impulses that are irreconcilably at odds.
Now, as in the past, there are many who look to another life to resolve these conflicts, but the anarchy from which they seek to escape is inside themselves.
In Jesus’s world, the Son of Man would arrive in his kingdom, the last would be made first, and each person would be repaid according to their deeds. In Marx’s world, the communist society would regulate the general production and allow each person to try their hand at being a hunter, fisherman, herdsman or critic, according to their whims. In Nicholas Carr’s world, looking backward rather than forward this time, personal technology has distracted us from our authentic selves, and if we can just plug our ears with wax and ignore its siren song, we can get back to whatever deep, meaningful things we were assuredly doing with our lives before the mid-’90s. In these sorts of narratives, imagination fails at the crucial moment and leaves us with a vague “happily ever after” denouement, where we have healed and become whole. Once we reach this state, then what? No one ever says. It seems to be assumed that we’ll know an ideal state when we reach it, and having done so, we’ll be content to bask in it indefinitely.
I, on the other hand, suggest to you that most people don’t actually know themselves well enough to know what would make them content, and even if they were to luck into contentment somehow, boredom and mischief would soon drive them back to dissatisfaction again. More money in the checking account? A prettier appearance? More time to read good books? The dictatorship of the proletariat? The Second Coming? It doesn’t matter what you give them; people will always find new ways to make themselves unhappy again. However beautiful a design you manage to weave from your circumstances, the threads will immediately begin unraveling. You may even start picking at them yourself.
Kenan Malik credits Hegel with the insight, overlooked by previous philosophers, that humanity was above all else a work in progress. Human nature did not burst onto the scene fully formed as if winking into existence from a vacuum; individually, it was shaped through interactions with others, and socially, it was shaped through the evolving stages of history’s dialectic. Having grasped that humanity’s story was born in motion and conflict, though, Hegel gave in to the temptation to envision it eventually coming to rest on a static plateau, deciding, conveniently enough, that the historical dialectic was destined to reach its final resolution in the Prussian state. Many others since then have likewise found themselves unable to conceive of human existence outside the self-serving conventions of narrative structure. Whether they see the ideal human subject as needing to be discovered in the past or created in the future, they all still assume that the meaning of existence reveals itself in the conclusion. But a chord ringing out interminably would be the death of music. A pose held indefinitely would turn even the most graceful dancer into a statue. The motion is the meaning.
The human spirit is a shark with just enough awareness to turn one eye up toward the light shimmering down and dream of a world without constant motion, the scent of blood, and mindlessly gnashing teeth. The denizens of the world above know better, though. In our world, we are required to swim forever — in currents of our own making, if need be — or else die.
What the French thinkers (and Habermas) produced was essentially a postmodern form of Marxism. Some of the authors seem reluctant to abandon Marx, others are keen to update him, but no one seems willing to jettison him entirely. It is not so much his economic determinism or his class-based motivations that are retained as his idea of ‘false consciousness’, expressed through the idea that knowledge, and reason, must always be forged or mediated by the power relations of any society — that knowledge, hermeneutics, and understanding all serve a purpose. Just as Kant said there is no pure reason, so, we are told from the Continent, there is no pure knowledge, and understanding this is emancipatory. While it would not be true to say that these writers are anti-scientific (Piaget, Foucault and Habermas are too well-informed to be so crude), there is among them a feeling that science is by no means the only form of knowledge worth having, that it is seriously inadequate to explain much, if not most, of what we know. These authors do not exactly ignore evolution, but they show little awareness of how their theories fit — or do not fit — into the proliferation of genetic and ethological studies. It is also noticeable that almost all of them accept, and enlist as support, evidence from psychoanalysis. There is, for anglophone readers, something rather unreal about this late continental focus on Freud, as many critics have pointed out. Finally, there is also a feeling that Foucault, Lacan and Derrida have done little more than elevate small-scale observations, the undoubted abuses of criminals or the insane in the past, or in Lacan’s case vagaries in the use of language, into entire edifices of philosophy. Ultimately, the answer here must lie in how convincing others find their arguments. None has found universal acceptance.
At the same time, the ways in which they have subverted the idea that there is a general canon, or one way of looking at man, and telling his story, has undoubtedly had an effect.
This is from a chapter on the French intellectuals of the late twentieth century. Most of the time, when you hear someone disparaging postmodernists, these are the people whom they have in mind. (Lacan’s most famous disciple, of course, is the grotesque caricature-wrapped-in-a-parody-inside-a-bullshit-farce, Slavoj Žižek.) Anyway, banalities elevated into profundities aside, false consciousness has indeed proved to be one of Marx’s most enduring ideas. You hear its echoes when people are said to be “voting against their own interests”, as if the speaker knows better than them what they really want or need, or when, say, minorities and women feel that their individual perspective trumps the imperatives of their race or gender (as dictated to them by the self-appointed intelligentsia). The truism that there is no such thing as knowledge outside of perspective becomes, in practice, Foucault’s “genealogical method”, where the aim of an argument is not to rebut specific assertions, but to identify the supposed wellspring of your opponent’s thought and declare it hopelessly polluted, thus implying, without needing to openly assert, that everything downstream is likewise poisoned.
It wasn’t just the French, though; the Frankfurt School is noted earlier in the book for being preoccupied with “the attempted marriage of Freudianism and Marxism.” Watson also quoted Friedrich Hayek: “I believe men will look back on our age as an age of superstition, chiefly connected with the names of Karl Marx and Sigmund Freud.” It’s funny because it’s true! In an age in which science was quickly colonizing all forms of human experience, almost the entire intellectual class of Europe and America was captivated by two systems of thought which look increasingly ridiculous the further they recede into hindsight. And so I still wonder: what will our descendants think about the water in which we’re swimming now? Why should we assume that we aren’t living under the spell of superstitious nonsense while priding ourselves on our cutting-edge scientific awareness just as much as people in the last century were?
Isaiah Berlin was one of my most formative intellectual influences. He single-handedly engendered my enduring fascination with the history of ideas. I plan on re-reading all of his books soon, right after I get caught up on all the new books I’ve accumulated recently. Which brings me to my point: the NYRB has republished a short acceptance speech of his from twenty years ago which elegantly encapsulates a couple of his most fundamental, recurring themes. I do implore you to go read it, as it would be bad form for me to copy and paste the entire thing, a powerful temptation indeed.
In the 21st century that old spell of universal progress through western ideologies – socialism and capitalism – has been decisively broken. If we are appalled and dumbfounded by a world in flames it is because we have been living – in the east and south as well as west and north – with vanities and illusions: that Asian and African societies would become, like Europe, more secular and instrumentally rational as economic growth accelerated; that with socialism dead and buried, free markets would guarantee rapid economic growth and worldwide prosperity. What these fantasies of inverted Hegelianism always disguised was a sobering fact: that the dynamics and specific features of western “progress” were not and could not be replicated or correctly sequenced in the non-west.
To gain an understanding of this phenomenon, we can look at a strange aspect of liberalism’s “victory”: the constant appearance of counter-ideologies that have arisen in reaction against it. Despite its overall success, liberalism has for two centuries been dogged by a series of counter-ideologies. So far, they have all been defeated, but sometimes only at great cost. Fascism and the various forms of communism and leftist extremism were the major counter-ideologies during the 20th century; varieties of extreme nationalism played a similar role during the 19th and early 20th centuries. Various other intellectual trends, including some of lesser, though still not trivial, political significance, such as the Romanticism of the early 19th century and related forms of bohemianism and avant-gardism, might also be considered in this context.
However varied they are, these counter-ideologies generally share a sense that liberalism’s protection and privileging of individual self-interest as opposed to the common good (however defined) makes it ignoble; potentially or actually unjust; and chaotic or anarchic and hence ultimately weak. This sensibility is evident in the pejorative meaning of the term “bourgeois”: someone who is so immersed in the pursuit of petty material concerns that he is blind to both nobility of soul and the claims of social justice.
Roughly speaking, there are two ideal types of counter-ideologies: those holding that liberalism is too disorganized to work well and hence cannot survive, and those fearing that liberalism will succeed (or has already succeeded) and will diminish human life as a result. These sound like mutually contradictory objections, but by calling them ideal types we recognize that in practice most counter-ideologies have elements of both: Liberalism is bad because it is successful in forcing or seducing people to adopt a “bad” way of life, but its faults mean that it will fail eventually.
…In general, critics saw liberalism as too disorganized and anarchic to survive because it left individuals too free to pursue individual interests at the expense of a concern for the common good. As we have seen repeatedly over the years, it is easy for liberalism’s enemies to underestimate a democracy’s geopolitical (including military) strength as a result. The advantages of a less centralized political and economic system reside in the fact that, once galvanized by a common threat, such a system can make better use of the various talents of all members of society. This truth is easily overlooked by those who adhere to this critique of liberalism.
I clicked on this article with no particular expectation, only to find it very interesting, dare I say fun to read. Perhaps your experience might be similar should you care to give it a look-see.
Mama always said life is like a Rubin vase. If you focus on this part of it, you see the Ploosa Shawnje. But if you focus on the other part, you see the Ploosa La Memshows.
John Gray offers a positive review of Harari’s book, though, as is frustratingly often the case with him, the book being reviewed seems to be judged primarily on its proximity and sympathy to Gray’s own “godless mysticism” worldview which has featured so prominently in his writings over the past decade or so (one might say he’s become quite a hedgehog about it). Perhaps he only chooses to review books which address similar themes, I don’t know; it just makes me a wee bit leery when so many of his reviews seem to hinge upon whether the author has wisely echoed or foolishly disagreed with Gray’s well-known thoughts about the subject. And I say this as someone who is largely in agreement with that worldview. I fear the temptation of Procrusteanism may get the better of him at times, that’s all.
If human beings were potentially capable of applying reason in their lives they would show some sign of learning from what they had done wrong in the past, but history and everyday practice show them committing the same follies over and over again. They would alter their beliefs in accordance with facts, but clinging to beliefs in the face of contrary evidence is one of the most powerful and enduring human traits.
Outside of some areas of science, human beings rarely give up their convictions just because they can be shown to be false. No doubt we can become a little more reasonable, at least for a time, in some parts of our lives, but being reasonable means accepting that many human problems aren’t actually soluble, and our persistent irrationality is one of these problems. At its best, religion is an antidote against the prevailing type of credulity – in our day, a naive faith in the boundless capacities of the human mind.
…The refusal to see clear and present danger shows that the idea that human beings base their beliefs on their experience is just a fairy-tale. The opposite is closer to the truth – shaping their perceptions according to what they already believe, human beings block out from their minds anything that disturbs their view of the world. Psychologists who examine this tendency – sometimes called cognitive dissonance – have speculated that refusing to face the truth may confer an evolutionary advantage. Screening out unpleasant or disturbing facts may, in some circumstances, give some people a better chance of survival. But at the same time this tendency leads us all into one folly after another. Many regard science as the supreme embodiment of human reason, but science may yet confirm what history so strongly suggests – irrationality is hard-wired in the human animal.
Certainly unreason can be tempered by the hard-won practices of civilisation, but civilisation will always be a precarious achievement. To believe that human beings can be much improved by rational argument is to assume that they are already reasonable, which is obviously false. The old doctrine of original sin contained a vital truth – there are impulses of irrational destructiveness in every one of us.
In Pascal’s vocabulary, heart is not a word for feelings or emotions but for a mode of thinking best understood as “intuition”. Intuition for Pascal is a compelling and effective method of comprehending certain things without having to reason our way to them. In addition to being a mode of thinking distinct from reason, intuition also supplies the basic apprehensions that reason requires for its own operation…Pascal knew well what he was saying in implicating intuition as the seat of religious belief, though there is no reason to view intuition as somehow less intelligent or rational than other, more explicit, modes of thinking. Indeed, much of the “reasoning” we do, as well as many of the “reasonable” conclusions that we come to, owe their existence to the supporting work of the kinds of intuitive thinking processes discussed in this and previous chapters.
…Blaise Pascal’s well-educated and eminently “reasonable” contemporaries pinned their hopes and lavished praise on reason precisely because they deemed it capable of cutting through unfounded beliefs and superstitious faith and arriving at true knowledge logically evaluated and clear-mindedly obtained. The problem, though, is that only someone wearing blinders can argue for this view of knowledge acquisition. While such careful, calculated thinking certainly takes place — scientists, for example, try very hard to emulate this process and hold their compatriots accountable to it — in everyday life the formation of the majority of our beliefs, even for the well-educated and reasonable, follows a less rigorous course.
Intuitive beliefs, the kind of basic beliefs that ought to be found in a belief box, are concrete, commonsense descriptions of the real world derived from perception and spontaneous, nonconscious inferences. Intuitive beliefs are intuitive both in the sense that they are products of innate cognitive mechanisms and in that you need not be aware of holding them, even less of the reasons why. Nonetheless, because intuitive beliefs arise from reliable perceptions and inferences, they are rigidly held. Examples of intuitive belief include the belief that when you are tired you should sleep, that you cannot walk through walls, and that books cannot swim. Intuitive beliefs are so obvious that they hardly seem like “beliefs” at all, yet they are. Countless such beliefs continually operate in the background of daily thought, supporting conscious reasoning and guiding behavior.
Reflective beliefs are what people normally understand by the word “belief”. Instead of being derived automatically and nonconsciously, reflective beliefs come from conscious, deliberate reasoning or from external sources of authority like parents, teachers and books. Reflective beliefs are usually explanatory and interpretative rather than descriptive. Reflective beliefs may or may not be fully understood or well grounded and, consequently, people’s commitment to them may vary widely, from loosely held notions to dogmatic convictions.
…Reflective beliefs are influenced by intuitive beliefs primarily because intuitive beliefs provide the default assumptions that underpin reflective beliefs. As an example, most of us believe that any two objects will fall to the ground at the same rate. This is a reflective belief concerning gravity that was taught to us and probably required a demonstration before it was accepted. What that reflective belief requires, though, is a previous understanding that objects in fact fall to the ground when released. This is an intuitive belief — derived from our intuitive physics — that all people share and that, interestingly, is neither itself in need of demonstration nor made an explicit part of the final reflective belief. Similarly, intuitive beliefs are routinely operating in the background of most of our reflective beliefs, supplying the host of default assumptions that make conscious reasoning possible but nevertheless go largely unnoticed.
…Pascal’s critics insist that people have good reasons for their beliefs, yet that is not the way belief generally works. Many of the “reasons” that we put forth to explain particular reflective beliefs are simply post-hoc justifications that had little to do with the belief’s actual formation. Furthermore, a good number of our reflective beliefs have no explicit reasons for support because none are called for. If intuitive beliefs get the job done, we don’t stop to draw up more reflective arguments. Explicit reasoning or elaboration only comes later. The same is true of religious belief. Lots of people can give explicit reasons for their belief in gods, but few if any of these explicit reasons were actually part of the mental process that formed the original belief. Religious belief, rather, arises from that mode of thinking correctly identified by Pascal as intuition.
Like progress, reason certainly exists and we can be grateful for it. But also like progress, its advance is neither uniform nor permanent — individuals, cultures and historical epochs do not become more “reasonable” in any steady, cumulative way, and setbacks can occur in the right circumstances. On an individual level, we all know people who can be highly educated and intelligent experts in one area while being shockingly ignorant and stubbornly irrational in another, and behavioral economists have recently been mainstreaming the understanding of just how omnipresent this sort of selective, intermittent rationality is. It’s not any different on a grander stage, either — in the twentieth century, educated people prided themselves on no longer being the sort of ignorant, superstitious barbarians who put people to death for witchcraft, even as they prepared (one hundred years ago precisely as of this week) to slaughter each other by the millions for dimly understood reasons which seemed to take on a life of their own, outstripping the ability of their human creators to control them. Within a couple of decades, the honestly-conceived byproducts of scientific advance, eugenics and racial science, would lead directly to the loss of tens of millions more lives before being officially discarded as a regrettable detour into pseudoscience. Steven Pinker cites widespread literacy as an essential building block in the history of humanity’s supposed progress toward increased reason and empathy, but Timothy Snyder responds by pointing out the essential role literacy also played in the religious wars of post-Reformation Europe and in the spread of virulent nationalism in our own day.
There’s always a tradeoff, always a new set of problems and dilemmas accompanying every advance.
A detached observer surveying the sweep of human history from a vantage point in the stratosphere might find it reasonable to conclude that “reasons” are just the changing fashions we wear while getting down to the perennial killing which is truly the serious business of our existence as a species. Apologists for Communism desperately try to convince themselves that the ostensible reasons for their mass murders somehow represented an advance over the reasons people had committed mass murder in the past — killing reactionaries, class traitors and counter-revolutionaries was “better” than killing people for being of a different ethnic group, or killing them in the process of colonizing their land. But the idea that history is a dialectical process in which humans hate and kill each other for increasingly better and more valid reasons until they arrive at a final synthesis in which no one needs to be killed for anything ever again is the most delusional of myths. Humans will always differentiate themselves from one another no matter how often the field is leveled, and some of those differences will intensify into hatred and violence.
The moral of the story from this perspective is not at all about urging anyone to give up reason and embrace irrationality, or to forswear any attempts to improve their circumstances in practical ways. For that matter, I don’t believe humans are even capable of ceasing to use their minds to manipulate their environment to their liking. No, the goal is only to attempt to illuminate a sort of parallel track of human existence, one that has nothing to do with concrete progression within linear time. What Gray is saying about “religion at its best” could apply to philosophy or mystical spirituality as well, and it’s what I think Jerry Coyne misses in his eagerness to have the same old argument he’s always had about religion vs. science — there’s a certain type of wisdom that isn’t the same thing as intelligence or knowledge.
There’s a perennial question humans have been asking themselves as long as they’ve been recording their thoughts: How should we live? It’s not the kind of question that depends on amassing a collection of facts, or mastering intricate theories. We will never achieve complete understanding or control of all the countless variables that create the circumstances of our lives. We will never agree on what constitutes an ideal existence, let alone attain the god’s-eye perspective necessary to create and maintain it. Every improvement we make will also bring along unforeseen results, and the cycle will continue as long as we do. Realizing all that, one also realizes that the basics of how to live a good life do not become more complicated in tandem with our technology and our expertise. The virtues that ancient philosophers recommended are just as valid today; they haven’t been “improved” upon or rendered obsolete by material changes. As much as it offends our pride to hear it, we are not going to think our way beyond the basic dilemmas of the human condition.
Every age has a theory of rising and falling, of growth and decay, of bloom and wilt: a theory of nature. Every age also has a theory about the past and the present, of what was and what is, a notion of time: a theory of history. Theories of history used to be supernatural: the divine ruled time; the hand of God, a special providence, lay behind the fall of each sparrow. If the present differed from the past, it was usually worse: supernatural theories of history tend to involve decline, a fall from grace, the loss of God’s favor, corruption. Beginning in the eighteenth century, as the intellectual historian Dorothy Ross once pointed out, theories of history became secular; then they started something new—historicism, the idea “that all events in historical time can be explained by prior events in historical time.” Things began looking up. First, there was that, then there was this, and this is better than that. The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence.
…The idea of progress—the notion that human history is the history of human betterment—dominated the world view of the West between the Enlightenment and the First World War. It had critics from the start, and, in the last century, even people who cherish the idea of progress, and point to improvements like the eradication of contagious diseases and the education of girls, have been hard-pressed to hold on to it while reckoning with two World Wars, the Holocaust and Hiroshima, genocide and global warming. Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer.
To continue with the theme from the previous post, I think it’s clear that progress certainly exists. The more pertinent question is, who benefits from it, and for how long?
The image, as mysterious as a nightmare, was taken by the British photographer George Rodger on April 20, 1945, as the boy approached his jeep and the four British soldiers traveling with him on a road in southern Germany. When the image was published in Life magazine on May 7, 1945, the caption read: “A small boy strolls down a road lined with dead bodies near camp at Belsen.”
…The photograph was reproduced in Postwar, Tony Judt’s magisterial history of Europe since 1945. Judt’s caption focused upon the child’s averted gaze:
Shortly after Germany’s defeat in 1945, a child walks past the corpses of hundreds of former inmates of Bergen-Belsen concentration camp, laid out along a country road. Like most adult Germans in the post-war years, he averts his gaze.
In Judt’s reading, the child ceases to be, in Werner Sollors’s words, “a poor innocent young bystander walking about a hellish adult world but … a person deeply implicated in the hell that surrounds him.”
It was only in 1995 that the child in the photograph was given a name and allowed finally to escape his mute servitude as the emblem of Germany’s averted gaze. He was not actually a German bystander at all. He was Jewish, one of the survivors… Once the boy is seen to be a survivor, his averted gaze can be understood in a new way. In Sollors’s memorable words, Sieg, “like so many mythical heroes who got out of Hades … simply must not look back.”
In reading this, I was struck by the absurdity of trying to read deeper significance into a photograph, which, no matter how artfully taken, is merely a representation of that particular fraction of a second. A film sequence that included the moments before and after the one in question would surely offer many other images suggesting different interpretations. A painting, by contrast, carries no meaning apart from the artist’s intention. Whatever you see is there by design. A photograph, though, shows you a mere chance moment, a fleeting image removed from context and encased in amber. Beyond the bare facts of the scene, the rest is your own imagination and projection, a fable agreed upon. How do people not feel silly pretending to discover deeper meaning in them?