The Calculus of Consent

In conversation with Michael Munger

 

You wake up groggy and confused. Your head hurts, your vision struggles with a strange, leaking glare, and your inner ear is doing something unpleasant, something nauseating that you can’t quite pin down. You lie there sore and muddled, feeling surer and surer that something is in fact moving. The floor underneath isn’t the still hard ground you fell asleep on last night. Yes, there it is, a gentle but nervy rocking back and forth, back and forth.

There is a smell too, salty and organic and wild. You sit taking it all in as best you can, and finally claw yourself to your feet. The floor and walls and low ceiling are made of a heavy, damp wood; and those beams of sunlight wrapping around the gaps in the construction now feel hot and high. Above there’s a squeak of animal life somewhere in the air, and then, sure enough, footsteps. There are people up there, quite a few it seems, and like it or not they are your only hope for finding some answers and for getting back to the safety of your house and bedroom.

The stairs upward creak and flex, but have that sure feeling of being well used. You push hard on the trapdoor; it flops up and open, landing with a heavy thud of wood on wood. There is an explosion of sunlight, and you raise your hands and take a step or two protectively back down the stairs. There are voices now, and slowly, as your eyes adjust to the light, you see people, all busy, all at work with a purpose, and all looking happy and content here on what is now obviously a large sailing ship.

Nervously you begin to walk around the deck, dodging the rigging, masts, flapping sails, and scurrying sailors. You stare hard at the friendlier looking people, hoping for some eye contact and recognition, and then hopefully an explanation. But everyone is preoccupied with work and hobbies and conversation; though they clearly notice you, and a few people even give you the odd questioning look, no one seems at all concerned by a stranger walking around the deck.

You go searching for answers, tapping on shoulders, asking people where you are, how you got here, and who they are. They are a mixed bag: some are kind and understanding, others are angry to be approached and hassled, but all of them are genuinely perplexed by your questions. Some of the more compassionate people nod you in the direction of the ship’s bow, towards some more stairs and a higher deck.

Increasingly panicked by this situation, you rush over and climb to the top of the stairs. Before you is the navigation deck of the vessel: people are moving around with different metal instruments, measuring angles of the sun and the waves, others are talking weather patterns, and more still are fretting over the large maps and debating the benefits of potential routes. In the middle of the scene, unattached to this activity, but being consulted on every final decision, is a large, rotund man, with a cartoonish captain’s hat.

You walk over to him and immediately he looks back at you with knowing eyes. A rare look emerges in his face, one that seems already intimate and close. You begin throwing out those same worried questions of yours, and he quickly interrupts. He welcomes you aboard his ship, the S.S. Political Consent, and tells you that this is your new home. Sure he kidnapped you in your sleep last night while the ship was docked in port, but don’t worry, this is no slave ship, nor a tyranny of the powerful over the weak. This is a democracy! He is captain now, but only because he was voted in as such at the last election. There will be another election soon enough, at which point anyone else aboard might take his place, including you.

You are incredulous now at how cavalier this man and this crew are about your arrival on board. Feeling that the morality of what is happening to you is being absurdly overlooked, you begin to yell your complaints. You didn’t ask for this, any of it. If this ship is so great, and everyone here is so happy and pleased with its existence, why weren’t you consulted? Why weren’t you given the choice of joining? Why was it forced upon you?

Despite what he has done to you, the captain appears to be a fairly reasonable and patient man. He pulls up two chairs, sits down with you, and begins to go over the details of what makes this ship, this society, so important. It is not perfect; it never claims to be. But it does always try to improve itself, always accepting the criticism of anyone who has it, never silencing any of the crew, and endeavouring at all steps to discover new truths about the world and about human flourishing; and that is what makes this ship such a beautiful community. You may not like what you see, you may hate it, but you can change it; all you have to do is convince a majority of your fellow sailors to do so. And they are your fellows! Everyone might have different jobs and talents, but no one is more valued than any other. Everyone is equal and everyone is free!

You are furious! That might all be true you say, but you certainly didn’t have any freedom when it came to whether or not to join the ship in the first place. That was forced upon you! You ask the captain how – if he is such an advocate of freedom and individualism, as he claims – he can be so relaxed about you not having an original choice over whether to join his crew, or whether to decline.

He looks back at you now with the same knowing expression, then he turns to the ocean around you. In each direction there is nothing but horizon and blue waves. No land to be seen and no other ships. Confident that his gesture has had its intended effect, he slowly turns back to you and says in a firm and final tone, “You are free to leave anytime you like!”

 

* Inspired by David Hume’s Essays, Moral, Political and Literary (1748)

Should it be said, that, by living under the dominion of a prince, which one might leave, every individual has given a tacit consent to his authority, and promised him obedience; it may be answered, that such an implied consent can only have place, where a man imagines, that the matter depends on his choice. But where he thinks (as all mankind do who are born under established governments) that by his birth he owes allegiance to a certain prince or certain form of government; it would be absurd to infer a consent or choice, which he expressly, in this case, renounces and disclaims.

Can we seriously say, that a poor peasant or artizan has a free choice to leave his country, when he knows no foreign language or manners, and lives from day to day, by the small wages which he acquires? We may as well assert, that a man, by remaining in a vessel, freely consents to the dominion of the master; though he was carried on board while asleep, and must leap into the ocean, and perish, the moment he leaves her.

 

*** The Popperian Podcast #26 – Michael Munger – ‘The Calculus of Consent’ (libsyn.com)

The Philosophy of Medical Discovery

In conversation with Donald Gillies

 

In the early 19th century, maternity hospitals around the world had a problem. Shortly after giving birth, as the womb was returning to its natural shape, a worrying number of women were contracting, and dying from, a sudden and horrible disease, now known as puerperal fever. “A woman could be delivered on Monday, happy and well with her newborn baby on Tuesday, feverish and ill by Wednesday evening, delirious and in agony with peritonitis on Thursday, and dead on Friday or Saturday.”

On July 1st, 1846, a young Hungarian physician accepted the underwhelming post of “Assistant” at the maternity wards of Vienna General Hospital. Ignaz Philipp Semmelweis had come from a prosperous Budapest family. He had intended to study law but soon transferred to medicine in hopes of making a more profound difference – as he saw it – to the Austrian empire. He graduated in 1844, and Vienna General became his first full time position.

There could not have been a worse place to begin a medical internship. In Semmelweis’ own words, he walked into a “dreadful puzzle”. The two wards at Vienna General catered for between 7000 and 8000 women per year, which made it one of the largest hospitals in the world. The dreadful puzzle that Semmelweis found wasn’t just the high rates of puerperal fever, but the extreme difference in infection rates between the two wards.

In 1846, 11.4 percent of all mothers in the First Division contracted and died from the disease. In the adjacent Second Division, it was only 2.7 percent. Following a very “Popperian model of conjectures and refutations”, Semmelweis began considering various possible explanations; rejecting the ones which proved incompatible with the evidence at hand, and subjecting the rest to rigorous testing.

His first hypothesis was the widely held view that puerperal fever was due to “epidemic influences”. There was significant evidence to support this, as many diseases – malaria and cholera for example – were known to spread as epidemics. But this couldn’t explain the discrepancy between the two wards, nor the extraordinary fact that the death rate for “street births” was itself considerably lower than that of the First Division.

The next hypothesis was overcrowding. Again, another reasonable guess considering what was known about the higher prevalence of puerperal fever within the city’s “slums, barracks, ships, and workhouses.” But as it turned out, the reputation of the First Division had leaked out to Viennese society, meaning that new patients regularly insisted on being treated in the Second Division, which made it – and not the First – the more crowded of the two wards.

Along similar lines, Semmelweis conjectured and then rejected the impact of diet, as well as other markers of overall health.

Next, the focus was on the medical staff, and a fear that students and nurses were dealing with patients in a “rough manner.” A large degree of this was xenophobia about “foreigners” not having the proper care, training, or compassion. To test this, Semmelweis moved staff around between the wards. He discovered no change in the rate of infections.

Psychological factors were then considered, such as the daily routine of the hospital priest. The First Division was set up in such a way that the priest would have to pass through five or so sub-wards to reach the “sickroom.” The concern was that seeing the priest, and knowing why he was there, might be a terrifying enough experience to affect the health of the watching patients. Semmelweis tested this hypothesis by persuading the priest to use a “roundabout route” instead, and to avoid announcing his presence with the customary tolling of a bell. For weeks the priest came and went unseen and in silence. Infection and mortality rates remained unchanged.

Another hypothesis had to do with delivery position. It was standard practice for women in the First Division to give birth on their backs, while in the Second Division they gave birth on their sides. At this stage Semmelweis was testing increasingly unlikely ideas, describing himself as “like a drowning man clutching at straw”; still, he continued to test. He introduced the practice of side births at the First Division; the rates of puerperal fever remained unaffected.

Then, in 1847, while Semmelweis was struggling to find a new, plausible hypothesis to examine and test, one of his colleagues cut his finger on a scalpel that was being used for an autopsy. Soon enough, this colleague became sick with fever and died, as a patient, a few agonising days later. For Semmelweis the similarities between his colleague’s rapid decline and the women dying of puerperal fever were hard to ignore. The scalpel seemed to have done more than simply puncture the skin. It might have carried with it “cadaveric matter.”

The role of micro-organisms in causing infection had not yet been discovered, but from this series of events Semmelweis formed a crude theory about “blood poisoning”, and its path from dead to living bodies. He began to pay closer attention to the behaviour of his colleagues as they moved from the autopsy room and through the various sub-wards of the First Division. They all washed their hands, but only ever “superficially”, to the degree that most would often retain a “characteristic foul odor”.

Semmelweis had a new theory to test. He issued new handwashing regulations, instructing all staff leaving the autopsy room to clean themselves with a solution of chlorinated lime. For the first year that this procedure was tested, 1848, the mortality rate for puerperal fever in the First Division dropped suddenly to only 1.27 percent, outperforming even the Second Division, which had a rate of 1.33 percent in the same year.

The reasons for the Second Division having lower rates than the First for all those previous years? By sheer chance, the Second Division’s patients were only attended to by midwives, whose training and hospital duties did not include the dissection of cadavers. This new theory also explained why “street births” didn’t suffer from higher rates of puerperal fever.

Soon Semmelweis had to broaden this thesis to capture new evidence. On one occasion, while still fighting with the details of his discovery, Semmelweis and some of his colleagues were doing the rounds of twelve women in a single maternity sub-ward. The first patient was suffering from “festering cervical cancer.” The doctors all thoroughly disinfected their hands before examining her, but only followed the “routine washing” before moving on to each new patient. Eleven of the twelve soon contracted and died from puerperal fever. Semmelweis had to amend his theory about “cadaveric material” being the problem, to also include “putrid matter derived from living organisms.”

He couldn’t possibly have understood at the time the full implications of his theory, nor how good a Popperian he was being. But quite soon, Semmelweis was all too aware that he was on the verge of ushering in a revolution in the history of medical science. And revolutions, as a rule, tend to end badly for those in the vanguard.

If he was correct, then Semmelweis was effectively also saying that his fellow doctors – the people he needed to take his theory most seriously – were not just failing to save countless women from early deaths, but were, in fact, directly causing many of those deaths themselves. Instead of helping their patients, they were killing them, and all due to poor hygiene. Semmelweis was of course also implicating himself, but this small piece of self-effacement was cold comfort for the people he needed to convince.

It was a task he ultimately failed at, with the medical profession turning its back on him. When his contract at Vienna General came to its end in 1849, Semmelweis applied for an extension and was denied. He went desperately knocking on doors around the large medical institutions of Europe, applying for open positions. His reputation preceded him – he was rejected by every single one. Frozen out of medical practice, he eventually had little choice but to return to Hungary, where in 1865 he collapsed into madness and was committed to an asylum.

Grand, new ideas are often hard things to handle when they first arrive. Rather than building upon existing knowledge, they shatter it. Old paradigms and comfortable truths are not things to easily abandon, nor should they be. There are more reasons than just self-preservation for Semmelweis’ failure: he was slow to publish, there was a national prejudice against his Hungarian heritage, and most significantly his theory was still young; it had criticism still to answer, explanations to flesh out, and boundaries still to be drawn.

Revolutionary science takes time to settle into the mind – people are accustomed to the world looking a certain way, and not another. But truth has the remarkable quality of not needing to be defended: it simply works! People who rely upon it will succeed, and those who don’t won’t! Whereas Semmelweis failed and felt “disgusted with his treatment”, his germ theory of disease did not. It succeeded without him, and it survives today, because, and only because, it remains true.

 

*** The Popperian Podcast #25 – Donald Gillies – ‘The Philosophy of Medical Discovery’ (libsyn.com)

The Life and Philosophy of Joseph Agassi

In conversation with Nimrod Bar-Am

 

This story begins with Nicolaus Copernicus, not for what he did or achieved, but for what he refused to take for granted. The world of science had been stuck on an appealing idea and a faultless person. Aristotle had once ruminated about the earth being the centre of the universe – all the planets and moons and stars revolving around us – and with his works being quickly rediscovered in Europe after the Dark Ages, as well as being fawned over for their eclectic genius, this single theory stood out in important ways.

It seemed well reasoned, it came from a perfect source, and it gave church authorities a reason to relax their dangerous gaze: man was proven to be at the heart of all things, the sole focus of God's creation; science had spoken! For saying that perhaps Aristotle was wrong, that perhaps the universe was much bigger than previously imagined, and that there was no centre to it, the Italian monk Giordano Bruno was burned at the stake by the Inquisition in 1600. His crime? Diminishing the significance of human beings!

As you might have guessed by now, this story doesn’t follow straight lines nor dates on a calendar, but it is driven forward by the only thing possible. But we’ll come to that a little later. Copernicus’s theory of the earth orbiting the sun, rather than the other way around, was different to Bruno’s in content but committed the same heretical sin. This early clash between religion and science was all about how we see ourselves as a species: the god’s-eye value of our existence. But in a way, it was about something else, something that would continue and fight and re-emerge and repackage and squirm for survival across the history of thought and of science. Look out your window today, pay close attention to the words and attitudes of the people around you, and you will see it still here, diseased and aged, but for the most part un-weakened and rudely alive…

The people who killed Bruno and the people who hated and threatened Copernicus were afraid of change. Simple as that! They knew what the world looked like, they knew how it was explained, and they were comfortable with things as they were. Less about the feeling of fear and more about the feeling of certainty, they desperately taught what they knew to their children and drilled their ideas into school curriculums. They were sure that they understood the way things worked, and so what kind of people would they be if they didn’t help the next generation to understand things as they did?

As Bruno’s screams died away and his charred body crumbled down to ash in that horrible public square, it was a message, not a punishment: the game of knowledge creation is over, stop seeking new ideas! All we need, and will ever need, has already happened – we ought to look only to the past, not to the future. Copernicus’s theory about the earth orbiting the sun wasn’t, in fact, new at all. Shortly after Aristotle’s death, the Greek philosopher Aristarchus wrote, in books that have been lost to time, that heliocentrism made a lot more sense of what he was seeing in the night sky. Nearly two thousand years later, Copernicus’s great step forward for science and for our species was to take Aristarchus seriously, and to be the first to do so.

His achievements ought not to be diminished by this: Aristarchus’s idea and the observations he made were there all along, waiting to be discovered or rediscovered. What Copernicus did by daring to challenge the theocracy of his day changed the face of human development towards everything worth wanting or caring about. The Copernican Revolution – as it came to be called – would begin to play out and find its worth over the next century in the everyday attitudes of ordinary men and women. People began to look back with their own critical eyes, discovering that not all the Greeks had the same thoughts and philosophies, that clinging to the works of Aristotle was an arbitrary choice, that knowledge changes, and does so for the better, and most importantly that they too could accomplish these changes. Copernicus’s legacy was a newly self-confident humanity that thought mistakes were everywhere, and that it was up to them to discover and correct them – in the words of Joseph Agassi, he was “the most important man in the history of modern science”.

Born into this new world was Galileo Galilei, an early beneficiary of the Copernican Revolution. He asked a simple, innocuous-sounding question, with tremendous implications for the growth of science: “If you were on the moon and you looked at the earth, which would be smoother, the land or the water?” For all those centuries before him, people had believed that the moon was a giant crystal mirror. Mirrors shine when exposed to light, and the moon shines when it is hit by light from the sun. It all made sense! If the moon was nothing more than rock and dust then you would not expect it to shine in the sky as it does, so everyone assumed it could not be made of such things.

Galileo changed this! For years the small flickers of scientific growth – those embryonic individuals and communities – which popped up from time to time had at least one thing in common despite their differing methodologies: they believed that truth was found through observation. The world beyond our senses doesn’t lie to us, and so we only need to see it for what it is, observe it long enough and in enough detail to understand everything about it. And they had help: the development of new instruments, allowing them to see smaller and larger and further than ever before.

All this map building and data collection was a mistake, thought Galileo. He had seen too many theories come and go, built upon what seemed solid observations, only then to be destroyed. The problem with science, the thing holding it back and drenching it in those pernicious notions of certainty, was boring old human thinking. Trust your eyes and the moon appears to be a crystal reflecting the sun’s light back at us. Begin to dig up the preconceptions and the under-the-radar theories that are involved in that observation and things collapse, fast.

Hang a mirror on most walls, and you will see that contrary to your expectation it is not as bright as the walls; in fact it is considerably darker. The mistake that is commonly being made here is one of perspective. We tend to think the mirrors are shinier than walls because when light hits the mirror it is reflected, and we see those reflections on other walls. The mirror is glaring light back at us, and so it appears to be a very, very bright object. But stand at certain angles – try it now – where the light is not being reflected in your direction, and you will see the true darkness of the mirror.

Now the mirror is not only dark, but also flat. This is why the mirror reflects light in only one direction, and appears bright from one viewpoint and dark from all others. The wall next to the mirror is not smooth, or not as smooth, but if you look closely at it under a microscope you will see it is made up of countless small pieces, which are individually smooth. A hodgepodge of tiny little mirrors, each reflecting light in a different direction, meaning that unlike the mirror the wall appears to be bright from all directions. The mirror is brightest from only one angle, the wall brighter from all the others.
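Galileo’s point can be put in modern optical terms (a later formalism, not his own, offered here only as a gloss): a polished mirror reflects specularly, sending all of the light off at a single angle, while a rough wall scatters it diffusely into every direction.

\[
\text{Specular (mirror):}\quad \theta_{\text{out}} = \theta_{\text{in}} \qquad\qquad \text{Diffuse (wall, Lambert's approximation):}\quad I(\theta) \propto \cos\theta
\]

Stand outside the mirror’s single reflected beam and it looks dark; the wall, scattering into all angles at once, looks bright from anywhere in the room.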

Back to the moon then. Back to the people of Europe’s Middle Ages staring up at its brightness in the night sky, understanding what they were seeing was due to the reflected light from the sun, and making the simple connection to the thing they knew best reflected light: crystal mirrors. In two ways Galileo put an end to this type of thinking. The first way should be obvious by now: if the moon appears to be not only bright, but also bright from all directions – as it does – then it cannot be a mirror. The second way runs us back to that question of his: “If you were on the moon and you looked at the earth, which would be smoother, the land or the water?”

Staring up at the moon, even with the most rudimentary instruments of the day, you don’t see an unblemished orb glowing back at you. Instead you see a glowing orb littered with dark spots. Before Galileo changed the way that we see the moon, those dark spots were a mystery; an unexplained, or poorly explained, phenomenon. Once the moon was assumed to be made of more common earthly materials, he could conjecture a better theory. If you were on the moon looking at the Earth, you too would see darker spots. Those spots would also appear smoother, just as the ones on the moon did when looking the other way. It was water! The moon was covered by oceans and lakes of water. He was wrong of course, but also much nearer to the truth than anyone who had come before him.

Galileo’s telescope wasn’t powerful enough to see the moon in sufficient detail to make out those oceans and lakes, but it was powerful enough to see the dark spots move over time, shifting gradually across the landscape. Galileo had a problem. What he was seeing was not water after all, but darkened valleys next to sunlit mountains, and then the mountains becoming dark as the sunlight found the right angle to brighten those valleys. He never saw a crystal mirror in the sky, he never saw oceans and lakes, and he never did see mountains and valleys. Observation was not the answer to the future growth of science, and knowledge is never extracted from the world around us through our senses. What mattered was how well people reasoned and theorised and developed abstract ideas: good ideas leading to mountains and valleys, bad ones leading to crystal mirrors in the sky.

Galileo’s close friend Johannes Kepler entered the scene next. The two men believed that the natural world that they were trying to make sense of was God’s perfect creation. God doesn’t make mistakes, only people do! And it was therefore more than just a matter of truth and knowledge creation and progress to correct the mistakes they saw around them – as well as within their own theories – it was a religious duty.

So deep in his convictions was Kepler, that when he discovered the work of Copernicus his reaction was to imagine that the sun was a “symbol of God”, the centre of our universe as we – and everything else – move around it in circular orbits. Circular orbits! It made mathematical sense, and Kepler published a book about the elegant design of our cosmos. He then realised the mistakes he made in this book, and he wrote another. Chasing ever more accurate calculations, Kepler developed creative ways to look closely at the movement of the planets; expecting that his efforts would show the intricate precision of God’s circles. “No matter how small a mistake is, it matters”, and so – despite being unbelievably accurate for the time – the existing measurements would have been an insult to God, even if only marginally inexact.

But with every new calculation and improvement, things began to make less sense. The circles swelled and popped at the seams, until they looked more like eggs. There was a problem with the model! These ellipses represented not a misunderstanding about the exact placements of God’s design, but a misunderstanding about what God’s design actually was. Kepler’s great idea – “the first man in history who said that planets do not go in circles” – was something he didn’t want to imagine was true; but theories need to match the empirical world, and when they don’t we need to have the intellectual honesty to admit that they were wrong.

None of this said anything noteworthy about the truth or falsity of elliptical orbits though, and Kepler sensed this in his pessimistic attitude towards his own theory. The circles which now clashed with the data did not clash at all for centuries upon centuries, when people just like himself looked up at the night sky and saw rounded planets in rounded orbits. It was only due to Kepler that people would begin to see ellipses instead, and so all that his theory actually showed was (1) how enormously suggestible and theory-laden our observations are, and (2) that Galileo and his predecessors were wrong.

They were wrong all along of course, but it is only in light of a new, competing theory that this can ever be known. So what was there to stop his own theory turning out to be false as well? The short answer was nothing, but here we start to see some early seeds of falsificationism being understood, with all the appropriate optimism. Rather than waiting anxiously for his theory’s inevitable execution at the hands of a usurper, Kepler planted the first clear signpost for what good science should look like, and how good science should behave. Circular orbits were a flimsy idea, and so were elliptical ones, but it was his attitude that would make the difference. There were things he could do!

Rather than protecting his theory, he would expose it as much as possible, take risks with it; try to head off future scientists by doing their work for them. It seemed true to him, but just like Galileo he didn’t know what he didn’t know – specifically, he didn’t know how future calculations might destroy what he had built. The challenge before him was one of extension, stretching his idea about ellipses to its farthest corners, making completely novel predictions, all the while building out his theory and making it increasingly falsifiable.

If the planets moved in elliptical orbits, then there must have been a reason for it. And such a shape would require two “pins” or “focuses” to stretch it outwards in such a way. If there were only one focus – in this case the sun – then you would expect to see perfect circles, or something very close to them. If Kepler’s new theory was correct then there must be something out there warping the geometric symmetry – an unseen force that future generations would soon discover. Kepler didn’t know about the sun’s gravity, but his reasoning was incredibly prescient, also committing his theory to variations in the speed of the planets. A steady circle would produce a steady speed, but with this new model he speculated that you should expect the speed of those planets to change as they approached different regions of their orbit: faster when closer to the sun, and slower when further away.
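In modern notation (a later restatement rather than Kepler’s own working, with the standard ellipse parameters a, e and θ introduced purely for illustration), his first two laws can be written as:

\[
r(\theta) = \frac{a\,(1 - e^{2})}{1 + e\cos\theta} \qquad\text{and}\qquad \frac{dA}{dt} = \text{constant}
\]

The first equation is an ellipse with the sun at one focus (setting e = 0 recovers the old circle); the second is the equal-areas rule, which is exactly the commitment described above: the planet sweeps out equal areas in equal times, and so must move faster near the sun and slower far from it.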

Across the English Channel, Isaac Newton entered college at a time when this new curiosity and scientific atmosphere was firmly in the air. Instead of simply taking Kepler seriously, as many scientists were doing, he also embraced the implications. As a reasonably young man, Newton began imagining what that unseen force of Kepler’s might be; he jumped into the problem that had been left for science to answer and took it upon himself to devise what such a force would look like, and how it would affect other objects, both large and small. It was itself a “revolution”, and produced a tremendous “quarrel among the scientists of his day”, who believed that Newton’s idea was an intolerable “step backwards”.

They already understood what forces were, they had rudimentary theories to explain them, and the one thing everyone knew for certain was that forces were local events, never acting over large distances. Without all the details, the law of inertia was an easy enough thing to observe, or so people felt. Set something moving and it will continue along at that speed unless something else begins to slow it down. Throw a ball in the air, watch it rise, slow, stop, and fall back to the earth, and you have some rudimentary understanding of what is happening. And that rudimentary explanation is clearly local: the forces at play are the energy from your arm and the pull from the ground.

Newton began running calculations to get a more precise theory of how high a ball, or anything else, will go, at what speed, when its momentum drops to zero, and then how fast it falls back. The tremendous discovery he made was that the higher the ball went – for example – the smaller the downward force upon it. What was happening in such an experiment might now appear clear to us, but back then it shook the foundations of science: “if we are twice as far away from the earth, our gravity becomes four times smaller. If we are three times as far away from the earth, our weight is nine times smaller”. This would soon be known as Newton’s inverse square law.
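The quoted arithmetic is simply the inverse square relation written out in words. In symbols (with the distance r measured from the centre of the earth, a detail the quotation glosses over):

\[
F = \frac{G\,M\,m}{r^{2}}, \qquad F(2r) = \frac{F(r)}{4}, \qquad F(3r) = \frac{F(r)}{9}
\]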

Just as with Kepler, the implications of this one discovery were extraordinary. It meant that our weight, and how we experience movement, is proportional to this new force, and that 300 years before the first satellite was placed into orbit Newton had proved it would be possible. And it didn’t stop there; fundamental discoveries in science tend to reach beyond themselves, with new implications leaping constantly from the darkness: as that acceleration away from the earth is happening, heavier objects behave differently to lighter ones. Newton’s force acted more upon the heavier ones, and so this new resistance needed a name: mass!

Why should the ball fall to earth and not the earth to the ball? The earth has considerably greater mass. But this left Newton with a dramatic concern: his theory was telling him something that seemed nonsensical. Though the mass of the ball is dwarfed by the earth’s, it still has a mass, and so it should still have an effect; an unbelievably minuscule one, but an effect nonetheless. As the earth pulls the ball down (so to speak), the ball is also pulling the earth upwards. Or to put it in more comparable terms, “When I push you, you push me; when I pull you, you pull me”; one is impossible without the other. This would become Newton’s law of action and reaction.
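To see why the ball’s pull on the earth is real but negligible, combine the action–reaction principle with F = ma; the numbers below are illustrative only, not taken from the text:

\[
F_{\text{ball on earth}} = F_{\text{earth on ball}} \approx mg = 0.1\,\text{kg} \times 9.8\,\text{m/s}^{2} \approx 1\,\text{N}
\]
\[
a_{\text{earth}} = \frac{F}{M_{\text{earth}}} \approx \frac{1\,\text{N}}{6\times10^{24}\,\text{kg}} \approx 2\times10^{-25}\,\text{m/s}^{2}
\]

The forces are equal and opposite; only the masses differ, which is why the earth’s answering motion is immeasurably small.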

With each step, the discoveries became ever more elegant, and opened wonderful new opportunities for accuracy and experimentation. But they came with a horrible problem. Guided by the Royal Society (of which Newton was a member), science was expected to look a certain way: clear, distinct, and coming from a “mass of observable facts.” Newton’s theory had none of these! For it to be accepted, people would first have to change their understanding of what a theory ought to be: from data collection and parochial events, to universal explanations of how the world worked, as well as everything beyond it.

Newton was also swept up in this gatekeeping, unsatisfied with his new theory not because of what it said, but because of how it looked. He was a follower of Descartes, and so was stuck on the nagging idea that instead of invisible forces, gravity and all the rest was explained by direct contact of a kind: microscopic collisions pushing objects around. On this view, repulsion and attraction were both impossible and redundant. Newton’s solution was to try hard to make his theory fit Descartes’ design, and followers tried too, attempting – with great effort – to squeeze one theory into another.

They all failed of course. Descartes was wrong! But it took more than good sense to change people’s minds; it took good predictions and their failures. If Newton was correct, the planets and stars and moons and comets and asteroids and assorted space junk would be dynamical objects, attracted and repelled by different forces, and so moving in different directions. Under Descartes’ theory of collisions, large vortices would direct all objects in the same way; an invisible liquid of microscopic particles pushing everything along its path. The solar system was one of these vortices, and each observation seemed to prove the theory true, with all the “heavenly bodies” circling the sun in the same direction. Then in 1661, astronomers noticed a comet in the night sky going the wrong way, and the theory was dead.

Newton thought that his theory of gravity would become nothing more than proof of Descartes’ theory. In the end it did the opposite, showing not only the falsity of Cartesianism, but also a whole new way of developing scientific theories; one based on explanations, not on observations. But mistakes often repeat as habits, little tricks within the human mind, and so as people moved on from the dominating figure of Descartes (just as they had with Aristotle before him), instead of doing away with the idea of the infallibility of great men, they traded in one for another: everyone was suddenly a Newtonian, and it was now Newtonian science that contained no mistakes!

In the year 1800, The Royal Institution was built in London. Its goal was less about charity than it was about the changing world of science and the independence of many new discoveries. Amateurs were making breakthroughs and solving problems that the rich and the patronised could not. Tinkering away in their spare time, these part-time scientists were showing that there were no social boundaries to knowledge, and that anyone could learn the requisite skills and move the enterprise forward if only they had the will. And all this came to the benefit of science as a whole, and society writ large. The more people working on a problem, and the more minds dreaming up creative new explanations, the better.

As the industrial revolution hit its stride, there was also the matter of factories needing skilled labourers. And so The Royal Institution began running public lectures to educate the poor, and build that background knowledge within English society. In those early lectures, standing almost in the shadows at the back of the hall, was a seventeen-year-old Michael Faraday. Growing up incredibly poor, Faraday had so little schooling that he had to teach himself to speak proper English, and yet he began scribbling down notes, building his own small books on chemistry and biology. The books got his foot in the door, and when a menial position opened at the Institution he took it. Faraday then had access to the scientists running the programs, and he battered on enough doors to land himself a job as an assistant. A few years later he was delivering the lectures himself, and then finally he ended up as the director of the whole Institution.

Science doesn’t have too many Cinderella stories, but Faraday was one of them. Never able to catch up on the mathematical education he had missed out on, Faraday became a chemist. And he dodged most discussions of physics for the same reason: “there was too much mathematics in it.” But when electromagnetism was discovered, even he couldn’t help but get caught up in all the excitement. Following Newton, the French physicist André-Marie Ampère built the new theory around pushes and pulls, attraction and repulsion. The question at hand was to explain how electric matter was magnetised, and the best scientists of the day simply looked over their shoulders for a semi-divine answer.

Sitting alone in a Danish laboratory, Hans Christian Oersted spent twenty years thinking up better alternatives. The whole picture seemed a lot messier than the Newtonians were willing to accept. Different kinds of electricity were being discovered, electric forces were being used to break down chemical bonds, and Oersted could see the transformation that electric matter was producing in previously non-electrical matter. Soon he was talking about more complex forces beyond the standard push and pull, involving dynamic rotations and turns.

It was all very heretical, and so no one took Oersted seriously. Until Faraday, that is! Defying expectations at most turns in his life, Faraday had developed a love of criticism: of people who dared to think differently, and of ideas that attacked existing standards. The young chemist began constructing an experiment to solve the issue, and perhaps offer some late honour to Oersted’s name. He started by dipping half a magnet into a cup of mercury, so that the other half (and the other pole) remained above the surface. He then hung a wire from above the magnet so that it just faintly touched the surface of the mercury. A battery was then attached, one side to the mercury and the other to the wire. As the electric current ran through the wire it was loose enough (as it dangled) to move around however it might.

Watching the wire as it rotated around the magnet, Faraday knew that he had proved not only André-Marie Ampère wrong, but also Isaac Newton. He had shown not only that there were more physical forces out there than had previously been accepted, but also that every force that does exist can be changed and altered to become another type of force. What Faraday didn’t realise was the extraordinary scope of what he had just produced in that small laboratory: here was the first ever electric motor. But soon enough he was building out his theory with new discoveries, such as reversing the process and converting magnetism into electricity rather than just electricity into magnetism; what we now call the electric dynamo.
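In the modern description (a formalism Faraday himself never used, added here only to make the geometry of the experiment clearer), the current-carrying wire feels a force at right angles to both the current and the magnetic field:

\[
\vec{F} = I\,\vec{L} \times \vec{B}
\]

Near the exposed pole of the magnet the field lines spread outward, so this sideways force keeps pushing the dangling wire tangentially, driving it in a circle around the magnet; which is all an electric motor needs to be.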

As he worked away on problems such as these for the better part of a decade, the social implications of what he was doing should not be understated. Riddled with a “self-doubt” that most of his opponents lacked, Faraday had “a very hard time” of things. He wasn’t just building a new science, he was destroying an old religion (Newtonian physics). Isolated and lonely, Faraday asked himself “Who am I to fight the whole world?” and more than once almost gave up hope.

When the breakthrough came, and then when more followed, he had to present his discoveries to a scientific community that mostly believed he was either “hoaxing” them or “deceiving” himself, or “knew so little about mathematics” that he must be wrong. But truth survives under its own weight; Faraday’s experiments worked where Newton’s didn’t, and bit by bit, scientist by scientist, he drew a crowd of followers. As the full magnitude of what he had done began to settle over him, Faraday wondered with childlike disarmament in his own diary whether perhaps “All this is a dream”.

What Faraday had done changed science forever, not in the discoveries themselves nor in the impact they had, but in the breaking of invisible chains. Chains that held people to old ideas and great men of the past, as well as drawing a steady and fixed path into the future. Faraday corrected more than just some false physical theories; he opened the scientific enterprise onto a boundless vista. After him, scientists were encouraged to have bold, fantastical thoughts about how things are and what explains them in the best possible way. He freed creativity from Newton’s cage, and showed that the “most exciting thing about science is that we don’t always know where it is going.”

Soon Thomas Johann Seebeck was discovering the interaction between heat and electricity, and how one can be changed into the other; Thomas Young was theorising about light being wavelike, in the same way as sound; and a young Albert Einstein was looking back at Faraday’s “bold idea” and dreaming up the most outlandish of new ideas, showing once again that Newton was wrong.

The only way to really understand the history of science – as with history in general – is to look back at the problems that people had! The things they battled against, the ideas they challenged, and the world that they wanted to improve. Today Einstein’s theory remains with us, but due to Faraday every young graduate student (in the field) worth his or her salt is now spending their days and nights not trying to show Einstein’s genius, not paying homage to the great man, but rather trying desperately to cut him down at the knees, to prove him wrong.

And so the story continues…

 

*** The Popperian Podcast #24 – Nimrod Bar-Am – ‘The Life and Philosophy of Joseph Agassi’ (libsyn.com)

W.W. Bartley and Pancritical Rationalism

In conversation with Paul Levinson

"Bill, people say that I am a difficult man. Am I a difficult man?"

"Karl, only a difficult man would ask a question like that!"

Bill Bartley had first heard about Karl Popper at the same time that he heard about his reputation for being difficult. Rumours of the old philosopher’s dark and grinding tones had made their way across the Atlantic, and in casual goodbyes with his former professors at Harvard College, Bartley told them of his plans to study with Popper regardless. To a man and woman, they “strongly discouraged me”, each taking their brief last moments to issue a “warning”: “I would regret it.” There was care and there was worry in their voices. It was only years later, “when they learned that I did not regret it”, that “they became very angry with me.”

A month or so later, Bartley was being interviewed by the Registrar at the London School of Economics and Political Science (LSE). She was cheerful, professional, and to the point, telling Bartley that based on his interests and background she had already chosen an adviser for him (“let us call him ‘X’”) and after a few years of study and writing of “a very good book” he would have his Ph.D. “I told her that I would be happy to write the book, but that I had come across the ocean to study with Popper, and that if he were not to be my adviser I would return to America on the morrow. She scolded me and told me that I was being difficult.”

The next morning he met his new advisor, Karl Popper, and unlike the previous day, there was “nothing routine about this interview.” Without an obvious sense of kindness or personal interest, Popper stared hard and “discomfitingly” at his new student – “His eyes were trained directly on me” – and launched into a near-accusatory lecture about why all of Bartley’s old philosophy professors at Harvard were “utterly” wrong! He then started a whole new lecture, this time about some papers of Bartley’s that Popper had got his hands on. The gist was this: the well-credentialed student-journalist, and editor of the famous Harvard Crimson, that now stood before him couldn’t write. Or at a minimum, “wrote very badly.”

To work with Popper, Bartley would have to be less “pretentious” and much, much more “clear” with his language. Just as heavily obscurant prose should have no place in the world of philosophy, neither should a flowery, stylised prose, whose primary interest is in being “eye-catching” rather than “accurate”. Philosophy was – and is – about “reaching toward the truth”, and nothing should ever be allowed to get in the way of this. Looking back at Popper in silence, as the criticism continued to rain down – wet and cold – upon his first day on campus, Bartley knew two things: “I felt immediately – and ever after – that I had his whole attention” and “from that moment I loved him and knew that I could learn from him: that it would be worth any difficulty that might arise.”

A week later, Bartley attended his first seminar, and from experience he understood the normal procedure of things. But this wasn’t normal! Things ordinarily would begin with a student presenting a paper of theirs, followed by questions and comments and some discussion. The professors might join in, but they tended to stay at a distance and let their younger colleagues get on with it. “Popper's seminars were different”; rather than a cordial back-and-forth, or a few perfunctory questions about what the listening students might have missed, “they were intense confrontations between Popper and the person reading the paper”.

At this particular meeting, the unfortunate student who had been chosen to present his work only managed to “read about two paragraphs” over the course of an hour. With each and “every sentence” Popper interrupted. Every word mattered and was drilled into for meaning and purpose; “nothing passed unchallenged”. Popper asked the student a question; “the student dodged it.” Popper asked it again: same evasive answer. Popper stood up and repeated his question once again, and finally the poor student “answered at last”. But the public inquisition wasn’t over. “Were you then wrong in what you said first?” Popper asked. The student began to mumble away in a different direction, so Popper inquired again; “Yes. But were you then wrong in what you said first?” The bullied student nodded. Popper continued: “Do you apologize?” The student nodded silently again. “Good”, Popper replied. “Then we can be friends.”

Other people weren’t so lucky. On at least one occasion Bartley witnessed Popper physically grab a student by the collar – mid-presentation – and literally throw him out of the seminar. None of this was kind, but there was an attempt at kindness behind it all. There is “nothing easier” Popper once said, than “to nod sagely at a student and say that what the student wrote or said was ‘interesting.’ But that is not teaching, and does not involve learning.” The ordinary rituals and pastimes of academia did nothing for the students on the other end of them; but if a professor were to ever take these interactions seriously, and show some actual care for the intellectual future of the people learning from them, then Popper was sure that they would behave more like him.

Since his early days in Vienna, Popper had dreamt of a new kind of school. A place without all those traditional hang-ups of university culture, with a group of people who refused “boredom” and instead chased down new problems, discussed those problems loudly, clearly, and without ego or attachment, and a place where no one – ever – wasted even a moment cramming for gate-keeping examinations. The school found its home at the LSE, and it was built in Popper’s image, but of course many students were “not ready for such a school” and after a few grim encounters “soon dropped out.”

The ones that stayed showed the battle scars and the intellectual development you might expect. Names like Agassi, Jarvie, Lakatos, Watkins, Feyerabend, Gellner, Gombrich and Sabra, who tore the face off modern epistemology and reshaped the field with new rigour. And then there was Bartley! Perhaps the only student of Popper’s who chose to study under him not in spite of his “difficult” personal reputation, but because of it! So when he first met his supervisor, he knew the early conversations would be hard going. And they were! “For he would often interrupt what one was saying and begin a long and flowing discourse; and there was no hope of interrupting once that had started.”

The complaint of not being able to get a “word in edgewise” is as fair a complaint as could ever be made about Popper. But Bartley wasn’t buying the standard reason: Popper’s ego! Quickly he surmised that the older man might be “quite deaf”, and that all that talking he did was a coping mechanism of a kind, with Popper picking up the few words and cues that he could, guessing at what was being said, and speaking so widely in response as to hopefully cover what was being asked. Bartley was correct, Popper was deaf, and this obstacle to the relationship between the two men was quickly overcome by a clever strategy. Before their scheduled meetings, Bartley would write Popper detailed letters “setting out the issues that I wanted to discuss”, allowing Popper to clearly understand both what the topic was, and what Bartley’s views were on it.

“Modest in those areas where he had a right to be vain, and vain in those where he had a right to be modest”, Popper refused to admit his deafness to himself, let alone anyone else. Eventually it was public embarrassment while delivering the famous Sherman Lectures at University College London that forced the issue. He couldn’t hear the questions from the audience, which led at least one visiting professor to complain that Popper had “deliberately pretended to mishear” so that he could “dodge my question.” The very next day, he went out and purchased a hearing aid!

Deaf or not, what Bartley found in Popper was rare and liberating. As a young journalist at Harvard, Bartley had shiny ideas of purpose and community. The other half of his university life largely involved attending lecture halls, where with each passing year he could feel new boundaries and less freedom. All the vitality that he found at The Crimson, and which flowed through the air as a new undergraduate arrival at Harvard, was being slowly pushed aside, making room for the traditions and conventions of a “professional community”. Working with Popper reversed this for Bartley, and wound the dial of academic vanity back to zero; every day, and for every topic, he had the “irrevocable permission and freedom to throw myself into the world of ideas”.

The university Registrar was prescient enough. After a few years Bartley did finish his Ph.D. And he did so with “a very good book” in tow (The Retreat to Commitment), even if he does say so himself. And for seven world-shaping years, Bartley learnt from, studied under, worked with, and argued against Popper. Those years were unusually “idyllic” for anyone in Popper’s company; in fact there were so few – even minor – quarrels that Bartley’s colleagues and fellow students teased him about the closeness of teacher and pupil. Then in 1965, they did eventually fall out, and in such brilliant fashion that “we did not speak for twelve years”. That dispute runs to the heart of everything that matters about the philosophy of science.

Before Popper and before Bartley, there was a long – and rich – rationalist tradition. We might call the beginning of this classical rationalism, just for a bit of timekeeping and neatness. This is the world of the ancient Greeks, and here the question first steps through our door with the noticing of other cultures, and the awareness of difference: different myths, different practices, and different intellectual ideas. At this early stage, the problem of rationality is the problem of which culture should I embrace? Here there were no claims to rationality being universal in any way, but – in the words of Joseph Agassi – by searching for the best possible “replacement” it was “aspiring to be universalist”.

Enter comprehensive rationalism, and a new breed of philosophers led by people like Immanuel Kant. Instead of scurrying into the refuge of culture, now the search was on for a harder, more permanent, less human foundation. And the possibilities were out there, so to speak: intellectual intuition or sensorial experience or transcendent logic… These all seemed like good options, and they all failed by their own accounts. It was never made clear what the rules of such rationalist theories were, or how these were supposed to achieve the things they claimed. As soon as rationalism itself was turned around upon these foundations, and the question asked – how does it all work? – the theories collapsed, and rationalism was again adrift.

The solution was a bad one. Pick a starting point, something from which we get things going – from which the rationalist wheels can begin to spin – and just admit that the choice is made from beyond the boundaries of proof and evidence and explanation: an irrational commitment that allows rationality the small foothold it needs. It runs like this: choose induction (for example) and don’t worry yourself about the failure to explain how induction works; it is enough that it does work. Now get on with discovering all the knowledge of the universe through the inductive method, without ever having to be bogged down looking backward and questioning the principle itself. That it works is proof enough that it is true.

The rationalist community had what it thought it needed, and got busy working on human progress. And quietly, behind the loud scenes of public celebrations, the irrationalist community were also cheering! The comprehensive rationalists were staking their houses on something that sounded very reasonable, and which was a definite improvement on what had come before. Here is Popper giving them some temporary dues before he then tries to cut them down: they have the “attitude of the person who says: ‘I am not prepared to accept anything that cannot be defended by means of argument or experience.’” Very reasonable!

Things get unreasonable with the large space it opens up behind it. If you are allowed to simply declare your own starting point, and remove that starting point from the ordinary rational process, then what is stopping the church next door from doing the same thing, saying something like ‘we start from a first principle of God existing and His commands for us being written in a certain holy book, and then we reason outwards and let rationality take its course’. If it isn’t obvious yet, these foundations really do matter, and where you end up will be heavily influenced by where you start. By defining rationalism in this way, the comprehensive rationalists were inadvertently letting any crank who also wanted the title claim it.

So lowering the standards of what was rational in order to let rationality win the day just was not going to cut it for long; the weight of its own logical inconsistencies would eventually become a self-crippling problem. Karl Popper, and critical rationalism, to the rescue! Rather than beginning with an irrational faith of some variety (even if that is an “irrational faith in reason”), this was a much braver step into the void of human experience, and a much more humble expression of what is possible. Here is Popper again: “[this is] a critical form of rationalism, one that frankly admits its limitations, and its basis in an irrational decision, and in so far, a certain priority of irrationalism”.

Critical rationalism was different to its predecessors in one very important way: it sought to minimise its irrationalism, not to merely accept it in full. If we must kick things off in an unreasonable place, let us then commit to making it as small a place as possible. For any given decision, it is always we who make it; it is never forced upon us as a next logical step, and the consequences of the decision don’t predetermine that we make it – these are moral judgements after all, and “in the case of moral theory, we can only confront its consequences with our conscience.” We are fallible, and so all of the decisions we make are also fallible. There can never be certainty in our lives, nor in rationality. Everything we do, say, think, act upon, or choose, might be wrong, and likely is in some way; that these mistakes – big and small – often seem entirely rational at the time makes no difference to them still being mistakes.

The small shard of irrationalism that Popper was begrudgingly letting into his theory is the commitment to rationalism itself and the favourable attitude to criticism (which are one-and-the-same thing for Popper). That these commitments or attitudes are themselves not rationally justified is, for Popper, to miss the point of rationalism itself. If rationalism moves forward through criticism, and it is moral to want to continue moving forward in that way, then criticism is necessary; even if this means accepting it as a presupposition. Popper thinks that the problem here is less a genuine problem than a pair of poorly phrased questions: 1. Asking for a justification for rationality, in a world where justification is not possible. 2. Doubting the central place for rationality and rational decision-making, without thinking about the impossible alternative – people living in constant irrationality and deliberately turning towards irrational decisions. Without first presupposing the value of rationality, questions of rationality make no sense.

But the irrationalists were still cheering, and staring down their rationalist foes, saying proudly: See! You are just like us! At some point our arguments become circles too, our regress becomes infinite, and then, just like you, we sidestep that by making a one-time, irrational, leap of faith. Talking about his old friend, Agassi sets out the coming dispute like this: “This argument is not serious. William Bartley said, nevertheless, we rationalists have to answer it properly”.

It comes down to a pair of crises. In the raw, psychological sense, the first of these usually manifests in early adulthood, when we are finding a sense of who we are as people, and then building up that self-image; something that we then project out upon the world. This is called the “crisis of identity”. The “crisis of integrity” tends to occur later in life, when we are out there trying to live out those chosen identities, and struggling to do so. Here Bartley is a therapist for his fellow philosophical patients: “My thesis is that the perpetual crisis of integrity into which rationalists are continually falling or being forced is due to a neglected crisis of identity in the rationalist tradition”.

It runs something like this: despite all the great improvements of Popper’s theory, particularly his exorcism of the search for justifications, an impossibility – as well as a monster – remained within the minds of most rationalists. The impossibility was a matter of theory selection, with the average rationalist holding on with whitening knuckles to unattainable notions of what rationalism can be, and ought to be. And then having their spirit crushed by a crisis of integrity, when they cannot live up to their own standards. The simple, though easily deceptive, mistake they are making is equating their theories of rationalism with the very possibility of rationalism.

There is a story that both Popper and Bartley like to tell, and it is a helpful example for this crisis of rational integrity: the philosopher and mathematician Gottlob Frege spent a significant part of his career theorising about the logical structure of mathematics itself. What he wrote and said seemed true, and was widely accepted as such. Then Bertrand Russell discovered a series of paradoxes within Frege’s (as well as Russell’s) theories. When he heard about this, Frege exclaimed to an audience “Arithmetic has been set spinning!” But of course it hadn’t. The only thing which was spinning was Frege’s theory of arithmetic.

For Bartley, the “miserable state” of rationalism was – in part – due to the same type of error: the best theories of rationalism that existed were either self-refuting, or permissive of leaps of faith (no matter how small), which opened wide the irrationalist backdoor. And because of this, many philosophers were suffering from a crisis of identity about the whole rationalist project. That was the impossibility. The monster was something that Popper thought he had slain, but which remained heavy in the shadows of modern philosophical tradition: an “authoritarian” oppression, squeezing dogma, rigidity, and indeed irrationalism, into everything it touched.

That infinite regress which causes all the problems here comes from questions like “How do you know?”, “How do you guarantee that?”, and “How do you justify that?” These can be asked again and again, running further and further back down the deductive chain, until there has to be a stopping point where the rationalist says No more! Enough! And becomes an irrationalist himself. Not seeing the monster behind them, many philosophers, including Popper, accepted some form of this. Bartley refused!

All those regressive questions didn’t sit well with Bartley, who noticed that they all “demand authoritarian answers”, of one kind or another. It’s not that they reasonably lead towards an unavoidable stopping point of some kind, but rather that they already have stopping points baked into the questions themselves. The only reasonable-sounding answers to such questions – which keep the appearance of rationality intact – are unpleasant things like “the Bible, the leader, the social class, the nation, the fortune teller, the Word of God, the intellectual intuition, or sense experience.”

“How do you know?” is nothing more than a kindly phrased demand for justification, a demand for something that cannot be answered… unless of course the answer is something authoritarian, unquestionable, beyond justification itself, and guaranteeing a correct outcome. An impossible question producing an impossible answer. Since Popper and critical rationalism, the role of the philosopher has been to seek and eliminate error, not to guarantee truth. And yet when the inquiry was of rationalism itself, many Popperians – including Popper – seemed to lose touch with their own theory.

When a philosopher is asked “How do you know?” the only accurate answer could – and should – be: “I do not know; I have no guarantees.” When he is pressed further, and asked to elaborate, he would have to say something like: “Some of the theories I hold may in fact be true; but since there are no criteria of truth, I can never know for sure whether what I believe to be true is in fact so.” When the unsatisfied irrationalist continues to push the interrogation, seeking the authoritarian answer he desires, the philosopher ought to turn things around with a better question: “How can our intellectual life and institutions be arranged so as to expose our beliefs, conjectures, policies, sources of ideas, traditional practices, and the like— whether justifiable or not—to maximum criticism, in order to counteract and eliminate as much intellectual error as possible?”

One way of explaining the mix-up and strange blindness of Popperians to their own theory is that ground-breaking, and genuine, innovations in thought and philosophy – running against centuries of accepted wisdom and decades of personal education – are difficult things to properly internalise. A better explanation would be that such blindness was, in fact, a feature of critical rationalism; a painful outgrowth from an internal error: the “fusion of justification and criticism”, a “hidden philosophical dogma”.

The best example of this lingering justificationism runs to the heart of laboratory level rationality. Here it is accepted that an argument of some kind is rational if, and only if, the conclusion follows from the premise “through the relationship of logical deducibility.”

1. All roses are flowers.

2. All flowers are beautiful.

3. Therefore all roses are beautiful.

Or,

1. All whales are mammals.

2. All mammals have kidneys.

3. Therefore all whales have kidneys.

The problem here should be obvious if you have managed to pick up where Bartley’s theory is heading: from the very beginning of our all-too-human talk about rationality, even at the most foundational and logically grounded level, there is an existing standard. Something that pre-defines what is intellectually respectable and what is not. And this standard – or these standards – exist whether you are a comprehensive rationalist looking to make an evaluation, or a critical rationalist looking to make criticism.

Also smuggled into that mistake is a second assumption, about the transmissibility of rationality. Whereas falsity is retransmitted from conclusion to premises, it is imagined that truth runs in the other direction, top-down. That much isn’t so controversial; what is controversial is the further claim that “intellectual respectability” also runs top-down, from premises to conclusion, with complete faithfulness and accuracy. The assumption can be summarised like this: “the logical derivatives of a theory inherit its quality and degree of intellectual respectability.”

This is a historical hangover of a kind, something that remains with us because our earliest attempts at building a theory of rationality involved searching for a criterion of truth. Such a criterion was an impossibility, as well as a step into authoritarianism, but the illusion remains nonetheless, with the “demarcation between the respectable and the disreputable” still neatly coinciding with “the demarcation between the true and the false.”

As it turns out – though it is still taken for granted – the transmissibility requirement doesn’t work. When you go searching for basic statements to kick the rational ball into motion, you only ever find a description of the empirical character of something – a report of sensory experience – and then the logical derivatives from that statement. However, it turns out that the empirical character of the statement is not transmissible in any way. “From every basic empirical statement both nonempirical metaphysical statements and all tautologies follow logically”, writes Bartley. “Adding to the difficulty, universal scientific hypotheses cannot be reduced to truth functions of a finite class of basic empirical observation statements—which denies empirical character to scientific hypotheses themselves. Such unwanted results hound empiricists with the well-known ‘paradoxes’ of induction and confirmation in any case”.

A scientific theory cannot be probably true, nor probably false. With infinitely many possible theories and explanations and observations out there for any single phenomenon, a theory can only be either true or false, a 1 or a 0. And yet in this traditional shift from premises to conclusion, probability is being smuggled into the picture. In the mind of the person working through the rudimentary rational theory, an error of transmissible probability is being made, followed by another error of linking that probability to respectability; something like: because I think A is true, and because I think B is derived from A, B must be just as true.

A Popperian might interject here and say ‘But as you acknowledge, criticism or testability runs in the other direction, being retransmissible from conclusion to premises, and critical rationalism is all about testability’. And here Bartley would smile, and respond that this would mean that “testability would be transmissible from premises to conclusion”, and that “any logical consequence of a hypothesis would have to be as highly testable as the original hypothesis”. But this just isn’t the case, because all hypotheses are testable or falsifiable by the testing or falsification of their consequences; meaning that a hypothesis can have a higher degree of testability than any of its consequences. An example might be helpful, here is Bartley’s:

1. All who dwell in London are English.

2. All who dwell in Hampstead are English.

3. All who dwell in Bloomsbury are English.

Let’s examine the testability of these three hypotheses, and their relationship. If we correctly assume that both Hampstead and Bloomsbury are in London, and that the second and third hypotheses are derivatives of the first, what would happen if we could falsify the second and show that not all the people in Hampstead are English? By the Popperian rule of retransmission, the first hypothesis that all the people in London are English must also be wrong, because Hampstead is in London.

But what would happen if the second hypothesis were not falsified, and the third hypothesis – as another derivative of the first – had never been tested? Well, in that case, if the third hypothesis were tested and falsified in the future, the first hypothesis would be falsified along with it, but the second would not. It would remain unfalsified because the two are logically unrelated: showing that some people in Bloomsbury are not English doesn’t mean that the same is true for the people in Hampstead. They are unconnected. So the first hypothesis is more testable than its derivatives, since any falsification of the second or third would falsify the first, but would mean nothing to the remaining derivative hypothesis.
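
To make the asymmetry concrete, here is a minimal toy sketch in Python (not from Bartley or Popper; the hypothesis labels and the observation string are purely illustrative) showing how a single observation can refute the general hypothesis and one of its consequences while leaving the other consequence untouched:

```python
# Toy illustration of Bartley's point: a hypothesis is falsified whenever any of
# its consequences is falsified, so it has at least as many potential falsifiers
# as each consequence taken on its own. All labels here are hypothetical.

falsifiers = {
    "H1: all who dwell in London are English": {
        "non-English resident found in Hampstead",
        "non-English resident found in Bloomsbury",
    },
    "H2: all who dwell in Hampstead are English": {
        "non-English resident found in Hampstead",
    },
    "H3: all who dwell in Bloomsbury are English": {
        "non-English resident found in Bloomsbury",
    },
}

observation = "non-English resident found in Bloomsbury"

for hypothesis, potential in falsifiers.items():
    status = "falsified" if observation in potential else "unrefuted"
    print(f"{hypothesis} -> {status}")

# H1 and H3 are falsified; H2 survives. H1 has more potential falsifiers than
# either of its consequences, so testability is not simply handed down from
# premises to conclusions.
```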

So it is possible for a hypothesis to have a higher degree of testability than its consequences, meaning the consequences do not inherit their testable quality directly from the hypothesis. Nothing can be deduced in this way; there is no logical relationship. Nothing is bequeathed from higher-level theories to lower ones, and nothing is returned, and the traditional account of logic and justification can be put to bed, and left to die, with a scribbled note saying Do Not Resuscitate! In the next bed, in the same ward, critical rationalism dies too, just a few painful breaths later; from the same infectious disease.

Pancritical rationalism is different. For a start, it’s healthy! It opens no doors to monsters of any kind. It asks nothing from justificationism. Nothing from irrationalism. Requires no leaps of faith. And solves all of those logical inconsistencies that Bartley discovered, and which you – the reader – no doubt found so painstakingly boring over the previous paragraphs. Pancritical rationalism reconciles Popper’s critical rationalism with those lost hopes of a comprehensive rationalism. Those infinite regresses, those necessary stopping points, all disappear, and what is left is something that cures rationalism of its crisis of integrity.

Bartley’s new creation protects nothing from criticism, even rationalism and criticism itself. It is a philosophy which never has to cut off an argument, and which welcomes all questions, especially of itself. And importantly for Popperians, it is a philosophy that takes critical rationalism more seriously than Popper ever did. Pancritical rationalism accepts that “rationality lies in criticism”, and so our whole purpose – and the way we develop new knowledge – is to apply criticism as widely, and forcefully, as possible… including to the important role of criticism in the rational way of life; and including to whether the rational way of life is even possible.

A good Popperian would agree that it is reasonable to believe that something is true, and acceptable as a theory, only if it is “held open to criticism and survives severe testing.” So why should the same not apply to rationalism itself? As much as this might sound like the opposite, the people and the ideas that Bartley is targeting here are the irrationalists. As a younger man, Bartley had a firm foot in the door of religion – and for a while even considered training to become a protestant minister – and was very aware of people out there, beyond the halls of academic philosophy, who were seeing the leap of faith being made by rationalists, and saying if it is acceptable for you, then it must also be acceptable for me.

If rationality is logically limited, then every irrationalist and his dog has a rational pretext for being irrational. But “If my argument is sound”, writes Bartley, “irrationalists lose the most formidable weapon in their intellectual armory, their rational excuse for irrational commitment.” Anyone willing to call themselves a rationalist ought to be someone willing to acknowledge – and welcome – that it is something that he could be argued out of.

This step might seem small, or insignificant, but it was one that Popper refused to make. A friend to both men, Ian Jarvie, said of Bartley’s pancritical rationalism that it fundamentally changed the way in which Popper’s work should be seen and understood, and that it “transformed the problem-situation in philosophy”. The demarcation criterion needed changing, from one between science and non-science to one between rational and irrational; and importantly criticism was now off the chain, allowing for a larger space of competing theories, as well as cross-field criticism from the worlds of law and art and morality etc.

What Bartley had solved was a deep issue inside epistemology, something to which “Almost all other philosophical problems are directly related yet subordinate”: the problem of reunifying knowledge with rationality.

So is a critical rationalist an irrationalist? Bartley thought so, Popper didn’t. Bartley thought that we should be open minded about being open minded, Popper didn’t. Strangely, the one concession that Popper did make was to say that Bartley’s logic checked out, but that this amounted to nothing more than a rephrasing – albeit an improved rephrasing – of what Popper had already said; he hadn’t made a mistake, as Bartley said he did, but “had intended to advocate all along” for the logic behind pancritical rationalism; but still not pancritical rationalism itself. Here Bartley called Popper an authoritarian for not seeing the momentous change that he had formulated, and a fideist (“the epistemological theory which maintains that faith is independent of reason”) for holding onto his irrational commitments. And so, that “twelve years” of silence settled over the two men.

The final words here belong not to Popper, nor to Bartley, but to Joseph Agassi, who lived through the animosity as a constant friend and intermediary between the warring parties. He also has the retrospective lens of someone who walked the line between both philosophies, though admittedly remaining a critical rationalist, and never a pancritical rationalist.

An immediate problem for Bartley is a paradox. If pancritical rationalism insists upon its own questioning and fallibility – opening itself up to criticism – what happens in practice when we actually begin to play that game? The logic runs like this: If we accept that (A) everything is open to criticism, and (B) this means that A is also open to criticism, what happens if we instead criticise and refute B? And if B turns out to be false, that means A is also false, and pancritical rationalism is refuted by its allowing boundless refutations. On this, Agassi agrees with Bartley that B is simply a statement about A, not a position itself. And being a statement – in a different domain to A – it doesn’t have to be independently criticisable.

On a more worldly level, is it really the case that pancritical rationalists are open to rationalism itself being falsified? How would this play out? Rationalism moves forward through criticism – without which there would never be any change and never any new knowledge – and so what would happen if a pancritical rationalist were one day argued out of his commitment to criticism? He would have to either take his theory seriously and give up on all future criticism, or take that criticism of criticism only tentatively and go on looking for why he might again be mistaken (continuing to use criticism). Either way, the best Bartley seems to be able to say here is that pancritical rationalism is criticisable, but only tentatively so. Which reopens that irrationalist door, allowing any fideist to also say that you are welcome to criticise his commitment to God, but he will only ever take your criticism tentatively.

Bartley thought it was important that we remain open to the possibility regardless – that the whole rationalist project might be mistaken, and should be brought to an end, if only to maintain logical integrity. And here Agassi begins to dig in his heels, asking what is so terrible about the “charge that we are dogmatic rationalists. What is this dogma?” It’s a good question. If it is indeed a dogma, then it doesn’t seem such an unpleasant type of stain to have on one’s character. Being dogmatically attached to critical debate, to thinking that all types of criticism are constructive, and to inviting as much criticism as possible, itself seems like a paradox.

A challenge then. Agassi asks Bartley to run his theory into a social framework, and see how reasonable – or unreasonable – it seems with this rephrasing. The point of a theory of rationality can only be that it connects with the real world somewhere down the line. So instead of rationality questioning its own possibility, how about members of a democracy trying to overthrow that democracy: a political party running for election saying that once elected they will do away with all future voting, that this will be the last election ever. Would it be reasonable to call this party democratic? And if they won, would it be reasonable to call the changes they make democratic?

Popper takes rationality for granted – or, in this social rephrasing, takes democracy for granted – but good and effective criticism should always be a poke towards finding alternatives. We don’t dump an existing theory once some effective criticism comes its way, but only after something better is offered; when a viable alternative is found that passes the tests which the previous theory did not. “What this means for Popper is clear enough”, says Agassi, “both democracy and science are open to reform. What this means for Bartley above and beyond what it means for Popper I cannot see.”

Bartley thought there was a very important difference. What pancritical rationalism meant was more criticism, new types of criticism, new manners of it, and new research protocols. It also meant fewer openings for irrational belief systems, which might otherwise piggyback their legitimacy on the concessions of rationalism. If you want to criticise a particular thing in a particular way, we – if worthy of the name rationalists – should be welcoming and “willing to test it.”

So let’s start again, with Agassi mediating things. Instead of attaching ourselves to rationalism as some kind of faith, or basis for thought, let’s take it as a “working hypothesis.” Nothing more than an assumption that allows for “rational deliberation” and progress. Here Agassi thinks that “Popper and Bartley would have allowed it.” Readers might sense a tone of conventionalism to this, though even if not, the manoeuvre only pushes the dispute down the road, bringing the two sides no closer together. Popper and Bartley would simply move on to arguing about what specifically counts as a rational deliberation.

Agassi is undeterred. What is really lost in all of this by allowing rationalism to rest on an “initial basic axiom, namely, the claim that rationality is possible and desirable”? If this must be done, is Bartley correct that this really offers support for the fideists and the irrationalists? Are the arguments of these people really strengthened in some way by their being able to say “tu quoque” (you too)? It seems a stretch to claim that a single – standalone – axiom, one that is accepted only to facilitate a violent explosion of rationalism beyond itself, turns that rationalism into irrationalism.

Still the logic remains a problem. If you simply decide to choose a basic axiom, then that choice is not rationalism. It is a leap of faith. And for Popper, Popperians, and critical rationalists, the problem is more troubling. According to their own theory, something is rational based on its “openness to criticism”, and so what does this say about their choice of an initial axiom, if that choice is uncriticisable? Popper was aware of the fideist line he was walking here, when he talks about “the myth of the framework” or that “the most criticizable theory should be examined first”. On this, Agassi sees Popper as “unclear if not inconsistent.”

There is an old philosophical chestnut here, something that slips back into language and thought like a virus seeking a host. And it is here where Agassi thinks the discussion is going astray. It is true that classical rationalism and comprehensive rationalism collapsed, because of their comprehensive designs. Fideism is different though: it collapses specifically because it is not comprehensive. But as rationalists we must reject them both, and this “amounts to rejecting what they share: the idea that rationality is proof.” Critical rationalism and pancritical rationalism are not these things, specifically because of their openness to criticism. Above all, the Popper-Bartley dispute matters specifically because it is “painful” and unfinished. In his final thoughts, Agassi writes “I know Popper and Bartley agreed that they disagreed; I do not know on what.” And if someone like Agassi can’t figure this out, what luck is there for the rest of us?

 

*** The Popperian Podcast #23 – Paul Levinson – ‘W.W. Bartley and Pancritical Rationalism’ (libsyn.com)

The Open Society and Its Enemies, and Happiness

In conversation with Elyse Hargreaves

 

Before the Open Society there was tribalism: tribal settlements, tribal chiefs, tribal loyalty, and tribal violence. Run things back far enough and the world begins to take on the same unpleasant shape. The decades and centuries rolled by, and nothing ever changed, nothing ever improved, and everyone suffered… a lot. What defined this way of life were the ideas people held and the attitudes that ran through them. Any question about the social order, any doubt about the rhythms of nature or the rigidity of customs, had the same blunted answers, dressed up in slightly different ways.

These tribal societies were places of magic and irrationality. Every aspect of life was dominated by routine, by taboos, and by fear. It was also comfortable! At no point did any member of the tribe ever have to ask themselves things like how should I act in this situation? Proper behaviour was always determined in advance, and if everyone simply followed the path already set before them, they could avoid the ire of their community, and be sure in the safety of their group-given identity.

They were sacrificing themselves and their personal responsibility, but they had never experienced such things, and so didn’t know what they were missing. From the high walls of their tribe, this alternative appears only as an ever-widening wilderness of problems and doubt and insecurity and risk and fear and pain and disownment and alienation. All the promise and fulfilment of being able to make one’s own decisions had not yet been felt… by anyone! At least not anyone in the tribe. Their culture and their institutions were violently geared to snuff such things out, with dissenters either quickly corrected, quickly exiled, or quickly killed.

This is the Closed Society. Tribe might feel like a derogatory type of word to describe large and well-organised communities, but Karl Popper has something even more inflammatory in his next breath: “herd”! If herd doesn’t do it for you, then how about “organism”? An unchanging, semi-biological binding of people to one another, with such strong knots and such steely purpose, as to never allow any sort of upheaval, any social mobility, nor any desire that isn’t already approved and given. Nothing about the individual is ever permitted, and everything about the group is reinforced: “common efforts, common dangers, common joys and common distress.”

Then, in a single, impossibly bright explosion, things began to change in “one of the deepest revolutions through which mankind has passed”… and it started with the Greeks. After lifetime upon lifetime of disregard, the traditional class of Athenian landowners were suddenly forced to pay attention to the larger, poorer, population. This population had been growing fast, as the Athenian state stretched its borders in a benevolent type of colonialism. Daughter cities sprang from the earth or were transformed from their tribal colours, all from the understanding that they might mirror the extraordinary changes of their neighbours, and benefit from inter-city trade.

Seafaring and economic minds are a heavy part of the story here. All the cooperation and technology and wealth and knowledge that bounced around Greek cities with the growth of commerce, had an impact on the people it touched and the ideas they held. The economic revolution became a spiritual revolution, and all that magical or irrational thinking was under threat. New “danger spots” emerged, and although all of that movement and trade postponed things to a degree, tribalism and the Closed Society had found its executioner. Yet even in these earlier moments, “the strain of civilization was beginning to be felt”: an unease somewhere deep in the stomach, as the structures of the old fell away to a new world of increased freedom, increased choice, and the horrors of increased responsibility.

It is the price we have to pay for the knowledge we gain and the progress we make, but it is an unpleasant, dizzying price. The natural order of things comes to a sudden and hard end, and all that these new Open Societies have to offer is uncertainty, doubt, and insecurity. This strain of civilization is a personal and emotional crisis, and the destroyer of tradition. So no wonder it was resisted.

With the most to lose from these changes, the oligarchs of Athenian society went to war against democracy. But they had to do more than argue against people having a vote; they had to go to “the roots of the evil” they saw around them. The new trade routes, the new monetary policies, the new commercialism, the naval strategy, the harbours, and even the long walls that linked the harbours to the city became targets for the oligarchs: symbols of a “humanitarian” revolution.

When an early force of Spartans was discovered north of Athens, these oligarchs jumped at the chance to conspire with a friendly enemy “in the hope that they would put an end to the democracy.” These are the words of Thucydides, the great historian of the Peloponnesian war – and the man whose writing defined this clash of people and ideas – himself an ardent, unashamed, card-carrying “anti-democrat”.

Sparta lost this expeditionary skirmish, and people like Thucydides learned that beyond the traditional ruling classes and beyond the educated few, there was a majority of Athenians who identified with the Open Society, believed in its ideas, and were willing to fight and die for it. But the drum beats were in motion, the oligarchs now knew how to play the fifth column, and along with Sparta they began tapping any pocket of unrest they could find across the Athenian empire. As democratic Athens was completing the sea walls, producing the most extraordinary flourishing of culture and progress that had ever been seen, the oligarchs were plotting their second chance.

As Sparta regrouped, the Athenian oligarchy swung their ears around the empire, looking for bits and pieces of unrest, and building them to hostility and revolt. Athens taxed its people, and it taxed its daughter cities. But whereas the Romans would loot their provinces of treasure and heritage, transferring wealth to the “dominant city”, as in all things Athens was ahead of its time. They instead placed a small five percent levy on everything imported and exported by sea. It was a method that encouraged the continuation of trade and independence and growth, conscious not to push the limits, or betray the principles, of the Open Society.

But a tax is still a tax, and no one likes paying them. And being taxed by a distant neighbour can easily feel like imperialism, not partnership. Through this, the oligarchs slowly twisted Athens’ empire against itself. The alternative was Sparta, and their method of handling foreign affairs was much less appealing. It was the method of the Closed Society: keep taboos rigid and resist all change, suppress any impulses toward individualism or egalitarianism, don’t trade with your neighbours but dominate them with force, maintain your superiority to outsiders, and at every opportunity enslave and steal from other tribes; avoiding growing too large and, with it, unified.

These are the principles of all tyranny, not just Sparta. And the people of Greece knew little of it at the time. They only knew Athens, they knew of the taxes they didn’t like, and they had not yet appreciated the large – and uncertain – step that Athens had made towards a better world. Tribal societies have a deep emotional anchor on their side, something that holds minds and bodies in place: they know how to select their enemies, how to explain away their unpleasant values in terms of saving the people – and the state – from those enemies, and how to claw together a hardened patriotism around that fight; all in easy-to-appreciate – and hard-to-reject – slogans like: “back to the state of our forefathers” or “back to the old paternal state”.

That the Athenians who bought into this patriotic return were the same ones willing to commit open treason against their own state is an irony that doesn’t need explaining. It was a mistake, but one of weakness and fear, not of contempt and malice. Though they were still as wealthy as ever, they desperately missed what they once had: stability, order and tradition. It was a nihilism of the spirit and of the mind. A generation of men, young and old, who rather than adapting and becoming democratic leaders, sought to bring the whole institution down in a grand statement of sedition. The foremost group of these oligarchs came to be known as the Thirty Tyrants, and the single most prominent member, the leader of the cabal, was Plato’s uncle Critias.

On the other side are people who Popper likes to call the Great Generation. The conservatives and liberals alike who lived immediately before – and during – the Peloponnesian War; a difficult to pronounce assortment of intellectual folk heroes: Sophocles, Euripides, Aristophanes, Pericles, Herodotus, Protagoras, Democritus… The great early statesmen of Greek civilisation, the builders of the new “humanitarian” institutions, and people wise enough even then, in the premature stages of the Open Society, to say things like this: “The poverty of a democracy is better than the prosperity which allegedly goes with aristocracy or monarchy, just as liberty is better than slavery!”

Those were – of course – the words of Democritus. Here is Pericles speaking of Athenian identity: “Our city is thrown open to the world; we never expel a foreigner… we are free to live exactly as we please, and yet we are always ready to face any danger… We love beauty without indulging in fancies.” Those uplifting, and extraordinarily forward-thinking, tones were spoken as a funeral oration for dead Athenian soldiers. The war was again hot, and this time Sparta would win.

Five years later, with the battles still raging, and Athens still losing, a pamphlet was published and passed around the open hands of a then-doubting society. Deceptively titled The Constitution of Athens, and drawing false connections between democracy, imperialism, and societal decay, it was a knife into an already weakened enemy. All the problems that Athens now felt were reshaped from being caused by internal treason and conspiracy with the enemy, to being the fault of freedom; of letting the unruly masses have a say about things that only the superior classes – the oligarchs – understood. The author was Critias!

It was the Closed Society trying to reassert itself, pointing to the war and fear that they had welcomed to Athens’ door, and claiming that this was the price of abandoning the traditional – magical and irrational – order. All those grand statements about an open future and being “free to live exactly as we please” were being publicly tortured by an alternative vision. One where everything was certain, everything was given, everything had its natural place, and where even history was foreseen and controllable (historicism).

As Athens fell to its knees, there was an intellectual problem to deal with: Socrates. More than anyone else, he had been responsible for Athens’ brief moment of light and hope and progress; not as a “theorist of the Open Society”, nor as a leader of democracy, but as a critic… of everything. He famously wandered the dusty city streets, challenging the deepest held convictions of the people he saw. And so naturally democracy itself also became his target.

In a petty turn of blame and retribution, the Athenian revolution from Closed to Open showed its age and its immaturity. After the Spartan defeat, humiliated eyes twisted inward and settled angrily upon democracy’s loudest critic. The mistake cannot be overstated – it was a hangover from their once-tribal selves. They could not tell the difference between a democratic criticism and an authoritarian one. The difference between someone who wants to improve the errors he sees, and someone for whom those errors instead mean that the whole enterprise ought to be destroyed and replaced by a totalitarian substitute: “there is no need for a man who criticises democracy and democratic institutions to be their enemy”.

It also wasn’t helpful for his reputation to have spent so much of his time in the company of “anti-democrats”. Members of the Thirty Tyrants, people like Alcibiades, Charmides, and Critias, were often seen with Socrates, talking about politics and culture. When these people crossed over to the Spartan side during the war, Socrates was stained by a betrayal that he didn’t commit himself. Here again, Athens showed its not-yet-erased tribal colours, failing to see the difference between someone debating his enemies (trying to change their minds) and someone conspiring with like-minded criminals.

Worse! As a “teacher-politician” Socrates was charged with actually guiding these men – all renamed as his students – towards their betrayal. The people he had taught were the people who had gone on to bring down Athenian democracy, and colluded with a foreign, butcherous enemy. In its fragile condition, there was enough here for Athens – and its newly constructed laws – to accuse Socrates of masterminding their defeat, by educating the “most pernicious enemies of the state.”

A post-war amnesty for all political crimes made things hard for the prosecution though. And so the charges were limited in ways that might have seemed acceptable to both sides of the courtroom; intending only to set a standard of sorts, and protect the future Athenian state – as they mistakenly saw it – from a repeat insurrection. Their plan was to simply “prevent him from continuing his teaching” and the crime was written to match: “corrupting the youth.”

Socrates stood before his accusers, denied the charges, announced publicly that he had “no sympathy” with The Thirty Tyrants, nor with their actions, and pointed out that he had – in fact – risked his life by challenging them and then by denying any association with the winning army. Convicted, and offered a choice of punishment – exile or death – Socrates took the option that so many others would not, and had not, taken: to stand on principle; “Only if I stay can I put beyond doubt my loyalty to the state, with its democratic laws, and prove that I have never been its enemy. There can be no better proof of my loyalty than my willingness to die for it.”

Enter Plato, the “most gifted” of Socrates’ disciples, and in Popper’s view the “least faithful”. To put things in simpler, blunter terms: “He betrayed Socrates, just as his uncles had done.” Unlike his teacher, Plato was a totalitarian at heart, and no sooner was Socrates dead than Plato went to work implicating him posthumously in the fall of Athens.

Plato’s great political work, The Republic, was whipped up to revive the worst spiritual horrors of The Thirty Tyrants and of the Closed Society. A book designed to normalise, and provide intellectual weight to, totalitarianism – and Plato inexplicably chose Socrates to be its mouthpiece. He then had Socrates novelistically talking up new laws that would make “free thought”, political criticism, and “teaching new ideas to the young” capital crimes: effectively admitting to his own criminality, and approving of his own execution, under Plato’s pen.

And the debauching of Socrates’ legacy continued as he was made into an enemy of democracy (“had not Socrates been killed by the democracy? Had not democracy lost any right to claim him?”), and an aristocratic elitist, happy to denigrate the common man (“Had not Socrates himself encouraged his disciples to participate in politics? Did this not mean that he wanted the enlightened, the wise, to rule?”). As he perverted the thoughts of Athens’ greatest ever statesman in this way, Plato was consciously rebuilding the once-lost ideas of tribalism and the Closed Society.

Here, in the final analysis, Popper has a small fleck of sympathy with Plato. He almost certainly knew that he was betraying Socrates, and a heavy motivation for this was to “quiet his own bad conscience”; but he was also a creature of his time. He was scared. He felt that uncomfortable strain of civilization. And he wanted desperately to be free of it all. He wanted to be happy again, and to know his place in the world… he also wanted this same relief for his fellow Athenians. The mistake he made was a simple one: thinking that happiness belonged to a previous time, and wanting to return there.

Socrates, Plato, Critias, Sparta, and that first Athenian democracy, can all feel like an impossibly distant crumb of our history. But as you read it – in its twists and worries and insecurities and hopes – it rings much, much closer to home. Regardless of what we might like to think of ourselves and the progress we have made, we are still very much at the beginning of this revolution from Closed Society to Open!

The largest problem with this Greek history, as it tends to be told, is its end point. The fall of Athens to a rising Sparta was not the “final results”. At first it was only seventy of the surviving Athenians, then more, then more still. Under the leadership of Anytus and Thrasybulus, the Athenian armies were reformed in exile. Eight months later they returned, defeated Critias, defeated the Spartan garrison, and restored what had been lost: “the democrats fought on.”

 

*** The Popperian Podcast #22 – Elyse Hargreaves – ‘The Open Society and Its Enemies, and Happiness’ https://popperian-podcast.libsyn.com/the-popperian-podcast-22-elyse-hargreaves-the-open-society-and-its-enemies-and-happiness

The Paradox of Tolerance

In conversation with James Kierstead

 

Philosophy has a tragic way of exporting only its worst elements to popular culture. For Popperian philosophy this has been the lengthy endnote to The Open Society and Its Enemies, where Karl Popper digs greedily back into the questions of tolerance and intolerance. The modern references run along similar themes and with similar enemies – Nazis, Islamists, anti-capitalists, communists – substituted into the breach; and because it is popular, catchy, and pseudo-intellectual, most people never actually read the original text, nor do they venture beyond into the wider range of Popper’s work and personal letters.

As endnotes go, even long ones, it is less than “a fully thought-out theory.” But the question itself does matter – it mattered back then, it matters now, and it will continue to matter, as well as continue to challenge the most fundamental notions that we hold about ourselves. Popper may not have dwelt too long – in terms of ink and paper and academic energy – in these waters, but he had been watching them (from near and afar) all his life; struggling with the question: “Where exactly should we set the limits of toleration in speech and action”?

Most of what Popper wrote on tolerance comes to us through the pages of The Open Society, where he builds out his arguments for pluralism and accommodation. He uses tolerance as a tool of sorts, a “common denominator” from which we can build a world of extreme difference, and yet have it also remain peaceful and non-coercive. But quoting Voltaire, Popper also saw the harder epistemological edge to it: “What is tolerance? It is a necessary consequence of our humanity. We are all fallible, and prone to error. Let us then pardon each other’s follies” (Popper’s own translation).

Without tolerance, rationality suffers. We all make mistakes, all of the time; and we only ever see those mistakes – recognising them for what they are, and having the chance to correct them – through criticism… the criticism from other people with whom we also disagree. Sitting in this precarious landscape, the one un-retrievable error that we can make, the one thing that could bring the whole project of enlightenment values and scientific progress and moral improvement down upon itself, is to silence or limit criticism. If you accept that you are fallible, then you must accept tolerance as an “important moral consequence”.

Tolerance must also be an antidote then, a protection against unpleasant worldviews (enemies of The Open Society) that corrupt what we have, and could have. The religious zealots claiming divine sanction, the nationalists claiming ownership over a people's identity, the decoders of historical inevitability claiming to know the future, the tribalists of all stripes and creeds. What they have in common is commitment – commitment to their ideas rather than to truth, and a deep belief that they couldn’t possibly be wrong.

This is where the paradox begins to build. The place where Popper stops his audience and demands something more from them, something more than what his enemies are willing to offer: “We must be tolerant”, Popper wrote in a 1971 letter to Jacques Monod, “even towards what we regard as a basically dangerous lie.” It is an easy thing to dismiss certain ideas as “outdated” or obviously “wrong”, or even to dismiss them for not reciprocating the courtesy that they are given, but we should not forget that the people holding them are largely honest; they believe the things that they say and argue for. They deserve our tolerance.

To not do so, runs its own horrible risks that Popper warned against in a 1973 letter, this time to Paul Kurtz and Edwin H. Wilson: “we must not become dogmatic and churchlike ourselves.” To replace intolerance with a different kind of intolerance, is to not replace much of anything at all. Critical rationalists like himself should be wanting something more, something that breaks with all that prior tradition, rather than continuing with it under different colours.

The trouble hits with a question of functionality and practical decision-making. Imagine that you are the captain of a large ship and you are looking to hire a crew. Just like yourself, they are all fallible and so you don’t expect them all to be particularly good at their jobs. Some are going to be lazy, some will drink too much hard liquor, some might battle with sea sickness, loneliness, or depression. But what you don’t expect is for one of them to not want the ship to float – who, once you are out in the ocean, begins trying to scuttle the hull and drown everyone, along with himself. If you were to know his plan before sailing, what would it say about your captaincy if you allowed him on board anyway? If you discovered his intentions mid-trip but before he could do too much damage, would you have any obligation to keep him on board?

If we expand this out to a large and functioning democratic society, Popper’s quick reference point is always Germany in the 1930s (the Weimar Republic) and the rise of Hitler. Popper was of course writing his original passage during the Second World War, and as an exiled Jew he would have been forgiven for thinking about what had happened – and what was happening – in harder, less measured, less philosophical, more emotional tones than those of tolerance and its natural limits.

What matters is always violence. Once you really see it, really feel its coercive shadow, have it change the world around you as well as yourself, it becomes impossible to join in with the apologetics and false analogies:

I shall never forget how often I heard it asserted, especially in 1918 and 1919, that ‘capitalism’ claims more victims of its violence on every single day than the whole social revolution will ever claim. And I shall never forget that I actually believed this myth for a number of weeks before I was 17 years old, and before I had seen some of the victims of the social revolution. This was an experience which made me forever highly critical of all such claims, and of all excuses for using violence, from whatever side. I changed my mind somewhat when Goering, after the Nazis had come to power by a majority vote, declared that he would personally back any stormtrooper who was using violence against anybody even if he made a little mistake and got the wrong person. Then came the famous ‘Night of the Long Knives’ — which is what the Nazis called it in advance. This was the night when they used their long knives and their pistols and their rifles…After these events in Germany, I gave up my absolute commitment to non-violence: I realised that there was a limit to toleration.

All those grand statements about empowerment and human rights and self-determination and everyone expressing themselves and civic responsibility go out the window here for Popper. The reason why democracy is important is simply because it is a means by which we can remove bad leaders and bad policies without having to resort to violence. Neither the intrusive antisemitism he suffered, nor the unpleasant language and speech around him, made the young Popper believe that the line of tolerance had been crossed. The bloodshed and the violence did!

But actual violence seems a high bar, and in later notes Popper begins to flesh out the details: “we must not behave aggressively towards views and towards people who have other views than we have” he wrote in his letter to Kurtz and Wilson, “provided that they are not aggressive.” So violence includes the threat of violence – and anything short of that deserves our tolerance, regardless of how nasty, unpleasant, or irrational it appears.

Perhaps all this talk about violence and threats isn’t the most helpful – after all, in often-corrupted modern turns of language, many people will have very different ideas about what those terms mean and what they look like on the ground. A less fashionable, and so more plainly understood term like coercion might be a better fit – something that might still be hard to define when asked, but something that can be easily recognised when felt. Who deserves our tolerance? Anyone who will talk and argue for their theories; anyone who wants to convince you rather than suppress you; and anyone who can be countered by “rational argument” and kept in check by “public opinion”.

The danger now comes from two directions: 1. from the intolerant people and ideas that want to coerce us into supporting them, or into silence, and 2. from ourselves! More than just a core aspect of modern society, tolerance is what makes pluralism and our increasingly peaceful lives possible. And our moral institutions have done a very effective job of building the term into our daily lives and into our identities; to call someone intolerant in this day and age is a burdensome insult that cuts into their very personhood. So we are vulnerable targets of a kind – open to being exploited and shamed into confusion by intolerant invaders, accusing us of hypocrisy for not tolerating them, despite their violence, their aggression, their coercive ideas.

It is not a mistake we should be making. The challenging part of Popper’s brief work on tolerance is its implications which, if you have agreed with him to this point, will appear as unavoidable as they are troubling. The final sentence of Popper’s footnote reads like this: “we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.” If you consider intolerance to be as dangerous as Popper does, then this makes clear sense. It should also make you feel deeply uneasy!

In the late 1970s, Routledge editor Rosalind Hall wrote to Popper. She was seeking his permission to reprint “some” of his footnote on tolerance from The Open Society. By then beginning to see how his words were already being misused and catch-phrased, Popper wrote back saying yes she could, but only if she quoted the entire paragraph, and did not exclude the disclaimer (as so many other people had) that “I want it to be clear that this is proposed by me only incidentally and not as my main statement about tolerance.”

It is never going to be easy, or exact, trying to draw the appropriate limit of tolerance. But reciprocity is as good a starting place as any. It can help to cut through the philosophy and high-mindedness of it all, and instead offer a simple tool for analysing and labelling intolerance in the real world: is the person or the idea across from you playing by the same rules as you? Are they returning the same courtesy that they are being offered? Or are they the dangerous few that Popper had in mind when he spoke about the paradox of tolerance? If tolerance is not a mutual exercise, then one side has an unfair – and unnecessarily generous – advantage.

Later in his life when Popper was visiting the subcontinent, he heard a slightly humorous – and clear – example of his paradox at work. Whether the story was true or just legend, it does something that Popper’s best efforts failed to for so many years – cut through the misunderstanding in a sharp and sudden stroke. It also no doubt helped to endear him to his audience in New Delhi:

“I once read a touching story of a community which lived in the Indian jungle, and which disappeared because of its belief in the holiness of life, including that of tigers. Unfortunately the tigers did not reciprocate.”

 

*** The Popperian Podcast #21 – James Kierstead – ‘The Paradox of Tolerance’ (libsyn.com)

Memes: Rational, Irrational, Anti-Rational

In conversation with Susan Blackmore

 

We human beings are difficult little animals. In some obvious enough ways we are just like all the other species out there: fragile sacks of genes and cells and blood and lungs and brains and hearts; eating, fighting, hunting, procreating, aging, and dying. But in other – still obvious ways – we are of a different kind: talking, writing, reading, building, and theorising our way to space travel and nuclear power and quantum mechanics and cures for disease and robotics and pocket computers and the internet.

So what is it that makes us so special, and allows us to achieve such unbelievable – magic-like – progress from such a mundane biological starting point? To what do we owe our intelligence, our consciousness, and our comfortable lives? Here we have never been short of a theory or two, and never close to a satisfactory one… until now! And just like with its cousin-theory – Darwinian natural selection – this great new idea bounces between complexity and simplicity: easy enough to understand at a glance, hard to get your head around in its detail, and so profoundly unassuming that it often feels incomplete; leaving us dumbfounded as to how this small attribute could have such an enormous reach, considering all that has been done in its wake.

"What makes us different," writes Susan Blackmore, "is our ability to imitate". It is an ability that we share in primitive ways with other animals as well, but they don't take to it with the vigour and intuition that we do. Try waving at your cat, and see what happens. Keep on waving for hours or days or months, and the cat is never going to wave back. You can train the cat with rewards or affection or punishment to do certain things, but you cannot teach it by demonstrating the behaviour yourself.

Take an impossibly foreign human being though, someone with no understanding or experience of a waving hand, and after seeing you do it in his direction a couple of times, he will instinctively begin to copy you. He will then also, very quickly, make the connection between the gesture and its meaning. He will then wave at other people, they will wave back, and the imitation has a life of its own. “When you imitate someone else, something is passed on”. That something is a meme… and it explains who we are as a species.

The big brain. So many of the traditional answers to our previous question – what makes us different from other animals? – are quick references to the brains we have. So much so that the question is often asked differently: why do we have such enormous brains? And before you answer, make sure that you are not making one of two mistakes. 1. Slipping into a circular argument of the kind: we have big brains because they make us more intelligent, we became intelligent because it helps us to survive, and so we developed big brains to help us survive. Nothing is being answered here other than what we already know: our brains are big and our brains are useful. 2. Forgetting that biological evolution moves in small increments through random mutation, and that it is not indulgent. That something may be beneficial on paper does not mean that it will be selected for by nature, nor that each small, incremental change on the path there carries enough individual survival benefits to make an evolutionary difference.

Take primitive hunter-gatherer societies – our ancestors – people whose survival rested upon understanding the world around them: seasons, migration patterns, basic tools, alternative food sources, cooperation… Having a larger brain would certainly have helped them with this, but did it have to be as large as the ones we have? The range of things they needed to do in order to endure and thrive was limited, and with each increase in brain size comes a costly increase in survival-related problems.

Human babies are hopeless little creatures. That big brain comes with a big head, and that big head is a tremendous hassle. With the skull so close in size to the pelvic opening, childbirth is a dangerous ordeal, and some mothers – and babies – don't survive it; even then, human babies are born extremely premature, soft-skulled and unable to fend for themselves, a heavy burden upon the species; and the brain itself, unlike our muscles, never sleeps or rests, always burning energy and needing a disproportionate share of our calories to keep it going. The brain is expensive to build and expensive to maintain, so we have a problem to solve here: "a smaller brain would certainly save a lot of energy, and evolution does not waste energy for no reason."

So there must have been a selection pressure for our bulbous brains, something that outweighed the trouble they cause. It also happened fast (in evolutionary terms). Somewhere around 2.5 million years ago, with our hominid ancestors, our brains began to grow dramatically into what we have today. And so the theories often begin with the archaeological record, and with toolmaking: the pressure first comes from the environment and the animals within it, and then our need to outwit prey, to evade predators, to mould the landscape in small ways; technology was needed, and for this we needed bigger brains. But surely we didn’t need ones as big and costly as what evolved – a much smaller, less cumbersome brain would still be able to make tools and still be able to form hunting strategies, as we can see in limited ways today in other species.

If it wasn't toolmaking that made the difference, perhaps it was a matter of discovery, spatial awareness, and cognitive map-building. Foraging for food is no easy business, and environments are unpredictable, dangerous places to wander about aimlessly. Here a bigger-brained ancestor of ours would have had a competitive advantage, storing more locations of food in his mind, more dangerous places that ought to be avoided, and more landmarks for quicker navigation. The trouble is – again – that big brain of ours is overkill. Animals that build complex cognitive maps do show enlargement of the relevant brain structures, but not of the brain overall. Rats and squirrels make complex maps of their environments in order to survive, and yet they do so with fairly small brains.

How about consciousness then? Imagine an early ancestor of ours who has suddenly developed the ability to look into his own mind and contemplate his own existence. He is the first of the species with this ability, and it comes with an enormous benefit. By analysing his own mind and his own behaviour, he is able to build primitive theories about the other members of his tribe: who will be aggressive in what circumstances, how emotions of sadness or happiness or surprise or anger or confusion will affect people’s behaviour, and how to build firmer relationships with allies and sexual partners. All he has to do is think how he would respond in the same circumstances.

The problem with the consciousness answer though is fairly obvious. First, it is hard to pin down whether consciousness is an evolved – selected for – function, or an epiphenomenon of another function like language or intelligence or attention. Second, it is incredibly tenuous to say that consciousness is a uniquely human quality, and that it provided such a fierce advantage as to make large brains – with all their downsides – necessary. Finally, and most critically, we still don’t know what consciousness is, and “you cannot solve one mystery by invoking another.”

The Machiavellian Intelligence hypothesis is the most fun. Forget any notions of improved cooperation, compassion, understanding, or relationship-building, instead what our big brains evolved to do was to con, outwit, and betray our fellow members of the tribe. Social life has certain rules – often unspoken – that guide how we should act, in what circumstances, to whom, and what should be reciprocated. This is all nice enough, but anyone willing to break these rules has a clear advantage over the rest, who don’t. Especially if those rules are broken in cunning, devious ways that hide your indiscretions, diminish or destroy your enemies, and craftily propel you upward in terms of status, power, resources, and survival. And all this scheming requires a lot of brain power.

This takes the social function of the brain to a whole new level. Arms races are not foreign to biology, but if intra-species competition is the endgame of our big brains, then it seems to have dramatically overshot its landing. Being larger, stronger, or faster are all ways of outcompeting your neighbour for food, shelter, safety, or sexual partners, and none of them is nearly as biologically expensive as an enlarged brain. Why the pressure settled so heavily upon social skills still needs explaining, as does the growing complexity of social life. Besides, is it really acceptable to say that our ability to do mathematics, paint cathedrals, build computers, or understand the universe, comes down to our improved social skills? This seems like a jump. It certainly leaves more questions than it answers.

Instead maybe, just maybe, the “turning point in our evolutionary history was when we began to imitate each other”. But before we get to that, we must talk about language and its origins.

One of those incomplete thoughts about the evolution of our big brains concerns the fact that we talk, a lot. We just don't shut up! Think of any common-enough human activity, and then think what it is mostly about. We meet friends for lunch, have drinks after work, settle down to watch a football game, and it is all – largely – a pretext for having a conversation. Perhaps it is easier to think of it this way: think of a group of people – two, three, four, five, it doesn't matter – sitting together somewhere, anywhere. Are they talking, or are they happy in the "companionable silence"?

Language too comes at a high cost – "if you have ever been very ill you will know how exhausting it is to speak" – and so it takes some explaining; and for many socio-biologists it is connected to our big brains. Communication matters. It helps us to form social bonds, to pass on useful information, to cooperate more effectively, and it helps us to build culture. The more detailed and precise the communication, the more effective it will be in achieving these things; and all those extra layers of nuance and complexity require more brain power, and so the two evolve together. But because they do, the same hurdle catches both. If it is all about competitive advantage, then why has language – as with our brain size – got so out of control? If we just talked a little less, we would save energy, and by using less energy than our chatty neighbours, it would be us – and not them – who had the competitive advantage. Evolution didn't produce creatures who are merely capable of complex language; it produced "creatures that talk whenever they get the chance".

So much of Blackmore’s work here is an attack on this type of socio-biology, these incomplete theories that push the problem of our intellectual evolution someplace else, and then announce loudly that because it moved it is solved. The mistake is always of the same kind: trying to explain our extraordinary – and unique – development in terms of genetic advantage. It is an understandable mistake… but still wrong!

So back to that quote from Blackmore – "When you imitate someone else, something is passed on" – and the importance of imitation. Look around yourself and, just like all those people before Charles Darwin, you are likely not to notice the most extraordinary aspect of our reality; oblivious to the driver of all life, of all evolution, of all knowledge, of everything that matters about being the creatures we are. The trouble comes about largely because we have become so damn good at it. Despite how extremely rare imitation is (it draws the hard line between ourselves and the animals), all that copying and replication tends to pass us by unnoticed – so successful and so constant that it has become almost boring.

Genes. Evolution is helpfully thought of as competition; competition between replicators. It is less helpfully thought of as something that happens for the “good of the species”. The mistake is thinking of the organism as a whole, that evolution cares about the survival of the animal in question. It doesn’t! Biological evolution happens at the level of individual genes, and those genes have only one – deeply selfish – purpose (not the right word because genes don’t have intentions in the human sense): to replicate. To be passed on to the next generation.

It is tempting to look at the development of a new wing or a larger body or better camouflage or any given Gene X as working to improve its host's chances of survival. But genes don't have foresight. They don't have desires of any kind. They simply become what they are through mutation, and are either successfully passed on through sexual reproduction, or not. The ones that are not die out; the ones that are passed on live on, reproducing again and again, until they too die out through changes in the environment or competition from other genes. The crossover that trips us up, and has people using the language of intentions, desires, wills, hopes and purposes, is the connection between passengers and their hosts – "between 'replicators' and their 'vehicles'". It just so happens that if we die, our genes die with us. So we have that much in common: the human vehicle wanting to live on for a variety of reasons, and the individual genes wanting (again not the right word) the human vehicle to live on as well, or at least long enough to have sex, allowing the genes to replicate in new vehicles (offspring).

In this, our genes are more like indifferent and greedy parasites (getting as much as they can, as quickly as they can) than members of a team pulling in the same direction. The errors of language in the previous few paragraphs tell a story in themselves: just how hard it is to think about evolution, and how hard it is to talk about it accurately, despite it otherwise making intuitive sense. And so it can't be stressed enough, what it comes down to is replication, replication, replication. Or with a little more elegance, and in the words of evolutionary biologist Richard Dawkins: "all life evolves by the differential survival of replicating entities".
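It can help to see that last sentence at work in the smallest possible setting. What follows is a minimal sketch of my own – not anything from Dawkins or Blackmore, and with every number invented – of two replicator variants that differ only in how reliably they get themselves copied. Nothing in it wants anything, yet the better copier ends up everywhere:

import random

# A toy population of replicators. Each variant is defined only by how
# reliably an individual copy gets itself replicated each generation.
copy_chance = {"A": 0.50, "B": 0.55}   # per-attempt chance of producing a copy
population = {"A": 1000, "B": 1000}
CAPACITY = 2000                        # a finite world: copies compete for space
ATTEMPTS = 2                           # each copy gets two chances per generation

random.seed(1)
for generation in range(100):
    offspring = {}
    for variant, count in population.items():
        offspring[variant] = sum(
            random.random() < copy_chance[variant]
            for _ in range(count * ATTEMPTS)
        )
    # Scale back down to capacity: differential survival, nothing more.
    total = sum(offspring.values()) or 1
    population = {v: round(CAPACITY * n / total) for v, n in offspring.items()}

print(population)   # variant B, the marginally better copier, takes over entirely

Swap memes in for genes and the logic is unchanged: whatever gets itself copied slightly more reliably, in a world of finite hosts, is what you end up seeing.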

Memes. Look around yourself now, go outside and really look at your fellow human beings. See if you can break through that background noise of normality. See if you can notice the next step in replication, the non-biological kind. Look at the clothes people wear, the music they listen to, the cars they drive, the food they eat, the gestures they make, the catch-phrases and turns of language they use, the hairstyles they sport, the movies they watch, the books they read, the ideas they profess, the tools and the technology they use… Once you slow down enough, and spend the time to re-notice the things you take for granted, you will see these habits and preferences and desires and fashions and fears for what they are, and what they are doing. Jumping from host to host, from brain to brain, they have a life of their own, and a goal of their own: replication!

This is the world of memes, and it is indistinguishable from the world of human beings. Each and every meme, just like each and every gene, evolved individually and in groups (memeplexes or genomes), with different and connected histories. They are unique, they evolved, and they make us do things. They make us speak in certain tones with certain words, drink Coke or Pepsi, wear a green shirt rather than a blue one, and eat pizza rather than a hamburger. What they all have in common is you! “Each of them” writes Blackmore, “is using your behaviour to get itself copied.”

You heard that right! Your food choices, clothing choices, language and thoughts are using you, not the other way around (well, at least not in quite the same way). The next great technological invention is likely to spread around the world because it is useful and improves lives, and that makes it something worth getting a copy of. The next breakthrough in science might spread too, because it has truth on its side and makes the building of new technology possible, but it is likely to spread with less fecundity because it is harder to understand, and has fewer immediate uses for the average person. A catchy tune or song on the other hand – take Happy Birthday to You as an example – might ripple effortlessly around the world, across language barriers, copying itself again and again, to the point where just hearing the title, or thinking about a birthday party, brings it faithfully back to life in your head.

As you hum that tune and remember those lyrics, ask yourself the hard question: "where did that come from?" It is firmly locked away in your memory, just as it is locked away within the minds of millions of other people, and yet its beginnings, its history, its origin, don't really matter. What matters is how it came to you, why it stuck when so many other tunes didn't, and what it makes you do (sing it at birthday celebrations). What is the cause of all that extraordinary imitation? Something under the surface of the behaviour itself (remember there is a difference between replicators and vehicles) is lodging itself within the minds of its hosts (me and you) – "some kind of information, some kind of instruction" – and causing itself to replicate when it comes into contact with other hosts. This something is the meme!

Some of these memes are helpful to us, like new technology; some are entertaining, like songs; and some can be positively harmful, like pyramid schemes, financial frauds, false medical cures, unhealthy body images, social media addictions, dangerous ideologies or bigotries. Memes are indiscriminate and uncaring; like genes, they are selfish and are only interested (the wrong word once again) in spreading as widely as possible. It is a challenging idea – striking "at our deepest assumptions about who we are and why we are here" – but one that satisfies the Popperian criteria for a good theory: 1. it makes testable predictions, and survives those tests; 2. it solves problems and explains things better than its rival theories.

Some memes succeed, whereas others fail, for obvious enough reasons. We all have limited processing capacity in our brains, as well as limited storage capacity. And so, no matter how well adapted a new meme is to our psychology, or how well-geared it is to being imitated and selected, it is always going to struggle in such a competitive landscape. The best ones are the most evolved ones, the memes that arrive in our minds through countless variations and combinations of old memes; the errors and baggage slowly carved away, with gradual improvement upon gradual improvement adding up to make the meme a ruthlessly efficient copier.

But we human beings are fallible, we make mistakes, constantly, so all this combination and selection is a tricky business. Especially if everything hinges upon our passing on something we hear or see – a song or a story or a theory or a fashion trend – faithfully enough that it can then be passed on by others, and not diminished with each replication. So the most successful memes have something about them: depth. A joke is a great example of a meme with this quality. When a joke is told for a second time, and then a third time, and then a millionth, it is rarely ever told the same way; but the joke is unmistakably replicating, and so it must be replicating for a good reason.

That reason is that the joke is humorous, it makes people laugh, and that makes people happy, which then makes them want to share the joke. The exact form of the words doesn't matter; what matters is the underlying structure or pacing or punchline that makes it funny. Get your hands on a joke, change the words completely, even change the setting from a jungle to a café, and the joke might still work if you are clever enough with how you adapt it. But forget the punchline, or a core feature that makes the punchline work, and no one will laugh, and the meme will die. The most important thing about the meme is not the raw details, but the meaning behind those details.

The way a joke of any kind gets into your mind is the way in which everything else does. They might be individual memes, surviving alone and replicating on their own, or giant clusters of memes, all bound together, feeding off each other, surviving together, replicating together. These, for lack of a better word, are memeplexes, or as Dawkins calls them with some helpful imagery, "viruses of the mind". Think cults, religions and other dangerous ideologies, and you have a reasonable picture of what a memeplex looks like, though they don't necessarily have to be pernicious. Add enough memes together, and enough memeplexes, and what you have is a complete human mind.

But these memes have another evolved talent, something that makes their lives a little easier. They don't just find a place within a given mind; once there they begin renovating, actually working to "restructure a human brain", according to Daniel Dennett, "in order to make it a better habitat for memes." Take farming as an example: contrary to what we tend to think, it did not improve the lives of those early adopters, it did not reduce disease, and most counterintuitively it did not increase the quality of nutrition. When people like Colin Tudge look at the skeletons of the earliest farming communities from Egypt, all that looks back is "utter misery": starvation, illness, bones deformed by the excess workloads, and everyone dying long before their thirtieth birthday. So why did farming catch on?

The answer is fairly simple, and connects the two mysteries: those farmers' lives didn't improve for the exact same reason that farming became so popular! The more food they managed to produce, the more children they were able to have, and with more mouths to feed there was more work to do, more food to produce, more land to be bought or seized, and ever more children to feed; children who would grow up to be farmers, and who would run through the same cycle. Then, with less and less land available for their traditional way of life, hunter-gatherers would have had little choice but to drop their spears and take up ploughs. Step back and what does this look like: replication, replication, replication!

Farming took off in the way it did, and spread rapidly across the globe, not because it made people happier, healthier, or more comfortable, but because it was a good meme; well-adapted to its human hosts. With a “meme’s eye view”, the world looks a very different place. Instead of asking how new ideas or technologies benefit human beings, we should be asking “how they benefit themselves.” The subjective experiences of the people whom these memes are running through, and the emotions they feel, are a part of a much more complex process, triggering some things to be imitated, and others to not be. At this point, our memes are well and truly off the leash, living a “life of their own”, causing themselves to be evermore replicated, and manipulating our behaviour to get this done.

Sure, genes are a prerequisite for memes – the creation of brains that are capable of imitation was necessary before the first meme could ever be formed. But once that happened, once brains of that kind evolved, all the popular talk of biological advantage, and of evolutionary psychology, almost entirely misses the point. Memes, once born, are independent of their genetic origins; they are a second replicator, acting entirely in their own interests. Sometimes those interests coincide with biological and psychological health, and sometimes they can be positively harmful. So we had better begin to understand them in as much detail as possible.

To do this requires a return to the meme's eye view. Memes look at the world – and at us – in a very singular way: "opportunities for replication". Every time we speak, we produce memes. The trouble is that most of these die immediately, never finding a new host, and never even being spoken by us again. If one of those memes manages to get onto a radio broadcast, a television program, or into the pages of a book, it has dramatically increased its chances of replication, and so it has a competitive advantage. Our brains and our minds and our behaviours are nothing more than opportunities from the meme's eye view.

Think of what you are reading now, the words on this page. It started with a thought in the mind of Richard Dawkins. That thought caused Dawkins to write a brief aside – 15 pages – in his book The Selfish Gene, which was then read by Susan Blackmore, and caused her to flesh out the theory over 250 pages in her book The Meme Machine. The physicist David Deutsch read that book, and added some missing details to the theory in his own book The Beginning of Infinity. I read Deutsch's book, which caused me to go back and read Blackmore's, which caused me to interview her and publish our discussion, which caused me to write the words you are currently reading. The theory of memes is itself a meme, though only a mildly successful one.

It might be easier to think of this in terms of the types of brains and minds we have, and the changes that memes have made to them. Try for a moment a little self-experiment: try to stop thinking! Stick at it for more than a few seconds and some thought or another will pop into your mind. Push that thought away, try again, and you will likely only last another second or two before you are bombarded by thought after thought. The whole practice of meditation is built around being able to calm our ever thinking minds and give us a few more moments of peace.

All this thinking is extremely stressful, as we worry about the glance someone has given us, whether we turned off the lights before leaving the house, what we should eat for dinner, how we should dress for that business meeting tomorrow. Try as we may, emptying the mind is a nearly impossible achievement; and yet one that would be very beneficial to all of us from time to time. All that thinking and worrying drives unnecessary stress and anxiety and depression into our lives. What is obvious, if you pay enough attention to it – perhaps through meditation – is that we are not in control of what we think; thoughts just happen, and we cannot turn them off.

All that thinking also requires a lot of energy and calories, so what on earth is it all about? Why do our minds do this to us? The answer to this question – and so many others like it – goes back to the same starting point: you have to think in terms of brains which are capable of imitation, and “in terms of replicators trying to get copied.” A meme that isn’t paid any attention is doomed, slipping silently out of its hosts’ minds. Memes that capture and dominate our attention on the other hand, are much more likely to get acted upon and then passed-on to other minds who will do the same. So memes evolve to capture more and more of our attention over time and with that, the reason you can’t stop thinking begins to make sense: “millions of memes are competing for the space in ‘my’ brain.”

With far more thoughts (memes) competing than there is space for in our minds, Blackmore likes to think of it in terms of a vegetable garden. You can try to clear the soil, and plant the seeds you want, but before long the green tips of weeds will appear. Wait a little longer and there will be more. All the clearing and pruning and de-weeding of the mind (through practices like meditation) has an effect, but the process continues: weeds fighting for sunlight and water and nutrients, competing for space against other weeds and against the vegetables you actually want to grow. Memes are "tools for thinking", and so they thrive most successfully in hosts that think a lot.

These memes also have another sinister trick up their sleeves. Not content with the brains which evolution gave us (and so gave them), our memes target our genes. They change our behaviour, and so they also change the genetic grounding for why we have sex, when we do it, what we consider sexually desirable, and how we raise children. Once the prisoners of genes, when memes arrive we are sprung from that prison, and our genes take our place behind the bars. By changing certain behaviours of ours, and by working to make their home (our minds) more hospitable, our memes turn us toward their own light.

Memes want to spread, it is all they want! So they prefer to find themselves in a human-host that is genetically well-adapted to this purpose: be it people who are inclined to religiously follow trends, people who are naturally more charismatic and so capable of influencing others, or people with better focus – and more attention to detail – who are therefore better able to accurately copy and share memes. For this, they need certain things from their hosts (us): more proficient brains with more memory capacity and processing power, better sensory organs to perceive memes and then copy them faithfully, and certain personality traits conducive to replication and imitation, like the ones mentioned above. And they can get these things done in the way they get everything done: by changing our behaviour. In this case our sexual behaviour.

"The sale of sex in modern society is not about spreading genes" – how could it be, with all its anti-evolutionary (biological evolution that is) qualities? Rather, "sex has been taken over by the memes", and with it the rest of our biology.

So after a long detour around the world of memes, back to those big brains we first puzzled over, and back to a better answer. The high point in our evolution, when everything began to change for us, was when we started to imitate each other. Before this, biological evolution inched its way forward, and the type of rapid and unusual change that we see with the development of our outsized brains, was seemingly impossible. The necessary selection pressures just weren’t there, and our best socio-biological theories didn’t work as explanations. But with imitation, a second replicator (other than the genetic one) is let loose, changing the environment, changing our behaviour, and changing which genes are selected for; radically altering our evolutionary path.

The exact moment this happened is lost to history, but unlike all those socio-biological theories, the “selective (genetic) advantage of imitation is no mystery.” If your neighbour has developed some sort of good trick, something useful, or valuable, it is clearly beneficial to yourself to be able to copy him. Running things back again to our hunter-gatherer past, perhaps this neighbour has discovered a new way to find food, a new mechanism for building shelter, or a new skill for fighting. The people who saw this and ignored it, choosing instead to continue along seeking food or other improvements as if nothing had happened to the man next door, paid a price. The people who noticed these small jumps in progress, and decided to copy them, learned valuable skills, new knowledge, and were better-off for it.

But imitation is no easy thing. It just seems easy to us now because our memes have been running the show for so long, and our genes are now so fine-tuned to memetic purposes. There are three requirements, or skills: 1. deciding who or what to imitate; 2. decoding information from complex behaviours, technology, and theories, and transforming it into your own behaviours, technology, and theories; 3. accurately matching bodily actions. Very basic versions of all these skills can be found in primates today, and five million years ago our ancestors had the same latent abilities.

Two and a half million years ago, the first stone tools were made and we had our first obvious signs of imitation. Without rehashing all the mistaken ways that big brains have been thought to evolve, memetic theory has the Popperian benefit of being able to explain the phenomenon (big brains) as well as the empirical content of all these other theories. All those cooperative and bonding social skills, the cunning and the deception of Machiavellian intelligence, the navigation and pathfinding of cognitive map building, the leaps forward in survival that come from tool making, and all the spin-off benefits of consciousness, are explainable by a single development. A threshold that, when crossed by our ancestors, transformed so much, and took on such a life of its own, that it became hard to even recognise through the enormous dust cloud of change and success.

The first step is what Blackmore calls “selection for imitation”. And it’s the simplest. Somewhere in our evolutionary mess, a genetic variation for imitation happened. The people with this variation had an immediate advantage, copying the best primitive tool makers (to use that as an example). Building better spears, better baskets, better huts, they thrived and so the gene spread. The next step is where things become really interesting: “selection for imitating the imitators”. When everything around you is changing, and that change is speeding up due to imitation, it is not always so easy to know what to imitate. But a successful imitator is a much more obvious target. Instead of trying to select which spear works best, copy the spear of the most successful hunter; instead of trying to choose which shelter to imitate, copy the shelter of the healthiest family. By imitating the best imitators, the growth of memes finds a whole new gear.

The third step is where the question of genetic advantage begins to fade away, and where memes are gradually let off their leash: “selection for mating with the imitators”. Because imitation was an advantage to our ancestors who inherited the skill, they would also have been seen as genetically desirable; high-value sexual partners. They would have thrived when others did not, and so stood out from the crowd. By choosing a good imitator to mate with, you get close access to their imitation skills and all the benefits that come from them. Your children will also then benefit by inheriting these imitation skills in turn. Through generations of this selection pressure, crude and embryonic imitation becomes much more refined and effective.

The final step is a little predictable, but it is where that big brain of ours finds its explanation: "sexual selection for imitation". Think of the peacock and its ridiculous tail feathers. These feathers are used for attracting peahens, and nothing else. But the peahens are attracted to them. The bigger and brighter the tail, the more attractive the male peacock is to potential mates. So peacocks with ever larger and ever brighter feathers have more sexual opportunities, have more offspring, and those offspring (the male ones) will have similarly ridiculous tail feathers. The feathers are cumbersome, making their hosts easier targets for predators, but that one advantage of sexual selection is enough for the feathers to continue growing and to continue sparkling. This is called "runaway sexual selection", and it should sound familiar.

As mentioned, imitation is not an easy task. It requires a lot from our biology, specifically a lot of brain power. And also as mentioned, memes are great at exploiting sexual selection. If the selection pressure for the peahens was something like 'mate with the peacock with the grandest tail feathers', then the selection pressure for early human beings (and all human beings since) was probably something like "mate with the man with the most memes". And just as with those tail feathers, before long this one characteristic (imitation) begins to dominate all others in terms of genetic reproduction. Our brains grow to accommodate more memes and better replication, those memes and that imitation are then sexually selected for, our brains continue to grow in order to handle more memes and better selection, and we end up with huge, cumbersome brains, as a case of "runaway sexual selection".
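To see why that feedback loop deserves the word "runaway", here is a deliberately crude sketch of my own – a caricature of the runaway dynamic, not anything drawn from Blackmore, with every number and the correlation term invented for illustration. A costly display trait (read: a meme-hungry brain) and a preference for mating with its carriers drag each other upward, because the children of showy parents and choosy parents tend to carry both tendencies:

# A toy of runaway sexual selection: a costly trait and a preference for it,
# pulled upward together. All values are illustrative only.
trait, preference = 1.0, 1.0
COST = 0.02          # survival penalty per unit of trait (big brains are expensive)
ATTRACTION = 0.05    # mating advantage per unit of trait, scaled by the preference
CORRELATION = 0.5    # how strongly the preference hitch-hikes on the trait's gains

for generation in range(41):
    if generation % 10 == 0:
        print(f"generation {generation:2d}:  trait {trait:7.2f}  preference {preference:6.2f}")
    mating_gain = ATTRACTION * preference * trait
    survival_loss = COST * trait
    delta = mating_gain - survival_loss
    trait += delta
    preference += CORRELATION * max(delta, 0.0)

The trait keeps climbing long after its survival cost has become absurd, which is the point: once trait and preference are coupled, 'useful' is no longer the thing selection is tracking.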

Back to language. Again, without rehashing the whole space of socio-biological theories about how language evolved and its impact on our intelligence and species-wide success, memetics has a better answer (without the inconsistencies) and a different approach (one that encompasses and explains all those other theories). The difference between a silent person and a talkative one, is that the talkative one is likely to be a much better spreader of memes. So from a basic starting point, language is a tool for our memes. The only thing that memes want to do (again the wrong word) is replicate, and when we start to think about language in this way, aspects of it begin to make more sense.

Imagine you have heard some juicy gossip. The choice to tell someone or not often doesn't feel like a choice at all. Your biology seems to be firing against you, demanding that you repeat the words you just heard. If not gossip, then some current event you saw in the news. Some new movie you saw and liked. Or, of course, the clearest example: that hilarious joke you just listened to. It is easy to dream up alternative theories about the origins of language, but much harder to find a theory that accounts for the fact that we are often overwhelmed by the compulsion to talk. Stick someone alone in solitary confinement, and they will soon begin having conversations with themselves. Tell someone else that you have booked them a week-long stay at a silent retreat, and watch them sink with dread before your eyes.

People who hold the meme for talking will spread more memes; it is much easier to tell someone something than to act it out silently. And people who hold the meme for talking compulsively will have more opportunities, and wider audiences, to continue spreading their memes to. And in this way, as with our big brains and imitation, the meme pool begins to change the gene pool through sexual selection. So the question 'why do we talk so much?' has a nice, clean, encompassing, and deeply explanatory answer: "We are driven to talk by our memes."

The final ribbon on this theory of language sits in the details of its evolution. Look around and listen to the words we use, the phrases and sentences and paragraphs and conversations and debates and expressions, and what should hit you first is how innate it all is. There are differences, gaps, and outliers, but on the whole almost everyone you see using language, uses it as grammatically well as anyone else. And yet none of us learn language by being taught its structure, being corrected for our mistaken usage, nor even (and this is important) by “slavishly copying what they hear.” The grammar we study in school is but a tiny part of the natural structure (grammar) of language.

There was once a famous chimpanzee named Washoe, and an even more famous gorilla named Koko. They were famous because they could speak – well, they could use basic sign language. Having been taught a few key words, Washoe and Koko would build short – "three word" – sentences, requesting certain things, and even expressing themselves. Then, after the "excitement and wild claims" had faded slightly, psychologists, linguists and native deaf signers began to raise doubts. The primates weren't actually signing anything close to language. There was no structure, no grammar, no order to things, no understanding of what they were doing. Washoe and Koko had simply learnt a few symbols (and had to be trained and coerced to do so), and were using those symbols to request things.

Young children on the other hand do something extraordinary. Without too much effort they seem to absorb the language they hear, and the rules for its use. It is a largely implicit process. They often don't even realise that they are learning, or improving upon what they have already learnt, and yet without the need for reward or punishment they pick it up and use it, in all its complexity and depth and structure. "The human capacity for language is unique."

How this unique ability evolved is a tricky enough question, if for no other reason than that languages don't leave behind happy accidents in the fossil record. Archaeologists can't go digging around in the mud for clues in the same way as they can for tools or bones. Extinct languages are lost, forever! There are clues – like the discoveries of art, burial rites, tool making, and trade – but they are distant and weak. The idea is that for such things to happen, language would have had to be on the scene. This is really just an argument that language would make such things so much easier, and so we are guessing that it might have been present. Not a convincing theory! Especially since language and thinking are so deeply wrapped together that it is almost impossible to speculate about what might be possible without language.

Those big brains likely had something to do with it, but this misses the biological complexity of speech. A delicate and accurate control of breathing is needed, requiring the development of specific muscles in – and around – the chest and diaphragm. And the interplay between them is vital, overriding the mechanism of one in favour of the other, at just the right moments and in appropriate ways; allowing us to talk and breathe and function. We also need a wide variety of sounds, sounds that are distinct enough to convey the clear meaning of words. For this, our larynx is considerably lower than it is in other primates. But muscles and larynxes don’t fossilise either. Digging through what we know of our deep ancestors may never take us to the origins of language, but an easier answer might come our way if instead we simply “knew what language was for.”

A good replicator needs three things: fidelity, fecundity, longevity. Let's start with the second. In a world where genes have evolved creatures (ourselves) who are capable of passing on memes, how wide and how far those memes spread is an obvious challenge. For a meme to replicate it needs other hosts (people) to copy itself into, who can then continue spreading it. The need is always for more hosts. And language becomes an extraordinary tool in this, allowing you to pass on the meme to large crowds all at once, even if none of them are looking at you. Instead of relying on signs and gestures, speech allows memes to continue replicating face to face, face to faces, in the dark, around corners, or over reasonable distances.

Fidelity. How does language help to improve the accuracy of what is being copied? This is fairly straightforward. Think back to Washoe and Koko: imagine they are together in a room and one of them has a primitive sort of meme running through its mind. The work that meme has to do in order to be replicated in the other involves some heavy lifting. Signs can be ambiguous, gestures need to be deciphered, and behaviour is a mess of movement and sound: finding the one thing to copy (the meme) through the background corruption and superfluous activity is no easy process. Now add language, and everything becomes clearer. The meme can be communicated with much more accuracy, and in the event that the wrong thing is still copied, it is as easily corrected as saying don't copy that! Copy this!

So what about longevity? It would appear at a glance that the problem of life-extension for memes is a problem of memory capacity. Someone communicates a meme to you, it is then stored in your brain until you can communicate it to someone else. If the meme is hard to remember, it might be partially forgotten, or lost entirely. Here language comes to the rescue. If you hear a series of random numbers or words or sounds and are asked to repeat them back a few minutes later, you will find it very hard to do so. If you are read a simple sentence on the other hand, remembering it will be a much easier task. Language adds structure and meaning to the sounds we hear, and this makes it considerably more memorable.

Besides, language doesn't need to be repeated as an exact replica to convey the same meaning. If we are hunter-gatherers and I say something useful to you (a meme) like don't go up that mountain because there are lots of dangerous bears, the message you pass on to someone else might be there are hungry bears on that hill, stay away, or scary animals live on those slopes, avoid them. There would be countless ways to express the same meme, and for this we have language to thank. It doesn't take much imagination: think of a group of people who tend to copy each other. Now add language. Are they better or worse at copying? The evolution of memes explains the evolution of language.
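A back-of-the-envelope way to see why the three qualities matter together: treat fidelity and fecundity as the intact copies a meme makes per host per round, and longevity as the number of rounds it survives in memory and keeps being passed on. The sketch below is mine, with every number invented, comparing a gesture-borne meme with a speech-borne one:

# Invented numbers: fidelity is the chance a copy survives intact, fecundity is
# how many listeners a host passes the meme to per round, and longevity is how
# many rounds the meme stays in memory and keeps being passed on.
def expected_reach(fidelity, fecundity, longevity):
    reach = 1.0                        # one initial host
    for _ in range(longevity):
        reach *= fidelity * fecundity  # intact copies made per host, per round
    return reach

gesture = expected_reach(fidelity=0.6, fecundity=1.5, longevity=3)
speech = expected_reach(fidelity=0.9, fecundity=4.0, longevity=6)
print(f"gesture-borne meme reaches roughly {gesture:.1f} hosts")   # below one copy per round: it fizzles
print(f"speech-borne meme reaches roughly {speech:.0f} hosts")     # thousands: it compounds

Below one intact copy per host per round, a meme dies out no matter how long it lingers; above one, it compounds. Language pushes all three numbers in the meme's favour at once.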

So with all these memes running through us, and with all these changes that memes have made to our genetic code and our behaviour, what are we? We are all, down to the man or woman or child, gigantic memeplexes that bundle together in such a way that it all feels complete and singular. It makes us feel like a self! Blackmore calls this the selfplex – that constant, nagging feeling that 'I' am somewhere behind the rest of the human show. Perhaps it is better said in the title of her book: we are meme machines, and so if we ever want to understand who we are, to be happier, healthier, smarter, more productive, or more relaxed (insert whatever progress means to you), then we had better begin to understand our memes.

If you feel like you understand memes now, but are still oddly confused, that makes sense. Those early listeners to Darwin's theory of evolution also understood the plain meaning of the words he spoke, but were confused about the weight and inferences they carried; as was he. And make no mistake, Blackmore's theory, with a few tweaks here and there, has the same astonishing explanatory value as Darwin's, and the same world-shifting implications.

There are two ways to look at the path forward. We can do as Dawkins hoped, and begin to fight against our memes: "we alone on earth, can rebel against the tyranny of the selfish replicators". Or we can follow Blackmore's suggestion and discover that we are "truly free – not because we can rebel against the tyranny of the selfish replicators but because we know that there is no one to rebel". Either way, it is worth seeking them out, looking into yourself, searching for the things you think and do compulsively, the things in your life that feel like they are stuck on repeat, the things that seem to have more control over you than you do over them. Not all our memes are good or valuable or worth having, many are downright harmful, and they can, with some effort, be de-weeded from our memeplexes.

This is about three times as long as any article I wanted to write in this series. And I am tempted to say that this is a matter of how much this book, and how much this theory, meant to me. And I admit that I am captivated. And I really do think that Blackmore is onto something huge here. And I am sure that if we only understand our memes better, we can understand ourselves better and improve our lives. But if I am going to accept the theory, then I need to do a better job of thinking in terms of the theory. It would be more accurate to say that I have been infected by the meme of memes. Now let’s see if I am a good carrier or a good host or a good vehicle. “The driving force behind everything that happens is replicator power”, so the judgement of my success or failure will come down to whether or not I have managed to – in some small way – also infect you with this meme of memes.

 

*** The Popperian Podcast #20 – Susan Blackmore – ‘Memes - Rational, Irrational, Anti-Rational’ (libsyn.com)

Rules of the Game

In conversation with Joseph Agassi

 

Have you ever thought of yourself as a genius? Then chances are you thought you were a young genius at that. But why? The question catches in the unpleasant grooves between scholarship, success and glamour. It is an "attractive" question though, writes Joseph Agassi, and "I deem only attractive questions worthwhile"; but this doesn't save it from difficulty and neglect. Just as with the big questions in science, attractive questions in philosophy struggle under their own weight: vague, logically ambiguous, and reliant upon too much background knowledge which is not yet available. But don't let that stop us, boldness matters, and Agassi is bold. So we'll ask and answer it anyway: "Are all geniuses infant prodigies?"

History is a problem here, because not only do we begin to lose our way with the shortened question – what is a genius? – but when it comes to the people whom we most commonly think fit the category – "Newton, Einstein, Masaccio, Leonardo, Keats, and Schubert" – we know very little about their actual childhoods; much less still about how the people around them judged their youthful intelligence. But we can make a harder distinction about when it all ends: if, by your late teens, no one has commented – loudly and publicly – on your prodigious talent, then you have lost the right to the title forever. So let's be Popperians about this, and change the question once more to help us out. Negatively phrased inquiries tend to offer clearer answers than positive ones, so: "Can genius show up in individuals well past early adult life?"

Ask around performing artists, scholars, media professionals, scientists, entrepreneurs, company executives, politicians, mathematicians, and musicians – as Agassi has done over his life – and you will find “one indisputable fact”… the very question “troubles them greatly”. But not in the way you might think. After all, there is a rich vein of evidence supporting the rise and late discovery of mature geniuses. Here is Agassi’s list: “Moses, Muhammad, and Sigmund Freud; Vincent Van Gogh, Paul Gauguin, and Henri Rousseau; Johann Heinrich Pestalozzi and Homer Lane; Ben Franklin, Michael Faraday, and Max Planck; Georg Cantor, Bertrand Russell, Kurt Gödel, and Abraham Robinson.”

By almost any metric these people are geniuses. And yet none of their school-aged peers or teachers or friends or family had anything exceptional to say about them until much, much later in life. This certainly does add some weight to our revised hypothesis, but it also opens up a challenge of a kind. Is this late blooming less a matter of emerging genius, and more that of poor judges and missed opportunities? Perhaps Van Gogh The Late-Genius was also Van Gogh The Child-Prodigy, just without the access to paint, to canvases, to encouragement, to guidance, to motivation, or to knowledgeable-enough eyes. And yet something about this doesn't sit well with our worldly expectations. Take Einstein for example, and now imagine him as a young student scribbling away in his technical college. To make this work, you now also have to imagine that none of his teachers, at any point in his education, noticed even the slightest spark of something special about him.

Part of the problem here is clearly a matter of recognition – the poor and limited ways we are tuned in to the abilities of the people around us. If the people who are expected to first notice, then announce, then cultivate, and then anoint us with the title of genius are bad at their jobs, then what else is there… other than a childhood of misunderstanding and neglect. But why should this bother us so much? The neglect of one's talent can often be the freedom it needs to grow, unnoticed and so unbound by social expectations. Why does the question cause so much angst and so many late, worry-filled nights for the young adults that Agassi talks to? People who are: "highly concerned with this matter and in an obviously painful manner."

Genius casts a dark shadow. The youth of today – as in every age – are both "highly ambitious and highly frustrated". They want to achieve what the people before them achieved: the wealth, the comfort, the status, the happiness, the meaning, the purpose, the career, the prospects, the accomplishments, the recognition… the genius. Listen closely enough, and you can hear under their breath the light whisper of neurotic desire: "I will not be satisfied with my output, I will not be satisfied with my life, unless I both achieve ingenious results and am recognized for them."

The rest of us are stuck living lives of mild chloroform: intolerable, unacceptable, small. All except for one unpleasant little group of adults, people whose role in this issue is disproportionate to who they are, but not to who they think they are. They are the one group in all of this that Agassi has open contempt for: the people who have had their genius-like moments, have exhausted them, and yet can't summon the courage to get out of the way for fear of being overtaken:

Those senior members of our cultural and intellectual elite. (I am rather poor at expressing my pity for them.) For, as I watch the ambitious young professionals press themselves hard toward the precipice, I view their older colleagues, their senior advisers, as those who have already fallen off the cliff, who do not dare move a limb for fear of discovering that their bones are broken or even that their bodies are paralysed. Fear of paralysis, it is well known, is quite paralysing.

These people are the gatekeepers to this world, and to a talent that they largely never held; with the few who did now bearing the scars of being corrupted by the experience. For Agassi, they are "phony" in their attitude and intellect. Creatures who are afraid of their own honest advice, career counsellors who abuse their positions in order to scare the young away from following in their footsteps; the fewer competitors the better. But even then they have something in common with the next flowering generation of prodigies. An opinion about themselves: if you are not in the Genius Club – and recognised as such – by your late teens or early twenties, then you will never be!

And this is what the whole confusion might be about. The vague answer to our vague question. The "myth of the young genius", as Agassi likes to call it, is not really a question about genius at all, not about intelligence or success or ability or work ethic or knowledge or talent, but about the mud and grime of human life. It is a plea for the ambitious amongst us to just stop, to drop what we are doing and accept the ordinary undertow of existence: "frustration and futility".

The Myth is a cautionary bedtime story, a folktale, a taboo, that reads like this: before you dare to plan for great achievement, before you start the gears of energy and sweat, just remember you are already, most likely, almost certainly, too late. That PhD you are writing, that novel or that poem, that note you are trying to catch on your violin, cello or flute, that pose, that manoeuvre, that brushstroke, that idea, that revision, that criticism, that theory, that scene, that very inkling of a thought, is doomed to failure because you are not already a raging success… a recognised genius.

So the myth of the young genius might be better called the myth of the magnanimous senior professional. The myth of an older generation that is happy to see the rise of the next – of mature professionals leaving their self-pity and career aspirations at the door of truth and progress. Ordinarily these people would not matter. Gatekeeping can only go so far, hold back the young barbarians for so long, until success begins to speak for itself and the whole game comes to an end. But all this talk of genius and its recognition matters now, because those senior professionals have managed to infect their junior colleagues with this nonsense, pre-emptively talking them out of aspiring to great deeds, with a fear of being too late to the game.

Yet the question of young geniuses troubles the old as much as it does the young. And for this they deserve some sympathy alongside our disdain. They too were raised on The Myth, and had the same hard – and sudden – judgment pushed onto their lives: early prodigy or a long life of mediocrity. This is what gatekeeping and mythmaking are for: they perpetuate what has come before and, worse, they do our thinking for us. Those senior professionals are likely as unaware of the harm they are doing now as they once were of the harm being done to them. But why does the question trouble them then? To have once been labelled a young genius – as many of these people were – is to have an unpleasant thought nag at you through your aging life: is that the best I'll ever be?

Those who have never been lucky enough to feel the embrace of being a recognised genius have another unconscious reason for keeping The Myth alive. Despite all that they are and have achieved, they have endured the stigma of not being in that select club, and so the frustrations of their lives tend to feel like the unavoidable pitfalls of destiny. They begin to look upon their problems in a non-Popperian light: they resent them rather than loving them. Dug for so long into self-loathing and frustration, the thought that it was all a grand mistake – all avoidable – can feel less like relief than a compound error. One more mistake in a life of mistakes. It is just easier – and less psychologically confronting – to accept what has happened and to wish the same failure upon the next generation; their soon-to-be failure making yours more tolerable.

So perhaps it is a good thing that these geriatric gatekeepers are so bad at their jobs. You might expect people who push The Myth to at least be relatively good at recognising talent when they see it, to ensure that no actual young geniuses are left out in the cold, with their bright futures snuffed-out by the disappointment of rejection. Or, failing that – to run the Popperian line a little further – you might expect, or hope, that they look upon their errors as falsifications: indications that something might be wrong with the underlying theory. Yet the phenomenon of the unrecognised genius is so common that it has slipped into cliché, an irony-rich trope which cradles one hard fact to its chest: if genius is a childhood development then, at the very least, we are all very poor at identifying it in those early stages.

Here the pseudoscientific mind is earning its dues, blinded to the self-fulfilling nature of The Myth. The error runs like this: the late development of certain geniuses does not mean that they were not extraordinary youths, but rather that they were! Sadly there were not enough adult geniuses around them at the time to notice (it takes a genius to recognise a genius). And for the infant prodigies that go on to fulfill all that expectation and promise? Well, the fact they became geniuses proves that they must have always been, even if the label is only applied retrospectively. Dig hard enough into the photo albums and family stories of adult geniuses, and you will always find some small spark of talent or precociousness; just enough to satisfy uncritical minds and keep The Myth alive.

Ask these people what genius looks like, and they say things like: I know it when I see it. Ask instead what genius does not look like, and you get no answer at all.

The problem of genius is the problem of all knowledge, of life… of us! There can never be a theory of genius, any more than there can be a theory of human beings. We can always talk about who has been seen to be a genius in the past, and even try our hand at saying what combination of talent and success should constitute the title today, but tomorrow this will always be wrong! We human beings are creative, it’s what makes us what we are, as well as what makes us completely unpredictable. And one of the few things which is uncontroversially true about geniuses is that they are highly creative, and so highly unpredictable.

Any theory about what a human being is can only ever be based on today’s best knowledge. What we all do though – for every problem or hassle or difficulty or limitation or failure – is create new knowledge to improve things about the world (and about ourselves). Our ancestors, though genetically all but identical to us, were living dramatically different – unrecognisable – lives for one reason only: their knowledge was poles apart from ours. Dream up any theory of genius that you like, any category into which the next one ought to fit, and that human problem hits back at you.

The next genius will disrupt what we currently know, including what we currently know about geniuses themselves. If the next great talent in some field follows the current trends of the field, we may appreciate their ability, their dedication, their work, but – as they simply repeat someone else’s breakthrough or follow someone else’s formula – we can never call them geniuses. The only real way we have of judging these prodigies, these masterminds, these virtuosos of our time, is by the single criterion of extreme creativity. Something not just different or difficult to understand, but something as otherworldly and strange and incomprehensible (at first) as magic. The prerequisite of all things genius is that it not be immediately – nor easily – appreciated.

This is why people struggle so much when it comes to recognising the geniuses in their midst. It is also why our education systems struggle so painfully when it comes to teaching creativity, or cultivating it, or even tolerating its existence and expression. So much of what we talk about when it comes to these questions, according to Agassi, is a surreptitious way of steering people away from “productive careers” and into “unproductive” ones. And from the beginning there is a practical type of apology for this: “Infant prodigies will not be detected in a social milieu which has no use for their talents”. But the problem of education runs deeper…

Any degree of talent or ability is a matter of knowledge, and so it is also – to some degree – socially determined. Here Agassi has an observation, a personal anecdote of sorts. Asking around his colleagues and friends to see if “they could remember the teachers who made a difference to them”, more often than not Agassi received the same answer: “It often turned out that it was a single teacher who had showed critical appreciation and who, quite by accident, had even helped his or her more active charges to decide the direction of their mental development.” The problem became a matter of rarity: “from the age of 6 to 26, there were only two or three seminal people who really affected them”.

Infant prodigies face the same career decisions as the rest of us, the same hassles, the same anxiety, the same limitations of chance and circumstance. Einstein dreamed of being a mathematician rather than a physicist, but based on the state of the two fields in continental Europe during his day, if he wanted to work on something grand or comprehensive (which he did) it would have to be physics. The French chemist responsible for discoveries in pasteurization, microbial fermentation, and vaccination, Louis Pasteur, always regretted his career choice of chemistry over biology. And if Max Planck had had his way and pushed back upon the guiding arms of social pressure, he would have been a musician rather than the Nobel Prize-winning theoretical physicist that he was.

To have your career charted-out for you in some way – large or small – is inevitable. Even if that charting is nothing more than the cold, hard press of job insecurity. So why does the world of standardised education make such a mess of the talent, the creativity, the soon-to-be-geniuses that we hand to it? It’s a question that comes with another myth, a romantic one about the need for geniuses to walk their own paths, find their talent on their own, away from comfort and corrupting voices, “to wander in the desert and agonize”.

It is true that Planck was “embittered” about his genius, a prisoner of his own career. But the Einsteins of our world cut against this theory: peaceful, happy, content with themselves and with their second-choice jobs. Then Agassi talks about the violinist Yehudi Menuhin, and the more stereotypical image we tend to hold of the extremely talented amongst us – anguished, depressive, and unhappy because they pursued what they wanted, because they lived the career they wanted, and because they succeeded. We instinctively think that genius causes hardship, rather than hardship causing genius.

The pain of being an infant prodigy is obvious enough: they stand out. And though the burden of their talent might be tormenting enough, it is likely nothing compared to the problem of being noticed for who they are: the one amongst the many. Child psychologists fill textbooks with the longing to feel normal that they see in their patients; and so perhaps what we owe those infant prodigies are cold eyes. Instead of recognising their talents at all, we might help their wellbeing – as well as the development of those talents – by simply downplaying the extraordinary things they do.

The development of young geniuses is nobody’s business but theirs. Any instinct to try to help these people along should be quickly shaken to silence by the obvious truth that we don’t know how to. We might get it right and help in some way, but we might also get it wrong and cause damage; more importantly, we wouldn’t know where to start, nor how to judge our success or failure. So choosing to – at a minimum – take away any unnecessary pressure is likely to be a good thing. As Agassi comments in wistful tones, “we know that quite a few very brilliant individuals suffered from pressure so much that as adults they were resolved not to use the special skills and talents they had developed under that pressure.”

Anyone willing to say that genius-level talent needs our firm hand on its shoulder, guiding and prodding it towards its potential, needs to also admit in the next breath that they have no evidence for that. And anyone who feels it is true regardless, ought to be very cautious of how close that attitude is to real world tyranny. The myth of the young genius is certainly a myth, but so are all our motherly worries about “warm feelings” and appropriate “encouragement”. The acquisition of skills and knowledge and talent is its own reward, valuable in its own right, and intuitively desirable. So long as we are not actively discouraging these things – through clumsy schooling, tyrannical parenting, or jealous gatekeeping – then genius, whatever it turns out to be, will take care of itself.

 

*** The Popperian Podcast #19 – Joseph Agassi – ‘Rules of the Game’ (libsyn.com)

 

Karl Popper’s Hopeful Monsters

In conversation with Joseph Agassi

 

Talking with Joseph Agassi is an uncoordinated affair. He speaks, he stops, and he interrupts at the most improbable and surprising of places. In unnatural lurches, he jokes while being serious, is kind when talking over you, and elaborates with single word answers. The bewilderment and confusion you feel hits your mind like a panic: this is not so much a casual walk in the philosophical park, as a knee-hugging collapse in a muddy trench; the bombs resonating a little closer each time.

Getting him on the phone is both easy and hard. Agassi always responds to emails with the speed of a plugged-in, tech-savvy teenager, but still in the blunt, busy tones of his natural voice. Dates are made, calendars marked, and he isn’t there. New dates, new times, and still no luck. Eventually I ring him at home without prior announcement. The phone is halfway through its first yawn when a confident voice cuts in: “Agassi here!” Despite our letters, he seems unsure of who I am, but as soon as I say “I was hoping to talk to you about philosophy and about Karl Popper” something shifts. The loud Hebrew music from the next room is turned off, guests are politely ushered out the door, and the man whom Rafe Champion once anointed as an “Intellectual Irritant” is ready, leaning into his microphone and poking for combat.

But he starts with a lament: “I have a constant sense of failure due to my inability to sustain reasonably good relations with the person to whom I am most indebted, both intellectually and personally.” Popper understood as much as anyone that criticism is always a sign of respect, but there is something about Agassi that seemed to hit a nerve. And although they spend long hours together at Popper’s home, their relationship – and Agassi’s criticism – is always a little too close to the bloodstream. From his closing front door (long past midnight), Popper is muttering to himself as to whether his student is more trouble than he’s worth, while Agassi – walking down the cobbled path to the gate – is lost in self-reflective thoughts about why he is putting himself through the late hours and the abuse; already planning his escape.

Jerusalem. Years earlier and somewhere in the corridors of Hebrew University. Agassi is young and balancing an education around his mandatory stint in the Israeli military. An undergraduate degree in physics, a master’s degree in physics, and then a change that is more profound than it might appear. What need does science have for the philosophy of science? For outsiders who never will – nor want to – step into a laboratory, but who nonetheless find a calling in telling those insiders what their lives are all about, why they behave the way they do, what they are trying to achieve each and every day… and why they fail to do it. Or to phrase the question as Agassi did to himself: is it a role of necessity or simply a nuisance?

London. The award of an overseas scholarship gives Agassi the corrective space he needs. It also forces him to go looking for a teacher, someone with the old rabbinical school fire that Agassi is used to, but someone also “underestimated” enough to be open to new, unknown, door-stopping students. In 1950s Britain, there is only one name that meets these criteria, and although he is well known within the pages of academia and literature, the general public still walk past him – and glance over his books – with lazy, anonymous eyes. Karl Popper is not Bertrand Russell, and for this he feels “amazingly underestimated”.

When that scholarship elapses, Agassi is stuck in a strange and appealing orbit. The thought of leaving is not a thought at all. He bullies his way into a research position, and when that too runs its course, he twists into a Ph.D. despite Popper advising against it. The whole point is to not leave… at least not until the intellectual partnership is completely exhausted, and the personal relationship completely broken. He takes the nuisance part of the philosophy of science to heart and pesters his teacher-supervisor-employer-friend with the happiest/nastiest criticism and argument he can find. And Agassi is grateful for everything it gives him: “Such intellectual success as I have enjoyed is almost entirely thanks to my work under Popper’s tutelage”.

When the falling-out comes, it is easy to forget the closeness of the two men. With enough clout and job security to avoid his campus obligations, Popper spends most of his time locked-up in his house in Kent. The few students that visit him are those who have been personally invited. A rich jealousy grows over the Popperian School, everyone fighting for even the slightest of chats over the daintiest cups of tea. What they all remember of their few stopovers is this: when they arrive Agassi is already there, comfortable and fed as if it were his own home; when they leave, they leave alone, with Agassi and Popper waving goodbye in unison, preparing for their evening debate.

The jealousy and the ego come also from Popper. And he has a strange rule that seems both personal and out of place. While it is fine and appropriate to criticize other scientists and philosophers as loudly and publicly as possible, the Popperian School is different – a place where all intellectual exchanges must remain private. Why Popper chooses this is up in the clouds of psychologism, but it doesn’t make much sense for his philosophy. Break the rule and Popper breaks it too, turning a public, good-faith expression of critical rationalism into a public feud. And a feud it is, with the quality of “expressions” and “disagreement” becoming “brief, ad hominem, and worthless at best.”

Six years is all Agassi can take (more than most people manage). He gets out with his doctorate and his sanity and a richer mind. Then someone hands him a copy of Popper’s latest book, Objective Knowledge, and asks him to write a review. Unfortunately for both men “the book was very poor”, an empty philosophical statement “buried under a thicket of misconceptions.” The worst – and most shameful – tendency of Popper is there, smeared across the pages, sparking hard memories for Agassi and tragic emotions for the reader: Popper is constructing his own myth, writing his own biography, planning for historical applause.

The pen portrait. A philosopher who cares more, and thinks in higher tones, than his contemporaries. He argues deep into nights, mornings, and broken relationships because the truth matters, it reaches beyond itself, changes the world and the people it touches, and so the fight is a question of duty and honour. He is straightforward when others are not. He is interesting when others are not. And his veins run with an unnatural amount of common sense. This explains the bitterness and the jealousy that stalks the slow-moving, fast-thinking semi-recluse. It is the burden of high rationality and of bravery… and it is the narrative of Popper’s life that has won the day!

The camera portrait. A bullying old man, angry and bitter about the recognition he feels is being denied to him. Arguing with his adversaries and his friends and with his students, Popper tries to prove this tragic neglect true by proving everyone else wrong. If he can always come out on top, always batter his opponents into submission, then it stands to reason that he must be a philosopher of extraordinary quality. And if those victories are public to the point of gossip, everyone else will begin to see that quality, along with the mistreatment. On the wrong end of grandiosity and myth-building, students like Agassi are stuck with the “bullying”, the “dogmatic”, the “cruel”, the “domineering”, the “capricious”, the “complaining”, the “disrespect”…

It starts with an unpleasant little sentence where Popper says: “I have always been interested in Goldschmidt’s theories, and I drew Goldschmidt’s ‘hopeful monsters’ to the attention of I. Lakatos, who referred to them in his ‘Proofs and Refutations’.” Agassi reads this and gets a “jolt” – he has seen this before, felt the same odd taste of trafficked recognition over recent years. Perhaps it is true that “the arrow which has left the hunter’s bow belongs to the hunter no longer”, but it certainly doesn’t then belong to the person who simply saw it fly. Having done nothing to deserve it, Popper is trying to take from Lakatos the credit for his use of Goldschmidt, and from Goldschmidt the credit for Lakatos’ continuation of his theory.

Just who told Popper about Goldschmidt in the first place we will never know – Popper has left that link of the chain unreferenced. But here Popper has an Agassi-shaped problem, someone who quickly senses something wrong with the dates and with the philosophical claims. Popper says he wrote his original paper – Evolution and the Tree of Knowledge – in 1961, but being “no expert” on the topic, decided against publishing. In 1963 Lakatos publishes his own work, with its hard influences from Goldschmidt’s book. And in 1973 Popper circles back around to his old paper, now seeing it as worthy of publication. It is a timeline that raises its own doubts and questions: was it Popper who introduced Lakatos to Goldschmidt, or the other way around?

There is a degree of historical nit-picking to this, but Agassi is just sharpening his blade. He is a Popperian of course, an admirer of the man as well as his philosophy, and this new book – Objective Knowledge – doesn’t live up to either. Speaking in over-the-top deference to offset what is to come – calling Popper “Sir Karl” – Agassi restates Popper’s own methodology: 1. Start with a problem. 2. Pay your predecessors, and past solutions, their dues. 3. Show the error of these solutions. 4. Present your own, improved, solution (something that explains more). 5. Make sure your solution is immune to the previous level’s criticism. 6. And finally, acknowledge other valid, unfalsified, solutions. Agassi finishes by saying that Objective Knowledge fails each and every step.

The Darwinian theory of knowledge – or evolutionary epistemology – goes like this: think about knowledge for just a moment or two, and you are likely to get quickly tied-up in all sorts of bad ideas. And this has a lot to do with the type of questions we ask and answer: how do we know something is true? How can we be certain of things? How do our senses produce truth? Bad questions lead to bad answers, as they did for all the predecessor theories to Darwinian evolution. Asking questions such as why do birds have wings? they were setting themselves up for error: birds have wings so that they can fly! And just like that, from a bad question, you will likely produce theories of godly design, rather than evolution by natural selection.

What allowed Darwin to make the breakthrough that he did, was a new question which opened the space for better answers. Ask instead what kind of process would lead to a bird having wings? and suddenly the true theory becomes easier to find. And so it is also the case with knowledge creation. If instead of asking questions along the lines of how it is that we can know things, we asked how does knowledge grow? a lot of the confusion and mistakes could have been avoided. It would also have opened our eyes to some wonderful similarities between biological and epistemological processes.

Biology. In any population of any species, there are genetic variations, meaning that despite living in the same environment and evolving together, we/they are all different in some way. Many of those variations will be irrelevant, not helping or hurting the host organism; these will die out slowly over time. Some will be harmful, and will die out much more quickly as their hosts die too. And some will be improvements, giving their hosts a competitive advantage within their ecological niche. These are the survivors, the ones who hang around a little longer, who are better adapted to the dangers of their world, who are more likely to reproduce and pass those genetic improvements on to their offspring.

This explains the wide variety of life that we see in nature, the dramatic success of certain genetic changes, and the higher heritability of success rather than failure. The process is a process, and it continues. Environments change, making past genetic improvements less helpful than they once were, or simply creating a radically new ecology to which the species is poorly adapted. The other species change too. They evolve, and some of that evolution will be targeted at hunting this example species to extinction, or at expelling it from its territory. What is successful today won’t be tomorrow. Nothing lasts. And with every successful improvement comes a host of unforeseen problems. The good news is that there is always another possible variation that could fix the new problem…if only it materializes quickly enough. The species that don’t adapt, die!

Epistemology. Or how does knowledge grow? Just as there is constant variation in genes, knowledge too is always changing and multiplying and dying and succeeding. Most new ideas are fairly neutral, some are harmful, and only very few will ever be successful. But those select few have something on their side. They are improvements on what came before them, they explain the world in more complete, more accurate, or more vibrant terms. They give their hosts a competitive advantage. This is noticed by other people (it is only people who are capable of explanatory knowledge) and copied, spreading the new knowledge and spreading the competitive advantage. The people and societies that don’t copy these successful ideas pay a high price, stagnating, suffering, falling-behind, and dying.

But knowledge comes from problems, problems with existing theories or problems with our understanding of the world (these are theories as well, but a helpful distinction for where we are heading). Just as with biological organisms, knowledge too lives within an ecological niche of a kind. It doesn’t build up from nothing, but answers a need or a selection pressure. An asteroid heading to earth makes the knowledge about how to deflect such objects vital, or the discovery of a new pandemic-causing virus makes knowledge of potential vaccines and mitigation policies suddenly important. The best theories – the ones we consider as true – are simply the best surviving ones… and by extension, the ones that help us to survive and hopefully thrive.

The real selection pressure, however, is always criticism. Criticism from us, about our best existing theories. It might be an asteroid or a virus that causes knowledge to change, grow and become relevant, but more often than not it is nothing more than one human being disagreeing with another human being: a theory that another theory is false. There is always some way to criticize even our best, and surest-footed theories, and when those ways are exhausted, new ones can always be thought up. Here we have an endless landscape of variation, analogous with that of gene mutation and gene coupling. And with each new criticism comes the possibility of improvement, that the new theory is better than the incumbent, and so – through selection pressure and adaptation – takes over, survives, and reproduces, while the old one slowly dies out.

To quote myself from an earlier paragraph: “What is successful today won’t be tomorrow. Nothing lasts. And with every successful improvement comes a host of unforeseen problems. The good news is that there is always another possible variation that could fix the new problem…if only it materializes quickly enough.” As true as this is for both biological and epistemological evolution, there is a difference. Natural selection in living species is a nasty, painful, wasteful, and unbearably long process. Millions of deleterious variations are likely to happen before a single positive one occurs; it then takes thousands of years, generations upon generations, of handing those genes down to offspring for the variation to become stable within the gene pool and reasonably widespread.

Worse, for a genetic change to make a significant difference in the survival of a species, there has to be a problem on the ground for it to solve, or improve upon. Which means for those thousands, perhaps millions, of years waiting around for a successful genetic mutation, the species in question is suffering… a lot. For the evolution of camouflage to make a difference to a lizard, it can only be because it was being hunted to near extinction – without any protection – beforehand. An evolved increase in strength or size implies that the smaller, weaker species was easy prey. An increase in speed comes from a need to outrun predators more effectively. And improvements in dexterity, or other hunting abilities, point to constant starvation and food insecurity in one’s ancestors.

Any biological sensation of pain or unease or discomfort or anxiety or worry or fear or panic or misery that you feel today can only be because the human body evolved to feel it. And that it evolved to feel it can only be because the members of your distant family tree who didn’t have those feelings suffered horribly and died as a result. Every small change in our biology is thanks to an unfathomable amount of carnage and mortality – which is why Susan Blackmore calls biological evolution “design by death”.

Epistemology on the other hand evolves with more efficiency and less bloodshed. For a start, it’s not blind. Rather than waiting patiently for random variation after random variation, hoping that one of these might become helpful to the species before it’s too late, explanatory knowledge evolves within a mind after the discovery of a specific problem that needs solving. Though they might still fail, all these variations/solutions that are thought-up have an advantage – they are targeted at solving the problem at hand; there is no randomness to the process, and less waste. And whereas biological evolution is restricted to small, incremental, physical changes, epistemology is driven not only by a knowing purpose, but also by human creativity; removing all physical limits on what is possible, as well as allowing for larger jumps forward in evolution (without the need for all the smaller, intermediate steps).

It gets better still. Epistemological evolution – explanatory knowledge – is also faster than its biological competitor… much faster. All those thousands and millions and billions of years of change can happen at the speed of a neuron firing in a human brain. That’s all it takes for the world – and life on it – to change forever, in the most dramatic of ways. Rather than waiting for a problem to manifest itself, and then desperately scratching for a solution, epistemological evolution can reach beyond our current safety, imagine future problems before they manifest, and then go searching for pre-emptive solutions. A process in which no one needs to suffer at all for improvements to be found.

This is the largest – and most significant – difference between biological and epistemological evolution: the cost of progress. An animal with an error in its genetic code – something which needs replacing by Darwinian natural selection – is doomed. It is something that can only be solved by future mutations in future offspring, and so the host will always die before seeing a positive change. And if the genetic error comes in the form of a competitive disadvantage to other members of the species, then the host will die without even that faint possibility of having offspring, and without the fainter possibility of genetic improvements in the next generation. Either way, errors kill their hosts. In epistemology however, we can discover our errors and eliminate them without anyone having to die. All we have to do is change our minds, and they are gone, no longer harmful and no longer a problem. We can let our false theories die in our place!

Spinoffs. No sooner had evolutionary epistemology seen the light of day than philosophers were hard at work making it incomprehensible. The instructionist school was born, then the selectionist: should we be judging the growth of knowledge by the behaviour of the people who hold it, or by the underlying truth-claims? How do we get the knowledge – replicating itself across hosts – through the cloud of psychological nonsense we all have in our minds? What is the appropriate unit of study: the success of an idea in spreading, or the competitive advantage of the people who adopt it? Are the ideas stored within individuals or within an inexplicit culture? What is the replicator: what is the one-to-one analogy for the gene, the cell, the phenotype etc? Are our theories of epistemological evolution contingent upon our theories of knowledge (empiricist, inductivist, etc.)? Then we are off in the wilderness talking about hypothetical realism, epistemological dualism, adaptationism, perspectivism, embodied theories, disembodied organisms… And none of it can save Karl Popper.

Back to Joseph Agassi. He reads Popper’s new work, rechecks Popper’s own method for inquiry, and goes to war. Instead of starting with a problem – as he should – Popper only has a distinction, a delicate turning of slight details. What his theory solves is already solved by Goldschmidt’s; and what his theory explains is already explained. So on points one and two of his methodology (1. Start with a problem. 2. Pay your predecessors, and past solutions, their dues) Popper has failed. Popper’s great new twist was to look at both biological and epistemological evolution as not just similar in appearance and outcome, but also as doing the same thing: problem solving.

This is fairly uncontroversial when it comes to explanatory, human-created, knowledge. But biological evolution is also knowledge creation, of a sort. The ability and awareness of how to run faster, how to avoid predators, how to find food, and how to reproduce most effectively, are encoded within the genes of animals (as well as human beings). It is rigid, confined, slow-moving, but it is unmistakably still knowledge. And it evolved to solve problems – problems with being hunted by other animals, problems with starvation and hunger, problems with passing on our genes within a competitive environment. Genetic knowledge is just the slower, dimmer cousin of explanatory knowledge; different in ability, but not in kind.

Agassi gives Popper his dues here – he has added something to the existing theory (“it connects the amoeba and Einstein as problem-solvers”), but not much. On the third methodological step (3. Show the error of these [previous] solutions) Popper finds a slight foothold. But it is less of an error in Goldschmidt’s theory that he is pointing out, than an incompleteness or a lack of emphasis. When it comes to four (4. Present your own, improved, solution (something that explains more)) Agassi is unimpressed: “here comes Popper’s claim that he has an explanatory theory. He has none that I can see.” By this same failure, point five (5. Make sure your solution is immune to the previous level’s criticism) fails too.

The worst sin that Agassi sees in the pages of Objective Knowledge is with six (6. And finally, acknowledge other valid, unfalsified, solutions), and how it relates to two (2. Pay your predecessors and past solutions their dues). Having already stolen the credit for Lakatos’ theory by referencing Goldschmidt, and saying that he (Popper) deserves recognition for dubiously connecting the two men, Popper then goes on to diminish Goldschmidt to a single, decade-old afterthought, in a single footnote. Agassi writes: “I mean, how does Goldschmidt come into Objective Knowledge: through the back door in a 1972 Addendum to a 1961 paper”.

Having cut his way through Objective Knowledge, as well as some favourites from Popper’s back catalogue – corroboration, scientific credibility, what constitutes an explanation – Agassi turns around to stamp the final vestiges of life from the book: “looking again at Popper’s excursions into biology, I am amazed to find how much pointless though valid criticism it includes… I am amazed to see that they [his papers] start with attacks. No problems, no discussion of strength of valid solutions to be attacked.”

When Popper reads Agassi’s words, he does the unthinkable for someone who believes that “all criticism is constructive” – he ignores it! And for Agassi, this is “painful”, after all “any criticism is better than a dismissal or an oversight”. When messages begin leaking through to him from mutual friends and colleagues, the gossip and the second-hand professionalism are too much for Agassi to tolerate. He phones Popper to talk about his review – “I was used to him shouting at me” Agassi writes, but this time all he did was “scoff at me”.

All these moments in Popperian history can ring as distantly as stories of Socrates plodding around the agora. And so it is hard to imagine that Agassi was there, beginning to end, and at 95 years old he is joyfully still here. His memory is strong and unshaken by age, his stories rich, long and wonderfully personal: the whims of Paul Feyerabend, the plagiarism of Imre Lakatos, the soldier’s honesty of John Watkins, the persistent fraud of Ludwig Wittgenstein, the regrettable weakness of Thomas Kuhn, and the intense anger of Karl Popper.

Perhaps Agassi has earnt that label of Intellectual Irritant, and that is the place he will hold in this history when people inevitably write his story. But what lingers from speaking with him is only admiration. I admire that he doesn’t back down, doesn’t retreat at any cost, and fights to blood and bone. I also admire that he drips with emotion and regret when thinking about the toll it all took, and the harm it may have produced… whether the fault is his or theirs: “No amount of justification of an action may allow us to ignore the pain it causes”. And what lingers too, despite the unpleasantness, is the gratitude he still feels for a single, chance event, which changed his life for the better:

I do not know how much I am indebted to Sir Karl Popper, except that but for my having been his student and research associate I would not be what I now am. I consider that fact my greatest fortune.

 

*** Shortly after this article was written (its publication delayed), Joseph Agassi (1927–2023) died at his home in Tel Aviv.

 

*** The Popperian Podcast #18 – Joseph Agassi – ‘Karl Popper’s Hopeful Monsters’ (libsyn.com)

 

Karl Popper’s Social Turn

In conversation with Rafe Champion

 

A group of unrelated and unknown people meet in a room. They don’t ask about credentials and they have nothing in common beyond their desire to be in that room. Outside on the street, in their cars, in restaurants, with their families and friends, these people are as insufferable and flawed as the rest of us: gossiping, threatening, sweet-talking, fighting – dogmatic, tribal, loyal, and full of prejudice. But inside that room everything changes: they argue and criticise and interrogate, but all their other human baggage is left at the door. If you were to accidentally walk in and watch them for a few minutes, you would find it impossible to guess who the senior members were, and who were the juniors, who had had more professional success and who had less, or what their lives and personalities might look like when the meeting ends and everyone goes home.

These hypothetical people are scientists, and for most of us this is enough to explain the unusual behaviour of that room. By extreme good fortune we have become accustomed to the sudden shift in attitude and rigour. But what actually happens beyond that threshold takes some clearing up. And asking the scientists themselves likely won’t get you there – for the most part, they don’t know either! So how is it that these deeply individualistic people manage to leave their egos at home and work towards something impersonal and collective, without being conscious of the process themselves?

In the philosophy of science it is what has come to be known as the “social turn” by Ian Jarvie, the “institutional turn” by Rafe Champion, or “the organization of inquiry” by Gordon Tullock; and it belonged first to Karl Popper:

what we call ‘scientific objectivity’ is not a product of the individual scientist’s impartiality, but a product of the social or public character of scientific method; and the individual scientist’s impartiality is, so far as it exists, not the source but rather the result of this socially or institutionally organized objectivity of science.

Now this seems odd… counterintuitive. Most Popperian readers will have a clear-enough image in their minds of what science ought to look like. It involves as many competing theories as possible, as much criticism and as many tests as possible, and as much advocacy and argument. We will never reach an upper limit on this, a place where we stop and decide that science has become drunk on its own health, and so the question of organisation doesn’t seem like much of a question at all: if you want science to succeed, simply get out of its way; let the free choice of free individuals rule, hugging as close as we can to laissez faire principles.

Popper would agree with those words, and disagree with the implication. How those individuals fit together also plays an important role in this – without which, ‘scientific objectivity’ would be impossible. At the end of the day, science actually achieves things, it solves problems, moves forward, improves the world, and makes progress. It is more than just a collection of unconnected scientists building their own hypotheses and running their own tests. If it were only this, it would be a miracle if even a single problem was solved.

Perhaps a clearer way of looking at the problem is in reverse. What would you do if you wanted to cripple the progress of science and the growth of knowledge? Here is Popper:

By closing down or controlling laboratories for research, by suppressing or controlling scientific periodicals and other means of discussion, by suppressing scientific congresses and conferences, by suppressing Universities and other schools, by suppressing books, the printing press, writing, and, in the end, speaking. All these things which indeed might be suppressed (or controlled) are social institutions…Scientific method itself has social aspects.

There are two interrelated motives to everything scientific: understanding the natural world and controlling the natural world. It is the difference between “pure” and “applied”, between curiosity and a practical purpose. Recognised in nearly every lab, in every country, the line is as appropriate as it is fuzzy. Like it or not, once the step of understanding the natural world has been successful, the controlling of the natural world has already – largely – come to life, with the many immediate uses, tests, experiments, applications, and implications already present in, and explained by, the pure research discovery. There is never a hard and obvious demarcation in the scientist’s mind – he slips unavoidably between the pure and applied titles as he works.

The objective truth of things is unknown and out there to be discovered, and we only ever get there by guessing (conjectures) at what that unknown world looks like. This is the way knowledge creation happens, whether it is in the mind of a great theoretical scientist or in the dusty corner of an undergraduate laboratory (still in the mind, of course). These people might have different pressures, different funding incentives, or different motivations (creating something vs. becoming rich), but as they are both trying to create new knowledge (the knowledge of how something works or the knowledge of what to do with it), they are both scientists of the same kind; the same kind of scientist that we all are… every single one of us!

At all stages, the same game is being played and the same question asked: do my theories about the world actually correspond to reality? And there is no such thing as a theory which doesn’t try to connect with the world beyond our minds. Without a phenomenon of some kind that needs explaining, a scientific theory is never born. This was Galileo’s great crime, not discovering a new motion of the planets, but claiming that this represented a new law of nature and a better way to explain what was actually out there in the universe. The charge against him was presented by the Jesuit cardinal in this way: “act prudently” and “content yourself with speaking hypothetically and not absolutely.”

If the work of science were only to state what we see, and then revise what we see, and never draw a connection to underlying laws of nature – making our theories “simply abbreviated statements of observations” – then we have some hard lifting before us. When the Einsteinian system replaced the Newtonian one, it wasn’t because it explained more observations; after all there are very few observations that Einstein’s theory can handle more simply, or effectively, than Newton’s. If the whole game were about data collection and accuracy, then general relativity would never have been regarded as anything more than a “minor step forward”.

Albert Einstein is supposed to have said, “if you want to know what a scientist really believes, don’t listen to what he says, but observe what he is working on.” And very few will ever be caught working on theories or experiments in a way that would indicate a scepticism about objective reality, or as if their research were only “devices for conveniently summarising experimental results.” Pick a scientist at random, controlling only for their being committed to their job, as well as being honest. Working day and night, sweating and suffering for their theories and experiments, what they probably don’t do is study their own interests and motivations; asking themselves why certain problems were chosen, why particular methods were used, or how truth relates to what they are doing.

Because they don’t spend much time on these issues, and because so much of what they do understand is inexplicit, they are liable to make some strange mistakes when pushed – grabbing hold of “various ill-conceived theories”. It is, after all, a hard thing to say publicly – as well as to oneself – that I don’t really know what I am doing! Left alone to scurry around their labs, these scientists take plenty for granted, and stumble onto unspoken answers: the possibility of truth is the only thing that brings meaning to their work, and progress to science. Without it, all their achievements would be miracles!

We (scientists and non-scientists alike) don’t begin by collecting data; we form theories, and then test those theories through experiments, against other theories, and finally against the collection of data. The point being, the accumulation of information gets us absolutely nowhere, until a human mind creates a theory to make sense of it; as well as to make the accumulation possible. Without a theory how does anyone know what to collect in the first place, or what constitutes a data point?

Straighten this out as well as you can, find your methodological groove, and what is left – according to Popper – is only still ever conditionally accepted as true. It is always open to revision and scepticism, and this also leads many scientists to make elementary mistakes about the nature of their work, thinking that the critical attitude which has been drilled into them has unhelpful implications about the scientific enterprise. It goes like this: if I am required to be sceptical about every possible theory, this surely means that they are all untrue. Or: if I am required to be sceptical about every possible theory, then I ought to also be sceptical about the very existence of truth.

And all sorts of horrible little ideas fill that opening space: science is about consensus or science is about workability or science is about the collection of data. The shame and the errors are magnified by one important – and true – implication, the one most commonly missed: our “theories seem not to last”; the history of science has been a history of radical change and of disproof. Nothing that we do ever seems to stand the test of time – all it takes is a little bit of well-applied criticism, and everything we once thought to be true crumbles inevitably at our feet. We are always likely to be wrong (in one way or another), and yet never likely to see this for ourselves.

Its origins are as long and deeply carved as you could hope for, stretching back to The Logic of Scientific Discovery and to The Open Society and Its Enemies, but this social turn in Popper’s philosophy always begins from human fallibility. As a small proof of this, Popper himself seemed not to fully understand the significance of his turn; snow-blinded to his own enormous discovery. “Popper’s consistent ability to think socially also does much to account for his originality” writes Jarvie in The Republic of Science, “since it is hard to do and its difficulty is attested by how often readers and critics of Popper do not grasp that this is what he is doing.”

So wrapped up was Popper in defending science against the charge of being a “mere social construction” that his snow blindness was largely self-inflicted. Perhaps the word turn is not helpful here, indicating a slight or casual change, a subtle mention or glance in an otherwise ignored direction. This underplays just how essential “thinking socially” is to Popper’s work, much more so than “logically or psychologically”. It is so central to everything Popper thought, and to the scientific method, that Jarvie believes it amounts to a “proto-constitution of science.”

It is what keeps science afloat and functioning, even when the best methodology is not used. Alexander Fleming’s discovery of Penicillium rubens was not the accident that most people think it to be. Thousands of other researchers had seen the same contamination in their cultures – and millions more had seen it on the non-microscopic level – but only Fleming realised its importance. An accident may have led to the contamination, but not to the creation of the theory; that was all Fleming! Still, it was an example of bad methodology, and the reason why the theory holds today is the same reason why most others fail: community!

Not a community of like-minded colleagues, nor a community hugged together by a governing body or a set of laws, but a group of otherwise disconnected people committed to doing the one thing for each other that we cannot do (particularly well) for ourselves: expose our mistakes. No one gives commands, there are no hard organising principles, and none of it is consciously designed. It all hinges on the key truth that, if you desire new knowledge, then you must want to find and eliminate error. And it is this which requires a community… of a kind.

The theories that survive also owe their lives to this social world of scientists, without whom (and their best efforts at criticism and refutation) they would be indistinguishable from the crowd of false theories. A perfectly true hypothesis can never be considered as such until it moves from its host’s mind into the minds of other scientists; with all the doubt and difficulty and explanation and testing and predictions and implications that go along with that.

There are better and worse ways for this to happen though. Dissemination is always a challenge and a balancing act – trying to get new theories exposed to the scientific community as quickly and cleanly as possible, whilst also filtering out the frivolous, the nonsensical, the fictional, the non-rigorous, the fraudulent, and the fabricated. We all have a limited amount of time and attention to spare, after all. When this is done well, scientific knowledge grows sharply, benefiting us all. Done badly, the social fabric tears under a deluge of undistilled information and phony publications. The problems of science are the problems of the human condition; Tullock puts it like this:

There is no reason to believe that scientists are much more thoughtful and honest than other men. The obvious high degree of truthfulness in scientific research comes not from the superior moral probity of the individual scientists, but from the social environment in which they operate.

A scientist caught faking his experiments or fudging his results is not an existential threat to science – nor to the scientific community – but he does represent a muddying risk to the smooth function of things, and the pace of progress (a non-trivial problem when your whole goal is to improve things, through knowledge creation, as quickly as possible). Matching the seriousness of the crime, the punishment is invariably excommunication, the end of a career, the thorough collapse of reputation, and the questioning of all previous work.

In the 1920s, Paul Kammerer was one of Europe’s most prominent and well-respected biologists. After years and years of success and rigour and achievement, he was then associated with a single faked experiment… and it “ruined him”. Kammerer’s downfall was sudden, dramatic, complete, and irreversible; ending, sadly, with his suicide in 1926. The fraud he took part in speaks of a tragic desperation that he must have been feeling. Fabricated theories are unpredictable, fabricated experiments are unrepeatable, and fabricated discoveries have no practical applications. Kammerer was always going to be exposed, with his deception sooner or later bubbling to the surface; which is why the scientific world is more truthful and honest than the non-scientific: lies are better and more seriously policed!

So if everything comes back to the question of effective dissemination – allowing scientists to check each other's work as quickly as possible, whilst also filtering-out the obvious errors – then how should this be done? These days everyone has the answer, infecting their vocabulary like a pathogen escaping a laboratory: peer review! And it means absolutely nothing – no content implied! How peer review works, and how it should work to be more effective, remains a mystery beyond the plain meaning of those two words. At its bones, it is as crude as the worst aspects of human life: “scientific advances are disseminated” writes Tullock, “through the same channels of advertising, salesmanship, and public relations as other commercial products… [and] this does have some effect on the development of science.”

So, much of what is good science misses out on publication, not because it lacks merit or quality, but because it doesn’t line up with editorial guidelines or audience expectations, or because the reviewers simply don’t understand it. All journals are specialised, but this specialisation can only go so far, and sadly the “most important” new research often falls on deaf, and confused, ears. Then the reputation of the author becomes a problem: notable scientists get too much of a free pass, while the unknown are rejected on the assumption that their unknownness is a symptom of their poor research. After that comes the problem of the sheer number of journals that now operate (all trying to meet publication deadlines), meaning that ten rejections of an article has no necessary implication for the future of the research. Short on content for the next quarter, the eleventh will take it. If not the eleventh, then the twelfth…

How to save science from its own institution? And from becoming a victim of diminishing returns? Choose your poison: improve the tenure process, offering more protection for researchers at all stages in their careers; do away with the résumé importance of having a flock of grad students (encouraged to support and advance their supervisor’s work); expand access to funding and equipment; place more value on the receiving of awards (encouraging boldness) rather than the accumulation of publications; or perhaps something much smaller, much simpler, but which would have a disproportionate downstream effect…

For this we go back to Gordon Tullock and his hard look at the structure buttressing the scientific enterprise. One idea, one requirement upon the editorial boards of academic journals, would be revolutionary: make them publish a list of their rejections. From this, we could see how discerning they are, how much they reject vs. how much they publish; we would also begin to see any biases that they might have, if they were consistently rejecting papers from one viewpoint for example; and the next time a ground-breaking paper is published, you could go back and see all the journals that rejected it before it was published, popular, and acclaimed.

Or, as Rafe Champion points out, you could simply give new scientists “a good introduction to the works of critical rationalism” and Karl Popper.

 

*** The Popperian Podcast #17 – Rafe Champion – ‘Karl Popper’s Social Turn’ (libsyn.com)

Finding Consolation in Truth

In conversation with Michael Ignatieff

I am visiting a friend who lost his wife six months ago. He is frail but unsparingly alert. The chair where she used to sit is still in its place across from his. The room remains as she arranged it. I have brought him a cake from a café that they used to visit together when they were courting. He eats a slice greedily. When I ask him how things are going, he looks out the window and says quietly, “If only I could believe that I would see her again.” There is nothing I can say, so we sit in silence. I came to console or at least comfort, but I can’t do either. To understand consolation, it is necessary to begin with the moments when it is impossible.

Michael Ignatieff has lived an interesting life, in different – but connected – worlds. A young journalist poring over questions of disputed nationalism and civil war; an award-winning novelist and non-fiction writer; an historian and Harvard professor; a politician and leader of the Liberal Party of Canada; President and Rector of Central European University in Budapest, Hungary (until the university was expelled from the country by Prime Minister Viktor Orban); then back to his roots as a professor and writer… and Popperian.

At different moments the theme has returned to Ignatieff – running lecture series and conferences, editing and writing books – about Karl Popper, and particularly about The Open Society. As much as the term means anything, he has been – and remains – an academic hero of mine, someone who stood up for the right things in the hardest moments; courageous, principled and questioning.

I had been hearing the name for years before it meant anything to me, littered through the footnotes and references of the political science books I was studying in my undergraduate days. Then by happy chance I was looking to do some private reading on the war in Bosnia, and I picked up a copy of The Warrior's Honor. I was immediately stuck to my seat! Here was a professor of history from the fanciest of Ivy League schools, writing about a foreign civil war… and he was there in the blood and the mud and the horror! Instead of following the fighting from a soft bed in America, or a café in some safe neighbouring country, he camoed-up, travelled to the frontlines, slept in the trenches, dodged bullets and artillery, and spoke face-to-face with the ethnic warriors he wanted to document.

In many ways Ignatieff is now heading back to those earlier moments in his career, looking at them through an ageing lens and the slow-growing prospect of the end of his own life. Asking himself questions about the meaningfulness of existence, the value already exacted and the value still waiting to be found, and how to approach the inevitable harder moments, when everything appears lost; when consolation seems impossible. In good Popperian tradition, things don’t start with a hopeless definition of that word consolation, but with a growing thought – bright, dominating, challenging – and a very human problem: “There is no true consolation in illusion, so we must try, as Vaclav Havel said, ‘to live in truth.’”

So is it true? Is only truth capable of consoling us? It is certainly capable of some heavy lifting: outside the gates of Kresty prison in 1938 a line of women stretches around a brick wall. It is Leningrad, it is winter, and everyone is desperately cold… and with each minute the line grows longer. They are waiting to see the men inside, their men, the men they loved and lived with, until the Yezhov terror and the late night arrests took them away. The purges and the air of fear have done their job, and so the women whisper to each other, not knowing whom to trust, not wanting to draw attention to themselves, nor to their family on the other side of the wall. As Stalin’s regime swept millions from the face of the earth, these were the silent witnesses, cowed, scared, worrying, and the only proof that other people once lived.

As the frozen wind bites harder, one of the women softly exclaims “Can you describe this?” The crowd remains still and quiet, and then a slightly louder whisper answers “I can!” It is the poet Anna Akhmatova, in line at the prison to see her son, and as the two women catch eyes, a small, delicate smile appears on the first woman’s face. That smile did a lot! We know nothing about what happened to her or her family, only that it was likely heartbreaking, as were the times. But Akhmatova turned that small facial expression into poetry, and so she stood for a moment in time, for a faceless people, and for an inhuman tragedy; people who refused to be forgotten by history.

Over the next twenty years Akhmatova continued to suffer with and to write about and to immortalise the victims. These were people dragged to the limits of the human experience, driven to insanity with fear and hope. Most of them would never see their fathers, brothers, husbands and sons again, in fact as they waited outside the gates at Kresty the people they loved were likely already dead, or already transferred to some distant Siberian gulag. And they probably knew this. What else did they have? They could silently accept the ghostly new world they found themselves in, or they could reclaim the lost moral authority of their nation; futile as it might seem.

When everything has been taken away, and the prospects for change are so miserable, sometimes the only thing left to people is to stand as witnesses and wait – decades perhaps, lifetimes even, for vindication and for the madness to finally wash away. Life is also reclaimed in this way. Looking back on the year he spent behind the fence at Auschwitz, with his family, his people, and himself on the edge of death, Primo Levi admitted that it was also “when he felt most fiercely alive.”

But more than anything, we were their consolation. Their hope was as much political as it was moral. As they ached through the most unspeakable pain, they were thinking about us. When people like Akhmatova and Levi put pen to paper, they were consciously writing to the future with a hard epistemological idea: the truth matters, the future can always be better than the past, progress is possible in every circumstance, and even if we never actually feel the consolation that we need so badly, that doesn’t mean that it will never come, never be vicariously felt by others: “they had suffered for a faith, not a belief in paradise or salvation, but instead a resolute conviction that hell existed and that they had an obligation to chronicle it.”

There is a sad tendency to approach history with a detached sense of apathy: it is lost, it is over, and the forces working through it – and over us – are too large to bother with. What will happen, will happen! As calming as this might be for some people – and even psychologically healthy, helping them to accept the horrible things that have come their way – it is also not true. History weighs impossibly over us all, but as Vaclav Havel noted, it “is not something that takes place elsewhere: it takes place here. We all contribute to making it”.

Havel knew this as much as anyone could. A leading figure in the resistance to communist rule, he could be heard across the underground radio stations of Czechoslovakia. When this was suppressed, he moved into publishing absurdist satires and plays. When the printing houses and theatres banned him, he moved again, this time into the heart of the political opposition, choosing to become more, rather than less, prominent. He was arrested multiple times, constantly surveilled by the secret police, prosecuted, tortured, and then the cycle repeated itself. As a political prisoner he continued to write letters and push for change.

His last and longest prison sentence ended in 1983. Soon enough he was leading the Velvet Revolution which toppled the communist system, and “within seven years of leaving jail, he was president of his country.” What stuck with him most during the intermediate years was a sense of failure. That he had let too many people down, too often. Just like the long arc of history, shame of this kind is a difficult thing to deal with, but it is not helped by imagining that it belongs to a previous self, or previous people. Optimism about the future comes from acknowledging error, not from avoiding it. By accepting the truth of your failures and living in a way that corrects them.

When you error-correct your own life in the hardest terms, external judgement mostly arrives as an old, neutral story; and the bits that don’t, as happy new visitors. Three years before leading his country to freedom and becoming president, Havel was asked by a journalist how he felt about the future: “Hope is definitely not the same thing as optimism. It is not the conviction that something will turn out well, but the certainty that something makes sense, regardless of how it turns out.”

When it comes to leading countries and suffering through darkness, Abraham Lincoln – and the consolation of war – deserves to be mentioned. A president of gratuitous empathy, Lincoln visited the barricades, talked with his soldiers, and carried their agony home with him; their young faces rippled by “the noise, the blood, and the terror.” And when they died, he wrote letters to their widows and to their orphans, knowing full well that nothing he said could change their heartache… he wrote anyway: “I feel how weak and fruitless must be any words of mine which should attempt to beguile you from the grief of a loss.”

The letters he received were of a different kind. Every day mothers and wives wrote to him pleading for clemency, hoping their imprisoned sons and husbands might be released, and not have to face the ultimate penalty for their desertion. Others simply begged him to end the war and allow the soldiers to return home. There was never a moment when Lincoln wasn’t aware of the horrible place he was leading his nation into, as well as the power he wielded over so many lives.

As this fog of suffering settled over him, Lincoln bit down on an unpleasant truth: “If war was to be waged…it must be waged with ferocious intensity.” He pushed hard at the back of Ulysses S. Grant and his army massed at Richmond – the political commands and encouragement coming from the White House giving away nothing of the internal torture of the man inside. The battle would run as long as it needed to, the Union forces would scrape and crawl and continue to pay a heavy price, just so that the enemy would have to pay a higher one.

The downward spirals of the human condition and of history are always weaker cousins of progress and improvement. One can only destroy or suppress; the other has an infinite vista of options and choices and possibilities and solutions and creativity before it. One is completely predictable; the other endlessly flexible, capable of being born anew each and every day. One fears the future; the other invents it. As the “terrible grandeur” of the Civil War reverberated inside the halls of politics, and as confidence in the idea of America weakened, suspicions, deception, confusion, revenge and retaliation found momentum. And Lincoln was reminded each and every day just how little control any one man has over history, even a president: “I claim not to have controlled events but confess plainly that events have controlled me.”

But truth is different! It reaches out between people, between nature and minds, and between the past and the future. If slavery really was the abomination that Lincoln believed it to be, then he would be able to do more than just assemble a better fighting force; he would also, eventually, be able to convince the Confederacy of its mistakes. But first he would have to win, and he would have to do so with all the self-doubt and self-questioning that comes with the pursuit of truth and progress.

And so Lincoln spoke in universal terms. He could easily have defined the war – and must have been tempted to – in terms of Southern provocations and Southern slavery, and with that dumped unbearable, but satisfying, condemnation upon his enemy. Instead he declared the cause of the war to be “American slavery”, an “offense” that every man and woman, North and South, needed to own and to bear and to take responsibility for. This was a war fought not for the future of a nation and its people, but for a moral truth… and for all moral truths to come.

The consolation here must also be found in high principles. Success on the battlefield would end the war and stop the horror, but this wasn’t what Lincoln was fighting for. He knew that “these are not the days of miracles” – those were as distant to him – even in victory – as we are to him now. Lincoln understood that the South would need help to accept their defeat, that the North would need help to forgive the South for making the war necessary, and that both sides would have to learn to look across at each other equally… as victims in the swell of history and ignorance.

In the thick mist of a Paris night, two men – strangers – knocked on the door of Marie Rose Vernet. They carried another man in their arms, weak, exhausted, sick, and a fugitive. This was the height of the Jacobin terror, everyone was under suspicion, and the price of helping an outlaw would be the death of you both. They asked Vernet to shelter their friend until he recovered, and hide him from the guillotine. She had only one question: “Is he virtuous?” When told that he was, she had only one answer: “Then let him come”.

The man being carried had a mouthful of a name, Marie-Jean-Antoine-Nicolas de Caritat, marquis de Condorcet, and a prominent résumé: secretary of the Royal Academy of Sciences, deputy of the National Convention, a politician, a scholar, a mathematician, and now a criminal from his own revolution. Months earlier, Condorcet had walked proudly down the same Paris streets that he now hid from, wearing the new uniform of the National Guard, a prominent figure in the revolution.

Yet, whereas his fellow Guards wore a sword on their lapel, Condorcet chose an umbrella. For him, the fire and the violence were unpleasant necessities. Built upon the new sciences of probability, calculus and economics, the French republic that he dreamed of involved an end to superstition, to tribalism, to ignorance, and the “lackadaisical incompetence that had doomed the ancien régime.” Here was the hard-won opportunity not for change, but for liberation. He drafted legislation that would outlaw slavery across the French colonies, he published pamphlets arguing for equal rights for women, and he wrote the draft of the French constitution in 1792.

And it cost him! Condorcet’s aristocratic friends drifted away, issue by issue. The royal scientific societies retracted his honorary memberships, and then he committed the most unpardonable of sins: voting in the National Convention to convict the King of high treason but not to execute him. If the revolution was to matter, and was to be worthy of the name, it should be different to what came before it; it should not kill its enemies. Growing anger within the Jacobins had found its flame – Condorcet’s draft of the new constitution was voted down, and the halls of politics were stormed for a second time; the moderates and their allies arrested.

Lucky to escape the mob, Condorcet recovered in Vernet’s quiet house. And as the months ran into years, he resumed an old project to stave off his “sinking mood”. It began as a multivolume history of science, and grew into the encompassing story of progress and knowledge and growth: the “Enlightenment narrative”. He was trying to recast the revolution in its proper light, but also to explain what gave meaning to his own life, as well as what gave meaning to all humanity: problem solving and improvement.

Much too much negativity about the species had seeped into philosophy, and then into daily life. All change brings with it a new set of problems, but those problems are also soluble just as the previous ones were, and just as future ones will be. For all the inequality and disassociation that the rise of capitalism had brought, there was also more wealth, more choice, and much less actual poverty. In the paraphrased words of Adam Smith, “an average day labourer in England lived better than many an African king.”

Robespierre and the Jacobin terror were justifying their violence in the opposite terms, claiming to see a “fatal pattern” to history, and so to be preventing the inevitable slide back into tyranny. Condorcet’s vision of the human condition, and its potential, was something very different; something for which he had the best theories of science and economics on his side. Rather than being defined by the trends and predictive twists of history, humanity renewed itself every day and bent only towards happiness and truth. Both of which can never be suppressed for too long, always wriggling free to find the light.

The revolution may have been slipping away, but it had not been in vain. It was a call to progress, and that call would soon again find its feet, if only men like him – and people like us – were willing to work for it. Coercion and bloody terror can never win for too long, for although they may consider (falsely) history to be on their side, truth never is: “The perfectibility of man is truly indefinite, and the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us.”

But dying is the end of something, and it is coming for us all. In the mid-twentieth century an old institution from the Middle Ages was reinvented by the English doctor Cicely Saunders. Watching physicians, nurses, and patients struggle to retain their hope and sense of purpose, she had become fixated on the same philosophical question that tied Anna Akhmatova to Vaclav Havel to Abraham Lincoln to the Marquis de Condorcet: the relationship between consolation and truth.

Saunders had seen the hard reality of death steal away the consolation of her patients, filling their final moments with fear and panic and worry. Wrapped up in a foreign world of medical decisions and unnecessary procedures, these people lost hope as they died. They had more to accomplish, more to resolve, and the growing shadow of death didn’t have to diminish them. Saunders’ insight was to create an institution for palliative care, and most importantly, for consolation: the hospice.

Every patient was different, but in each and every case truth was important. Some people needed more easing and soothing than others, but “false hope was no consolation at all.” Perhaps the greatest failing of the medical establishment in this regard was the inability of doctors to deal with their own fears around death: “many of them couldn’t tell patients the truth because they couldn’t tell themselves the truth.” The hospice instead built a community around respect for patients and their individual needs, with an unflinching eye on death.

Instead of running from truth, and believing that our deaths must be lonely, unpleasant, and cold events, Saunders turned the institutional dial on this; returning purpose to our final days. Death rarely happens as isolated and deserted as poetry likes to imagine it – more often than not, death is among the most public and socially involved moments of our entire lives. Moments not just where the dying receive the consolation they need, but where they also are desperate to console the people they love and who they are leaving behind: “the giving of consolation was essential to the receiving of it.”

Sometimes what consolation needs most of all, is simply the opportunity for truth to settle comfortably in its own space. When Michael Ignatieff writes about his parents and their death thirty years ago, the words hit me as only true things can – knowing instinctively in that moment how it would feel when the same desolation eventually comes my own way: “They had been the audience before whom I played out my life, and with those two seats in the theatre suddenly empty, the play itself seemed to have little point.”

Separated from them in their last moments, Ignatieff’s parents died in hospital beds, leaving him “inconsolable” with “deep scars”. And when he writes things like “I wish my parents could have had a good death”, it savages the reader with shared compassion. But those last moments that could have been shared, last conversations and meaningful words that could have been spoken, last hands that could have been held, are not as lost as they seem. There was no time and no place for them to happen, and with that comes regret and a deepening of grief, but just as there is no such thing as an insoluble problem, there is also no such thing as an inconsolable situation.

We may be crippled and disabled by the sorrows of life, but this is always a temporary condition. If we cannot consciously find the appropriate place and background for consolation, our unconscious minds will often do the work for us, digging down into the “recesses of our souls”, recovering lost hope, and restoring meaning to a meaningless circumstance: “It is the most arduous but also the most rewarding work we do, and we cannot escape it. We cannot live in hope without reckoning with death, or with loss and failure.”

As the churches, mosques and synagogues empty out, it becomes obvious enough that consolation is losing its institutional setting. This is true… and it’s not. The buildings and the shared rituals are one thing – but the tradition that Ignatieff is a part of here is something much more important. When we struggle with notions of fate, and fight back against the very human impulse of resignation, we inspire others, consoling them as well as ourselves in the process. How lucky you are to have something exceptional enough to grieve for in its absence – it could have all been much worse, and it could all be infinitely better in the future. Our misery is never just our own… and never permanent.

I was struck by how emotional I was talking with Michael Ignatieff. I could hear in his voice, and his words, that he was too. As I write this now, there are soft tears collecting in my eyes. I am emotional again… but also, unmistakably, consoled!

*** The Popperian Podcast #16 – Michael Ignatieff – ‘Finding Consolation in Truth’ The Popperian Podcast: The Popperian Podcast #16 – Michael Ignatieff – ‘Finding Consolation in Truth’ (libsyn.com)

Defending Baconian Induction

In conversation with Jagdish Hattiangadi

Karl Popper was never as wrong as when he spoke about Francis Bacon. And it begins at the end, with the children of Bacon’s work and the darkish hole that Popper – and Thomas Kuhn alike – saw them as crawling from. In his paper On the Sources of Knowledge and of Ignorance, Popper was unfortunately sucked in by an old mistake, something that had been knocking around the halls of philosophy departments for centuries: that Bacon’s scientific method was the precursor of John Locke’s empiricism. From here, the mistakes continue…

There are ways to wriggle out of this, as there are with anything, but this comparison is a little hard to explain from a man who was ordinarily so rigorous (to the point of often doing his own translations). For Locke, our human senses are the whole game: they are the source of knowledge and completely free from error. So he is also the type of empiricist that Popper hated, the type who thinks of truth and knowledge as floating in the air, bombarding us with an accurate picture of the world out there. If the world doesn’t lie, and neither do our senses, then all – and any – mistakes can only possibly happen when we misinterpret our, otherwise perfect, sensory experiences.

Locke was wrong. But was Bacon also guilty of the same empiricist mistake? In his 1620 book Novum Organum, Bacon talks in similar-sounding terms, but he means something very different. His recommendation to budding scientists was to begin building tables of the natural world, charting the degrees to which things occurred and were sensed, as well as the degrees to which those things were not (“tables of presence and absence”).

Slowly and experimentally building up a natural history of things in this way is not the same as Locke’s blunted idea that nature comes to us clear and ready to be understood. Compile all the sensory reports you like (of the kind Locke recommends) and you will still never get close to what Bacon is asking from us – you will never get the all-important discrepancies. Bacon spoke in empiricist language – talking about the “essences” and “natures” of things – but he did so in much more nuanced ways.

When someone speaks about the nature of an object today, we tend to imagine something singular, absolute and unchanging. Bacon’s use of the term was much closer to that of a tentative nature or a nominal nature. In fact his whole point had little to do with sensations at all; rather, he was talking about an experimental natural history where the primary thing we are hoping to find, and record, are deceptive appearances.

To think like this is to start with a hard anti-empiricist attitude: if the things out there, beyond ourselves, have natures or essences waiting to be discovered, then we cannot ever fully know what they are. As things appear to us, they are illusory and misleading. So we experiment, not to explain away – or disguise – the inconsistencies and the problems we find with our senses (as Locke did), but so that we can isolate those errors, learn from them, and record them in a way that emphasises the deception. The whole point is to avoid the Lockean or Aristotelian hope (and formulas) for an intelligible collection of recorded sense data. Here we can begin to see the “enigma of his [Bacon’s] kind of natural history.”

Some of the mistakes here come back to the use of the word experiment. Despite the time in which he wrote, Bacon didn’t whittle the term down to something as simple as “laboratory work, or work with instruments and measuring devices.” Already he had larger and more sophisticated ideas about observing objects and data in previously unobservable circumstances – isolating and removing aspects of the natural world, and seeing how they behave without all the background corruption. But this doesn’t quite get us to the “new Baconian sciences”, as Thomas Kuhn called them, leaving the meaning of experimentation as something focussed primarily on the source, or location, of what is being observed.

The collection of data was not the emphasis that Bacon wanted science to have. Forget the sensory impressions, forget the large and growing tables of observations, forget the isolation of phenomena, even forget the discovery of repetitions in nature, what matters most is that we “learn from our errors”. And to head off another common misconception about Bacon, this is not just “perceptual error” but also “conceptual error”. Here it all starts to sound very Popperian, and perhaps it would be more accurate to describe Bacon as a precursor to Popper rather than to Locke, but there are important differences. The most important being that, rather than “errors in hypotheses, guesses, or conjectures”, Bacon talks of “errors in appearances, including perceptual appearances”; which “must also exist” if we take the Popperian position that our objective experience is “theory-laden”.

We get to this happy shore by cross-examining the world around us, holding our own perceptions to account and criticism, and thereby interrogating nature itself. With a twist by Robert Boyle that made space for mechanical principles as well, Bacon’s method of “true induction” was adopted in the middle of the 17th century by the Royal Society. And with that, this slightly modified “experimental philosophy” became the centre of the scientific world.

But success isn’t enough; we are looking for truth, after all. And so fallibilism is a good place to start. The Aristotelian method, which had dominated for two thousand years before Bacon hit the scene, involved a vocabulary of first principles, of proper knowledge, and of higher forms; while denigrating the hypothetical, the conditional, and even the mathematical, as lower types of knowledge. It searched for bedrock, and yet showed no way of actually reaching it – the Socratic Method is a wonderful way to produce refutations, but it is not the affirmative foundation that Aristotle wanted.

To wriggle free from this problem, Aristotle in the Posterior Analytics took us into a new language of logic, demonstration and intuition. And it too didn’t work! No matter how useful or valid a demonstration of this logic was, it could only ever produce knowledge if the premises of the argument or statement were already known and taken for granted. A self-referential, and redundant, syllogism. Something that pushes the problem onto the premises, then onto further premises, and into an infinite regress.

Circular arguments are not impressive things to stake the future of science and progress on. And if we must rely on deduction of this kind, we are doomed. So what is there left for us that can produce results and guide the way to truth? This is the question that births induction for Aristotle, as another method whereby knowledge is drawn from observations, later “extracted from our memory”, and then “followed by a mental discernment of its essence from its many remembered attributes”. Here the hard work of induction – as well as the essence of first principles – comes to us not from that endless chain of demonstrations, that infinite regress, but directly from “mental intuition”.

And Francis Bacon is having none of it. Again, in strong Popperian tones, he talks about our initial observations as always being prone to error, always letting us down, and never capable of the high task that Aristotle demands from them. Then he is onto the imperfections of our language, our powerlessness to describe the true nature – or the essence – of anything. The only thing that is happening when language feels as important and surgical as Aristotle wants it to be is that we are imposing it onto the world, not grasping some deeper reality through it. It can, and should, be dismissed as a “childish” or “naïve” method, unable to produce results.

Instead Bacon compares the human mind to a “broken mirror”, reflecting nothing as it really is, distorting everything. And when Bacon talks about his method of true induction, he is imagining something that would bypass (or correct) all those mistaken ideas about objective reality and the human mind. Baconian induction is a way of weeding through the distortions and finding the reality hidden far behind them. In short, his unique contribution to epistemology is to “extract affirmative knowledge” via a “method of refutation”.

Or call it “error analysis by division”, as Jagdish Hattiangadi does; either way, there are more Popperian pre-echoes to be found here. Though it might be tempting to chart significant sections of reality all at once, and then decipher them at large, it is a better method to limit things to singular phenomena and singular tests, journaling individual distortions in piecemeal fashion. There are, after all, infinitely many ways to be wrong about a single observation, and infinitely many lenses to view it through.

And so we don’t just build and perform experiments to challenge our theories, but we also do so to challenge our previous experiments. This is what Bacon means when he speaks about the cross-examination of nature, not only the running of tests and the compiling of data, but observing the conditions under which that data was achieved, changing those conditions to see how reproducible the effect really is, and then changing again and again, to root out errors and avoid false conclusions.

Even when an overwhelmingly large natural history is created in this way, we should still expect the result to be absolutely “bewildering”. The slow – and always incomplete – process of whittling things down to individual truths is damaged and difficult too, because the process tends to always involve some retreat to an established metaphysical theory. The stripping-back of error has no endpoint, no clear path, and no transparent indications that things are heading in the right direction; all we can ever do is “attend to the errors at hand” and then try to find more of them (through the building of more and more Baconian natural histories).

Under no illusions about the difficulty of his project, Bacon often referred to it as deciphering a near-impossibly coded message, or finding an exit to a labyrinth. Choose the word you prefer – a Kuhnian puzzle or a Popperian problem – this is a philosophy that doesn’t play the rudimentary game of induction that so many people have posthumously ascribed to it. It also elevates the role and purpose of science to a status that Kuhn and Popper would have approved of: “power over nature”.

Where things diverge is with Bacon believing the foundationalist idea (that all knowledge comes from finding its foundations and building upward from there) that first principles can be reached, known, and used to ascend the epistemological ladder. However, he is always conscious to note that we might always be in error, that there might always be higher or lower rungs still to be explored, that there is never a final rung where the whole project ends, and that – with each step up or down – we carry fallibilism with us as an unavoidable passenger.

So why did Popper miss all of this happy subtlety, and falsely compare Bacon to people like Locke, when he was staring his own philosophical ancestor – his own family resemblance – so firmly in the face? Probably because everyone else did as well. For over two hundred years the academic world surrounding epistemology and the scientific method pretended that it did not exist. Even when Isaac Newton came along – as well as thinkers from the French Enlightenment – championing a near-exact copy of Bacon’s method, the connection back to Bacon was never properly drawn. And tellingly, when the connection was occasionally made, it invariably came with the same mistake that Popper had stumbled into: coupling Bacon with Locke, rather than with the scientific successes that burnt bright around them.

So what would Bacon say about Popper and his sceptical philosophy, if he could look back on it all today? He would look for discrepancies and errors, and after scanning through most of it in nodding approval, stop, scratch his head, and ask aloud just what it is Popper thinks a good scientist should do. Conjectures and refutations, sure! But how do they, and we, affirm something as being true – is it really good enough to consider an unrefuted theory as just the best available option? And if so, how is this not akin to the conventionalism (“the philosophical attitude that fundamental principles… are grounded on (explicit or implicit) agreements in society, rather than on external reality”) that Popper claimed to hate so much?

Or as Hattiangadi puts it:

On the weak fallibilist endorsement of theory, suggested by Karl Popper, we can affirm that our best hypothesis may be true, given the evidence. On the stronger fallibilist endorsement of some theories by Francis Bacon, we can affirm our best hypothesis because it must be true, given the evidence. It must be true because it alone can solve the riddle in the evidence. Its presumed uniqueness makes all the difference, even though our judgment remains fallible.

It is a glitch in his methodology that Popper was well aware of: what to say of, and where to stand on, the truth content of a theory? It is the aspect of his philosophy he was most desperate to address during his life, and it has remained the softest of underbellies to attack since his death. In chapter 10 of Conjectures and Refutations, Popper tried to clean up some of his earlier ideas, trying to explain how a change from one theory to another is appropriately called progress or the growth of knowledge, as opposed to just change. After all, to be worthy of those names, doesn’t something new need to be known, as opposed to something old simply being mistaken?

Popper flapped around in these deep waters, hoping that he might eventually find a raft; or at least learn to swim. This is where verisimilitude comes onto the scene: we cannot say that a new theory is definitively true, but we can say that it has more “truth-likeness” (fewer false consequences and more true consequences than the previous theory). It sounds ideal. No, better than that: it sounds like common sense. And so no wonder Popper stuck to it with such loving attention for so long. It was only in 1974, after multiple iterations of verisimilitude had come and gone, that David Miller and Pavel Tichý finally put the theory to bed. What they showed was unpleasant viewing for Popper: “verisimilitude could not be satisfied by two false theories.”
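
To see why, here is a compressed sketch of the Tichý–Miller argument – my own rendering, in the notation usually used when Popper’s definition is written out ($Ct_T$ for truth content, $Ct_F$ for falsity content), not a quotation from either of them. Popper’s comparative definition says that theory $B$ is closer to the truth than theory $A$ only if

$$Ct_T(A) \subseteq Ct_T(B) \quad \text{and} \quad Ct_F(B) \subseteq Ct_F(A),$$

with at least one inclusion strict. Now let $B$ be false, so it has some false consequence $f$. If $B$ gains a true consequence $t$ that $A$ lacks, then $t \wedge f$ is a false consequence of $B$ that $A$ cannot have (if $A$ entailed it, $A$ would entail $t$), so the second inclusion fails. If instead $A$ carries a false consequence $g$ that $B$ lacks, then $f \rightarrow g$ is a true consequence of $A$ that $B$ cannot have (if $B$ entailed it, $B$ would entail $g$), so the first inclusion fails. Either way, a false theory can never come out strictly closer to the truth than any rival – which is exactly the collapse described above.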

So did Bacon have a point? Popper thought not! He had run out of criteria to make verisimilitude work, but not to defend his overarching theory. In substitution he proposed “three new requirements” of any good theory; things that would allow for the growth of knowledge: 1. It “should proceed from some simple, new, powerful, unifying idea”. 2. It “should be independently testable (it must have new and testable consequences)”. 3. It “should be successful in some of its new predictions; secondly, we require that it is not refuted too soon – that is, before it has been strikingly successful.”

Popper would never have admitted it, but it certainly looks like he is reaching here, searching for an affirmative platform for new theories to sit on. Let’s look at his first requirement: by a “new, powerful, unifying idea” Popper has in mind something like gravitational attraction that connects things as distant as planets and apples. Hattiangadi doesn’t see this as possible or historically accurate. So let’s stick with gravity and with Isaac Newton: all of the phenomena that his new theory unified and connected were, in fact, already connected from the Copernican debate. None of those relationships were made new or unified or powerful upon the arrival of Newton’s theory, only more coherent.

The second and third of Popper’s requirements are protections against the construction of ad hoc theories. Together, they require a good theory to be independently testable, to have independent consequences, and to be able to “pass at least one such test before we abandon it.” And as far as ensuring the forward movement of science goes, they work. But this might also leave us in a state of “normative limbo”. Do we use conventionalist strategies or not? If we do use them, are they only temporary solutions that help us to gain some traction in the search for truth and reality? And how long must we hold onto a theory (not rejecting it) simply because it may pass some independent test in the distant future?

The rabbit holes keep appearing, and it is just much simpler to say that phenomena become unified after we discover discrepancies between previously unrelated phenomena, and when we build natural histories. Requiring a lot of heavy lifting, breakthroughs of Baconian induction happen rarely, but they avoid the Popperian traps of conventionalism and of being unable to affirm the truth of theories.

There are Popperian answers to this… good ones! And of course the work is not yet final, and never will be. But the world of epistemology and the scientific method was split in two by Bacon – between the natural philosophers who followed Newton, and the people who felt that induction had no logical basis to it, and so could not be saved. This line has become an unpleasant one, riddled with its own set of misconceptions and errors; most of which relate back to Bacon himself, what he said, what he thought, and to whose philosophical family tree he belongs.

The largest shame is that most of these mistakes would have been avoided if more people had done their scholarly due diligence. If they had only “concluded that another source of Baconian science, surprisingly enough, is to be found in Francis Bacon’s writing.”

*** The Popperian Podcast #15 – Jagdish Hattiangadi – ‘Defending Baconian Induction’ The Popperian Podcast: The Popperian Podcast #15 – Jagdish Hattiangadi – ‘Defending Baconian Induction’ (libsyn.com)

Whiffs of Induction

In conversation with Anthony O'Hear

*Induction (definition): “the doctrine of the primacy of repetitions” – Karl Popper

If you are going to be a non-hypocritical Popperian, then you are going to have to love your enemies – those people who go out of their way to kick holes in your philosophy. Looking at what he wrote attacking Karl Popper, Anthony O'Hear admits now that perhaps he wasn’t “generous enough” all those years ago, that his “book has a certain nit-picking quality” to it. So let’s be thankful that it does!

In many ways, Popper built his reputation and career on the back of his attack on the inductive method, stabbing wilfully into the flesh, bringing the giant to its knees, and then to its death; waiting for its enormous shadow to finally clear from the landscape of philosophy and logic. The fact that this hasn’t happened is explainable firstly by the craft and intellect and rigor of modern inductivists like O'Hear, and secondly by Popperian scholars failing, almost entirely, to recognise the work that such people have done; instead, continuing to argue against a much cruder – and long since dead – inductivism from philosophical history.

Carrying around the words of Popper, and parroting them in back-slapping agreement, there are vastly too many Popperians out there in the world today who are great at remembering the prose, and yet terrible at living the philosophy. Terrible at admitting to the uncomfortable paradox that, if it is true, then it must also – in many different ways – be false! That these falsehoods need to be chased and hunted and celebrated by Popperians themselves. And that when we run out of ideas in this pursuit, criticism from without should be welcomed as medicine for a sick patient.

When O'Hear says – after his brief nit-picking admission – that, “On the other hand, I don’t think I would go back on the things I have said in the book in general about induction and verification”, he deserves our thanks, our admiration, and our love. He has, after all, written the most punishing, demanding, and difficult-to-evade assault on Popper’s work that exists today. And so, if we are to be true to ourselves and to our philosophy and to the man himself, it is also the best book on Popper’s work that exists today!

Let’s start at the beginning then, with the simplest argument for induction, the crude one: “Suppose, for example we are trying to discover the cause of cancer. We examine a large number of cases and notice that they are all heavy smokers, but that they seem to have nothing else in common. We would then naturally formulate a hypothesis to the effect that it was the smoking that caused the cancer.”

This type of claim might appear reasonably profound and uncontroversial, until you begin to think about all the possible things which could have been “naturally formulate[d]”. In this case it was smoking, but only because on the short list of things in common, it seemed to be the most likely culprit. There were infinitely many more things which all those cancer patients had in common, which were ignored or not noticed: their clothes, their language, where they live, the things they eat, the air they breathe… However it came about that the inductivist researcher selected smoking, it couldn’t possibly have been from compiling lists of all the features of the patients’ lives, and then selecting the one feature which repeated the most (or they would still be working on those lists today).

Our impossible troubles with observing all aspects of an event or phenomenon also stretch out to the question of what constitutes a repetition: “a farmer may see his ploughing his field on two days as a repeated task” writes O'Hear, but the “mouse whose nest is ploughed up on the second day will be more impressed by the distinguishing features of the two days’ activity.” The lesson being, every time we notice a repetition of some kind, it always involves the prior adoption of one or another point of view. This is O'Hear giving Popper his theory-laden dues.

Popper’s view on this is fairly straightforward: nothing that you could learn from experiencing Phenomenon A could help you to understand – or reason about – Phenomenon B, which you have not yet experienced. And when inductivists have tried to moor their theory to firmer ground, they generally haven’t helped themselves, by arguing in regressive circles: what justifies the inductive method? Past successes of the inductive method! What justifies those past successes? The successes before that, and so on.

What happens when the inductive method fails, and an inductivist is facing the disproof of his theory, even within that infinite regress? He runs a little further into the darkness, arguing only that the method or the principle has been misapplied, that relevant differences between connected events weren’t noticed, or that other relevant connections were overlooked. When the inductivist sees the sun rising and falling with regularity, he forms the theory that it rises and sets every 24 hours. He then visits Norway, sees the midnight sun and, instead of admitting that his method and his theory were wrong, he claims they were still correct, just missing a few extra details; details that he can now add after seeing them.

So it is a mistake to build a theory on the assumption that the future will resemble the past. But is it wrong to say that the accumulation of past evidence makes certain aspects of the future probable? For Popper it is, because a theory is either correct or incorrect – probably correct is a meaningless statement. And when you turn the wheels of probability theory on this, Popper appears vindicated. To run the experiment in a “universe such as ours”, the ratio would have to factor in all possible (conceivable) tests and counter-theories. This being infinitely large, the probability of any one theory being correct would be zero (“or something very close to it”).
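
A crude way to see the arithmetic being gestured at here (my own illustrative toy calculation, not O'Hear’s or Popper’s formal treatment): if a theory is weighed evenly against the conceivable rivals that could also account for the evidence, then with $n$ rivals its share of the probability is at most

$$\frac{1}{n+1} \longrightarrow 0 \quad \text{as } n \to \infty,$$

and since the space of conceivable tests and counter-theories is unbounded, the “probability of being correct” of any single theory is squeezed down to zero, or something very close to it.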

Regardless, in small and failed shifts like this, you can begin to see the steady refinement of the inductive method in response to Popperian criticism. And it continues. In fact the hard centre of O'Hear’s inductivism is a chase of sorts – Popper running slightly ahead, O'Hear snapping at his heels, getting closer by each step, until finally they turn down a bricked-up, blackened alley; nowhere else to run: “Popper’s attempt to dispense with induction is unsuccessful. We have found that inductive reasoning, removed from one part of the picture, crops up in another.”

All that talk of probability theory failing to help the inductivists out is turned around on Popper, and turned around on the principle of falsifiability. If it is meaningless to say that a theory is probably correct based on some criterion, then it must also be meaningless to say that it is probably wrong! When a theory fails a test of some kind, at the bones of things this means that an empirical statement has clashed with an empirical observation. But – again back to that endlessly complex world of ours – there are infinitely many ways that this might happen, and not happen. So to talk about falsification without the help of probabilities, Popper and Popperians need a classification system, something that sizes different types of clashes (empirical statement vs. empirical observation), and designates what the consequences should be of each.

And O'Hear is happy to volunteer some of the preliminary work: “If the class of potential falsifiers of x include the classes of potential falsifiers of y and some more as well, x is more falsifiable than y.” For example, take these two statements and try to falsify them: 1. The planetary orbits are circular, 2. The planetary orbits are elliptical. The latter theory sits in a different class to the first, as it requires “six singular statements describing a curve” in order to falsify it, whereas the first theory requires only four.
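
Put in the bare set notation that discussions of Popper’s degrees of testability usually lean on (the symbol $F$ here is just my shorthand for “class of potential falsifiers”, not something taken from O'Hear’s text): if $F(y) \subsetneq F(x)$, then $x$ is more falsifiable than $y$. The circular-orbit hypothesis can already be contradicted by four suitably chosen singular statements, the elliptical one only by six, so the circle claim forbids more, is the bolder conjecture, and sits higher on the scale of falsifiability.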

O'Hear’s help and kindness quickly escape him as we run deeper into things. Back in that infinite human world of possible experiences and observations and statements, theory selection is a fraught and difficult place for Popperian philosophy. How is it – from that limitless space within, and beyond, us – that we ever get around to choosing a single theory and calling it true? Millions upon millions of different theories are capable of explaining any single observation, the vast majority of which are not falsified and never will be. O'Hear and his inductivists have a simple answer: you choose the theory that has been most successful to date, the one whose predictions have most consistently come true.

The Popperian answer is the least impressive part of his philosophy: you choose the theory with the highest degree of corroboration! Meaning you choose the theory that is best corroborated by surviving the most severe tests. (At this stage you should be sensing the blood in the water! But let’s continue). Successful predictions, lots of them, are important things for a theory to have before we consider it to be true. And clearly we prefer theories that survive tests to theories which fail them. Now, even if Popper says that all the successful corroborations in the world won’t give us reason to think that the corroborated theory will continue to be true into the future, will continue to be corroborated, he still has a problem on his hands here.

Imagine you are a commander on a battlefield, and in the next few minutes you have one of two decisions to make. The enemy is attacking from an unknown side of the mountain range in front of you, so you need to either send your reserve troops to the left to reinforce the line, or to the right. With easier terrain, more cover, and better firing angles, all the attacks to date have come from the left flank. So the theory that the enemy prefers to attack from the left is a well corroborated theory. The inductivist agrees – completely! For them, the theory that the enemy prefers to attack from the left is true because it has been successfully predicted time and time again. The difference? The inductivist thinks that this should inform your decision about where to send your troops, whereas Popper thinks it shouldn’t.

Well what is the point of talking about corroboration, or truth in general, if it does not help to guide our decisions, if it does not help us with theory selection? Because, after all, you do need to make a selection in moments like the one just mentioned, despite the best efforts of some Popperians to pretend like you don’t – to say obfuscating things like, I would first have to more fully understand the ‘problem situation’. When induction does its best to meet Popper’s criticism head-on, the shame only belongs with the people playacting as if nothing were actually being said.

The question remains: as the commander on the battlefield, do you take into account the past behaviour of your enemy or not? Do you make your decisions with those patterns in mind, or do they hold zero value? In other words, back to that previous question: what is the point in corroboration if its only worth is to retrodict an explanation for past events, and not to predict future ones? And if it does have some predictive value for the future – if those previously observed repetitions should be factored into your decision making process in some way – then corroboration begins to sound a lot like induction.

It is important to head things off here, and remind any angry Popperians out there just who it is they are talking with. It’s been said once, but it’s worth echoing: O'Hear is not some unsophisticated lout from an earlier time, shouting about the inductive method through drunken breath at passers-by. His claim is not that, because the enemy has always attacked from the left, they will always attack from the left in the future (they might choose to attack from the right to catch you off guard, or the weather might change, making the right flank easier to traverse); only that this past behaviour should be a very important part of your calculations.

So not a hard inductive theory, nor the primacy of repetitions; but a theory of inductive inferences, and the importance of repetitions.

Either way, it is verisimilitude to the rescue – to save Popper’s corroborations from the “whiffs of induction” and from that question: what does it mean to say that something is true? And what does it mean to select one unfalsified theory over another unfalsified theory? To give all of this the sense of purpose and progress that our “intuitive desire” needs, verisimilitude is a way of speaking about truth in terms of our distance from it. Theory A explains a phenomenon, and so does Theory B. Neither is falsified, both produce accurate predictions, but if Theory B explains the phenomenon as well as other phenomena, and/or does so with more precision, we can reject Theory A not because it is false, but because it is less true (has less truth content). In this way we can see the verisimilitude of any theory or statement being “its truth content minus its falsity content.”
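
That definition can be put in a single line – the symbols are the ones generally used when Popper’s proposal is written out formally, not ones taken from the book under discussion: with $Ct_T(a)$ for the truth content of a theory $a$ (its true consequences) and $Ct_F(a)$ for its falsity content (its false consequences), the verisimilitude of $a$ is

$$Vs(a) = Ct_T(a) - Ct_F(a),$$

and of two unfalsified rivals we prefer the one with the higher $Vs$: more true consequences, fewer false ones.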

So we can now make Popperian sense of the act of comparing theories, and of theory selection. But not quite! If you want to appraise the verisimilitude of a theory, then that appraisal will have to rely upon how we view the tests it has passed – it will have to rely upon “inductive background assumptions”. Call it “background knowledge” if you like, but the problem remains – the outcomes of all those tests matter – and if they matter then you have a spoon in the inductive soup. Verisimilitude 2.0 drills harder into the two categories that a theory can hold (truth or falsity) and talks instead of “excess truth-content” and “excess falsity-content”. 3.0 involves deriving numerical values from the accurate predictions of theories. 4.0 is a “language-dependent” version, taking apart the propositional language and primitive sentences.

Collapsing back into the uncertainties of probability theory, into inaccuracy, and into induction, whichever way verisimilitude has been constructed over the years, it has always failed to “fulfil all Popper’s original desiderata.” Adding another layer of problems without solving any, Popper stuck with the theory as it chased its own tail, “continuing to stress the importance of the idea” and trying desperately to save it; behaving in a very un-Popperian way!

So much of this comes back to testing, severe testing. This involves not just trying to shoot down a given theory, but doing so in places where that theory appears weakest and most likely to break: the riskiest predictions, the most unlikely consequences, and the most probable types of counterexamples; tilting the scale – as much as possible – towards falsification. The trouble is, once you have completed a severe test, it becomes less severe the second time, less so the third, and so on. You are repeating the event, and so the risks of falsification diminish. As the tests come and go, the theory in question moves further and further into established fact (background knowledge).

So, in straightforward language: how does a theory move from risky theory to background knowledge? Through the repetition of severe tests… or induction! If a good Popperian doesn’t want to use corroboration as a guide to future success, but does want to claim that well-corroborated theories should become a part of our background knowledge (things that we take for granted in order to test other things), then the inductivist will nod away, saying: at least you have admitted that background knowledge is “covertly inductive”, and that inductive reasoning has its place. Still doubtful? How else, if not inductively, does the reproduction of a test and its outcome make it increasingly less severe?

If not a whiff of induction, then how about a whiff of verificationism? Popper’s philosophy is always running away from direct observations and towards theoretical statements. It goes like this: the statement “this is red” might appear as a clear observation of some object in reality, but by making such an assertion – no matter how clearly red the object in question is – you are impractically committing yourself to that truth holding into the future; for an infinite number of statements and objects. All of which we are not in a position to verify. Will it always look red, under all conditions, from all angles, and with all the coming advances in technology? This scepticism bleeds from Popper’s “general feeling that we can never rule out every possible mistake”.

It is interesting to think in this way, and it is certainly true, but – as O'Hear points out – “this position has no practical import. We do not [and cannot] act as if we might at any time have to revise well-tested empirical judgements about everyday realities”. Popper is making two errors here: 1. Suggesting that there is always evidence for a given observation, 2. Missing the common-sense way in which we talk about things, and why their enduring nature (even if false) is necessary for the “rest of our conceptual scheme” to hold. With Popper speaking in this way, it is hard to imagine how any empirical statement could hold any meaning at all.

If you are sensing some Wittgensteinian tones to this debate, then so is O'Hear, and so is Paul Feyerabend, and so is W.W. Bartley: “Bartley’s innocent comparison of Popperian methodology with a Wittgensteinian language game possibly so enraged Popper because of its closeness to the truth.” And with that, Popper is running fast through the open door of another house that he doesn’t want to be in.

The scientific realism that Popper defends is the claim that our theories actually give us knowledge of the world, as opposed to scientific instrumentalism which claims that our theories are just instruments from which we can get accurate predictions (just as a screwdriver doesn’t need to mirror the screw, our theories don’t mirror the world, but simply help us to bridge the gaps to what we want – they are tools, nothing more). Popper instead values greater universality and greater depth of explanations. He believes that – by encouraging the endless probing of ever more fundamental truths – he can steer his philosophy into anti-instrumentalist seas. At this point in things, and surprised by the weakness of Popper’s argument, O'Hear has lost some of his patience: “an instrumentalist could agree that they are desirable properties [universality and depth] of a tool, because the more applications a tool has, the more useful it is.”

O'Hear also senses a whiff of – unavoidable – relativism in Popper, but this is best left for another day, and for readers of his book. It can all be brought back to a simple claim about what is reasonable and unreasonable to believe. Might it not be the case – and do we not have appropriately good reasons to believe – that it is just “our biological good fortune” to be able to notice regularities in the world? And to then be able to use those regularities to make successful predictions? It would, after all, be impossible to think and to function and to survive if those regularities were to suddenly disappear tomorrow (if you became unable to notice them). And the best arguments for the method of induction come back to just that: practical, common-sense decision making.

Picking through Popper’s work with a maliciously sharp blade, O'Hear eventually finds his way to close agreement; or rather he believes that Popper agrees with him, and not the other way around. Especially when it comes to his claim about physical regularities and their importance for knowledge creation (“at least insomuch as this is acquired by ‘physical methods’”). Saying that our assumptions about the world could always fail us, that we could always be wrong, doesn’t hurt the O'Hearian inductivist in any way.

It is impossible to walk away from this book and this man without thinking that he has a point; a point that scholars of Karl Popper are almost wilfully missing. Other than his argument about Wittgensteinian language games (which now seems uncontroversially true), there are good answers to many of O’Hear’s challenges (as he acknowledges himself); but there is no conceivable way of not being a better critical rationalist after reading the work of Anthony O'Hear – the worst enemy, and greatest friend, of Popperian thought.

*** The Popperian Podcast #14 – Anthony O'Hear – ‘Whiffs of Induction’ (libsyn.com)

Karl Popper vs. Friedrich Nietzsche

In conversation with Ken Gemes

Have you not heard of that madman who lit a lantern in the bright morning hours, ran to the market place, and cried incessantly: "I seek God! I seek God!" As many of those who did not believe in God were standing around just then, he provoked much laughter. Has he got lost? asked one. Did he lose his way like a child? asked another. Or is he hiding? Is he afraid of us? Has he gone on a voyage? emigrated? Thus they yelled and laughed.

The madman jumped into their midst and pierced them with his eyes. "Whither is God?" he cried; "I will tell you. We have killed him, you and I. All of us are his murderers. But how did we do this? How could we drink up the sea? Who gave us the sponge to wipe away the entire horizon? What were we doing when we unchained this earth from its sun? Whither is it moving now? Whither are we moving? Away from all suns? Are we not plunging continually? Backward, sideward, forward, in all directions? Is there still any up or down? Are we not straying, as through an infinite nothing? Do we not feel the breath of empty space? Has it not become colder? Is not night continually closing in on us? Do we not need to light lanterns in the morning? Do we hear nothing as yet of the noise of the gravediggers who are burying God? Do we smell nothing as yet of the divine decomposition? Gods, too, decompose. God is dead. God remains dead. And we have killed him…

Within the walls of academia, Ken Gemes has lived a “schizophrenic” life. As a young man, he looked out at his fellow human beings – people striving for things they didn’t understand, failing at every step, and then trying again to make sense of things – and saw most clearly the “mess” of it all. As an Australian with a “strong bullshit detector”, he had read Freud and had quickly seen through the gloss and veneer to the pseudoscience underneath. He wanted a surer footing for his work, a colder place of theorems and facts, somewhere far from all that mess… he became a philosopher of science!

Flying to Yale, Gemes settled into the philosophical rigor of things by publishing on verisimilitude, Bayesianism, hypothetico-deductivism, confirmationism, verificationism… It was all parochially interesting, and as he had hoped, all very cold! Then one day he decided to write a “jokey” article, poking hard into the delicate ribs of Karl Popper. Famous for his anti-inductivism, Popper had claimed that Object A having Property F would give you no reason to assume that Object B should have Property F. Even if there were a thousand such objects with Property F, this would still give you no reason to assume anything about Object B.

Gemes’ short paper (three pages long) was a mathematical proof (using probability theory) showing that this was all wrong – and from across the Atlantic Ocean “Popper went ballistic”. One of Gemes’ supervisors at the time, Clark Glymour, was strolling around an international conference, and found himself in an elevator with Popper. Not knowing Glymour’s relationship to Gemes, Popper turned to him and launched into a tirade: “There is some idiot in America. This guy called Ken Gemes, who wrote this idiotic article. Do you know anything about it?” Cornered in the shrinking elevator, Glymour was almost nose-to-nose with the Austrian, and so had nowhere else to look but directly into those hot eyes, muttering softly under his breath: “Yes, he’s my student.”

Then life intervened. Gemes was dealt a few “difficult punches”, and found that his old taste for the escapist heights of academia was lost. For personal reasons, he needed to find something more “flesh and blood”. Freud and his gang of psychoanalysts were out there, but they were also still bullshit. Luckily there was someone else, someone who had pre-echoed much of the psychoanalytic movement, just in much more interesting and insightful ways. He was also the most flesh and blood philosopher imaginable: Friedrich Nietzsche. And it all begins with the death of God… and a Madman.

Despite what we might like to think about ourselves, Nietzsche believed that we modern people have never properly appreciated what it means for God to be dead. Most of us have a rather simplistic, mechanical view of apostasy: once you have given up God, you have also escaped religion, and are free from its nightmares, its hangovers, its meaning. This is all wrong! Like it or not, we are still living in “Christian times” – it is still there, floating in the air around us, and in the values that we hug most closely: the value of compassion and the value of truth!

God was once the animator of all we were and wanted to be, including truth itself. So once he is dead and no longer providing for us, no longer telling us what to follow and what not to, all things become suddenly – and painfully – up for grabs. Why should I continue to care about my neighbour, loving him as I would myself? Why hold on to the value of truth as if it is still sacrosanct? Why should I care about it at all? It might take 200 years, Nietzsche wrote, but a deep “Dostoyevskian… nihilism of disorientation” of this kind is on its way: “incredibly prescient of him to see our current situation in this so-called post-truth era”.

The other nihilism is the “nihilism of despair”, a collapse of the human spirit, rather than a sudden map-lessness and a drifting at sea. Without God the great meta-narratives of our lives – the ultimate values behind things – can never be fully realised (even though they still exist). And it is here, on questions of nihilism and truth, that Nietzsche and Popper would have found each other to be soldiers in the same trench, fighting the same battle, plodding on through mud and disease and injury, looking for an audience for their overlapping philosophy.

Both nihilisms have a Popperian flavour to them, and a Popperian disgust: disorientation is an appropriate thing to feel, in fact it is the natural state of things as we all doubt, search, and scrape to find a truth that we can never be certain of. But being lost should not lead us anywhere close to nihilism, with all its weak, relativistic horrors (there is no such thing as objective truth).

Still, despair is worse! It steals away the beauty of our world, and leaves us sulking about not having the final solution to our problems. Always with work to do – with knowledge to birth, and creativity to apply – to be map-less is a wonderful thing; it is what gives life its purpose, it is what keeps us going, it is how those ultimate values will be found, and it is the one thing that ought to keep those feelings of nihilism at bay.

And there is nothing wrong with embracing your critical, destructive side and announcing proudly to the world, as Nietzsche does, “I am dynamite”. Dynamite is just criticism on a larger scale, and with a larger target – it is bold, and a worthy thing to be proud of. Contrary to popular readings, all that nihilism that Popper would have hated, Nietzsche hated too – talking about these dark turns of the mind, and of group psychology, only in order to predict their coming, and to warn against them: a diagnostician, never an advocate.

The nihilist out there – and perhaps within us all – has a ready-made answer to this: if your whole purpose is to go out into the world and falsify everything you see and hear, then sooner or later you will be left with nothing, or at least nothing but disorientation and despair. The Popperian counter is: if you try to falsify everything, you are indeed left with nothing… except for all the things that aren’t falsified. Whether or not Nietzsche would agree with this is another problem of bad readings.

The concept of objective knowledge – and its possibility – gets a hard rattling within the pages of Nietzsche’s books, and leaves all of us on the outside of his mind wondering what on earth he is on about: surely he can’t be making such a simple, nihilistically-flavoured mistake, as to say that the pursuit of truth is nothing more than a religiously infused error. The problem runs first back to language, and then more deeply to the type of philosopher that Nietzsche is, and finally on to what his fatherly hopes are for his readers.

In reverse order: when Nietzsche predicts that nihilism will become the future of Europe, he is saying this as a doctor might to a wilting patient in his surgery. Staring into the jaundiced eyes, the thinning hair, the loose teeth, the folds of obesity, the declining posture, the worrying blood tests, the horror-esque habits, Nietzsche is warning people about the ghastly future that awaits them only so that they might avoid it. He wants those people – he wants us – to fully appreciate the meaning of the death of God, to construct our own master narratives, creating and celebrating replacement values; to become gods ourselves.

The type of philosopher he is: anyone who reads Nietzsche well reads him as a psychologist – prefiguring all the best work of Sigmund Freud, without any of the scientism (the effort to apply the veneer of science to places where it does not belong). He is a loud champion of the Dionysian spirit, not because he prefers the emotional side of the human condition – the intoxicated, disordered, passionate side – but because, ever since Socrates, the Apollonian spirit has won the day, with logic, reason, and progress dominating our truth-obsessed lives.

Caring deeply about his patients, Nietzsche sees a coming clash, something that a little more Dionysian indulgence can help with: our deep psychological need to understand truth vs. our need to find meaning in the world. The God that we so ruthlessly killed did more than explain the otherwise unexplainable; he also gave significance and purpose to our small, individualistic lives. Religion both bound us together and lifted us up… the fact that it was also harmful – particularly to creative spirits – and needed to be replaced, should not cause us to lose sight of why it held for so long, and why it animated so many lives.

The lament, the malaise, the depression, the disorientation, the despair, the nihilism of our times, is largely because the world no longer appears enchanted in the way that it once did. Without our myths, all we have is truth. And an obsessive, compulsive lust for more and more of it – pushing the things that give life its meaning back into darkness and scorn. Nietzsche is here cheerleading for a return of myth, fairy tales and folklore, because this is the aspect of his readers’ psyche which then (and perhaps still) needed the biggest champion.

Finally back to language, and to a place where Popper and Nietzsche stand far apart: working in a time of obscurant philosophical language, and of leading intellectuals deliberately writing so as to be misunderstood, Popper lived by a refreshing motto of a kind: anything that can be said, can and should be said simply and clearly. Nietzsche didn’t play games with his prose like those contemporaries of Popper, but he was consciously writing for an audience. And so the pages of his books are dripping with bombast, with drama, and with inspiration. He was screaming into the darkness, hoping to catch the ear of the next great creative talent, and to guide them away from the herd. To pull apart his language analytically, looking for the simple and clear meaning, is to lose sight of the philosopher and his philosophy.

But for both men truth does matter! It is never certain, it is always open to change, and yet it does exist out there, waiting to be found by us. For Popper, us literally meant all of us. His philosophy was written to finally – and scientifically – put to bed a nagging idea from history: that great men drive it, and drag the rest of us along in their wake. Again, Nietzsche was having none of this. That audience for whom he was writing, for whom he sweated through illness, psychosis and rejection, was a rare breed of character – the self-creating, elitist Ubermensch (superman or overman).

The rest of us (whom Popper cared so much about) Nietzsche was happy to dismiss with the forgetful tone of a schoolyard bully: “Let the values of the majority rule… in the majority.” Just who made it into the ranks of Nietzsche’s Ubermensch wasn’t so clear: for a time his friend Richard Wagner (along with other composers like Beethoven) was there, and then he wasn’t; then his old “teacher” Arthur Schopenhauer was there, and then also removed. The one constant name? Nietzsche himself.

Questions of ego and grandiosity aside, all this talk of becoming supermen has its place within Popperian philosophy. His work was more than just a document for domination and power; Nietzsche had in mind a much more internal, individualistic triumph. He was encouraging his followers to stretch the boundaries of what it was to be human, to create new and beautiful things, to ignore the disapproval of the masses, to be their own metaphorical executioners (as well as executioners of what they hold most precious) – so that they can become much, much more than their “human, all too human” origins: “all great things must bring about their own destruction through an act of self-overcoming”. Or… to be bold, to take risks, to embrace fallibility, and to enjoy the Popperian pleasure of burning your own theories to the ground.

We might all have the desire and the capability to overcome what we are, but Nietzsche wants more from us – he wants us to have the will as well. In more of those Popperian tones, the world isn’t some colourless project, but a value-laden (or theory-laden if you prefer) interactive phenomenon. There is nothing that can be said about the world which doesn’t come with a set of presupposed values attached to it. Every action and thought and observation involves a thick tapestry of values – so why not cultivate your “will to power” and make those values your own, make them worthy of their place within your mind, and within the world.

And of course, being Nietzsche, he also wants us to suffer for it… a lot! Not just to prove something to ourselves or to others, nor to achieve a known outcome, but because it is where life and meaning are to be found: “To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities—I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them”. He has no pity, and he wishes you wouldn’t either, because they are the lucky few who are capable of making their lives worthy of the name.

While the rest of us are left to wallow in the herd with our bitterness and weakness and fragility and resentment and self-loathing and fear and hatred and procrastination and spite and impotence and malevolence and vengefulness and shortcomings and instability and malice, all polished-up into an excusing slave morality. People who avoid suffering, and so will never amount to anything.

During his shortened life, Nietzsche suffered as much as anyone. He also felt horribly ignored, even having to pay for his final few books to be published, telling anyone who might listen to him that he had been born posthumously. Popper too suffered, felt underappreciated, and shared with Nietzsche an unconcealed dislike for academia and academics. But Popper and his work were wildly successful (professionally speaking) during his life. As Popper’s last years were spent on university payrolls and receiving knighthoods, Nietzsche’s were in poverty, illness, obscurity, and eventually madness.  

When Ken Gemes looks back at his own turn towards the work of Nietzsche, he sees it as “really disjoint from my philosophy of science work”. I am not so sure that it is! For me, there are more similarities between the two than people tend to think, and when both are done well they ought to be boiling red with emotion and import.

Perhaps most of the confusion is a matter of timing. Within his era, Popper had largely won the debate over the history and the future of science and knowledge creation, while Nietzsche had won nothing but a life of extreme suffering. Then slowly, bit by bit, building into an irrepressible drumbeat, Nietzsche’s posthumous birth has happened. And while the name Karl Popper might one day fade and disappear (hopefully because his ideas become so mainstream as to not need the reference), the name Friedrich Nietzsche will never die, never wilt, never again lack for an audience:

…How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it? There has never been a greater deed; and whoever is born after us, for the sake of this deed he will belong to a higher history than all history hitherto.

Here the madman fell silent and looked again at his listeners; and they, too, were silent and stared at him in astonishment. At last he threw his lantern on the ground, and it broke into pieces and went out. "I have come too early," he said then; "my time is not yet. This tremendous event is still on its way, still wandering; it has not yet reached the ears of men. Lightning and thunder require time; the light of the stars requires time; deeds, though done, still require time to be seen and heard. This deed is still more distant from them than the most distant stars, and yet they have done it themselves."

- Friedrich Nietzsche, The Parable of the Madman (1882)

*** The Popperian Podcast #13 – Ken Gemes – ‘Karl Popper vs. Friedrich Nietzsche’ (libsyn.com)

The Constitution of Knowledge

In conversation with Jonathan Rauch

 

“If you want to clear the room at a cocktail party,” writes Jonathan Rauch, “say epistemology.” It is one of those horrible words: lengthy, a mouthful of enunciation, and isolating with its polish of professional jargon. It is also, largely, redundant! That poor guy next to you at the party, speed-drinking his martini for an excuse to walk away, might otherwise pause mid-gulp, turn to face you, even lean in for a closer listen with the light returning to his eyes, if you only replaced epistemology with truth or knowledge or information…

Philosophers build careers around ‘ologies’, and with every tweak of language (every ‘ology’ of this kind) comes a little refinement, a little more accuracy, and a new corner of academia that can build upon itself. But for the unfamiliar, the uninitiated, or the simply forgetful amongst us, these words only bring frustration, a stroke-like numbness, and quickly emptying rooms.

Which is a shame, because in many ways epistemology is also the catchphrase of our day… albeit expressed a little differently, and couched in a thick layer of doom and gloom. Browse the shelves of any surviving bookstore and titles like these will stare back at you with Best Seller stickers across their covers: “The Misinformation Age”, “Truth Decay”, “Post Truth”, “The Death of Truth”. A completely new genre of publication: The Epistemic Crisis Book!

Everywhere you look, people are obsessed with the question of knowledge, and seemingly distraught at the fiasco it now finds itself in.

The first thing that must be pushed back upon here is that hinge-word “now”. It is always tempting to imagine that what we are going through is unique, a special case of pain and difficulty unlike what has come before, or what will come after. However you try to pinch things though, there is nothing particularly new about dealing with deceptive leaders, with the lies of our fellow citizens, with the overriding interests of tribal loyalty; with discerning truth from falsity. In fact, it is the perpetual challenge of our species and of all others: either discover what is true about the world and so be able to adapt, and even thrive; or fail to discover truth, stagnate as a result, and then eventually die.

Still, it would be much too uncharitable to say that all these Epistemic Crisis authors are simply conjuring up an emergency in the hopes of making some quick cash. They are not idiots, and most of them are not callous. They are on to something – a feeling, a sense, an unpleasant tingle in the bones that things are different this time around. And just what that something is, is an area for which Rauch has a well-trained ear.

In the 1990s, decades before most people began noticing that the soft, embracing kindness of modern social justice movements was morphing into the poisonous, intolerant inquisitions that we often see today, Rauch was publishing a foreshadowing book on what was to come. He also had a painful memory, and a shared history (of a kind), with this new generation of witch-hunters.

As a young, gay man of a slightly different era, Rauch grew up with the worst of things. All around him were laws and prohibitions against who he was: against marriage, against employment, against businesses, against sex, against affection, against biology… So the gay rights movement was born from this sin, this public and open discrimination. Rauch watched as people marched against this injustice, as they filed petition after petition, as they demonstrated, launched legal battles, and “confronted the psychiatric profession with the irrationality of its pathologising of homosexuality”.

Above all it took bravery. Every one of that early generation of activists suffered. As did everyone who joined later! In 1996 Rauch allied himself with the public battle, fighting for the legalisation of gay marriage and, though hoping for more, was resigned to the fact that “I might see some success in two or three generations, if ever.” Eight years later gay marriage was first legalised in a single state, Massachusetts. By 2015 it was legal across all fifty. And Rauch was left with the happy, and impossible to ignore thought that, “I should have had more confidence in liberal science. You cannot be gay in America today and doubt that.”

But writing back then, as gay rights turned a fast corner towards victory, Rauch could sense that his “confidence in liberal science” wasn’t so widely shared amongst his fellow travellers; that there were plenty of doubters in Gay America. Their liberal society had given them the freedom, and the right, to call out prejudices against them. To loudly challenge those prejudices – and to defeat them. But it also gave their enemies the same freedoms and rights to fight back against those defeats – to try to steer society and its laws back into bigotry once more.

So with their personal slice of salutary progress in the bank, many of these activists decided enough was enough. That liberalism – those freedoms, and those rights – which had been so useful to them, was suddenly a problem, a weapon that needed to be destroyed, lest it prove useful to someone else. “Today I fear that many people on my side of the gay-equality question are forgetting our debt to the system that freed us.”

It is an old and worn metaphor, but one that is useful and clear. It pushes to the heart of things, and to why those activists were making such an enormous error. That metaphor is the “marketplace of ideas”. A place of widely discordant views and opinions, all swirling around in competition. A place not of chaos (though it can often look that way) but of constant criticism. Rather than being a license for hatred (though it is also that by default), this marketplace is a mechanism for the discovery of truth. Somewhere in which ideas rise and fall on the strength of their arguments, and the quality of their explanations. A world where people talk directly to each other, and change their minds when – and only when – they are convinced to do so…

But in the years since, Rauch has begun to have his doubts. And it comes back to epistemology, a critical eye on those Epistemic Crisis authors, and a long, unpleasant gaze at the modern landscape of fake news, of misinformation, of relativism, of cancelling, of shaming, of trolling, of weaponising news, of normalising lies and falsehood, of the siloing of communities, of politicising truth, of Donald Trump:

Long before Donald Trump began his political career, he explained his attitude toward truth with characteristic brazenness. In a 2004 television interview with Chris Matthews on MSNBC, he marveled at the Republicans' successful attacks on the wartime heroism of Senator John Kerry, the Democrats' presidential candidate. "[I]t's almost coming out that [George W.] Bush is a war hero and Kerry isn't," Trump said, admiringly. "I think that could be the greatest spin I've ever seen." Matthews then asked about Vice President Dick Cheney's insinuations that Kerry's election would lead to a devastating attack on the United States. "Well," replied Trump, "it's a terrible statement unless he gets away with it." With that extraordinary declaration, Trump showed himself to be an attentive student of disinformation and its operative principle: Reality is what you can get away with. 

George Orwell imagined a shadowy and nosey government. One that branded free thought as traitorous, that made individuality impossible to the point of death, and which slowly suffocated its citizens into passivity, compliance and adoration. Thomas Hobbes saw us all in terms of our animal origins, fighting to bloody ends over limited resources… unless restrained by a powerful and controlling state. When most people think of social suppression they have something like this in mind: a Leviathan stomping them into silence and conformity. And that without such a structure, we could – and would – all flourish in new and beautiful ways; letting our full range of cognitive abilities off the leash.

What is too often missed is the internal mess of recurring errors that we carry within us: biases. We tend to overestimate our chances of success; we overestimate the probability of eye-catching (but rare) events such as terror attacks; we like to extrapolate from the familiar data points of our own lives, believing they must be universal to everyone else; we tend toward conformity within the groups we belong to; we notice evidence that confirms what we already think, while ignoring evidence that might contradict us… Studies have documented well over a hundred identifiable biases and errors of these kinds, and this doesn’t take into account the whole category of meta-biases – those biases that blind us to our other biases.

All of this is a long way around to saying that reasoning is hard… very hard. And that to get anywhere with it, we first need what Charles Sanders Peirce called “network epistemology”. With truth being so elusive, we need a community around us – people who hold everything that we say to account with criticism and error-correction. Whenever knowledge creation isn’t a social behaviour, the enterprise is doomed! “It will appear”, Peirce wrote, “that individualism and falsity are one and the same.”

Science – when done well – is just such an escape from individual falsity. A process of constant trials and errors, of conjectures and refutations. An institution that doesn’t just find mistakes, but which revels in their discovery; hoping to find as many as possible, as quickly as possible, so that they can be just as quickly error-corrected.

The professional ranks that Rauch joined out of college looked a lot like this. As a young journalist hoping to tell “enlightening” and “true” stories (in a stereotypically solitary occupation), he couldn’t possibly have imagined how little space he would have to himself:

Apart from the lonely process of writing a first draft, I could do nothing on my own. Facts were gathered from interviews and sources; analysis was checked with experts; every sentence was edited, copy-edited, and often fact-checked; tipsters suggested story ideas, sources waved me off bad leads, and challenges to my claims percolated in conversations within the newsroom and outside of it. The sense of having joined something much greater than myself, and of swearing allegiance to the exacting standards of a great tradition, made the enterprise of journalism appealing and compelling to me even on the days when the practice of journalism seemed grinding and routine (which was often).

Today it is the changed and changing nature of the media environment that has Rauch doubting what he once believed: whether an open space for reporting and opinion and information gathering and data storage and publication and fact checking and second sources and third sources – of transparency, of evaluation, of interviews, of witnesses, of cross checking, of investigation, of trusted sources, and of critical feedback – is enough. Instead, we all need to be paying a lot more attention to the structure of the “knowledge-making business”.

The ‘marketplace of ideas’ metaphor implies – and needs – a lot more than a raucous, unguarded, unpoliced, unimpeded space where true theories survive and bad ones die. In much the same way as governments need constitutions and institutional arrangements to ensure their proper function, our Marketplace needs delicately tuned social settings for it to work: an agreed-upon collection of rules, a constitution of knowledge.

Some of this is merely a problem of bandwidth. Popperians can talk endlessly about the free flow of conjectures and refutations, of love-inducing problems, and of beautiful solutions, but only a very small fraction of the swirling thoughts, philosophies, notions, concepts, designs, and criticisms are ever likely to be noticed. So rather than imagining the open spaces of a Market, with all the available produce labelled and displayed for your careful inspection, a more apt metaphor might be what Rauch calls the “social funnel” – a place where, even if the persuasion of an opponent were possible, the battle to first grab his attention is near-hopeless.

The modern media landscape – with its targeted reporting and endless variety – appears to drive this social funnel ever narrower. Take a quick glance at the viewing habits of the average citizen, and you are likely to feel that all is lost; that we are all splintering into epistemic tribes, communities that talk across each other, but who never meet to hash-out their issues. “The commercial internet was born with an epistemic defect” writes Rauch, “its business model was primarily advertisement-driven and therefore valued attention first and foremost.”

And perhaps it is here where things take their worst turn. For all its promise and undoubted good, today’s internet appears to be accelerating untruth at dizzying speed. With such a singular focus on attention and ad sales, outrage becomes the ugly cousin, belatedly let out of the cupboard after the party is over and the guests have left; running around the already messy living room, burning pent-up energy and making an already unpleasant scene look a whole lot worse.

When a quiet news day means a loss of profit, the temptation to play upon an audience’s impulsivity is hard to ignore. A sprinkling of fake news and disinformation might just be the way to spice things up, and keep eyes on your channel. But so might a little “troll epistemology”, whereby you poke at conspiracy theories, at desecration, at insult, and at shock value, with the single-minded hope of winding people up. Call it a “firehose of falsehood” or “flood[ing] the zone with shit”, it is the type of tactic that has no interest in creating knowledge, in settling disagreements, or building trust. It only wants people off their seats, red-hot, and ready to fight.

Way off in the distance, but still visible, is another – and just as troubling – world of news media that bathes each day in “emotional safetyism”. These are often the traditional bastions of good journalism, the large shining lights of the industry who turned against pluralism, diversity, and value-rich disagreements, deciding instead that such things were suddenly too dangerous for the average listener to handle. Filtering their content through prudish self-censorship, they look down upon their readers, listeners, and watchers with a child-rearing concern: Sure, I can handle the truth of the world, but most people aren’t built like me. I am special. And they need protecting, from worry-inducing knowledge, and from themselves.

So Popper’s model needs new settings, for a new world. But what are they? What should this constitution of knowledge look like? It begins with a minimalist compromise, a balancing of simple, easily agreed-upon rules – something that ensures the dynamism that knowledge creation requires, but which also leans heavily on stability. An accommodation that cuts through the inherent antagonisms of the current system, and which produces a much more functional institution (akin to the medical or legal establishments). A place with slightly more procedures, hierarchies and restrictions, but only insofar as better, more positive, and more reliable outcomes are achieved: a Madisonian epistemology to complement the Popperian incumbent.

But of course, as much as anything, culture matters here! This all starts with people pushing back, unmuting themselves, finding their courage, speaking out… and in doing so “remember[ing], you are never as alone as silencers want you to believe.” Still, all this talk of cultural change and institution building can be a little overwhelming, and a little too isolating – much like all that previous talk about epistemology was. So how does one go about this without clearing the room at the cocktail party? Start small, with things that are easy to follow, easy to recall, easy to understand, and easily consented to, yet which will have disproportionately large downstream effects (a lesson that many new Popperians should take to heart).

So take two stone tablets, and carve into them the following maxims:

* No one gets the final say!

* No one has personal authority!

 

*** The Popperian Podcast #12 – Jonathan Rauch – ‘The Constitution of Knowledge’ (libsyn.com)

Wittgenstein's Poker

In conversation with David Edmonds

 

There he stood, flames at his back, weapon in his hand, yelling the small room into silence; his voice cracking with anger. Ludwig Wittgenstein was the preeminent philosopher of the day – an “atom bomb” of thought and intellect. Those watching on, trying to smuggle in a word or two through the “tornado” of noise and emotion, were only slightly less eminent in their own right. Most were household names in (and beyond) the world of philosophy: John Wisdom, C.D. Broad, Alfred Ewing, Richard Braithwaite, G.E. Moore, Margaret Masterman, Bertrand Russell, and an increasingly smug looking guest around whom all the fuss was building.

On that wet autumn night, Karl Popper had been invited (for the first, and only, time) to attend the regular meeting of Cambridge University’s Moral Sciences Club. He was asked to bring with him a philosophical “puzzle”. Instead Popper showed up with a handful of philosophical “problems” and a grudge of sorts against the club’s president: “I admit that I went to Cambridge hoping to provoke Wittgenstein into defending the view that there are no genuine philosophical problems, and to fight him on the issue.”

Following established tradition, “the guest opened the meeting”… and that is where all the courtesy, kindness, and tolerance ended. Puffed-up for battle, Popper went immediately for blood and victory, attacking the wording and implication of his invitation. Wittgenstein literally sprang from his seat to challenge the “upstart” in all his “foolishness”. Back and forth they went, interrupting, berating, shouting each other down, until Wittgenstein stormed over to the fireplace, and pulled out a glowing red poker. Waving it around in strong, violent strokes, he demanded that Popper provide a single “example of a moral rule”.

With the poise and delivery of a stand-up comedian, Popper replied “Not to threaten visiting lecturers with pokers.” Everyone roared with shock and laughter, while the slighted Wittgenstein dropped the poker on the ground and “stormed out of the room, banging the door behind him”.

Or did he?

The clash between Wittgenstein and Popper had been a long time coming. Both men were raised in the heady atmosphere of inter-war Vienna; both from assimilated Jewish families. Popper grew up firmly middle class: his father was a prominent lawyer, and his home was decorated with rare luxuries – pianos and a “library of ten thousand books”. Yet even before the hyperinflation of the early 1920s wiped out the savings of the Popper family, they – along with everyone else in Austria – were being looked down upon by the Wittgensteins. Not out of contempt or animosity of any kind, but from the disinterested and escapist heights of extreme wealth.

A “business genius”, Ludwig’s father Karl had built an empire on the back of the steel trade. In the evenings prominent scientists, musicians, painters, sculptors, and all manner of people from Vienna’s cultural elite would stop by the Wittgenstein estate for dinner, drinks, and debate. With the image and riches of the American Rockefellers or Carnegies, Ludwig might not have known who Popper was, but Popper certainly was aware of Wittgenstein.

And he judged Wittgenstein accordingly, telling people that Ludwig “couldn’t tell the difference between a coffee house and a trench”, and that his book Tractatus Logico-Philosophicus, “smelled of the coffee house.” On this point, Popper couldn’t have been more wrong! During the First World War, Wittgenstein volunteered for duty, and refused the safe posting that his family connections would have afforded him. Instead he asked to join the frontlines as an artillery officer, and fought until captured, quite literally in the trenches. And it was there, in that mud and fear and agony and exhaustion and death, that Wittgenstein wrote the Tractatus.

At each turn in his life, Wittgenstein continued in this way – against the grain of assumed privilege. The youngest of nine children, three of Ludwig’s older brothers committed suicide, and he once confessed to a colleague that “all his life there had hardly been a day, in which he had not at one time or another thought of suicide as a possibility.”

After his release from a prisoner of war camp, he trained as a teacher and later worked at a rural elementary school. He left the job in a hurry after beating a particularly slow-witted student unconscious. He then tried his hand at architecture. Before that he was a gardener at a monastery, and had previously studied to become an engineer at the University of Manchester. Then at the height of his philosophical fame, Wittgenstein left it all behind for the isolation of a log cabin in the arctic forests of Norway; remaining there for years, communicating with the outside world only through letters. And when his father died, Wittgenstein chose to give away the entirety of his vast inheritance.

Back as a young boy in Austria, Wittgenstein briefly attended a state school (he had been home schooled until then), the K.u.K. Realschule in Linz, only a few grades apart from his fellow pupil, Adolf Hitler. Decades later, when Nazi forces were annexing Austria, all that long-denied privilege was suddenly – and understandably – too hard to ignore. Travelling back to Vienna from his naturalised British home (and then on to Berlin), Wittgenstein cut deals with influential politicians, paid bribes, and leant on his old family connections. And it worked! Wittgenstein’s sisters were allowed to live out the war years in safety, while their fellow Jews of Vienna were being dragged away into concentration camps; amongst whom – without the money or resources of the Wittgensteins – were 18 members of Karl Popper’s family.

Fleeing the collapse of mainland Europe, Popper and his wife pulled hard on the few strings they had. They applied for British citizenship and were rejected; applied again, and were rejected again. Searching the globe for safe harbour, only a single, solitary offer ever materialised: the quiet hills of New Zealand… literally the other side of the globe. Even then, the Poppers had a hell of a time securing the necessary exit permits and visas (“our departure problems are appalling”). All this, while Wittgenstein was quickly handed British naturalisation on the back of personal recommendations from the country’s elite, and then rode out the war on a Cambridge scholarship.

A world away in the eerie silence of Christchurch, what little philosophical news managed to bounce its way across the oceans to Popper’s ears always looked and sounded the same: Wittgenstein, Wittgenstein, Wittgenstein… Ludwig walked the halls of Trinity with a flock of acolytes in tow, copying his fashion, mimicking his mannerisms, and adoring his every word. So much so that Bertrand Russell was soon saying out loud that Wittgenstein had surpassed him, becoming the teacher in their relationship. Wittgenstein was also becoming a man famous beyond his philosophy, a mystical public figure – while no one beyond a small (and parochial) group of close colleagues even knew Popper’s name; an outsider amongst outsiders.

It wasn’t the first time that Popper felt unjustly excluded from a world dominated by Wittgenstein’s shadow. As a young academic growing up in inter-war Austria, each and every Thursday evening Popper would sit at home, stewing in isolation, painfully aware of the party he wasn’t invited to.

That rare collection of Europe’s leading scientists, mathematicians, and philosophers known as the Vienna Circle would meet each week to build upon the then-fashionable idea of logical positivism (“the view that scientific knowledge is the only kind of factual knowledge and that all traditional metaphysical doctrines are to be rejected as meaningless”) and to discuss novel breakthroughs in knowledge creation. But what they spoke about most of all was Wittgenstein, walking through his Tractatus line-by-line, revelling in its complexity and in the intellect of its author.

Invited to join the meetings countless times, and even made an honorary member of the Circle (considering how important his work was to them), Wittgenstein still never bothered to turn up, not even once. Popper on the other hand was desperate to join – publishing articles that he hoped would catch the Circle’s attention, while also chasing down its members on university campuses – and yet was never invited.

To Popper’s credit though, he had a very good – and unpopular – reason for wanting the eyes and ears of the Circle: he thought they were wrong… about everything that mattered!

Obsessed with building a criterion of meaning, the logical positivists believed that there were only two types of meaningful statements: “statements such as ‘All bachelors are unmarried men’, equations such as ‘2+2=4’, and logical inferences such as ‘All men are mortal; Socrates is a man; therefore Socrates is mortal’. And those which were empirical and open to verification: ‘Water boils at 100 degrees Celsius’, ‘the world is flat’ (which, being open to verification, is meaningful even if false).” All other statements – those not fitting within these categories – are literally meaningless by this account.

‘Does God exist?’ is impossible to verify, and so is classified as a meaningless statement/question. But so is the claim that ‘Murder is wrong’, as it too sits beyond the scope of verification, and therefore belongs in the intellectual rubbish bin next to ‘Does God exist?’. Even if you follow the logical positivists in this line of reasoning, and accept that ‘Murder is wrong’ is indeed unverifiable, why should that also make it meaningless? Why bundle the two together?

As you might expect, all this hinges around just what counts as verification. And it is here where the Vienna Circle found the philosophy of Wittgenstein so useful, digging into the Tractatus and embracing Wittgenstein’s ideas as their own. Ideas that Popper disregarded as “facile”.

Popper pushed back at the Circle by “polish[ing] up a two hundred year old artefact” from David Hume: the problem of induction. Restated here by Bertrand Russell: “the man who has fed the chicken every day throughout its life, at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.” Or to move this image out of the farm and into the laboratory, “no number of experiences can prove the validity of a theory.” So, in short, verification of this kind was impossible!

But Popper wasn’t done there. He railed against the Circle’s central project: the attempt to delineate sense from nonsense. He did so not only because by following this criterion they were carelessly discarding important philosophical problems, but also because the important demarcation was rather that between science and non-science. The question of meaning – as Popper saw things – simply had nothing to do with it.

And as Popper would note, the Circle and its theory could not survive by their own standard: logical positivism is itself a metaphysical claim, one that is unverifiable in just the same way that ‘Murder is wrong’ is unverifiable. So by its own light, logical positivism declares itself meaningless.

If these were the cold arguments of fact and theory, where Popper took things a little more personally was with Wittgenstein’s claim that there was no such thing as philosophical problems, only philosophical puzzles. That is, all apparent problems were really only problems of language – things that could be soothed away with some “linguistic therapy”. And all such issues could be easily avoided by simply not using language in unfamiliar ways (“when language goes on holiday”). For Wittgenstein, when we unravel important questions in philosophy, we are not exposing a hidden logic or underlying explanation, but rather just reminding ourselves of how language is properly used.

This was much too much for Popper to handle. What he saw in Wittgenstein here was an indifference to the real world around him, an indifference to the important questions within that world, and worst of all an indifference to the everyday people out there who longed for progress and a little less suffering: “Wittgenstein used to speak of ‘puzzles’, caused by the philosophical misuse of language. I can only say that if I had no serious philosophical problems and no hope of solving them, I should have no excuse for being a philosopher: to my mind, there would be no apology for philosophy.”

Which brings things back to the heat and fury within the Moral Sciences Club, and to that red-hot poker waving around the startled room. Popper had come for a fight, with a lifetime of bitterness and injustice to correct, while Wittgenstein had walked into an ambush of sorts. And yet even without this tensely built stage, things were always unlikely to go well between the two men. Both were unbelievably intolerant at the best of times… and notoriously so.

Bryan Magee once described the typical meeting with Popper like this: “an intellectual aggressiveness such as I had never encountered before… in practice it meant trying to subjugate people.” A former student and colleague of Popper’s, John Watkins, remembered typical seminars where invited lecturers would get as far as “announc[ing] his title”, before “Popper would interrupt” so much that “the speaker got through his title and nothing more.” Badgered with the energy and fixation of a schoolyard bully, these visiting dignitaries would routinely leave the seminars in tears, leading many people to joke under their breath that Popper’s book The Open Society and Its Enemies ought to be renamed The Open Society by One of Its Enemies.

And yet somehow, in terms of sheer prickliness and hostility – as well as a complete lack of social etiquette – Wittgenstein managed to outdo even Popper. The novelist Iris Murdoch said that Wittgenstein imposed “confrontation on all his relationships”, with his favourite advice to aspiring young students of philosophy being to “abandon the subject” and instead “work with [their] hands”. As a school teacher he would beat his pupils beyond even what were then considered reasonable standards. While living in Norway he once threatened to attack his neighbour with a stick. And when, in 1929, Cambridge was manufacturing a way to award Wittgenstein a doctorate for his Tractatus, he scoffed at and belittled his examiners, Bertrand Russell and G.E. Moore (titans of philosophy in their own right), from across the table.

He ended the meeting – and the questioning of his work – abruptly by slapping Moore on the shoulder and saying “Don’t worry. I know you’ll never understand it.” Wittgenstein would later smirk that Moore was living proof of just how far someone could get in life with “absolutely no intelligence whatever.”

So, it would seem, the fateful meeting at the Moral Sciences Club was always going to end with fireworks, anger, and intrigue. But perhaps not intrigue that would linger, and remain hot, over half a century later!

In 1998, while David Edmonds and John Eidinow were working as journalists at the BBC, a copy of the Times Literary Supplement fell on their desks. Within its pages was a letter claiming that Popper’s account of the meeting with Wittgenstein “was a lie”. A week later another letter arrived from a new author, saying that they had witnessed the encounter, and that both Popper and the previous letter were wrong. Then, remarkably, another week later someone else wrote in saying that no, everyone else was mistaken and that he had the true story.

Edmonds and Eidinow were hooked, and began digging into the question, recovering old documents and compiling witness accounts. Memory is a difficult thing, but what we can – and should – say from this reporting is that Popper did, in fact, lie. Or at the very least he embellished certain details in his favour to fluff out a long-cultivated self-image, as well as the pages of his autobiography.

It was, and is, a damn good story nonetheless!

 

*** The Popperian Podcast #11 – David Edmonds – ‘Wittgenstein's Poker’ https://popperian-podcast.libsyn.com/the-popperian-podcast-11-david-edmonds-wittgensteins-poker

Deutsch's Theory of the Pattern: The Widespread Compulsion to Legitimise Hurting Jews

In conversation with Richard Landes

 

There they were in the no-man’s-land between blood and bullets, Israeli military on one side, Palestinian forces on the other. Muhammad al-Durrah was 12 years old and screaming in terror, his father struggling to shield him behind a small concrete barrier. Then the camera suddenly shakes, dust sifts across the lens, focus returns, and al-Durrah is lying dead in his father’s lap. This was September 30, 2000, two days into the uprising of the Second Intifada, and the world had an image that it couldn’t ignore.

Media of all stripes and languages began running headlines about an Israeli genocide of the Palestinians, Jewish indifference to life, and the cold-blooded killing of a child. Across the Arab world al-Durrah became an instant martyr, with pictures of his dying body soon used on postage stamps around the Middle East; while the heat and intensity of the Intifada burned brighter with rockets, suicide bombings, and more of that gunfire.

And it was all “the cheapest kind of fake!”

The video of al-Durrah and his father that was released to the media wasn’t even close to what critical journalists would ordinarily accept. Instead of raw, continuous footage, we got six separate ten-second clips, cut and glued together; all grainy, all unfocussed, and showing the before, and the after, but not the actual killing. The reporting was that al-Durrah was shot multiple times in the legs and stomach, but there was no visible blood. Blame was immediately and persistently attached to the Israeli military, but when you look at the angle that the bullets must have come from, the shooters could only have been Palestinian.

The discrepancies built thick and fast for anyone asking honest questions… and then an extra couple of frames were leaked. Al-Durrah is lying dead, his father lost in grief and trauma, a few more seconds tick by, and the young corpse rolls slightly onto his side, moves his hands away from his face, opens his eyes, and stares knowingly down the camera lens. Not dead, not wounded, just an actor on a dusty stage. When Richard Landes first saw the footage, he immediately understood what it was: “Pallywood” (the Palestinian practice of forging evidence against Israelis, in the hope of turning global outrage against them).

By far the most alarming thing about these obvious forgeries is the greedy, unquestioning way in which they are accepted. Despite clear echoes of The Protocols of the Elders of Zion, a prominent French anchor on a mainstream news channel stared down her audience after watching the al-Durrah footage, and said “this picture erases/replaces the picture of the boy in the Warsaw Ghetto”. With Israel as the “new Nazi”, everywhere you looked people like our news anchor were lining up to free themselves, and their countries, from the hungover shame of the Holocaust.

There was a heat and an energy built around this particular instance of Pallywood that had Landes worried. After the Holocaust, open anti-Semitism and the manufacturing of modern blood libels (false, and persistent, allegations that Jews murder Christian children for religious rituals) largely stepped back into the shadows as the accounting of those horrors stepped out into the light. Sure, there were always those communities so tight within their own ideology, and religiosity, that they never felt the appropriate guilt, never corrected course, and never dealt with where their bigotry had led them.

But in those years of post-war reckoning, the Western world had been largely different. Though a catchphrase of a sort, “never again” was taken seriously, and hugged close, as a reminder that they too had shared in Hitler’s hatred, and so also bore a distant responsibility for his camps. “The West had resisted blood libels,” Landes says, “until this one!”

On the streets of cities in France, chants of “death to Jews” began reverberating for the first time since the Nazi era. Jews were being attacked and abused and harassed with a righteous glee – “given what is happening in Palestine, what do you expect? You have brought this on yourselves.” – while most people turned away in silence and acceptance. No left-leaning political party uttered a concerned word, no anti-hate NGO did its self-declared job and campaigned against it, and most journalists didn’t even bother to file a report.

The obvious fraud of the video, combined with the near revelry and joy that people took in being able to legitimately target Jews again, suggested something atavistic, and intensely memetic, lurking just below the surface. The sight of ordinary, and otherwise rational, people, going about their distant lives, turning their heads in a happy instant, and uncritically accepting the most outlandish of claims, took some explaining; and mere anti-Semitism just wasn’t going to cover it.

On the other side of the world from Landes, and deep into the writing of an upcoming book on irrationality and self-destructive behaviour, Oxford physicist David Deutsch had an uncomfortable answer:

I went to visit a Twitter friend, the physicist David Deutsch. He’s writing a book about patterns of irrational thought that sabotage human creativity and progress. He has a chapter on the Jews in which he identifies a pattern (he calls it “the Pattern”) concerning the Jews. The key to people’s behaviour in this regard, he argues, is the need to preserve the legitimacy of hurting Jews, for being Jews. This legitimacy is much more important than actually hurting Jews. And it targets only the Jews. It is not, accordingly, either a hatred or a fear, a form of racism or prejudice in the conventional sense, even though it can lead to those feelings and attitudes. But it is actually unique. No other group can substitute for the Jews as the target whom it is legitimate to hurt.

The truth of this is going to be found, or lost, in the details. Of course, a pattern needs to be explained, but much more so, it needs to be shown. It is not a claim that something is there, but rather that something repetitious is there. So where does someone start to show that the hatred of Jews is more than your garden-variety bigotry? At the beginning!

“His blood be on us”, reads the gospel of Matthew, “and on our children.” Deicide! The murder of God! It is a crime that no other people have ever been accused of, and a guilt which no other children have ever inherited. In fact, it wasn’t until 1965 – a full 20 years after the end of the Holocaust – that the Catholic Church finally, and begrudgingly, revised its official doctrine which named all Jews, past, present, and future, as directly responsible for the death of Christ; as the heirs to Matthew’s blood curse.

The worry around this issue burrows a little deeper still… and in a slightly different direction. Since the Jews are the keepers of the Old Testament, the killing of God (or the son of God) can strangely be excused through that same canon. After all, Christ did not stay dead on that cross, and was quickly resurrected for his trouble. More importantly, as every Christian will tell you, his death was the event which brought salvation to all of mankind. So as Landes humorously puts it, “perhaps the Jews should be thanked by Christians, not vilified?”

Maybe this could have been true, if history had played out a little differently. But the Jews have a richer and more persistent stain on their record, something even more unforgivable than murder: denial! As the gatekeepers of prophecy they saw Christ in the flesh, and rejected the obvious truth (for all Christians) that he was the long-talked-about, and anxiously awaited, biblical Messiah. How could they not see what was before them? The answer for so many people is, they did! And they lied… because it is in their nature. This is a sentiment similarly shared by many Muslims, substituting Muhammad for Jesus.

For The Pattern to hold as true, more is needed. What is being proposed here by Deutsch, is not that an ancient hatred still lingers in the minds of many people today, but something much more insidious and destructive. Here the Jews are not just the scapegoats for the ills and violence of the world (no other group has ever been substituted for the Jews), but a special category of delight; where people find joy and celebration in being able to harm them – a prostrate, and deserving enemy.

Harm is the important word there, because it is not about being able to commit violence, but about the legitimacy of doing so, if it were ever chosen. Most people operating under The Pattern never have, and never will, actually raise a fist against a Jew – rather what they want is for fist-raising against Jews to be legitimate and correct… and they want every Jew to know that this is the case, well-founded in evidence, and always hanging over their heads. It is about preserving the right to harm Jews, not just about physical violence.

Embraced when it is there, and craved in its absence, The Pattern has the resilience and dopamine-high of an addiction. And as with every addiction, work-arounds and excuses are always at hand, always manufactured, so that those who are afflicted can happily, and greedily, indulge themselves.

The Enlightenment – with all its commitments to reason, liberty, tolerance, and progress – became a problem for The Pattern. Squeezed out of respectable circles for the first time, The Pattern would slowly adapt and evolve and find its way back to prominence through a sleight of hand, best encapsulated by the fraudulent Protocols of the Elders of Zion. Here the rapacious and cunning Jews were cast as the manipulators of our kindness and naivety.

It, and similar hoaxes, ran like this: why do the Jews support democracy, a free press, and the free flow of capital? Well it’s not because they are a benevolent people who believe deeply in these ideas. And not even because the Jews prosper when everyone plays by democratic rules. But rather because they are seeking to enslave all of mankind, and these are the tools with which to do it! So even the Enlightenment quickly becomes a Jewish plot for global domination.

But this retrofitting of evil intentions has its limitations, and tends to hit the heroin addict’s vein with the disappointing tang of methadone. Hot flushes, cold sweats, and sleepless nights, become frantic heartbeats, horrible cramps, and skin so itchy that it is scratched away to blood. Soon the physical symptoms are intolerable to the point of crime and hospitalisation, with the addict now in so much withdrawal that he walks the streets in complete desperation – willing to do anything, anything, for a hit of that long denied drug (The Pattern).

And when it was eventually found again, it came with all the pent-up frustration and anxiety of a sudden overdose – with Hitler, the Third Reich, the Holocaust, and six million dead Jews. Spurned out loud for so long, The Pattern was back again in the public eye. And with so much catching-up to do, never had it been so efficient, so ruthless, so explicit, and so proud.

It was the kind of binge that addicts only wake up from on hospital beds, in jail cells, or in front of judges (that’s if they wake up at all). As quickly as it exploded in European rampage (and collapsed with defeat at the end of the Second World War), The Pattern was hushed back into silence, sent away to a court-ordered rehab centre, where again it would become much less acceptable, and much harder to express.

Despite the horrible recognition of where The Pattern had brought us all, there were still pockets of the world that never paused – for that small twinkling of self-doubt and reflection. Particularly in the Arab world, blood libel-type stories found a new voice and urgency at the very moment at which the full extent of Hitler’s crimes was under post-mortem. In their minds, and in their propaganda, those cunning Jews had tricked the world again, faked all those stories of massacre and genocide, and were plotting as always for global domination.

What could possibly explain such an inconceivably tone-deaf response to human suffering? Answer: The Pattern was under threat. And the danger was then, as it always had been, and as it remains today, either defend it or risk losing it forever. The scorn and the embarrassed looks from (some) Western eyes are an easy price to pay for the continuation of your favourite, and most satisfying, addiction; after all, they will likely thank you for it later on.

But the greatest threat to The Pattern was still on its way. Riding that wave of international sympathy and guilt, in 1948 the unthinkable happened: the state of Israel was established. Unthinkable because now every Jew in the world had a safe haven and a home, free of persecution and violence. There it was as a bright and unavoidable banner, speaking the new language of internationalism and human rights – the Jewish people had sovereignty… they were protected within the borders of their own country. The might of international law was on their side.

And of course, it was a return home of a kind. In the second century CE, the Romans had expelled the Jewish population from what is currently Israel, Gaza, the West Bank, and western Jordan. Renamed after the Philistines (ancient Jewish enemies), the region became Palestine by Roman decree. And there it stood, while the homeless Jews wandered from persecution to persecution, through pogroms, inquisitions, and violent conspiracy theories.

It wasn’t until 1894 that the Zionist movement was founded, out of the disappointment, the fear, and the exasperation of the Austrian journalist Theodor Herzl. With his own assimilation going badly (along with that of most of European Jewry), Herzl quickly found momentum for the cause, and the first Zionist congress was held in Basel, Switzerland, in 1897. The issue swirled, locations were offered and reneged upon, and importantly the control of the great empires collapsed; replaced slowly by the ideal of the nation-state.

With little to expect from the kindness of strangers, Jewish philanthropists took matters into their own hands and began buying up land in Palestine for the resettlement of European Jews; people who would re-join the surviving 10,000-odd population that had escaped Roman expulsion, and who had quietly eked out lives within the city of Jerusalem. 40,000 Jews arrived from Russia between 1905 and 1914, as they were hunted out of Tsarist society. 600,000 more arrived between then and the Second World War.

With each step toward safety and sovereignty, Arab violence grew worse; most sharply in 1929, when the Jewish community of Hebron was massacred and run out of town. In 1933, as the Nazis stepped into control of Germany, the Grand Mufti, al-Husseini, immediately contacted the German Consul General in Jerusalem, offering to help with Jewish eradication. By mid-war, the Mufti was broadcasting Nazi propaganda, organising attacks on British troops, and recruiting Arabs to enter the war from Yugoslavia. For his efforts, the Nazi high command appointed him to the ranks of the SS with the title of Major-General.

The war ended in 1945, Hitler had lost, and in that same year the League of Arab States (or Arab League) was formed. Its first order of business? Declaring a regional boycott of all Jewish farms, Jewish stores, and Jewish employment in Palestine.

But the Jews had a mandate from the freshly minted United Nations, an allotted territory, and for once in the long history of their people they had momentum at their backs. The British left, and Israel was declared with David Ben-Gurion as the first Prime Minister. America officially recognised the new state, then it was the Soviet Union, then the rest of the world followed suit… except for a significant holdout, with not a single Arab country willing to acknowledge that Israel had a right to exist.

As long as Israel stood, the Jewish people had borders from which to legitimately defend themselves, and so The Pattern was in sudden, and permanent, danger. Then as the world waited to likewise recognise another new state – the long since championed Palestinian state – in the neighbouring portion of the territory, those same Arab countries instead took the first modern step towards delegitimising Israel: the armies of Jordan, Syria, Lebanon, Iraq, Saudi Arabia, and Egypt, invaded Palestine.

In control of Gaza, Egypt systematically expelled the entire Jewish population. Jordan, in control of the other territory, forbade the very use of the word Palestine, instead naming it the West Bank. And this is where relations remain frozen, with the stifling realisation that any recognition of an independent Palestinian state – with its lines drawn next to the Israeli state – would, by its own existence, also formalise the existence and borders of Israel.

Attempts to destroy Israel through calculation and war came in 1948, 1967, and 1973, and went, in each case, with Jewish victory and a deepening sense of Arab humiliation: they were losing their grip on The Pattern.

But the failure of open warfare was no deterrent; it was simply replaced by the Intifada, a persistent campaign of low-level, non-stop violence. The Intifada ran from 1987 to 1991, after which – through all the bombs, stabbings, riots, shootings, and thrown rocks – 20 Israelis had been murdered by Arabs under the banners of Islamic Jihad, Hamas, and the Palestine Liberation Organization (PLO). During the same period, 528 Arabs were murdered by their fellow Arabs, and those three organisations. Their crimes? Collaboration! Disagreeing with the violence, warning Jews about impending attacks, or for simply maintaining friendly relationships with the Jewish community.

Worried that Palestinian support was shifting to Islamic Jihad and to Hamas, in 1993 the PLO decided to meet with Israeli negotiators to discuss a peace deal. The meetings took place in Oslo, and due to the anti-collaborator atmosphere that had been drummed up during the Intifada, it all happened in complete secrecy. Led by Yasser Arafat, the PLO walked out of the summit two years later with the signed Cairo Treaty in hand, and an agreed-upon “two-state solution”.

The Palestinian side of the bargain required a repudiation of terrorism, an end to anti-Jewish and anti-Israeli propaganda, and the altering of the PLO constitution to remove language which promised the destruction of Israel. For his efforts, Arafat pocketed the Nobel Peace Prize, flew back to Ramallah, stood before a crowded mosque, and demanded that Palestinians “continue their Jihad until they had liberated Jerusalem” (both in its violence and territorial claim, a violation of the Cairo Treaty he had just signed).

The official PLO emblem remained unchanged as a map of the entire pre-1947 region (before the existence of Israel). Fatah’s emblem (Arafat’s core constituents within the PLO) remained the same as the PLO’s, just with rifles and grenades added for a little flair and emphasis. Palestinian schools continued to teach and lecture about the importance of destroying Israel. Anti-Jewish blood libel stories became a core part of Palestinian culture, taught, repeated, and accepted uncritically. An ancient law stating that any Arab who sells their property to a Jew will be executed and denied a Muslim burial was revived. And when Arafat was subsequently rewarded by the Palestinian electorate for running away from what he had agreed upon in Oslo (receiving 90% of the votes), terrorism and violence increased sharply across Palestine.

In 1999, Ehud Barak was elected Prime Minister of Israel, and along with American President Bill Clinton, made a desperate attempt to rescue the commitments of Oslo. Hashed-out at Camp David, almost every demand of the Palestinian negotiators was acceded to. East Jerusalem (including the Jewish holy sites there) would become part of Palestine, all Jewish settlements not contiguous with Israel would be evacuated (by force if necessary), and the whole of Gaza and 96% of the West Bank would form the new Palestinian state, with Israel adjusting its own border and agreeing to land swaps to make this possible.

Given almost all that he had asked for – and so also on the precipice of having to give up The Pattern – Arafat again panicked. He walked out of the meetings, promising to return in a few days to polish the final details. Instead he flew back to Ramallah and launched the Second Intifada against Israel. Soon the world was watching those faked images of Muhammad al-Durrah bleeding to death on his father’s lap, and with that, everyone (not just the Palestinians and the Arabs) was free and happy to roll again in the thick mud of Jewish hatred.

The language had shifted to appease modern sensibilities, and to provide enough cover for those who still cared about such things. Unchanged in purpose and intent, the hatred of Jews had become the hatred of Israel! All just another step, another evolution of The Pattern, “needing to preserve the legitimacy of hurting Jews, simply for being Jews”.

Once something like this hooks itself into culture, a huge amount of effort and persistence is needed to break it… as well as even to notice what it is in the first place. All patterns exist in their details as much as their explanations, and so with The Pattern it is found in what passes unchallenged nearly every day, from the mouths of activists, to global newscasts, and into accepted truth:

* ‘Israel is an apartheid state’ – there are 1.9 million Arabs living inside Israel (and growing each year) with full rights; while the Jewish populations across all Arab countries combined have shrunk from 800,000 in 1948 to less than 9,000 today (Egypt: 80,000 in 1948 down to 100 today; Yemen: 60,000 in 1948 down to 50 today; Iraq: 140,000 in 1948 down to 5 today; Libya: 35,000 in 1948 down to 0 today; and so on…).

* ‘Israel is an occupying power’ – there have never been any such similar claims made about Egyptian, Jordanian, or Syrian occupations of Palestine, and when asked what is meant by the phrase “occupation”, the response is often a reference to 1948 and the creation of Israel itself. Meaning that the very existence of Israel is what many people consider to be “occupation”. A fact further emphasised by the multiple offers of statehood to the Palestinian authorities, and the rejections of these offers.

* ‘The Israeli military are a vengeful institution who massacre Palestinians (men, women, and children) – indiscriminately’ – Israel has been attacked multiple times by its neighbours, and has never launched a war itself that was not in self-defence. When Israeli forces enter places like Gaza in response to rocket fire, they take the unprecedented – and near comically cautious – steps of warning Gazan residents beforehand through leaflets and text messages. And when Israeli soldiers do commit crimes or human rights abuses, they are publicly tried in court; while Palestinian suicide attackers are still honoured as martyrs, with their families paid lifetime stipends for their sacrifice to the nation.

* ‘Israel is committing a genocide of the Palestinians’ – the Palestinian population has increased from under 1 million in 1950 to over 5 million today.

Before sitting down to start work on his upcoming book, David Deutsch was part of a project to write an up-to-date history of Israel. He decided to start things off with a bit of humour… and from best intentions, and light-hearted fun, The Pattern still managed to squirm its way into the light, jumping back off the page at Deutsch and his fellow authors:

Once upon a time, we wrote a parody history of Israel, intended for the blog Setting The World To Rights (now in suspended animation), in which every sentence contained at least one lie.

But the reactions of many of our friends who read it were alarming. Instead of falling about laughing, most of them read it as fact. These were not opponents of Israel, but people sympathetic to it. We hadn’t realised how pervasive the prevailing distortions and falsehoods are. Considering that the parody began: “Judaism is unique among religions in being exclusive to a particular ethnic group (the Jews). It teaches (in its doctrine of ‘the Chosen People’) that all other races are genetically inferior to the Jewish one and that Jews are entitled to rule over them”, this was alarming.

We realised that we couldn’t put the parody into the public domain. After all, The Protocols of the Elders of Zion is also a crude forgery, but is now part of the standard repertoire of the Pattern usually called ‘antisemitism’. For instance it is in the Charter of Hamas. We didn’t want to be responsible for another anti-Jewish canard that might last the next few centuries.

 

*** The Popperian Podcast #10 – Richard Landes – ‘Deutsch's Theory of the Pattern - The Widespread Compulsion to Legitimise Hurting Jews’ https://popperian-podcast.libsyn.com/the-popperian-podcast-10-richard-landes-deutschs-theory-of-the-pattern-the-widespread-compulsion-to-legitimise-hurting-jews

Karl Popper, Friedrich Hayek and the Future of Liberalism

In conversation with Jeremy Shearmur

 

In the 1960s the London School of Economics (LSE) was in a rare and fascinating position – it had a walking, talking, godlike presence hanging over it. A name that everyone already knew would soon be carved into statues – and who would define the institution for centuries to come – was alive and amongst them. He only turned up once a week, but even that was enough. From being the first (and only) staff member in the Department of Logic and Scientific Method, to forming a research academy and attracting a remarkable list of international students and colleagues, Karl Popper had made the LSE his own, in a number of ways.

This was the strange world that Jeremy Shearmur walked into as a fresh-faced undergraduate student. Popper was not the kind of professor to waste his time on the grind and repetition of teaching, but in those corridors and those lecture halls, and in the language of his professors, Shearmur could see a firm and domineering shadow:

I had a ‘Popperian’ education in philosophy, but largely not from Popper.

I am struck, when looking back at my time at the L.S.E., by the fact that Popper’s approach to philosophy was at the center of the course, but the Department was characterised by lively debates about it, rather than its uncritical acceptance. 

When those undergraduate days were finally out of the way, Shearmur briefly wavered between pursuing a PhD and a career in librarianship. And it was while testing the waters of this latter path, during a graduate apprenticeship at Durham University library, that a remarkable opportunity fell into his lap. The then-assistant to Karl Popper was leaving his post, and a quick replacement was needed. Through word-of-mouth the position was offered to Shearmur; he responded like this: “The answer was yes: it was, for me, a bit as if a young Catholic had been asked if he wanted to work as personal assistant to the Pope!”

With the leader revered and elevated above everyone else, a flock of adoring acolytes dropping everything important in their lives for the opportunity to be a part of things, and a healthy dose of fear and insecurity and insignificance and punishment hanging in the air, the Popperian Church was, unmistakably, church-like. But this was also a church, and a Pope, who refused and rebelled against infallibility and claims to higher knowledge. The papal robes were torn and muddied, the stained glass cracked and unwashed, and the people who attended regular mass were loud, aggressive, and critical… of everything.

When he wasn’t ferrying library materials to Popper’s home or chasing down obscure translations, Shearmur began to sniff out his own role within the church, something beyond his current position of altar boy; he started looking for problems within Popper’s philosophy. And what Shearmur found was a world both too small, and too expansive, at the same time. He saw unnatural and unnecessary limitations tying down and restricting the possibilities of critical rationalism. In his mind, criticism should be stretched beyond science and into metaphysics; after all, knowledge of all kinds is a “social phenomenon”.

If we put logic, formalization and technical nit-picking back in its proper box, this will enable us to reflect on wider issues which were opened up in The Logic of Scientific Discovery but were not pursued there. This includes the role of ‘methodological rules’ (which, I have suggested elsewhere, may be understood in partly sociological terms). This would, at once, get rid of the artificial division between critical rationalism and sociological approaches to knowledge, but would allow us to pursue the latter with a Popperian concern for critical appraisal and the improvement of our institutions. 

Embrace just such an approach, break those barriers free, and what do we get according to Shearmur? A new, important, and central appreciation for classical liberalism.

It is one of those wonderful products of history which, due to its success and adoption in the world, has lost much of its gloss and meaning and value. The roots of classical liberalism drag us back to medieval Europe, to Britain, to kings, monarchs, duties to god, and the idea that individual rights are due to the generosity of governments – loving gifts that cannot be turned around as weapons against those same governments.

Classical liberalism emerges in this moment – insisting upon a sea change in our understanding of those rights. Rights are not gifted to us but rather owed to us. It is also not the role of government to present us with our rights but to protect them against encroachment. And when they cannot live up to this, or when they are the ones doing the encroaching, then those same rights are what we use to remove and replace them with someone better. Classical liberalism turns around the bulk of human history and places consent at the centre of governance.

And in many ways this sounds Popperian… but also in many ways it is not! The problem begins first with drawing a line between the Two Karl Poppers. The earlier Popper, writing in The Open Society and Its Enemies, was a much more interventionist man, worrying about the dangers of “unregulated capitalism” and preferring a firm hand of state control within economic life. The later Popper twists out of this and lands on a streamlined, minimalist, indirect view of the state, whose role is never, ever, to attempt anything as presumptuous as to try and make people happy.

This, it appears, is where we can find the slow-working influence of Popper’s old friend at the LSE, Friedrich Hayek, and the entry point for negative utilitarianism (seeking to only reduce or minimise disvalue, rather than trying to maximise value). Here the state, in all its power and reach, is largely asleep behind the wheel, only ever shaking itself awake – and to action – in order to protect individual freedoms. Why would anyone want such a thing, why would anyone hope for a largely impotent government?

It adds to clarity in the field of ethics if we formulate our demands negatively, i.e., if we demand the elimination of suffering rather than the promotion of happiness. Similarly, it is helpful to formulate the task of scientific method as the elimination of false theories (from the various theories tentatively proffered) rather than the attainment of established Truths.

Just how that ethical clarity and the elimination of false theories comes about needs some explaining. And here Friedrich Hayek’s firm hand can almost be seen beneath Popper’s ink as he begins to explain the raw, exposed, uncorrupted, and constantly reaffirmed relationship between buyer and seller.

It may not feel like it for most people, but every time you wander into a store of any flavour, hand over your money, and walk out with a product of any kind in its place, you are becoming an important and indispensable part of an epistemological loop. A clean and unambiguous system of accountability, reaffirmed anew with each and every purchase.

Famously for Popper, knowledge creation is a process of conjectures and refutations. And that refutation part can often feel like the trickiest. It is where we test our theories against reality and against other theories… it is where other people begin to really scuff things up. The laws, the culture, the institutions, the bureaucracies that we build all have a logic behind them, a reason for their creation and for how they look. They tighten our hold on certain things. They protect us, and our values, from unwanted, poorly conceived, and short-sighted criticism. They prevent some of that scuffing!

And so we have a problem! Albeit a problem that doesn’t immediately sound like one.

Imagine that you are an activist at some point in our recent history – someone with truth and justice on your side. You fight and you suffer and you are arrested and you lose your job and are threatened. Yet each day you continue, no matter the cost.

At first it is just you! Then slowly another activist joins you on those streets, then two, then a handful… After decades of this a majority of your fellow citizens are on your side; you have very slowly, bit by bit, managed to convince a country of voters to correct its errors. Old discriminatory laws are rescinded, better ones are adopted in their place, and as far as you can see – as far as morality and law touches your life – things are changed for the better.

But none of this lets you sleep any easier at night… not yet! People can be frivolous and manipulated, they often change their minds, and if you could convince them to drop their previous, deeply held, beliefs (as you just did), then what is there to stop your enemies from doing the same? Tomorrow they might rally their troops, resharpen their arguments, copy your tactics, take to the streets, and eventually persuade everyone to walk back all those changes that you fought so hard for.

So to protect your achievements you decide to build something around them, something resilient to shifts of opinion and protest, something isolated from the whims of those masses. You create an institution! Through a series of steps, hierarchies, procedures, and bureaucracy you clog the muscles of your enemies with a rich and sustaining lactic acid. From here out, social change becomes more arduous, painful, time consuming, and ambiguous.

But this still won’t do it, this still doesn’t seem safe enough. So you decide to wrap your hard-fought progress up in a new type of reactive culture, one that says ‘To question this is evil, to try and change this is hateful, and to even doubt this is bigotry’. And it often works… because, as chance would have it, it is simply much easier to centralise knowledge (moral knowledge in this case) than it is to create it.

Here we find ourselves at a place of good intentions, and yet somewhere profoundly un-Popperian. For Popper error is all around us, at all times; so much so that error is the natural state of things. Which is why that second part of Popper’s epistemology (refutation) matters so much. Surrounded in all directions by so many wrong ideas, the only hope we have to make any sort of progress is to actively seek out these mistakes, and to remove them wherever they are found. We must embrace a wet blanket of constant and biting criticism. And we must avoid the creation of hollowed-out spaces in society, spaces where criticism is seen not as a corrective but as a problem.

Back to Hayek! Back to negative utilitarianism! Back to those impotent governments and a little more “clarity”. Well-meaning errors of the kind mentioned above find life within an open marketplace much less hospitable. So this time imagine a different pathway in life: instead of being an activist out to change the world, you are a small business owner out to get rich. And you start where all things do – with a problem… multiple problems.

You are hungry and cold (problem) so at first you look for a job to buy food and clothes (conjectured solution), but no one will hire you (criticism). Still in need of money (problem) you decide to sell your skills, labour and time by, for example, fixing shoes on a street corner (conjectured solution). You make some money, but not enough (criticism) and you don’t enjoy the work (criticism again). You decide to change the scale of your business (conjectured solution), and to make things a little more comfortable on yourself you rent a small shop and fill the shelves with products that you think people want to buy (conjectured solution again).

No customers enter your shop (criticism). You decide that you need a better location, somewhere with more human traffic, but when you move your shop across town (conjectured solution) despite now having plenty of customers browsing your shelves, no one ever buys anything (criticism). You need new/better products so you get a bank loan and invest in new merchandise (conjectured solution). About half of your new products begin to sell quickly, but the others remain unwanted and untouched (criticism). You use your profits to replace your poorly selling merchandise with different ones (conjectured solution).

Now about 60% of your products are selling reasonably well, but the other 40% are still cluttering-up your shelves (criticism). You continue error-correcting like this in a piecemeal fashion (conjectured solutions), but trends and shopping habits all change (more criticisms), and you are always chasing the tastes of your customers (endless conjectured solutions). You are also constantly discovering new and unexpected ways in which you are failing to meet their expectations (endless criticism).

It all sounds exhausting! But this is the case for all knowledge. And in this unhindered marketplace, where everyone has the free choice of what they sell and what they buy (and at what price), knowledge is being created at an incredible pace; with every single transaction.

So now imagine yourself in a much more familiar and intuitive position (at least for most people): being asked, as we are at every election, to choose how we want to distribute goods and services around the community; and to decide upon a way of life. To do this you can place your trust in politics and political leaders, with all their inherent leanings toward compromise, avarice, and corruption. Or you can follow Hayek’s lead and embrace the free flow of conjectures and refutations, of trial and error, of delicate fine-tuning to the needs and desires of the community, to a place where critical feedback is thick in the air.

And in appropriately Popperian ways, it is a place that doesn’t ask much from us… nor anyone!

Few things have the ability to drown most people in intellectual deep water quite like macroeconomic theory with all its talk of opportunity costs, supply chains, globalisation, sovereign risk factors, surpluses, deficits, recessions, depressions, elasticity, liquidity, seasonal adjustments, asset turn-over, marginal standings, business cycles, companies, industries, resources, gross domestic product, inflation, stagflation, classicalism, Keynesianism, monetarism… And yet none of it matters, none of it is necessary, none of it needs to be understood in order for you – or anyone – to participate.

It is an argument that former British Prime Minister Margaret Thatcher often used against the overreach of the state. With all the might and time and resources that governments have, they are in unique positions to understand all that terminology above, in all its detail and permutations. In fact, this is what they work so hard to achieve – developing grand theories and predictions for every corner of economic life, from the largest corporation to the smallest transaction. And yet despite all the resources they have, and despite knowing all that they do, they consistently get it all very wrong! Their predictions fail, their theories collapse, and so they start again with another grand enterprise.

The error that they are making has to do with – in hard Thatcherite terms – “local knowledge”. It is a mistake so common, and so hard to shake, that we have all fallen for it, and suffered from it, ourselves.

Back to our imagination, and with a lack of appreciation for local knowledge turned against us. It could be a new law passed through parliament, a new ethical guidance for your workplace, a new regulation for your community, or simply, perhaps, a recommendation from a friend. Regardless, someone, somewhere, has decided to solve a problem for you, whether you recognise it as a problem or not. And to do this they have either worked their way down to you from an all-encompassing theory, or have worked their way across (or up) to you via an extrapolation from their experience, and their own local knowledge.

There you are, handed these solutions, this reform agenda from afar, and almost immediately you realise that none of it will work! Perhaps it might somewhere else, but it misses all the particulars and challenges of your situation. There are too many small details missing, too many problems that are overlooked, not enough understanding of why things currently look the way they do, not nearly enough nuance… not enough local knowledge!

The beauty of using the marketplace as our primary source of knowledge creation – of conjectures and refutations – is that no such imposition of this kind should ever happen. And going back to an earlier statement, it never even asks for it. Local knowledge is enough!

It might sound counterintuitive, but there is nothing inherently parochial about everyone doing their own thing, in their own little space, making their own decisions and solving their own problems. Just as with various different scientific breakthroughs all coming together from different corners of society to form a single body of science, this can also be the case with political and economic knowledge. Indeed it is how culture works, silently stitching together a patchwork of truth and pragmatism. No one ever needs to know everything, nor to legislate for everyone and every behaviour. Knowledge always works best as a collaborative exercise… sharing it is enough! Just as you don’t need to understand heart surgery to benefit from heart surgery, you don’t need to understand all the connected details of a supply chain to prosper from that supply chain.

If so much that we value can be pieced together from market forces, then why have governments at all? What’s the point in having them, especially when their tendency toward bureaucratisation, institutionalisation, and overreach is such a risk? The answer: we need them as back-ups to ensure trust within those marketplaces; as the enforcers of contract law, of bankruptcy law, to protect private property, and so on. They are there to ensure that anarchy and power wielded by the strong can never step into our lives and shake us from our freedoms… the very same freedoms that allowed that market to work in the first place.

Governments are also there to smooth out the jagged moral corners of society, to hand us pensions when we get old, to deliver us health care when we get sick, and social welfare when we lack the basics in life. All the things that we believe are necessary, yet which the invisible hand of the marketplace hasn’t yet got around to providing a complete enough answer for. Finding this balance was Hayek’s great challenge, and one which we are still fumbling with today. Finding the line between the philosophies of Popper and Hayek remains with us too. Though Shearmur has a partial answer:

The crucial difference between Popper and Hayek... is that while they both make use of epistemological argument for a broadly liberal position, Popper’s views centre on the fallibility of scientific knowledge, while Hayek is concerned not with scientific knowledge but with political lessons which might be extracted from what could be called the social division of information. Further to this, central to Popper’s vision of politics is the political imposition of a shared ethical agenda, through a process of trial and error: of piecemeal social engineering. What is central for Hayek are markets and their associated institutions which, on his account, form a kind of skeleton for a free society—one which, at the same time, enables us to make cooperative use of socially divided knowledge, and to enjoy a broadly ‘negative’ conception of individual freedom.

So what can, and should, we expect from a re-embrace of classical Hayekian liberalism? Not much. Only that it is better at knowledge creation, and so also better at ordering society, than all other less free alternatives. It all hinges on what all that freedom allows for: a roaring and constant flood of criticism. And with that comes the quick exposure of errors, and their quick correction to something better. With that flood of criticism comes an equally large flood of knowledge creation. And so it also stands exposed to its own refutation: all that needs to be shown to put classical liberalism in its grave is that another system (whatever it may be), once implemented, creates more knowledge, and does so more quickly. In that moment it would all be over. And this is easy to measure, just stare out at the world today and pay attention; this experiment is being run over and over before our eyes.

That being said, would it be possible to create a more centrally controlled society, with centrally planned institutions, which side-steps some of those market forces, which reaches deeper into our lives with more coercion and more regularity, and which also manages to produce more criticism than classical liberalism? A system where more is driven for, and more is demanded, than just an opening of space for dialogue and feedback. Rather a society where criticism is actively manufactured and applied, where people are coerced to seek out criticisms that they might not ordinarily find or care about, filling-in gaps that the market might miss, and adding more voices and opinions to the places where criticism already exists (similar to the way in which compulsory voting systems coerce people into thinking more about politics, policies and elections).

I suspect the answer is yes! If so, it will take some effort, carry with it a unique set of risks, and require knowledge that is yet to be born… still we should be open to the possibility.

God-like auras cultivate their own natural resentment. Churches fracture and fall precisely because they are churches. Pedestals are just unpleasant things to be around for too long. So perhaps it was appropriate that, in spite of all that he was, and all that he gave to the London School of Economics, it wasn’t until 1995 that a statue of Karl Popper made its way onto campus. And even then it was a donation from the Austrian President, Thomas Klestil – a small bronze bust now gathering dust in the quiet halls of the philosophy department.

It is likely that Popper would approve of all this understatement – the lionisation of people or ideas is often the first step towards shielding them from criticism; criticism that Popper would have insisted upon hearing.

But I suspect even he – if alive today – might bristle when, upon taking a tour of his former campus, and looking around at all that new architecture, he realised that amongst the redevelopments, and changes in design and infrastructure, his old office had been cleaned of his belongings, his name pried unceremoniously from the door, and the empty space turned into a public toilet!

 

*** The Popperian Podcast #9 – Jeremy Shearmur – ‘Karl Popper, Friedrich Hayek and the Future of Liberalism’ (libsyn.com)

New Zealand and the Authoritarianism of Plato

In conversation with James Kierstead

 

It was, and still is, an unenviable journey. For most people it is the other side of the world, a drowsy corner of boredom and isolation and stillness and parochial concerns. But a good job is a good job, and so James Kierstead found himself packing up his life in America and trekking out on an academic relocation to the sheepy fields of New Zealand. He was in small company. Very few colleagues of note had made that move before him – and running down the list of ex-faculty nothing jumped off the page; none of those names, despite all they had achieved, were particularly recognisable… except for one! Someone rich in controversy from all directions:

The mixed nature of Popper’s reputation was made clear to me only a few weeks before I myself moved to New Zealand, at a dinner following an interdisciplinary seminar on ancient political thought at Stanford. When I mentioned my impending move, the conversation soon turned to New Zealand classicists and philosophers, and in this context the name of Karl Popper was one of the first to come up. Very soon the dinner table was divided: though everyone had heard of Popper, only the political scientists in attendance showed unguarded interest; the classicists were unenthusiastic, and the ancient philosophers (both of them Platonists) were openly hostile. The only person actually to praise Popper was an exchange student from China, who was actively engaged in his country’s prodemocracy movement and lauded Popper’s insistence that our future is ours for the making.

Why would classicists care about Karl Popper in the first place, let alone be “unenthusiastic”? Why on earth would those philosophers – the people you might expect to appreciate and embrace Popper the most – be “openly hostile”? And why would an exchange student (and part-time democracy activist) from an authoritarian country be so full of “praise”? Well it all comes down to an unlikely villain and an interesting kind of “war effort”.

Karl Popper’s time in New Zealand was one of exile rather than choice. Pushed out of his native Austria just before the Second World War, Popper took the first, and most solid, offer of safe harbour that came his way: New Zealand. Settling into the otherworldly calm of Christchurch, he was motivated to do his bit, whatever he could, from the distance at which he then sat. It was there, watching back on the horrors unfolding in Europe, that he wrote The Open Society and Its Enemies. The Enemy was naturally totalitarianism and brutality and coercion and disenfranchisement and oppression of all varieties. But His Enemy – the person that Popper named as the divine progenitor of all this carnage – was what shocked his readers and turned so many of them off: the Greek philosopher, Plato.

It was first said by Alfred North Whitehead, and it echoes still as a raging cliché around philosophy departments today, that the “European philosophical tradition… consists of a series of footnotes to Plato”. A clear embellishment, it is not so much meant to be taken literally as it is to represent the titanic figure that Plato was. He did so much work, of so much significance, so early in the history of philosophy, and in such a welcoming style, that a level of profound awe is certainly appropriate. But when The Open Society was first published a feeling of near-religiosity hung in the air; and so Popper was stomping upon sacred ground.

The level of complete and fawning veneration – both inside and outside of academic circles – for Plato was hard to overstate. People like Richard Livingstone – President of Corpus Christi College, Oxford, for nearly twenty years, and Vice-Chancellor of the entire University from 1944 – were making loud and uncontroversial names for themselves by saying that Plato’s Republic was not only an important text in politics or philosophy, but also “the greatest of all books on education”.

Popper wasn’t just challenging this, he was casting a wide and staining moral judgement upon men and women like Livingstone. Far from being a great book on education, Popper saw the Republic as something uniquely dangerous… on a par with Mein Kampf. So those people lining up to praise Plato might as well have been crowding into the parade grounds at Nuremberg, goose-stepping in unison, and saluting Hitler with an extended Sieg Heil!

And if that didn’t hurt sensibilities enough, there was a tone to Popper’s attack, and a twinkle in his prose, that stood well outside philosophical tradition. Kierstead picks out a few representative examples of this:

The accusation that Plato’s literary skill served only to throw a veil over “the complete absence of rational arguments”; the dismissal of one of his inferences as “a crude juggle”; even the description of the ideal of the philosopher-king as “a monument of human smallness.”

In response to this inflammatory language, Gilbert Ryle wrote back with a kind of shock and disbelief (despite his sympathy with Popper’s analysis) that would sum up the reaction of many of his colleagues:

[Dr. Popper’s] comments… have a shrillness which detracts from their force. It is right that he should feel passionately. The survival of liberal ideas and liberal practices has been and still is in jeopardy. But it is bad tactics in a champion of the freedom of thought to use the blackguarding idioms characteristic of its enemies. His verdicts are, I think, just, but they would exert a greater influence if they sounded judicial.

But it wasn’t just that he sounded a little too lyrical and bombastic for the taste buds of his day – people like Ryle sniffed out something much more problematic as they saw it. Popper was writing with an unconcealed contempt and a near-belligerent hostility. He was not deliberately sprinkling pithy, throwaway phrases into his work to catch eyes and draw attention. He was not playing the role of provocateur, but earnestly talking down to Plato as might an intellectual superior.

Kierstead’s question: “So which parts of his argument stand up to scrutiny, and which do not?”

The problems start with Popper’s use of the word tribalism. A tribal society for Popper was a closed society, in contrast to his Open Society. It is the dark cave that Greek democracy and Athenian society crawled from, muddied, sick and immoral, and it was where, in Popper’s estimation, Plato wanted us to return: “Plato was longing for the lost unity of tribal life”.

It was a hard case for many people to swallow. Before Popper, there existed an instinctive difference between those simple, raw, tribal societies, and the highly efficient, centrally planned totalitarian states that he grouped together under the tribal umbrella. What made this comprehensive grouping newly appropriate for Popper was the common diagnosis of Historicism, the idea that history is determined by certain laws, and so the future can be accurately predicted by understanding those laws. Or to quote Gilbert Ryle again, history is “not a bus but a tram.”

When he stabbed this charge into the philosophies of Hegel and Marx, no one was very shocked. Even the blood-red adherents of those philosophies smiled back at him approvingly, nodding their heads and saying out loud, yes, we are historicists, we just don’t think that historicism is a dirty word. But with Plato things didn’t seem to fit quite so neatly… and the champions of Platonic philosophy were much less interested in playing nice.

For many readers, Plato was a man interested in questions of the good life, about how we should live, and what the proper way to be a part of society was; not in grand designs about history, nor about general laws that govern all human development.

The key confusion it seems was around the question of his metaphysical Theory of Forms, the belief that the physical world around us is just an imperfect copy of the Realm of the Forms (an ideal world populated only by perfection). Popper thought that Plato carved his political philosophy directly from this foolish idea, and attributed to him the thought that all change is therefore a negative, something that takes us further away from those ideal forms, and so something that is always corrosive. And make no mistake about it, this is certainly an image of Plato The Historicist.

The trouble is that when classicists dig into relevant passages from Phaedo, they often emerge with something very different in their hands. It is not that with each step we are further away from perfection, but rather when things do get worse it is due to an increased distance from the Forms. So conversely when things noticeably improve it is because that distance to the Forms has been shortened. “Things take on certain qualities because the Forms come to be in them; when a man becomes just, for example, he comes to partake in the Form of justice.”

On this reading not all change is change in one negative direction. And so this is Plato The Anti-historicist. But Popper would bridle at all this talk about perfection and original beauty, and as far as Plato claimed to already know what the end game of history was (of what we should be trying to achieve, not only now but forever) the label still seems to have plenty of purchase. And while Popper does talk about Plato as “the embodiment of an unmitigated authoritarianism”, he is also quick to offer his enemy a few charitable excuses:

"My thesis here is that Plato's central philosophical doctrine, the so-called Theory of Forms or Ideas, cannot be properly understood except in an extra-philosophical context; more especially in the context of the critical problem situation in Greek science which developed as a result of the discovery of the irrationality of the square root of two."

"It seems likely that Plato's Theory of Forms is both in its origin and in its content closely connected with the Pythagorean theory that all things are in essence numbers. The detail of this connection and the connection between Atomism and Pythagoreanism are perhaps not so well known."

But things are about to get much, much darker! And in the eyes of many classicists, as well as many philosophers, much, much less forgivable. Popper wasn’t only saying that Plato was wrong, nor only that he was an authoritarian, but also that he was deliberately dishonest, manipulating and distorting the philosophy and character of his teacher, Socrates. The language is typically rough, and Popper is here fighting to rehabilitate the historical Socrates from his student’s “betrayal”: “the philosopher king is Plato himself, and the Republic is Plato’s own claim for kingly power.”

By assigning psychological motivation to someone thousands of years dead, Popper was clearly reaching here, but the ways in which Plato is commonly defended from this aren’t very impressive either. The first goes like this: Plato’s dialogues are so deeply complex and layered and profound and nuanced and difficult that trying to pull the actual philosophy from them is an impossible task; they are irreducible to any one thing. That this has even been entertained within serious academic circles is an embarrassment to the field! Worse, it is blatantly irrational. That something is hard or difficult or nuanced or complex does not mean that it is therefore impossible. This is a picture of well-trained adults running away from their problems rather than trying to solve them.

But this nonsense does have a slightly less ugly sister, an argument with just a bit more purchase… but only a bit. That is, the dialogues should not be understood as communicating philosophy at all. A better way to describe them, for some people – prominent among them Leo Strauss – was as dramas. As Shakespearian plays of a kind, designed to draw out the emotions of the audience, to entertain, and to inspire, but not to philosophise. It is an argument that Kierstead has little time for:

There are plenty of ways in which the comparison with Shakespeare is misleading. For a start, a single character dominates a large number of Plato’s dialogues; this is not the case with Shakespeare’s plays. Moreover, though characters in Shakespeare often say things that are of philosophical interest, they do not engage in systematic philosophical enquiry, either on their own or with others. But systematic and cooperative philosophical enquiry does not only happen repeatedly in Plato’s works—it constitutes the lion’s share of the content of almost all of the dialogues.

Which brings us to the Mouthpiece Argument – Popper’s claim that Plato betrayed Socrates, and used him as a puppet for his own philosophy; otherwise known as the Socratic Problem. It is certainly problematic to assign the opinions of any character to those of their author, but there is a difference here that matters. If the anti-democratic views belong to Plato as Popper claimed, then it would certainly make sense for Plato to write them as Socrates’ instead (as he did). Such authoritarianism and such dissent against the Athenian democracy could not have been voiced publicly at the time… at least not as one’s own.

This is all beside the point. Whether it was Plato or Socrates or even someone else, it is all just tinkering on the fringes of an argument – as are long debates about the nature of authoritarianism, and how authoritarian Plato actually was. As Kierstead explains, even if Popper were wrong in the strength of the label, and had to backtrack on that claim of “the embodiment of an unmitigated authoritarianism”, he would still have accomplished his goal:

In particular, it strikes me that Karl Popper himself would have been quite happy with the statement that Plato, though an authoritarian and even a totalitarian, was not an extreme totalitarian. An acceptance that Plato’s philosophy bore some resemblance to fascism would have been more than he was hoping for; but he probably would not have been terribly upset with it.

If all that people got from the episode was that Plato was indeed an enemy of the Open Society, then the book was still a roaring success. And so it was! The aura around Plato was broken, never to recover nor return to what it was before Popper took aim. He was brash, bombastic, loud, at times obnoxious, and he deliberately rattled the sensibilities of the day; perhaps this was exactly what was needed. For Kierstead, and for so many others, “Popper’s most important contribution was bursting the bubble of the complacent Plato worship that had been carried out for decades”.

And it all started with that journey to the other side of the world and the quiet new life he found there. It is a heritage that New Zealand holds proudly today: it was home to Karl Popper (if only briefly), and from its shores came The Open Society and Its Enemies. But it all might have been different if Popper’s more peculiar fascinations had won the day. When he was applying for university positions in New Zealand and Australia, Popper wrote to his old friend Ernst Gombrich with a profound dilemma:

You kindly advise me to prefer Otago [New Zealand] to Perth [Australia], in spite of the Cangeroos [sic]. But I think you don’t really know enough of Australia by far: the nicest animal there (and possibly the loveliest animal that exists) is the Koala bear. Cangeroos may be nice, but the opportunity of seeing a Koala bear is worth putting up with anything, and it is without reservation my strongest motive in wishing to go to Australia.

 

*** The Popperian Podcast #8 – James Kierstead – ‘New Zealand and the Authoritarianism of Plato’ (libsyn.com)

Karl Popper and Africa

In conversation with Oseni Taiwo Afisi

 

He fled as an aged and embattled 35-year-old. A man between worlds, with a thin and eclectic resume. Karl Popper had tried his hand at teaching, at psychology (of sorts), and even at carpentry, completing an apprenticeship as a cabinet-maker. He had a relatively new doctorate in philosophy from the University of Vienna hanging on his wall, and a young wife, Hennie, to support. He was also Jewish!

1930s Vienna was one of those extraordinary places and times to be alive. It was the cultural centre of Europe, rich in cosmopolitan politics, and artists, writers, actors, musicians, and public intellectuals all rushed in for a taste, desperate to somehow squeeze in, to be a part of that indescribable moment in whatever small way they could.

Popper grew up in the heart of this. His father, Simon, was a lawyer who wrote satirical plays in his spare time and built a formidable personal library of over ten thousand books, to which he added his own German translations of the Greek and Roman classics. The family were bourgeois, comfortable, and deeply integrated into the vibrant circus around them.

The young Karl Popper had a lot to be thankful for. He was too young to have served and suffered in the trenches of the First World War, and though he had lived through the collapse of the Austro-Hungarian monarchy, from its ashes came a rare type of cultural rebirth, and an intellectual revolution whose impact ran for generations. The peak of this was that collection of scientists and philosophers and logicians and mathematicians who called themselves the Vienna Circle.

The members of the Circle read as a who’s who of inter-war intellectual life: Moritz Schlick, Hans Hahn, Philipp Frank, Otto Neurath, Olga Hahn-Neurath, Rudolf Carnap, Herbert Feigl, Richard von Mises, Karl Menger, Kurt Gödel, Friedrich Waismann, Felix Kaufmann, Viktor Kraft, Edgar Zilsel… And the work they discussed each week belonged to a similarly impressive showcase of names: Ernst Mach, David Hilbert, Henri Poincaré, Pierre Duhem, Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein, Albert Einstein…

A little late to the game and much too fresh-faced, Popper sat on the periphery of the Circle, never a participant in any of the meetings. But he did build friendships with those on the inside – the first was Otto Neurath whom he bumped into on the grounds of the University of Vienna. And it was Neurath who would later give Popper the title that he would wear as a badge of honour for the rest of his life: “the official opposition”.

Opposition, that is, to the philosophical blinders he saw upon the Circle, to the inferiority of the people within it, to the glorification of idols such as Plato, Hegel, Marx, Freud and Wittgenstein, and particularly to the terrible ideas they founded and publicised to the world, of which logical positivism was the worst offender. Even then Popper had a keen eye for the long-term dangers of bad ideas. He saw those beautiful Viennese streets a little differently:

I certainly disliked the existing society in Austria, in which there were hunger, poverty, unemployment, and runaway inflation—and currency speculators who managed to profit from it. But I felt worried about [Communism’s] obvious intention to arouse in its followers what seemed to me murderous instincts against the class enemy. I was told this was necessary, and in any case not meant quite so seriously, and that in a revolution only victory was important, since more workers were killed every day under capitalism than would be killed during the whole revolution. I grudgingly accepted that, but I felt I was paying heavily in terms of moral decency.

There was also another social and political movement flooding those cobblestones with the promises of revolution and utopia. Each day more and more young men were gathering, holding rallies, marching in strict unison, and singing a new type of patriotic song. In the early moments of this, out for his evening stroll, Popper was stopped by a uniformed teenager holding a large pistol. Popper tried to reason with the boy, who wasn’t there to rob him, but rather to police him, to ensure that he wasn’t up to no good. The young lad looked back at Popper with indifference and said, “What, you want to argue? I don’t argue, I shoot.” On his shoulder was a newly-sewn Swastika.

Shaken and scared, Popper shut his mouth and walked quickly home. That night, alone in his study, the first seeds of The Open Society and its Enemies were conceived.

A rich and pervasive anti-Semitism meant that Vienna at the time boasted the highest rate of Jewish conversion to Christianity in Europe. Conversion opened new possibilities: converted Jews were allowed to marry non-Jews, were eligible for promotions and professional opportunities, and could live largely unmolested lives. Like many of those around them, the Popper family followed suit, converting and leaving their old religion and culture behind.

This was done so seamlessly that, while growing up, the only real involvement Karl had with Jewish culture was from the outside looking in, as a subject of intellectual analysis. Conversions notwithstanding, Jews still made up about ten percent of the city’s population. Then came the Nuremberg Laws. Hitler was ramping up his race war, tracing the bloodlines of his enemies, and squeezing Jewish life and culture to impossible limits. The panic for assimilation was on!

And it was all horribly misplaced. When the Anschluss happened on March 12th, 1938, things tilted beyond hope. German forces walked into Austria to rapturous applause, and the two countries fused together into a single Nazi state. Those inter-marriages were annulled, Jews were fired from their jobs and arrested on the streets; no claims to previous religious conversion would save anyone. Karl Popper had got out just in time… eighteen of his relatives who stayed behind died in the Holocaust.

Stateless and desperate, Popper twice applied for British citizenship, and was twice rejected because he failed the residency requirements. So he leaned as much as he could on a distant but admiring colleague, Bertrand Russell, whom he had met at a philosophy conference in France in 1935. As formulaic and unimpressive as the letter sounded, this was still a recommendation from Russell, and so worth its weight in gold:

“Dr Karl Popper is a man of great ability, whom any university would be fortunate in having on its staff.”… “I learn that he is a candidate for a post at Canterbury University College, Christchurch, New Zealand, and I have no hesitation in warmly recommending him.”

Classified only as a “friendly alien”, Popper was still without a permanent home, and without a citizenship to fall back on. But he was alive, he had a job, and he had a relatively safe harbour for the rest of the war years. Watching the carnage unfolding in Europe, Popper felt motivated to begin his own “war effort”. In his own words, New Zealand was “infinitely remote”, “not quite the moon, but after the moon… the farthest place in the world.” Here – three months away from Europe by mail, five weeks away by ocean travel, and beyond the reach of direct air routes – The Open Society and Its Enemies began to take shape.

Popper looked back on his time in New Zealand with a rare and sentimental fondness:

There was no harm in the people: like the British they were decent, friendly, and well disposed… I had the impression that New Zealand was the best-governed country in the world, and the most easily governed… I liked New Zealand very much… and I was ready to stay there for good.

For his wife Hennie, not so much! For her these were “the nightmare years”. Her husband’s meagre salary wasn’t really the issue, nor was the need to grow backyard vegetables just to get by. The problem was the steadily growing manuscript before them, and her full-time job as both typist and editor. Karl would routinely pass his handwritten drafts to her, and she would retype the same pages as before, with increasingly minor changes added in the margins. By the time it was finished, she had repeated this task nearly twenty times, for a book that runs close to a thousand pages.

From epistemology and the Vienna Circle, Popper was now stretching his title of “official opposition” in new ways. He was thinking back to the movements that were rampaging across his former home and pushing his family into gas chambers, as well as to oppressive societies he was newly encountering, such as that of the native Maoris in New Zealand. Popper was trying to tear down the fabric of the Western political tradition, while exposing what lay at the heart of all despotism, all repression, all totalitarianism.

The Circle, Popper showed, had lost its way by trying to find certainty in science. It was a simple, innocuous, even intuitive-sounding mistake, but one that flowed quickly downstream with disproportionate momentum and harm. Now he was warning against other commonly held ideas with the same dangerous reach, such as historicism (the notion that history is determined by certain laws, and so the future can be accurately predicted by anyone who understands those laws), and even banal-sounding truisms such as the idea that politics is about electing the best leaders and policies.

It was just this, however – seeking the best leaders and the best policies – that led Plato away from democracy (where the uninformed and easily influenced rabble were in charge) and into an intellectual dictatorship, run in perpetuity by The Best. It also allowed the Caesars to rule over Rome through strength and violence; it gave Constantine and those after him the religious legitimacy to stay in power; it was the reference point for every monarch and aristocrat who wished to further silence the unhappy masses; and it was why Karl Marx decided against elections altogether…

The Open Society looked different, and for some, a lot less grand. The place for the great men of history, shining a light for the ordinary people to follow, was gone. In their place were those ordinary people themselves: the unwashed and uninformed crowds making small, endless, and seemingly parochial choices about their lives, hoping to “minimise avoidable suffering”. Gone too were the utopias and the revolutions, replaced by something much less exciting: “piecemeal social engineering”. The Open Society was a world of ordinary people, making ordinary choices about their ordinary lives, embracing criticism and their own fallibility.

Stabbing at so many deeply held convictions and at so many still-revered thinkers, The Open Society and Its Enemies was nearly as difficult to publish as it was to write. In Popper’s own words: “it will be a colossal job for everybody concerned. It was a colossal job [writing it] here and I was (and am) very ill while doing it.”

To compound things, this was 1944: the war was still raging, the manuscript was long and dense, and with his previous book not yet translated into English, no one beyond a few small academic corners knew the name Karl Popper. Rejection after rejection flooded in, and the publishing task was handed over to an old friend back in England, Ernst Gombrich.

In the meantime, Karl and Hennie were falling out of love with New Zealand, and by 1945, almost as soon as the last gun in Europe fell silent, the Poppers were preparing to head back. Another friend, Friedrich Hayek, had managed to pull a few strings at the London School of Economics, and despite a series of bureaucratic frustrations – “Our departure problems are appalling” – husband and wife were soon sailing towards a new job and, eventually, British citizenship.

In a letter to Gombrich, Karl Popper spoke about the journey before them:

Dear Ernst, This time we are really off, I think. We have been allotted berths—in two different four-berth cabins, though—on the M.V. “New Zealand Star.”… It is a frighter [sic], Blue Star Line, carrying normally 12 passengers, and at present (in the same cabins) 30. We are not terribly pleased to pay 320 pounds for the pleasure of spending 5 or 6 very rough weeks in the company of strangers… The passage will be very rough since we sail via Cape Horn—perhaps the roughest spot in the Seven Seas. Our corpses are expected to arrive, by the New Zealand Star, on January 8th or thereabouts. Please receive them kindly.

When they finally arrived in England – seasick, miserable, dirty – and staggered gingerly onto dry land, they were greeted by a beaming Gombrich, waving excitedly towards them. In his hand, held high above his head, was the first edition of The Open Society and Its Enemies.

Popper settled quickly into British life and a career at the London School of Economics, completing his exile where he had always wanted to start it. And in his eyes, the exile was still very much in force. Alan Musgrave, Popper’s research assistant from 1963 to 1965, recalled that Popper, despite realising the magnitude and impact of his work, remained “also very bitter” about his life. After the war, Popper was once asked if he would ever consider returning to those once vibrant streets of Vienna, to reminisce, and to see what had changed. He shot back bluntly, “No, never.”

Even the allure of a cushy, full-time professorship in Austria wouldn’t do it. It was a past better left where it was. When he did look back on the horrors of that time, though, it was through the analytic lens of The Open Society, and its focus on the importance of every individual. The simple-sounding error that the Nazis made was collectivism. It was the same error (differing only in magnitude) that was still being made across Africa, the Middle East, and Asia, and by the Maoris on the sleepy shores of New Zealand. There were no benign cousins, no reasonable variants. Wherever this mistake happened, the outcome would inexorably be terror and oppression.

The word that Karl Popper used to describe all such societies?

Tribal!

 

*** The Popperian Podcast #7 – Oseni Taiwo Afisi – ‘Karl Popper and Africa’ (libsyn.com)