The Life and Philosophy of Joseph Agassi

In conversation with Nimrod Bar-Am

 

This story begins with Nicolaus Copernicus, not for what he did or achieved, but for what he refused to take for granted. The world of science had been stuck on an appealing idea and a supposedly faultless person. Aristotle had once reasoned that the earth was the centre of the universe – all the planets and moons and stars revolving around us – and as his works were rediscovered in Europe after the Dark Ages and fawned over for their eclectic genius, this single theory stood out in important ways.

It seemed well reasoned, it came from a perfect source, and it gave church authorities a reason to relax their dangerous gaze: man was proven to be at the heart of all things, the sole focus of God's creation; science had spoken! For saying that perhaps Aristotle was wrong, that perhaps the universe was much bigger than previously imagined and had no centre at all, the Italian monk Giordano Bruno was burned at the stake by the Inquisition in 1600. His crime? Diminishing the significance of human beings!

As you might have guessed by now, this story doesn’t follow straight lines or dates on a calendar, but it is driven forward by the only thing possible. But we’ll come to that a little later. Copernicus’s theory of the earth orbiting the sun, rather than the other way around, was different from Bruno’s in content but committed the same heretical sin. This early clash between religion and science was all about how we see ourselves as a species: the god's-eye value of our existence. But in a way, it was about something else, something that would continue and fight and re-emerge and repackage and squirm for survival across the history of thought and of science. Look out your window today, pay close attention to the words and attitudes of the people around you, and you will see it still here, diseased and aged, but for the most part un-weakened and rudely alive…

The people who killed Bruno and the people who hated and threatened Copernicus were afraid of change. Simple as that! They knew what the world looked like, they knew how it was explained, and they were comfortable with things as they were. It was less about the feeling of fear and more about the feeling of certainty: they desperately taught what they knew to their children and drilled their ideas into school curriculums. They were sure that they understood the way things worked, and so what kind of people would they be if they didn’t help the next generation to understand things as they did?

As Bruno’s screams died away and his charred body crumbled to ash in that horrible public square, it was a message, not a punishment: the game of knowledge creation is over, stop seeking new ideas! All we need, and will ever need, has already happened – we ought to look only to the past, not to the future. Copernicus’s theory about the earth orbiting the sun wasn’t, in fact, new at all. Shortly after Aristotle’s death, the Greek philosopher Aristarchus wrote, in books that have been lost to time, that heliocentrism made a lot more sense of what he was seeing in the night sky. Nearly two thousand years later, Copernicus’s great step forward for science and for our species was to take Aristarchus seriously, and to be the first to do so.

His achievements ought not to be diminished by this: Aristarchus’s idea and the observations he made were there all along, waiting to be discovered or rediscovered. What Copernicus did by daring to challenge the theocracy of his day changed the course of human development towards everything worth wanting or caring about. The Copernican Revolution – as it came to be called – would begin to play out and find its worth over the next century in the everyday attitudes of ordinary men and women. People began to look back with their own critical eyes, discovering that not all the Greeks had the same thoughts and philosophies, that clinging to the works of Aristotle was an arbitrary choice, that knowledge changes and does so for the better, and most importantly that they too could accomplish these changes. Copernicus’s legacy was a newly self-confident humanity that believed mistakes were everywhere, and that it was up to them to discover and correct them – in the words of Joseph Agassi, he was “the most important man in the history of modern science”.

Born into this new world was Galileo Galilei, an early beneficiary of the Copernican Revolution. He asked a simple, innocuous-sounding question with tremendous implications for the growth of science: “If you were on the moon and you looked at the earth, which would be smoother, the land or the water?” For all those centuries before him, people had believed that the moon was a giant crystal mirror. Mirrors shine when exposed to light, and the moon shines when it is hit by light from the sun. It all made sense! If the moon were nothing more than rock and dust then you would not expect it to shine in the sky as it does, and so no one thought it could be made of such ordinary stuff.

Galileo changed this! For years the small flickers of scientific growth – those embryonic individuals and communities which popped up from time to time – had at least one thing in common despite their differing methodologies: they believed that truth was found through observation. The world beyond our senses doesn’t lie to us, and so we need only see it for what it is, observe it long enough and in enough detail, to understand everything about it. And they had help: the development of new instruments, allowing them to see smaller and larger and further than ever before.

All this map building and data collection was a mistake, thought Galileo. He had seen too many theories come and go – built upon what seemed solid observations, only to then be destroyed. The problem with science, the thing holding it back and drenching it in those pernicious thoughts of certainty, was boring old human thinking. Trust your eyes and the moon appears to be a crystal reflecting the sun’s light back at us. Begin to dig up the preconceptions and the under-the-radar theories that are involved in that observation, and things collapse, fast.

Hang a mirror on most walls and you will see that, contrary to your expectation, it is not as bright as the walls; in fact it is considerably darker. The mistake commonly being made here is one of perspective. We tend to think that mirrors are shinier than walls because when light hits a mirror it is reflected, and we see those reflections. The mirror glares light back at us, and so it appears to be a very, very bright object. But stand at certain angles – try it now – where the light is not being reflected in your direction, and you will see the true darkness of the mirror.

Now, the mirror is not only dark but also flat. This is why it reflects light in only one direction, and appears bright from one viewpoint and dark from all others. The wall next to the mirror is not smooth, or not as smooth, but look closely at it under a microscope and you will see that it is made up of countless small pieces, which are individually smooth. A hodgepodge of tiny little mirrors, each reflecting light in a different direction, meaning that unlike the mirror the wall appears bright from all directions. The mirror is brightest from only one angle; the wall is brighter from every other.

Back to the moon then. Back to the people of Europe’s Middle Ages staring up at its brightness in the night sky, understanding that what they were seeing was reflected light from the sun, and making the simple connection to the thing they knew best reflected light: crystal mirrors. In two ways Galileo put an end to this type of thinking. The first should be obvious by now: if the moon appears not only bright, but bright from all directions – as it does – then it cannot be a mirror. The second runs us back to that question of his: “If you were on the moon and you looked at the earth, which would be smoother, the land or the water?”

Staring up at the moon, even with the most rudimentary instruments of the day, you don’t see an unblemished orb glowing back at you. Instead you see a glowing orb littered with dark spots. Before Galileo changed the way that we see the moon, those dark spots were a mystery; an unexplained, or poorly explained, phenomenon. Once the moon was understood to be made of more common earthly materials, he could conjecture a better theory. If you were on the moon looking at the earth, you too would see darker spots, and those spots would also appear smoother – just as the ones on the moon did when looking the other way. It was water! The moon was covered by oceans and lakes of water. He was wrong of course, but also much nearer to the truth than anyone who had come before him.

Galileo’s telescope wasn’t powerful enough to show those oceans and lakes in any real detail, but it was powerful enough to show them move over time, shifting gradually across the landscape. Galileo had a problem. What he was seeing was not water after all, but darkened valleys next to sunlit mountains, and then the mountains becoming dark as the sunlight found the right angle to brighten those valleys. He never saw a crystal mirror in the sky, he never saw oceans and lakes, and he never did see mountains and valleys. Observation was not the answer to the future growth of science, and knowledge is never extracted from the world around us through our senses. What mattered was how well people reasoned and theorised and developed abstract ideas: good ideas leading to mountains and valleys, bad ones leading to crystal mirrors in the sky.

Galileo’s close friend Johannes Kepler entered the scene next. The two men believed that the natural world they were trying to make sense of was God’s perfect creation. God doesn’t make mistakes, only people do! And so correcting the mistakes they saw around them – as well as within their own theories – was more than just a matter of truth and knowledge creation and progress; it was a religious duty.

So deep in his convictions was Kepler that when he discovered the work of Copernicus his reaction was to imagine the sun as a “symbol of God”, the centre of our universe as we – and everything else – moved around it in circular orbits. Circular orbits! It made mathematical sense, and Kepler published a book about the elegant design of our cosmos. He then realised the mistakes he had made in this book, and he wrote another. Chasing ever more accurate calculations, Kepler developed creative ways to look closely at the movement of the planets, expecting that his efforts would show the intricate precision of God’s circles. “No matter how small a mistake is, it matters”, and so – despite being unbelievably accurate for the time – the existing measurements would have been an insult to God, even if only marginally inexact.

But with every new calculation and improvement, things began to make less sense. The circles swelled and popped at the seams, until they looked more like eggs. There was a problem with the model! These ellipses represented not a misunderstanding about the exact placements of God’s design, but a misunderstanding about what God’s design actually was. Kepler’s great idea – making him “the first man in history who said that planets do not go in circles” – was something he didn’t want to imagine was true; but theories need to match the empirical world, and when they don’t we need to have the intellectual honesty to admit that they are wrong.

None of this said anything noteworthy about the truth or falsity of elliptical orbits though, and Kepler sensed this in his pessimistic attitude towards his own theory. The circles which now clashed with the data did not clash at all for centuries upon centuries, when people just like him looked up at the night sky and saw rounded planets in rounded orbits. It was only due to Kepler that people would begin to see ellipses instead, and so all that his theory actually showed was (1) how enormously suggestible and theory-laden our observations are, and (2) that Galileo and his predecessors were wrong.

They were wrong all along of course, but it is only in light of a new, competing theory that this can ever be known. So what was there to stop his own theory turning out to be false as well? The short answer was nothing, but here we start to see some early seeds of falsificationism being understood, with all the appropriate optimism. Rather than waiting anxiously for his theory's inevitable execution at the hands of a usurper, Kepler staked out the first clear signpost for what good science should look like, and how good science should behave. Circular orbits were a flimsy idea, and so were elliptical ones, but it was his attitude that would make the difference. There were things he could do!

Rather than protecting his theory, he would expose it as much as possible, take risks with it; try to head off future scientists by doing their work for them. It seemed true to him, but just as with Galileo he didn’t know what he didn’t know – specifically, he didn’t know how future calculations might destroy what he had built. The challenge before him was one of extension: stretching his idea about ellipses to its farthest corners, making completely novel predictions, all the while building out his theory and making it increasingly falsifiable.

If the planets moved in elliptical orbits, then there must have been a reason for it. Such a shape would require two “pins” or “focuses” to stretch it outwards in that way. If there were only one focus – in this case the sun – then you would expect to see perfect circles, or something very close to them. If Kepler’s new theory was correct then there must be something out there warping the geometric symmetry – an unseen force that future generations would soon discover. Kepler didn’t know about the sun’s gravity, but his reasoning was incredibly prescient, also committing his theory to variations in the speed of the planets. A steady circle would produce a steady speed, but with this new model he speculated that you should expect the speed of those planets to change as they approached different regions of their orbit: faster when closer to the sun, and slower when further away.
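In modern notation – which Kepler did not have, so take this as a rough sketch rather than anything he would have written – the two commitments can be stated compactly. An ellipse is the set of points whose distances to the two foci sum to a constant, and the speed variation is captured by what we now call his second law, that a planet sweeps out equal areas in equal times:

\[ r_1 + r_2 = 2a \qquad \text{(defining property of an ellipse with two foci)} \]
\[ \frac{dA}{dt} = \frac{1}{2}\, r^2 \frac{d\theta}{dt} = \text{constant} \qquad \text{(equal areas in equal times)} \]

When the distance \(r\) to the sun shrinks, the angular speed \(d\theta/dt\) must grow to keep the swept area constant – faster near the sun, slower far from it, exactly as Kepler predicted.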

Across the English Channel, Isaac Newton entered college at a time when this new curiosity and scientific atmosphere was firmly in the air. Not content simply to take Kepler seriously, as many scientists were doing, he also embraced the implications. As a reasonably young man, Newton began imagining what that unseen force of Kepler’s might be; he jumped into the problem that had been left for science to answer and took it upon himself to devise what such a force would look like, and how it would affect other objects, both large and small. It was itself a “revolution”, and produced a tremendous “quarrel among the scientists of his day”, who believed that Newton’s idea was an intolerable “step backwards”.

They already understood what forces were, they had rudimentary theories to explain them, and the one thing everyone knew for certain was that forces were local events, never acting over large distances. Even without all the details, the law of inertia was an easy enough thing to observe, or so people felt. Set something moving and it will continue along at that speed unless something else begins to slow it down. Throw a ball in the air, watch it rise, slow down, stop, and fall back to the earth, and you have some rudimentary understanding of what is happening. And that rudimentary explanation is clearly local: the forces at play are the push from your arm and the pull from the ground.

Newton began running calculations to get a more precise theory of how high a ball – or anything else – will go, at what speed, when its momentum drops to zero, and then how fast it falls. The tremendous discovery he made was that the higher the ball went, the smaller the downward force upon it. What was happening in such an experiment might now appear clear to us, but back then it shook the foundations of science: “if we are twice as far away from the earth, our gravity becomes four times smaller. If we are three times as far away from the earth, our weight is nine times smaller”. This would soon be known as Newton’s inverse square law.
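In today’s notation – a modern restatement rather than Newton’s own formulation, with the gravitational constant \(G\) only measured much later – the quoted arithmetic falls straight out of the law:

\[ F = G\,\frac{m_1 m_2}{r^2}, \qquad \frac{F(2r)}{F(r)} = \frac{1}{4}, \qquad \frac{F(3r)}{F(r)} = \frac{1}{9} \]

Double the distance \(r\) and the force drops to a quarter; triple it and the force drops to a ninth.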

Just as with Kepler, the implications of this one discovery were extraordinary. It meant that our weight and how we experience movement are proportional to this new force, and that nearly three centuries before the first satellite was placed into orbit, Newton had proved it would be possible. And it didn’t stop there; fundamental discoveries in science tend to reach beyond themselves, with new implications leaping constantly from the darkness: as that acceleration away from the earth happens, heavier objects behave differently from lighter ones. Newton’s force acted more upon the heavier ones, and so this new resistance needed a name: mass!

Why should the ball fall to the earth and not the earth to the ball? The earth has considerably greater mass. But this left Newton with a dramatic concern: his theory was telling him something that seemed nonsensical. Though the mass of the ball is dwarfed by that of the earth, it still has a mass, and so it should still have an effect; an unbelievably minuscule one, but an effect nonetheless. As the earth pulls the ball down (so to speak), the ball is also pulling the earth upwards. Or to put it in more comparable terms, “When I push you, you push me; when I pull you, you pull me”; one is impossible without the other. This would become Newton’s law of action and reaction.
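A back-of-the-envelope way to see why the earth’s response is so minuscule – modern notation again, a sketch rather than Newton’s own working: the two pulls are equal in size and opposite in direction, so the accelerations scale inversely with the masses.

\[ F_{\text{earth on ball}} = -F_{\text{ball on earth}} \quad \Rightarrow \quad m_{\text{earth}}\, a_{\text{earth}} = m_{\text{ball}}\, a_{\text{ball}} \quad \Rightarrow \quad a_{\text{earth}} = \frac{m_{\text{ball}}}{m_{\text{earth}}}\, a_{\text{ball}} \]

A one-kilogram ball against an earth of roughly \(6 \times 10^{24}\) kilograms means the earth accelerates upwards by only about \(10^{-25}\) of the ball’s downward acceleration – real, but far too small to ever notice.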

With each step, the discoveries became ever more elegant, and opened wonderful new opportunities for accuracy and experimentation. But they came with a horrible problem. Guided by the Royal Society (of which Newton was a member), science was expected to look a certain way: clear, distinct, and coming from a “mass of observable facts.” Newton’s theory was none of these things! For it to be accepted, people would first have to change their understanding of what a theory ought to be: from data collection and parochial events to universal explanations of how the world worked, as well as everything beyond it.

Newton was also swept up in this gatekeeping, dissatisfied with his new theory not because of what it said, but because of how it looked. He was a follower of Descartes, and so was stuck on the nagging idea that instead of invisible forces, gravity and all the rest were explained by direct contact of a kind: microscopic collisions pushing objects around. On this view, repulsion and attraction were both impossible and redundant. Newton’s solution was to try hard to make his theory fit Descartes’ design, and his followers tried too, attempting – with great effort – to squeeze one theory into the other.

They all failed of course. Descartes was wrong! But it took more than good sense to change people’s minds; it took good predictions and their failures. If Newton was correct, the planets and stars and moons and comets and asteroids and assorted space junk would be dynamical objects, attracted and repelled by different forces, and so moving in different directions. Under Descartes’ theory of collisions, large vortices would direct all objects in the same way; an invisible liquid of microscopic particles pushing everything along its path. The solar system was one of these vortices, and each observation seemed to prove the theory true, with all the “heavenly bodies” circling the sun in the same direction. Then in 1661, astronomers noticed a comet in the night sky going the wrong way, and the theory was dead.

Newton thought that his theory of gravity would become nothing more than proof of Descartes’ theory. In the end it did the opposite, showing not only the falsity of Cartesianism, but also a whole new way of developing scientific theories: one based on explanations, not on observations. But mistakes often repeat as habits, little tricks within the human mind, and so as people moved on from the dominating figure of Descartes (just as they had with Aristotle before him), instead of doing away with the idea of the infallibility of great men, they traded one in for another: everyone was suddenly a Newtonian, and it was now Newtonian science that contained no mistakes!

In the year 1800, the Royal Institution was established in London. Its goal was less about charity than about the changing world of science and the growing independence of new discoveries. Amateurs were making breakthroughs and solving problems that the rich and the patronised could not. Tinkering away in their spare time, these part-time scientists were showing that there were no social boundaries to knowledge, and that anyone could learn the requisite skills and move the enterprise forward if only they had the will. And all this came to the benefit of science as a whole, and society writ large. The more people working on a problem, and the more minds dreaming up creative new explanations, the better.

As the industrial revolution hit its stride, there was also the matter of factories needing skilled labourers. And so the Royal Institution began running public lectures to educate the poor and build that background knowledge within English society. In those early lectures, standing almost in the shadows at the back of the hall, was a seventeen-year-old Michael Faraday. Growing up incredibly poor, Faraday had so little schooling that he had to teach himself to speak proper English, and yet he began scribbling down notes, building his own small books on chemistry and biology. The books got his foot in the door, and when a menial position opened at the Institution he took it. Faraday then had access to the scientists running the programs, and he battered on enough doors to land himself a job as an assistant. A few years later he was delivering the lectures himself, and then finally he ended up as the director of the whole Institution.

Science doesn’t have too many Cinderella stories, but Faraday was one of them. Never able to catch up on the mathematical education he had missed out on, Faraday became a chemist. And he dodged most discussions of physics for the same reason: “there was too much mathematics in it.” But when electromagnetism was discovered, even he couldn’t help but get caught up in all the excitement. Following Newton, the French physicist André-Marie Ampère built the new theory around pushes and pulls, attraction and repulsion. The question at hand was to explain how electric matter was magnetised, and the best scientists of the day simply looked back over their shoulders to Newton for a semi-divine answer.

Sitting alone in a Danish laboratory, Hans Christian Oersted spent twenty years thinking up better alternatives. The whole picture seemed a lot messier than the Newtonians were willing to accept. Different kinds of electricity were being discovered, electric forces were being used to break down chemical bonds, and Oersted could see the transformation that electric matter was producing in previously non-electrical matter. Soon he was talking about more complex forces beyond the standard push and pull, involving dynamic rotations and turns.

It was all very heretical, and so no one took Oersted seriously. Until Faraday, that is! Defying expectations at most turns in his life, Faraday had developed a love of criticism: of people who dared to think differently and of ideas that attacked existing standards. The young chemist began constructing an experiment to solve the issue, and perhaps offer some late honour to Oersted’s name. He started by dipping half a magnet into a cup of mercury, so that the other half (and the other pole) remained above the surface. He then hung a wire from above the magnet so that it just faintly touched the surface of the mercury. A battery was then attached, one side to the mercury and the other to the wire. As the electric current ran through the wire, the wire was left loose enough (as it dangled) to move around however it might.

Watching the wire as it rotated around the magnet, Faraday knew that he had proved not only André-Marie Ampère wrong, but also Isaac Newton. He had shown not only that there were more physical forces out there than had previously been accepted, but also that every force that does exist can be changed and altered to become another type of force. What Faraday didn’t realise was the extraordinary scope of what he had just produced in that small laboratory: here was the first ever electric motor. But soon enough he was building out his theory with ever new discoveries, such as reversing the process and converting magnetism into electricity rather than just electricity into magnetism: what we now call the electric dynamo.
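In the mathematical language Faraday himself avoided – so, a modern gloss rather than anything he wrote down – the two discoveries are usually summarised as the force on a current-carrying wire in a magnetic field (the motor) and the voltage induced by a changing magnetic flux (the dynamo):

\[ \mathbf{F} = I\, \mathbf{L} \times \mathbf{B} \qquad \text{(force on a wire of length } \mathbf{L} \text{ carrying current } I \text{ in field } \mathbf{B}\text{)} \]
\[ \mathcal{E} = -\frac{d\Phi_B}{dt} \qquad \text{(voltage induced by a changing flux } \Phi_B\text{)} \]

The first is why the dangling wire in the mercury cup circles the magnet; the second is what we now call Faraday’s law of induction, the principle behind the dynamo.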

He worked away on problems such as these for the better part of a decade, and the social implications of what he was doing should not be understated. Riddled with a “self-doubt” that most of his opponents lacked, Faraday had “a very hard time” of things. He wasn’t just building a new science; he was destroying an old religion (Newtonian physics). Isolated and lonely, Faraday asked himself “Who am I to fight the whole world?” and more than once almost gave up hope.

When the breakthrough came, and then when more followed, he had to present his discoveries to a scientific community that mostly believed he was either “hoaxing” them, or “deceiving” himself, or “knew so little about mathematics” that he must be wrong. But truth survives under its own weight: Faraday’s experiments worked where Newton’s didn’t, and bit by bit, scientist by scientist, he drew a crowd of followers. As the full magnitude of what he had done began to settle over him, Faraday wondered with childlike disarmament in his own diary whether perhaps “All this is a dream”.

What Faraday had done changed science forever, not in the discoveries themselves nor in the impact they had, but in the breaking of invisible chains. Chains that held people to old ideas and great men of the past, as well as drawing a steady and fixed path into the future. Faraday corrected more than just some false physical theories; he opened the scientific enterprise onto a boundless vista. After him, scientists were encouraged to have bold, fantastical thoughts about how things are and what explains them in the best possible way. He freed creativity from Newton’s cage, and showed that the “most exciting thing about science is that we don’t always know where it is going.”

Soon Thomas Johann Seebeck was discovering the interaction between heat and electricity, and how one can be changed into the other; Thomas Young was theorising about light being wavelike, in the same way as sound; and a young Albert Einstein was looking back at Faraday’s “bold idea” and dreaming up the most outlandish of new ideas, showing once again that Newton was wrong.

The only way to really understand the history of science – as with history in general – is to look back at the problems that people had! The things they battled against, the ideas they challenged, and the world that they wanted to improve. Today Einstein’s theory remains with us, but thanks to Faraday every young graduate student in the field worth their salt is now spending their days and nights not trying to show Einstein’s genius, not paying homage to the great man, but rather trying desperately to cut him down at the knees, to prove him wrong.

And so the story continues…

 

*** The Popperian Podcast #24 – Nimrod Bar-Am – ‘The Life and Philosophy of Joseph Agassi’ (libsyn.com)