I enjoy introducing my junior staff to unfamiliar words. On my ward round today, I slipped ‘avuncular’ into the conversation. My registrar1 rapidly pointed out that it’s all very well to compare someone who is kind and friendly to an uncle—but we’re now wise to the fact that not all uncles are nice. Later, driving home, I heard Lorde’s new album referred to as ‘sick’. As someone who deals with sickness every day, I’d hesitate to use the word in appreciation. Words both slip into disuse and change their meaning, so it’s wise to deploy them carefully.
Let’s explore some mildly unusual words. By the end of this post, you’ll be thoroughly sick of the words ‘ontology’ and ‘epistemology’. I’ll also do something wicked. I’m going to re-deploy these words, after challenging their usage and meaning. But first, that brain you see above.
A patent clerk rocks
In 1905, a twenty-six-year-old clerk2 in the patent office in Bern, Switzerland, submitted four papers for publication. In between examining patents, he seems to have found a bit of spare time. These were the topics:
A mathematical explanation of the photoelectric effect, where shining an ultraviolet beam onto various metals knocks off electrons—in a way that depends on the frequency of the light;
Brownian motion, first described seventy-eight years earlier by the botanist Robert Brown, who under his microscope saw the contents3 of tiny pollen grains being bumped around mysteriously;
An exploration of different frames of reference, the limiting speed of light, and what must happen to fast-moving clocks; and finally,
A principle that claims “the mass of a body is a measure of its energy content”.
Each of these papers revolutionised physics, and each deserved a Nobel: the first put quantum mechanics on a firm footing; the second compelled physicists to accept the existence of atoms that bump into things, by explaining diffusion mathematically; the third describes his special theory of relativity; and the fourth ultimately led to the twin uses of nuclear power, providing clean, safe and efficient energy on the one hand, and unprecedented weapons of destruction on the other.
Einstein—for it was, of course, Albert Einstein—caused quite a stir, and continued to do so. Despite claims that he is over-hyped (people are always so wise in retrospect), these four creations together with his general theory of relativity establish him firmly as an intellectual colossus of the twentieth century. Perhaps the intellectual colossus.
So naturally, when he died on 18 April 1955, the pathologist on call, Thomas Harvey, opened his skull and stole his brain. Harvey took a picture—the one at the start of my post. The fun had just begun.
Synecdoche, reification and mind projection
We’ll soon get back to the unfortunate postmortem history of Einstein’s brain. But first, some context. A few posts ago, I was quite rude about ontology—the ‘basic nature of reality’. Rude enough to make some philosophers have a little lie down. We can, you see, only make provisional statements about the shape of reality, because we can’t know it. This much is clear. So ontology as originally defined can’t possibly have any subject matter. It’s an artefact of Platonic thought.
This rejection has some downstream effects. Traditionally (bless them!) philosophers have drawn a stark contrast between ontology and epistemology, the study of how we come to know things. This contrast seemed useful and has been widely adopted. For example, ET Jaynes wrote a fair bit on the fallacy of treating the epistemic as ontic—projecting constructs in your mind onto reality. He called this the “mind projection fallacy”.
This now poses a problem for me. How do I preserve the sense of this contrast, when I’ve just tossed ontology in the bin? First, let’s go back to how we do Science. When we encounter worthwhile problems, we try to explain them using models. We test our models for joined-up-ness and predictive value in the real world. Does a model work? Most fail, so we’re back to the start. But even those that succeed can’t be considered (ontologically) ‘true’. They are however useful, until we go back and build an even more useful theory—which still isn’t true.
Later, we fleshed this out. We worked out that within the realm of our current joined-up theories, we can apply Bayesian thinking enthusiastically; but at the ever-changing interface between our current model of reality and the infinite void of our ignorance, Karl Popper’s hypothetico-deductive testing is what we need. Or as ET Jaynes put it in 1988:
“For one who understands the difference between the epistemological and ontological levels, a wrong prediction is not disconcerting; quite the opposite. For how else could we have learned about those unknown factors? It is only when our epistemological predictions fail that we learn new things about the real world; those are just the cases where probability theory is performing its most valuable function. Therefore, to reject a Bayesian calculation because it has given us an incorrect prediction is like disconnecting a fire alarm because that annoying bell keeps ringing. Probability theory is trying to tell us something important, and it behooves us to listen.”
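To make Jaynes’ point concrete, here’s a minimal sketch in Python (mine, not his): we update a belief about a coin’s bias after each toss, and measure how many bits of information each toss carries under our current beliefs. The surprising outcomes, the ‘failed predictions’, are exactly the ones that teach us the most.

```python
import numpy as np

# Hypotheses: possible biases of a coin, with a uniform prior.
biases = np.linspace(0.01, 0.99, 99)
posterior = np.full_like(biases, 1 / len(biases))

def update(posterior, heads):
    """Bayes' rule for one toss; returns the new posterior and the
    'surprise' (information, in bits) that the toss carried."""
    likelihood = biases if heads else 1 - biases
    p_outcome = np.sum(posterior * likelihood)      # prior predictive probability
    new_posterior = posterior * likelihood / p_outcome
    surprise = -np.log2(p_outcome)
    return new_posterior, surprise

# Feed in tosses from a coin that is actually heavily biased.
rng = np.random.default_rng(1905)
for i, toss in enumerate(rng.random(10) < 0.9):     # ~90% heads
    posterior, surprise = update(posterior, toss)
    print(f"toss {i}: {'H' if toss else 'T'}, surprise = {surprise:.2f} bits")
# The rare tails ('failed predictions') carry far more bits than the
# expected heads: the fire alarm Jaynes wants us to listen to.
```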
So let’s bend things a bit! For us, ‘ontology’ now refers to “tested, provisionally accepted model components. These work both internally and in reality (so far)”. The apparent ‘truth’. Things like subatomic particles, photons, atoms, molecules and the bigger things they make up, as well as the four forces that govern their interactions. (If you have a fifth force, please let us know!)
But where does this leave ‘epistemology’? This surely concerns the way we process the information we have about our ‘ontology’, constructing and testing theories. For example—as Jaynes just did—we can fail and learn something. We don’t actually need either label, but we can translate them into our model of Science.
Where then does ‘mind projection’ now fit in? A very mild form might be expressed where we take a property of something, and use it to represent the whole. Students of language will recall that synecdoche is where we substitute the part for the whole—“the hand that rocks the cradle”.4 They will also recall that it’s easy for this usage to become farcical: “The hand that rocked the cradle kicked the bucket”.
We can also be a lot sillier. We can reify. We take ideas like ‘love’, ‘a statistical distribution’, ‘intelligence’ and ‘information’ and treat them as if they exist in reality, instead of being concepts in our minds. Concepts we talk about and then test.
So we can re-cast ‘mind projection’ as unceremoniously yanking something abstract out of our brain, and pretending it has real existence. People say “Let the data speak” and then believe their own metaphor—but the data are mute. We can even do something more daft—neglect to test our assumptions. Which brings us back to Thomas Harvey. Who went one worse! He forced a concept of his own imagining onto someone’s very real brain. A brain that the owner wanted cremated.
Tiny minds, projecting
Reading about Harvey, a couple of things seem very clear to me. One of these is that he was a brain thief; another is that he had a very literal view of the world. It’s almost as if he saw ‘genius’ as a thing. Now it’s downhill all the way. When I look at what he did, I see someone deciding “If someone is a certified genius, there must be something very special about their brain. So surely if we look hard enough, we’ll be able to spot the ‘genius bit’?”
Harvey also clearly didn’t understand Science as I’ve outlined it. We know that the main job of Science is to disbelieve. We find innovative ways to reject our most cherished hypotheses. Instead, Harvey sought confirmation. This did not go well.
First, of course, he weighed Einstein’s brain. Some people still cling to the notion that a smart brain must necessarily be bigger. Unfortunately for them, at 1230 grams, Einstein’s was a bit small.5 Oops!
Then he tried desperately to find the genius bit at a microscopic level. Harvey chopped the brain into 240 pieces, preserved it in celloidin and stashed it in a cider box under a beer cooler. He raided the box intermittently to cut off chunks and either study them himself or send them to pals.
Thirty years later, he and his co-conspirators (MC Diamond, AB Scheibel and GM Murphy Jr) published their first paper. After much fishing around inside Einstein’s brain, they claimed that it was special, after all! There are a few obvious issues with this claim, apart from the reification. The first is that they tossed aside all of the mundane aspects of the brain, and focussed on just some bits. They did we-don’t-quite-know-how-many things to those pieces, and finally determined that the ratio of supporting cells (glia) to neurones was greater than it was in a handful of ‘control’ brains.
We’ve talked about bias before. Harvey’s examination of Einstein’s brain seems to be a practical tutorial on how to entrench bias: project your dogma into someone else’s brain, don’t control for multiple tests, and choose controls who are a lot younger than your special study subject (n=1). And the papers have proliferated ever since. Einstein’s cortex was thinner than average, therefore he had “greater neuronal density”. His astrocytic processes must be special. “Regions in and near Einstein's primary somatosensory and motor cortices were unusual”. The more you look, the more, it seems, you find.
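A toy simulation makes the multiple-comparisons trap vivid. All the numbers below are invented for illustration: give one perfectly ordinary ‘brain’ a hundred measured features, compare each against a small control group drawn from exactly the same distribution, and something will look ‘special’ by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_features, n_controls = 100, 11   # a small control group, as in such studies

# One utterly ordinary subject and some controls, all drawn from
# the same distribution: there is genuinely nothing to find.
subject = rng.normal(0, 1, n_features)
controls = rng.normal(0, 1, (n_controls, n_features))

# Test each feature of the control group against the subject's value,
# with no correction for the hundred tests we are running.
p_values = np.array([
    stats.ttest_1samp(controls[:, j], subject[j]).pvalue
    for j in range(n_features)
])

print(f"'Significant' features at p < 0.05: {(p_values < 0.05).sum()} of {n_features}")
# Typically around 5 features look 'special', all of them pure noise.
# Fish through enough slices of a brain and you will land a finding.
```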
Vaguely troubling
ET Jaynes described the mind projection fallacy as ‘vaguely troubling’. I think that, as I’ve characterised it, it’s a lot worse. We’ve just witnessed fumbling attempts to impose increasingly desperate anatomical theories of genius on the physical substrate of Einstein’s brain. The sad thing is that at one point, even the editors of Nature seemed to buy into this silliness.
Once we’re aware of how easily people reify, we see it everywhere. It can seem subtle or even natural, especially if we’ve been exposed to a specific mind projection often enough for long enough. Take “IQ”.
Human intelligence is multi-dimensional. I remember going to a Mensa meeting—just once—and coming to a couple of quick conclusions. One was that people who classify 98% of humanity as ‘densans’ perhaps don’t have much to offer; the other was that the premium qualification appeared to be membership of the Ministry of Silly Walks.6 Soon afterwards, I heard that they had planned a Mensa away weekend—but had to cancel it because someone forgot to bring the key to the gate of the camping ground. It made the local newspaper.
The concept of an Intelligence Quotient originated in attempts to classify people with intellectual disabilities, but it has since been seized upon by any number of cranks, especially racist cranks. They often can’t see that they have taken a single number from a test and generalised it to the whole person, as if that one number captured everything about them. Some even conjure up a magical “g-factor” that is meant to underlie IQ and account for all of human intelligence. The ultimate in synecdoche! Not a smart move.
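A toy calculation shows what the ‘g-factor’ move actually does. The data here are simulated (entirely my invention): six loosely correlated ability scores per person, collapsed to a single number via their first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 people, 6 loosely correlated 'ability' scores.
n, k = 1000, 6
shared = rng.normal(size=(n, 1))                  # a common ingredient...
scores = 0.5 * shared + rng.normal(size=(n, k))   # ...plus lots of individuality

# The 'g-factor' move: keep only the first principal component.
centred = scores - scores.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"Variance captured by one number: {explained[0]:.0%}")
# Around a third here; the other two-thirds of what makes these
# people differ from one another is simply thrown away.
```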
In a later post, when we dig into artificial intelligence, we’ll put the concept of ‘general intelligence’ further through the wringer. But let’s move on to perhaps the most insidious and harmful mind projection.
Race &c
An obvious example of reification—one that broadly overlaps with pseudoscience and xenophobia—is the idea of ‘race’. As a child, Einstein was viciously taunted because of his Jewish heritage. And indeed, the faculty at Zurich (who grudgingly admitted him) seemed far from eager to accept him as professor in 1909:
Herr Dr Einstein is an Israelite and since precisely to the Israelites among scholars are ascribed (in numerous cases not entirely without cause) all kinds of unpleasant peculiarities of character, such as intrusiveness, impudence, and a shopkeeper's mentality in the perception of their academic position …
In 1933, when the Nazis took over, Einstein was forced to flee Germany, with a price on his head. Now that we understand DNA better, we are crystal clear that there are no races. There are merely gene gradients. We are all just people. People who copulate enthusiastically and cheat rather a lot, too. We have done so throughout history, spreading our genes across the planet. Sub-classification of humans into races, which only really took off in the 19th century as a way of justifying things like slavery, is obvious pseudoscience. And increasingly, people are abandoning this sinking ship, leaving only the slow-moving rats.
But we still hold on to many other boxy ideas, even when they don’t fit. This sort of thinking causes enormous harm. For example, simplistic categorisation of the glorious spectrum of human variation into “male” and “female” is seized upon by simple-minded people and used to persecute those who don’t conform to their mind-projected stereotypes.
This is not the best occasion to fully explore daft sex and gender stereotypes (Goddess forbid!) but take a gander at the women in the picture above. They have (a) an XY ‘male’ genotype; (b) SRY positivity (the ‘male-determining’ gene), and (c) testosterone levels clearly in the ‘male’ range. Most current testing systems for athletes would bizarrely classify them as ‘male’. This is just one example of how stupid people can be when they stick to stereotypes, trying to impose their will on reality.
We’re sometimes even far too hung up on ‘genus’ and ‘species’. Ask a microbiologist to explain the difference between Escherichia coli and the four Shigella species, and they’ll usually acquire a pronounced speech impediment. Genetically, there are ‘E. coli’ variants that are more different from one another than they are from Shigella. The main reason we haven’t reclassified Shigella as a dysentery-causing variant of E. coli is tradition.
Why was Einstein special?
The preceding exploration underlines the fact that when we take our theories out into the real world, things become messy. We have two main choices here. We can force the world to fit our stereotypes—project our mind constructs onto unforgiving nature—or we can learn from our inevitable mistakes, and from others’ mistakes. What can we take from Harvey’s reification of “genius”? Perhaps we can look for better models.
Here’s a thought. Would it be outrageous to assume that if you took a bright person with a normal functioning brain, brought them up in a secular Jewish household where they were exposed to an intellectually nurturing environment, trained them in physics and maths,7 exposed them to a culture of smart people, and then gave them a lot of spare time to sit down and think, in between indulging in the metacognitive activity of questioning the validity of patents, that they would spontaneously come up with smart ideas? Did Harvey even consider this in his search for a weird anatomical substrate for Einstein’s smarts? I doubt it.
It couldn’t possibly all have started with the sense of wonder Einstein felt when he was given his first magnetic compass at the age of five, could it? Or Aaron Bernstein’s People’s Books on Natural Science, lent to him by 21-year-old Max Talmud, which Einstein read with “breathless attention” at the age of ten? The focus there on the speed of light; the bullet and the speeding train; these couldn’t have influenced the thought processes of young Albert, could they? Naah.
Perhaps Einstein strengthened his tendency to visualise rather than think in words during his glorious year in the cantonal school at Aarau, where they followed Pestalozzi’s emphasis on conceptual thought and visual imagery? Surely not. Nor did his friends at the Zurich Polytechnic (Marcel Grossmann, Michele Angelo Besso, Friedrich Adler and of course Serbian physicist and mathematician Mileva Marić, whom he married) and his pals in the patent office8 play a role. There had to be some special physical substrate that set him apart, didn’t there? Yep. Genius made flesh.
Back to Brown
Toss a ‘fair’ coin. How will it land?
If you said something along the lines of “Well, there’s a 50% chance of heads, and a 50% chance of tails”, you’re edging into mind-projection territory. The coin will land heads; or it will land tails.9 Trying to imbue the coin with statistical properties as if they really exist is a good example of mind projection.
With this simple sermon in mind, it’s interesting to view Einstein’s original paper on Brownian motion through the lens of Jaynes. The key thing Einstein did was to take ‘atomism’—the idea that atoms really exist—and create a successful model that made testable predictions. The argument is moderately complex, with the following key assumption about the ‘process of diffusion’:
which is to be conceived as the result of the random motions of the particles due to thermal molecular motion.
But hang on! Einstein’s take is ‘ontic’, but the statement seems epistemic. Randomness is seen as a real thing—but Jaynes would argue that it reflects our ignorance about the state of the system. Indeed, if you transiently ignore quantum theory and know the position and momentum of every particle, you can precisely determine the state of the system at any time, a sort of giant Mersenne twister.
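Python’s random module happens to use that very Mersenne Twister, so the point is easy to demonstrate: a sequence that looks perfectly random is fully determined once you know the internal state. The ‘randomness’ was in our ignorance, not in the coin. A minimal sketch:

```python
import random

# Two observers of the 'same universe': identical internal state.
universe_a = random.Random(1905)   # the Mersenne Twister, seeded
universe_b = random.Random(1905)

tosses_a = [universe_a.choice("HT") for _ in range(20)]
tosses_b = [universe_b.choice("HT") for _ in range(20)]

print("".join(tosses_a))           # looks like a fair, random coin...
print(tosses_a == tosses_b)        # True: ...but it was never 'random'.
# To anyone who knows the state, every toss is fully determined;
# the 'probability' lived in the observer's head, not in the coin.
```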
At this point it’s fun to read Jaynes’ 1988 exploration of diffusion. In short, he derives Einstein’s equation in a few lines, by applying simple reverse Bayesian principles, based on epistemic logic. Jaynes highlights the contrast between the time symmetry of the ontic approach, and the asymmetry that our logic introduces. It’s important to realise, as we did previously, that the asymmetry arises from our sequential application of Bayesian logic, and not from some implicit time-dependence of Bayes’ equation.
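Whichever route you take to it, the testable content is the same: in one dimension, the mean squared displacement of a Brownian particle grows linearly with time, ⟨x²⟩ = 2Dt. That’s easy to check with a simulated random walk; the sketch below is mine, under idealised assumptions, not a reproduction of either paper.

```python
import numpy as np

rng = np.random.default_rng(1827)          # Brown's year at the microscope

n_particles, n_steps, dt, D = 5000, 1000, 1e-3, 1.0

# One-dimensional Brownian motion: independent Gaussian kicks with
# variance 2*D*dt per step, accumulated over time.
kicks = rng.normal(0.0, np.sqrt(2 * D * dt), (n_particles, n_steps))
positions = np.cumsum(kicks, axis=1)

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(positions**2, axis=0)        # mean squared displacement

# Einstein's prediction: msd ≈ 2*D*t. Compare at the final time.
print(f"simulated <x²>(T) = {msd[-1]:.3f}, predicted 2DT = {2 * D * t[-1]:.3f}")
```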
Read Jaynes’ paper with caution, as he may be a bit too vigorous in mocking modern tendencies to confuse the epistemic and the ontic. His central theme is a rather cruel reduction to the absurd of trying to derive asymmetric behaviour from time-symmetric processes without either (a) smuggling in unstated under-the-hood assumptions, or (b) embracing epistemology. But he does explore mind projection rather well.
We’re done!
At this point in our exploration, it’s tempting to branch out into an examination of one of:
‘Woo’ pseudoscience, which is replete with reification;
Whether Jaynes is right in his exploration of reification in quantum physics (notably the Einstein–Podolsky–Rosen paradox, and Bell’s theorem); or
More mind projection concerning actual brains.
In my next post I’ll succumb to the third of these. We’ll look at the concept of ‘souls’—and contrast these with modern neurology. Perhaps we’ll get back to woo and EPR at some later time.
My 2c, Dr Jo.
US equivalent: ‘resident’. In the USA, I’d also be an ‘attending’ rather than a consultant physician.
Technical Expert Class 3, the lowest rung.
Thanks to
for pointing out that it wasn’t pollen grains that wiggled under Brown’s microscope, but their contents!
Thanks to
for pointing out that the original term I used (metonymy) is too broad, and that I should have said ‘synecdoche’. Amended on 2025-06-29.
The average adult male brain weight in humans is about 1370 g; sexists seize on the fact that women’s brains are, on average, slightly smaller. Einstein had “a girl’s brain”. Or girls have a brain that’s just the right size: Einstein-sized. Your choice.
(At this point I must admit that I too walk a bit funny). It’s interesting to note that the founders of Mensa International were a bit disappointed with their efforts. Rich phrenologist Roland Berrill was miffed that his ‘aristocracy of the intellect’ was made up mainly of working-class people—he even initially insisted that Mensa be ruled by a ‘Queen’. Lancelot Ware was disappointed that members’ efforts were dissipated solving puzzles. As a humorous aside, Mensa initially claimed that at the IQ threshold they used, just 3 in 10,000 people qualified; this was out by two orders of magnitude, as they relied on calculations provided by the dotty and devious “IQ expert” Sir Cyril Burt. It was also a little unfortunate that the original Mensa logo resembled three Ku Klux Klansmen sitting around a table.
Einstein had a wonderful resource of smart people both in the ‘worldly cloister’ of the patent office—Michele Besso (again), Lucien Chavan and perhaps Joseph Sauter—and outside: Maurice Solovine and Conrad Habicht.
If it’s a chunky UK one pound coin, there’s a chance it’ll land on its edge, I guess. Oh! Am I edging into the wrong territory again? The coin will land heads, or tails, or on its edge.