A Taxonomy for Truth-Seeking in the Classroom

by Robert Rue

What’s in a Theory?

Several months ago, I was reading a book that advanced a hypothesis about a complex social dynamic. The thinking in the book was anecdotal and intuitive, based mostly on personal observation. It felt like the kind of work that might, at best, point the way to some rigorous inquiry.

And then came the author’s outrageous claim—that her work was a theory, an educated guess, and that in this way it was just like Darwin’s theory of evolution. Given the number of institutions that are making this book required reading, it seems that a lot of people have bought this claim.

The reality?

Her utterly untested—and perhaps untestable—hypothesis was nothing like Darwin’s work, which, even if it contains some contested space, has allowed biologists to make a stunning array of accurate predictions about the natural world for more than 150 years.

So, what enabled the author to elevate her work, plausibly to some, to the level of one of the most profound and well-demonstrated ideas in human history?

Among other things, a general confusion about the word “theory.”

We often put the word after “conspiracy,” which makes it sound as if an idea doesn’t need much substance in order to meet the requirements of the definition. We also use the word, often while sitting on barstools, in a way that is almost synonymous with “I’m going to connect two or three data points in any way that I think might be fun right now.”

But at the same time, “theory” is attached to uncontested notions like gravity.

Yes, this is confusing. Still, it is safe to say that no scientist who uses the term “theory of evolution” thinks of Darwin’s central idea as a mere guess. In fact, in this context, the word means something very close to the opposite.

This kind of definitional confusion is not as innocuous as it looks. If we are giving the same level of confidence to the book I just described as we should give to Darwin’s theory of evolution, we are living in a fantasyland.

Collective Delusion

I think we all know some signs of this societal illness—the tendency for groups to coalesce around counterfactual ideas—but a lot of us will conveniently believe that someone else’s political tribe is responsible for all of them. The exclusive blame of other groups is a feature of the delusion. For the record, the storming of the U.S. Capitol in the name of democracy; the creation of “autonomous zones” in the name of improving society; the smashing of businesses—some of them black-owned—and the demonization of logic, merit, hard work and punctuality in the name of racial justice are just a few of the things on my long list for the United States alone.

What probably seems like intellectual quibbling about the word “theory” is admittedly way upstream of the madness we’ve all been witnessing, but that’s my point. Even as we acknowledge that emotionally charged group behavior arises out of something real, thinking people among us—educators at the top of that list—have got to start paddling upstream—hard—and figure out how to divert the rush of irrationality.

I want to argue here that, short of imminent nuclear war, there is no issue more pressing than our broken ability to register, interpret and reasonably act upon reality, or our best approximation of it. This is not a good-old-days argument in which I lament the loss of Walter Cronkite. Our ability to detect and report reality has never been good. Ever since technology began connecting disparate parts of the world—and thus amping up our ability to impact it for good or for ill—there has been a growing mismatch between what we understand and what we need to understand. When oceans and other geographical obstacles were likely to keep human populations separate, there was virtually nothing a person could do that would have global impact. But our current connectedness means that a mistake (or a malicious choice) anywhere can lead to a disaster everywhere. And as Yuval Noah Harari, Tristan Harris, Daniel Schmachtenberger, Bret Weinstein and others have pointed out, our brains have been hacked by algorithms whose attention-capturing incentives can only lead to more delusion.

In my view, K-12 schools have not yet recognized the danger, their complicity in it, or the urgent need for a reformulation of curriculum and teaching goals to combat it.

What I propose here is a clarification of the very terms we use to discuss truth. This taxonomy alone solves none of our problems, and yet without something like it—I’m sure I won’t get it quite right—I believe we will fail at all the higher-order truth-seeking endeavors.

Failing at that means the end of us.

The Taxonomy

Axiom

A claim that is accepted without proof because it is self-evidently true, like the claim that the number 1 equals itself, that addition and subtraction are inverse operations, or the principle of non-contradiction.
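
In symbols (offered purely as an illustration, not as part of the taxonomy itself), those three examples might be written:

\[
1 = 1, \qquad (a + b) - b = a, \qquad \neg(P \land \neg P)
\]

The last formula is the principle of non-contradiction: no statement P can be both true and false at once.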

Why This Definition Matters

Without granting some underlying order, thinking itself is impossible.

Fact

Though we think of this as something that is incontrovertible—a thing that simply is—I believe we ought to acknowledge that what we call facts are often things that we can’t actually prove in the strict sense of the word. Instead, they are things on which we have overwhelming agreement. (We can’t prove that Abraham Lincoln was born on February 12, 1809; we simply affirm that the evidence is sufficient for believing this claim—or that people we trust, historians, have decided that the evidence is sufficient.) Given that definition, a fact is something that can be wrong. (When the vast majority of humanity thought the world was flat, for instance, that was nonetheless a fact for those people… until it wasn’t.)

This is not a formula for intellectual anarchy nor for the postmodern claim that truth does not exist. (As Philip K. Dick once said, “Reality is that which, when you stop believing in it, doesn’t go away.”)

Why This Definition Matters

Recognizing that factualness derives from common agreement and not from absolute proof is useful for a genuinely scientific mindset, since a scientist—or anyone thinking scientifically—should always be open to the possibility of new information leading us to new facts. Believing that something called a fact is absolute and thus can never change is a formula for dogma.

Data

Data is a fact about the world that has been given portable form. While I can’t make the sky itself portable, I can take a picture of some part of it and put it in a book. I can measure the temperature in that sky and turn it into a number. I can measure its color on a spectrum. But none of these things is the sky itself. Data is a reduction of the whole. It almost always requires interpretation in order to have meaning. This is particularly true when it comes to any complex social dynamic.

Why This Definition Matters

Because the map of the territory is never the territory itself. Once again, a thing we often think of as synonymous with proof is not proof. A piece of data, even if honestly and accurately collected, is only one piece of a map, and no map captures all of reality.

Evidence

Evidence is not proof, and neither is it merely fact. Evidence is fact that is being given an interpretation, usually as part of some posited causal chain. When we say something is evidence, we are saying of a fact that it supports a claim.

Why This Definition Matters

Separating the fact from the interpretation laid upon it can allow us to see how much distance there is between the two and therefore how much burden of explanation is to be appropriately placed upon the interpreter. It can also help us to see that the same fact can sometimes be deployed for different—even contradictory—arguments. 

Hypothesis 

A hypothesis is a proposed explanation of an observed dynamic. It allows us to make predictions that can then be tested.

Why This Definition Matters

Because it helps us see the difference between a hypothesis and a theory.

Theory  

A theory is an explanation of an observed dynamic that has been rigorously tested and has thus been elevated out of hypothesis status. It is still not proven and may contain some elements that require further testing, but the idea has accurately predicted so many things about the aspects of the world it attempts to explain that experts who are using a scientific mindset treat the idea with high levels of confidence.

Why This Definition Matters

Because a theory, in the scientific sense of the word, is not a speculative assertion, and we are confused when we think it is. While being open to evidence that would bring a theory into question is important, scientific theories are highly likely to be accurate.

Proof

This word is often used interchangeably with “evidence,” but I think we ought to create a hard distinction. “Proof” is absolute. It is incontrovertible. The only realm in which proof exists is mathematics.
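
To make the distinction concrete, here is a minimal example of proof in this absolute sense, the kind of demonstration that, once the definitions are granted, cannot be controverted. Claim: the sum of two even numbers is even.

\[
m = 2a,\; n = 2b \;\Longrightarrow\; m + n = 2a + 2b = 2(a + b)
\]

Since m + n is twice the whole number a + b, it is even. Not probably even, not even according to the best available evidence: necessarily even.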

Why This Definition Matters

Even science does not meet this standard of proof, and in fact, this is a critical thing to understand about science (see entry on science). When we use the word “proof,” we ought to mean it.

Truth

As opposed to “fact,” we usually use the word “truth” in reference to an assertion of large explanatory power. While “fact” mostly refers to the existence of a thing, “truth” refers to a logic that binds many facts together. I think it makes the most sense to conceive of truths as the humanities’ best attempt at capturing an explanation of reality, and yet we should recognize that these claims can almost never be tested as rigorously as scientific hypotheses.

Truths are often—rightfully in my view—called narratives, not because they are invented from thin air but because they possess the most important quality of stories: they simplify the world by selecting facts that fit the explanation and by leaving out—either intentionally or not—all the facts that don’t. (In a novel or movie, the “explanation”—usually unstated—is the theme.) We crave meaning, which is a refutation of randomness, and this is how we often make it.

That the Allies landed on the beaches of Normandy on June 6, 1944, is probably something we would call a fact rather than a truth. But as soon as we start to posit why they were invading, or what impact the invasion had on the rest of the war or the rest of the century, we are making the kind of claim that, if we agreed with it, we would call a truth. Claims like this one transform some facts into evidence and leave other facts unnoticed or unmentioned. The difficult and yet essential thing to grasp here is that “truths” are more like partially tested hypotheses than we sometimes want them to be.

It is useful to remember that our brains evolved to posit cause-and-effect relationships (see Kahneman and Tversky), and that we usually do this not in the form of analysis but in the form of rationalization of a claim that we want, or instinctively believe, to be true. The scientific method is the cure for this, and though our thinking in the humanities can be more or less scientific—we ought to aim for more—it can never be perfectly scientific.

Why This Definition Matters

It calls attention to what is necessary—we need to posit causes and effects—but always incomplete about truths. Truths are heuristics, but we often think of them as complete descriptions of reality.

In my view, one of the least-taught skills is the practice of noticing and acknowledging facts that don’t fit our truths and seeking to combine them in synthesis with the ones that do. This process results not in perfect, but in better, truths. Students are often taught to do nearly the opposite of synthesis—what they mostly learn to do in school is to make a claim and then back it up. That process results in dismissing inconvenient facts or, more likely, ignoring their existence.

Reality

See Philip K. Dick’s definition above.

Why This Definition Matters

Like fairness or happiness, reality is something we need to reach for, even if we never grasp it all.

Science

We should not think of science as a set of incontrovertible claims. We should equate it with a way of thinking that combats bias or faulty reasoning. It can get us closer to understanding reality.

Why This Definition Matters

If someone says “trust the science” and means that we should always trust what scientists say, that is a terrible piece of advice. Scientists are human beings who can make mistakes, confirm their biases, fake their results and mislead by systematic selection of what they’ve uncovered. Furthermore, scientists don’t always agree about “the science.”

What we should do, as biologist Heather Heying has said, is “trust science.”

What’s the difference? Science is the way of thinking, not the conclusions. When a false result is achieved or reported, something went wrong with the way of thinking—either intentionally or not—and therefore the process, or the conclusion from it, was not science.

The Dangers of Truthiness?

Perhaps you’re thinking that it’s strange for someone who is worried about sensemaking to define almost every major term associated with truth in a way that emphasizes uncertainty, and then to recommend such definitions to teachers and students. Don’t we need to be more certain, not less? Isn’t the problem all those people who are wrong? And don’t the smart people just need to correct them?

Well, at least some data suggest that we’ve never been more entrenched in our mutually exclusive camps of certainty. How’s that working out? How does that project into the future?

Right.

How did we arrive in this state?

Let’s go back to Lincoln’s birthday. Let this one example stand for the vast majority of reality that we cannot know directly. The anxiety associated with this uncertainty has always lurked. We’ve quelled it—some of us more easily than others—by trusting people who should know the facts. And, of course, this trusting is made easier by realizing that many facts carry with them fairly low stakes—no one’s life would be altered if it turned out that Lincoln was born on the 13th instead of the 12th.

But let’s push beyond facts and consider where we get our truths (remember the definition) about complex social dynamics, which never have just one cause. We tend to get these assertions from ideology, from politics, from market-driven journalism, or from activism of one kind or another. We all sense that something about the 21st century world is unraveling, and when we’re honest with ourselves, we all experience the frustration of not being certain about what it is.

These are high stakes, and the simplicity of the stories that explain the impending, or already-arrived, doom is empowering, passion-inducing, intoxicating. Narratives push people into the streets, they move money, they get presidents elected, they put rocks through storefront windows and sometimes bullets into flesh. Narratives are scalable, and thus we are incentivized to treat them as if they were describing reality itself. And now, virtually every person on the planet has the reality-shaping capacity of X-thousand printing presses in the palm of the hand. So why not pick a narrative that solidifies your position with your in-group and press send?

What we don’t seem to have realized is that an accurate description of reality is not even the purpose of ideology, politics, market-driven journalism or activism. What they all have in common is the desire to achieve a goal. If a description of reality helps toward that end—and the goal can be achieved that way—then bring on the descriptions of reality. But if the goal can only be achieved by deception, partial truths and emotional manipulation—or the ideologues think this is the only way—then bring on the propaganda. When we were told at the beginning of the pandemic that masks would not protect us from Covid-19, the experts telling us this were not speaking in science; they were speaking in politics—that is, they were trying to achieve a goal: preserving masks for the people the experts thought needed them most, namely medical staff. Maybe they had the right goal in mind. Maybe they didn’t. But now think about what gun rights or social justice activists are incentivized to do in a human population that flocks to propaganda. If you’re like most people, you’re deeply worried about at least one of those groups, and thus you can see the problem.

In other words, the real danger in sensemaking is not uncertainty but unwarranted certainty.

What we desperately need is a populace trained in probabilistic thinking and reasonable skepticism. Probabilistic thinking will keep us from existing in a state of paralysis—at any given moment, we will act according to what we believe is likely accurate. Reasonable skepticism will keep us open to updating our beliefs and will, of course, make our current beliefs more likely to be right.
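
For readers who want a concrete handle on what updating a belief can look like, Bayes’ rule (a standard formalization from probability theory, offered here as one illustration rather than as part of the argument) describes how confidence in a hypothesis H should shift in light of evidence E:

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
\]

A belief held with prior probability P(H) gets revised upward when the observed evidence was more likely under the hypothesis than it was in general, and downward when it was less likely.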

Let’s fantasize for a moment. What if we made that scalable? And what if we saw education as the mechanism for doing it?