Often, the beliefs and tools that seem to be true conflict with the beliefs that are actually useful. In such cases, we should take note of our confusion, and use the useful beliefs anyway.
I’ll begin this essay with something perhaps controversial, and then extrapolate. (I’m not religious.)
I have found no sound scientific arguments for the existence of God. But this doesn’t mean that believing in a religion is useless.
Religion isn’t about facts and objective reality, and it shouldn’t be. Instead, religion is about beliefs and utility.
For some people at least, religion can provide benefits.
But it doesn’t matter at all whether religion is ‘strictly true’. Instead, what matters is whether religion helps someone live a better life.
One such benefit is trust: knowing someone’s religious philosophy, you feel like you can better predict their behavior. They won’t surprise you with malice that neither of you thinks is justified. You have shared morals.
I suppose it’s far easier for a system to develop where everyone believes “this religious philosophy is true” than it is for everyone to believe “this philosophy isn’t necessarily true, and I know some of the parts don’t make sense, but it still bestows an advantage upon me, so I’ll pretend along with everyone else and encourage everyone else to keep pretending.” Blind faith is just the simpler metabelief.
Moreover, believing in a religious philosophy completely-and-no-matter-what is a strong social signal, which benefits the believer for the social-trustworthiness reason explained earlier.
In general, though, there are many reasons we might not be naturally conscious of the weird pretending in religion. If you’d like to learn more about that, read The Elephant in the Brain; I address the book in the Further Reading section of this essay.
There are so many religions, and yet the question is never “Which religion should I choose?” but “Should I do religion at all?”. This suggests that whether to practice religion may matter more than which one to choose. And since there are thousands of religions, maybe they’re almost all nearly equivalent in effectiveness, or else there’d be less variety in the ecosystem.
And why is it that most religious people don’t seem to care about the arguments for and against whether religion is ‘true’? I think this is because they actually don’t care: for them, religion’s value was never about its truth.
There are probably many benefits to religion beyond those I’ve enumerated, too.
But then how do complex religions develop in the first place?
Perhaps because of apophenia: the human tendency to perceive meaningful patterns in unrelated things.
Indigenous Americans have a tradition of processing corn in a particular way before eating it: soaking it in an alkaline solution, a process now called nixtamalization. Anyone who skips this step and eats corn as a staple for a long period gets sick.
It wasn’t known until the twentieth century that this processing prevents what would otherwise be a niacin deficiency, the disease called pellagra. But this was a tradition: they didn’t have to know why it worked, they just had to do it.
And supposedly, early colonial settlers were taught this process by Indigenous Americans, but failed to keep the tradition. The resulting sickness became the most widespread nutritional deficiency in American history.
By the time something becomes tradition, the people who discovered it are most likely dead, and knowledge of the consequences has most likely been forgotten. So if you were to ask the Indigenous Americans who kept this tradition why they had it, they might not even know that it prevents a kind of sickness!
How is it that, despite tremendous pressure towards forgetting and towards simplification, complex traditions and stories persist across millennia?
Because the people who believed the currently surviving stories (there’s survivorship bias here) out-reproduced those who didn’t, furthering both the people and the memes they carried.
We don’t entirely know how anesthesia works. Yet it’s been used medically for a very long time. And anesthesia definitely isn’t the only treatment in modern medicine that’s like this.
Strictly, all models are imprecise.
Strictly, all maps are incorrect.
For traditions, there is a natural selection process that filters which traditions last long enough to be observed today. For this reason, we should trust them even if we don’t know why they work.
Yeah, sometimes things turn out wrong (ex), but sometimes they turn out right (ex).
So while complete scientific data is of course preferable, considering how long a belief has persisted (despite the pressures of forgetting and simplification) can be valuable.
We shouldn’t limit ourselves to acting only on what’s been studied scientifically; going beyond that lets us leverage wisdom beyond our current capabilities.
Whenever you use something you don’t understand, add it to a list of “things I use but do not understand”. It can be easy to forget (and then never question) the source of your beliefs!
Moreover, take note of when a tool or belief is useful, and in what situations it isn’t. We don’t have to understand why the heuristic fails, either, just when it does.
(This is especially important when building something. Take note of every assumption or logical-rounding that was crucial in the development of your thing.)
In life we’re not necessarily optimizing for truth; we’re optimizing for value.
But in practice, what seems to be true sometimes conflicts with which tools are actually useful. In such cases, we should take note of our confusion, but use the tool anyway.
It’s an artificial constraint to limit yourself to merely the things that seem to be true, rather than the things that seem to be useful. (And this, itself, is a useful belief to have!)
Further Reading

See The Elephant in the Brain. This book completely changed my model of how minds work. It’s also one of the few books that I think makes full use of its length; I don’t think I would’ve believed it if it were shorter.
Also, a reader informed me that what I outline in this essay is an argument in favor of something called pragmatism.
Ideated 2020 July, posted 2020 September 18, last updated 2021 January 5.