There are so many ways to arrive at faulty conclusions.
When it comes to populations that have undergone trauma, one way to arrive at a faulty conclusion is to learn exclusively from those who survive their experiences: taking campus policy advice solely from students who manage to stay in school despite bullying and marginalization, for example.
Another way to arrive at a faulty conclusion is to take cues only from those who don’t survive. A contraception study recently ended because some subjects found common side effects intolerable. By ending the study prematurely, researchers have ensured that they can’t track the progress of the subjects who were more resilient, and testing of this medicine might be permanently halted.
As they work to understand their members’ motivations, denominations like mine are commissioning retention research projects about former members as well as current ones. These studies are a tool for mitigating the impact of survivorship bias on church policymakers who would otherwise focus only on people who remain formally affiliated. Of course, the quality of the conclusions drawn from these studies depends on how the studies are designed and what respondents volunteer when asked.
A fascinating example of survivorship bias featuring US Navy researchers went viral on Twitter yesterday:
It’s a remarkable story: were it not for Abraham Wald and his analyses, the Navy might have protected parts of its planes that didn’t need protecting and left vulnerable the parts that most needed armor.
Consider the range of sources you rely on to learn about the populations you work with. Could you be misreading why “survivors” survive and what sets them apart from those who don’t thrive?
As far as it lies within your power, learn from all cases, not just those that match your initial assumptions.