Over the past few decades, a troubling shift has occurred in how we perceive and utilize science, particularly within educational contexts. Traditional science, grounded in facts, evidence, and experimentation, is increasingly being overshadowed by modern social science frameworks that emphasize subjective interpretations over empirical data. This shift has significant implications for the reliability and credibility of scientific knowledge.
One clear symptom of this problem is the reproducibility crisis. Reproducibility—the ability to replicate the results of a study using the same methods and data—is one of the foundations of science. But surveys of working scientists show that replication often fails: a 2016 survey published in Nature found that over 70% of researchers had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own. If a study can’t be replicated, its findings can’t be treated as established. This issue has eroded public trust in science and made it harder to solve real-world problems.
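To see why replication failures are so common, it helps to make the statistics concrete. Here is a minimal simulation sketch in Python; the effect size and sample sizes are illustrative assumptions, not figures from the Nature survey. It runs thousands of small studies of a real but modest effect and asks how often an initially “significant” finding survives a replication attempt.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the Nature survey): a real but
# small effect (Cohen's d = 0.3) studied with 20 subjects per group.
effect_size = 0.3
n_per_group = 20
n_studies = 10_000

def is_significant(rng, d, n):
    """Run one two-group experiment; report whether p < 0.05."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    return p < 0.05

original = np.array([is_significant(rng, effect_size, n_per_group)
                     for _ in range(n_studies)])
replication = np.array([is_significant(rng, effect_size, n_per_group)
                        for _ in range(n_studies)])

# Among studies that "worked" the first time, how many replicate?
print(f"Chance a single study is significant: {original.mean():.0%}")
print(f"Replication rate of significant hits: {replication[original].mean():.0%}")
```

Under these assumptions a lone study detects the effect only about 15% of the time, so most headline findings fail to replicate even though the underlying effect is real. The particular numbers don’t matter; the point is that reproducibility is itself a measurable quantity, which is exactly why the crisis could be documented at all.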
In educational settings, this problem is compounded by the introduction of certain modern social science approaches that prioritize subjective interpretation over objective analysis. Instead of focusing on evidence-based questions like “How does this process work?”, these approaches ask questions like “How does this reflect systemic inequalities?” For example, someone might argue that labeling certain smells as “bad” is tied to cultural bias. But how do you test that? How do you prove it true or false? These claims rest on interpretation rather than measurable evidence, which makes them less useful for solving real-world problems.
Another example is implicit bias testing, which claims to reliably measure unconscious prejudice and predict discriminatory behavior. While the idea has been widely adopted in workplaces and institutions, many researchers have questioned the reliability and validity of these tests. How do you objectively measure or prove the existence of a bias that the person may not be aware of? And can it accurately predict real-world actions? The evidence for these claims is often inconsistent and difficult to replicate.
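The reliability question, at least, can be framed quantitatively. A standard approach is test-retest reliability: give the same instrument to the same people twice and correlate the scores. Below is a minimal sketch with invented numbers; the sizes of the stable and noise components are assumptions chosen for illustration, not published estimates for any real test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people = 200

# Invented model: each person's score mixes a stable component with
# session-to-session measurement noise (both magnitudes are assumptions).
stable = rng.normal(0.0, 0.5, n_people)
session1 = stable + rng.normal(0.0, 0.5, n_people)
session2 = stable + rng.normal(0.0, 0.5, n_people)

# Test-retest reliability: correlation between the two sessions.
r, _ = stats.pearsonr(session1, session2)
print(f"Test-retest reliability: r = {r:.2f}")  # ~0.5 under these assumptions
```

A low correlation means the instrument is partly measuring noise, which limits how well any single score can predict behavior. This is the kind of measurable property critics point to when they question such tests.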
Similarly, in education, the concept of learning styles suggests that students learn best when taught according to their “preferred learning style” (visual, auditory, kinesthetic, etc.). While this sounds intuitive, numerous studies have failed to find consistent evidence that tailoring teaching methods to these styles improves outcomes. Unlike the earlier examples, this claim can be tested objectively, and when it has been, the evidence suggests that all students benefit from well-rounded teaching strategies, regardless of their preferred style.
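The standard design, sometimes called the meshing hypothesis, teaches students either in their preferred style or in a different one and compares outcomes. Here is a minimal sketch in Python with invented scores; the means and spreads are assumptions, and no matching advantage is built in, mirroring the null results reported in the literature.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Invented test scores for self-described "visual learners" taught
# visually (matched) or verbally (mismatched). No real effect is
# simulated, echoing the null findings in the literature.
matched = rng.normal(75.0, 10.0, 60)
mismatched = rng.normal(75.0, 10.0, 60)

t, p = stats.ttest_ind(matched, mismatched)
print(f"matched mean:    {matched.mean():.1f}")
print(f"mismatched mean: {mismatched.mean():.1f}")
print(f"p-value:         {p:.3f}")  # with no real effect, a small p is a false alarm
```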
Another common example is cultural appropriation in art and fashion, where critics argue that using certain cultural symbols is inherently exploitative or oppressive. For instance, a designer might be criticized for incorporating a traditional motif from another culture into their work. While these discussions can raise important questions about respect and representation, how do you measure whether such acts cause tangible harm? These claims often rely on subjective perceptions of offense, which are not easily quantified or tested.
I am going to attempt to convince you that, if we want to solve real problems and move forward as a society, we need to abandon this way of thinking and return to science rooted in evidence and logic.
Let’s start with what real science looks like. Science asks clear questions about the world and then tests them. For example, scientists might wonder, “Does this medicine cure disease?” They test it on a large group of people, compare the results against a control group, and share their findings so others can confirm their work. If the results hold up, we accept them as true, at least until new evidence suggests otherwise. This process has given us breakthroughs from antibiotics to airplanes and smartphones. Science works because it rests on claims that can be tested, and retested, by anyone.
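As a concrete sketch of that process, here is what “test it on a large group and compare the results” looks like in code. The patient counts and cure numbers below are invented for illustration.

```python
from scipy import stats

# Hypothetical trial (invented numbers): 500 patients per arm.
cured = [340, 290]     # treatment, control
patients = [500, 500]

# 2x2 table of cured vs. not cured; the chi-square test asks whether
# a gap this large is plausible if the medicine did nothing.
table = [[cured[0], patients[0] - cured[0]],
         [cured[1], patients[1] - cured[1]]]
chi2, p, dof, expected = stats.chi2_contingency(table)

print(f"Treatment cure rate: {cured[0] / patients[0]:.0%}")
print(f"Control cure rate:   {cured[1] / patients[1]:.0%}")
print(f"p-value: {p:.4f}")  # small p: the gap is unlikely to be chance
```

Crucially, anyone with the same data and method gets the same answer, and anyone who doubts the result can rerun the trial. That is what makes the claim scientific rather than rhetorical.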
The subjective frameworks described earlier also breed something called relativism. Relativism is the idea that all opinions are equally valid, no matter how poorly supported. For example, certain frameworks might argue that math is oppressive because it emphasizes correct answers. But without objective truth, how do we build bridges, design computers, or cure diseases? Imagine an engineer saying, “My truth is that this bridge will hold up,” even when the calculations show otherwise. Relativism doesn’t lead to progress; it leads to chaos.
We’re already seeing the damage this mindset has caused. In some workplaces, employees are forced to sit through workshops where they’re told that their race or gender determines whether they’re an “oppressor” or “oppressed.” These sessions aren’t based on evidence but on assumptions. In schools, lessons often focus on how systems are unfair instead of teaching students how to think critically and solve problems. Instead of creating solutions, this approach fosters division and resentment. And since these frameworks reject the idea of objective truth, it’s impossible to argue against them—it’s like debating with someone who denies the sky is blue.
The truth is, these modern approaches don’t fix problems; they just find new ones to complain about. Imagine if doctors used these methods. Instead of asking, “What’s the best treatment for this disease?” they’d spend all their time arguing about how healthcare systems are unfair. While that might be worth discussing, it doesn’t help the patient. Real science, on the other hand, focuses on solutions. It asks testable questions, runs experiments, and uses evidence to make the world better.
We need to return to real, objective science. Science works because it relies on evidence, not opinions. It’s the reason we have airplanes that don’t fall out of the sky and medicines that actually cure diseases. These frameworks might sound sophisticated, but they’re a dead end. If we want to move forward, we have to focus on what’s real and testable. That’s what real science does—and it’s the only way to truly solve the problems we face.
I love the premise of these thoughts. I agree that the drift away from evidence-based science and proven hypotheses toward the lowered bar of social acceptance is a key issue in today's society. I appreciate the fine line you are navigating between the value of DEI initiatives in education and business; they certainly have value and a place in both. However, the ONE THING that matters most in the growth of an individual, in education or in business, is the skill of determining what is relevant to moving the needle toward a goal. The only way to do that is to remove feelings from the equation. That doesn't mean feelings should be ignored or, worse, disregarded. It means we need to teach the ability to be present, acknowledge feelings as humans, and separate them from decision-making. The challenge is that the vast majority of people go through life the exact opposite way: of the roughly 3,000 decisions the brain makes each day, most are made from emotion and then rationalized or justified with logic. I don't believe we are going to change that. What we can change is the awareness around it, carving out the time and systems needed to return to the scientific process for the critical decisions that guide our societies, cultures, and personal values. It's beautiful to identify this challenge; I struggle with the solution.
Well said, Travis. Also, scary thoughts about the division, resentment, and other issues these teachings may unintentionally be fostering in our next generation.