The scientific method deficiency

To understand how modern science tests and validates hypotheses, we have to understand the procedure of the scientific method.

The scientific method works in essence like this:

  • Ask a question related to an observation and brainstorm about it
  • Construct a hypothesis from the observation
  • Deduce predictions from the hypothesis
  • Develop the tools (if needed) and/or look for things that allow you to test your hypothesis, and see if your trials produce the observational results
  • Draw conclusions based on the results of the experiment and validate the hypothesis when the reproduced results are in conformity with the observational facts – RICO (results in conformity with observational facts)
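The steps above can be sketched as a simple validation loop. The following is a minimal illustration (all names and numbers are hypothetical, chosen for the example): a hypothesis yields predictions, which are then checked against observations for conformity (RICO).

```python
def rico(predictions, observations, tolerance=1e-6):
    """RICO check: every prediction must match its observation within tolerance."""
    return all(abs(p - o) <= tolerance for p, o in zip(predictions, observations))

# Hypothetical example: hypothesis "fallen distance grows with the square of time"
hypothesis = lambda t: 0.5 * 9.8 * t ** 2      # predicted distance (metres) after t seconds
times = [1.0, 2.0, 3.0]
observed = [4.9, 19.6, 44.1]                   # measured distances (metres)

predicted = [hypothesis(t) for t in times]
print(rico(predicted, observed, tolerance=0.05))  # True: hypothesis validated
```

Note that the check only compares outputs against observations; nothing in it inspects whether the hypothesis itself is true, which is the deficiency discussed below.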

This process is presented in simplified graphical form hereunder.

Today the scientific method is used to investigate Nature and to verify the validity of conclusions following from the hypotheses contained in a theory that tries to describe the natural phenomenon in question.

Testing the validity of conclusions mainly reduces to verifying whether the predictions that follow from them are in conformity with the observational facts. This poses a problem, since different work-mechanisms, be they fictional or not, can attain the same goal – or, to put it in more practical terms: ‘different models sprouting from divergent hypotheses could all reproduce results that are in conformity with the same observational facts (shortly: RICO)’.

What is often overlooked is that once we deal with fundamental Nature, hypotheses say something not only specific about the phenomena we describe but also fundamental about our world; hence, different models able to guarantee results in conformity with the same observational facts will often, on the fundamental level, produce world pictures that are at war with each other – that is why we refer to this as “the scientific method deficiency”.

In trying to avoid this conundrum, scientific materialists claim that the underlying structure of the universe is mathematical in nature; hence, all premises contained in a mathematical model of Nature that guarantees RICO might be seen as true representations of physical reality.
In other words, being absurd is no longer a problem in today’s science.

However, mathematics, besides being self-contained and non-contradictory, becomes a relational science when dealing with Nature.

Yet, as far as we can tell there is only one knowable/intelligible universe; hence, the hypotheses contained in the different models describing a common phenomenon of Nature and able to guarantee RICO may certainly not, on the fundamental level, be in contradiction with each other, even though all these models could reproduce RICO.

All this confronts us with a substantial problem: that of model preferentiality.

Much can be written about the philosophy and psychology of model preferentiality, but we’ll leave it to others to do so; what we’ll do here, in brief, is outline how secularists try to come to terms with it.

Model preferentiality in post-modernist science is resolved through a reformulated guiding principle of logic and problem-solving from the logician and Franciscan friar William of Ockham (c. 1285–1349), called in his honor Ockham’s razor.

The principle traditionally attributed to Ockham reads:

“Entia non sunt multiplicanda praeter necessitatem” or “Entities should not be multiplied unnecessarily.”

In more straightforward language this reads: ‘things should not be made more complicated than is required’. But required in relation to what?

Newton probably saw the weakness of Ockham’s initial principle, which sprang from its abstractness, and so, for general usability, he reformulated it in more concrete terms:

“We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.”

Newton clearly emphasizes the words “true” and “sufficient”. ‘True’ in this case means related to the true nature of things, while ‘sufficient’ implies that these truths should be adequate to explain the phenomenon at hand.

Unfortunately, it seems that secular science (SS) has deliberately expelled the word ‘true’ and reformulated Ockham’s razor as follows: ‘when different competing theories can reproduce the same observational facts, the one making the fewest assumptions is preferable’.
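This reformulated razor amounts to a purely quantitative selection rule. A minimal sketch (with entirely hypothetical models and assumption counts) makes the point explicit: among candidates that reproduce the facts, the criterion simply minimizes the number of assumptions, and truth never enters the comparison.

```python
# Hypothetical competing models; only two of them reproduce the observational facts.
models = [
    {"name": "model A", "assumptions": 5, "reproduces_facts": True},
    {"name": "model B", "assumptions": 2, "reproduces_facts": True},
    {"name": "model C", "assumptions": 1, "reproduces_facts": False},
]

# The reformulated razor: filter on RICO, then pick the fewest assumptions.
candidates = [m for m in models if m["reproduces_facts"]]
preferred = min(candidates, key=lambda m: m["assumptions"])
print(preferred["name"])  # model B
```

Nothing in this selection rule asks whether any of model B’s assumptions are truth-related, which is precisely the objection raised below.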

Yet, can someone please tell us:

  • How are we to comprehend fundamental Nature and the self if the hypotheses of our models don’t have to be truth-related but merely have to enable us to reproduce results in accordance with first-order observational facts?
  • Why should deciding what is preferable on the fundamental level be a matter of predilection or sufficiency rather than of choosing what is most relevant with regard to the true nature of things?

Evidently, if on the fundamental level we want to use the scientific method in combination with Ockham’s razor, then a truth-related background referential or background filter is needed to enable us to maximize the exclusion of absurdities by checking whether the hypotheses contained in the advocated models contradict the fundamental nature of our knowable world.

Thus, the scientific method as used today is, to put it in simple terms, deficient and not a valid tool for understanding and investigating fundamental Nature.
