Our Philosophy

Ockham's Razor

Beginning with Ockham's Razor, our goal is to conduct our research using the simplest techniques available while incorporating conditional information into our datasets. This lets us refine the truth in the data as we determine which additional events to add, so that our results are as unbiased as reasonably possible.

Our use of Ockham's Razor follows the problem-solving principle attributed to William of Ockham (c. 1287-1347), which places the burden of proof on complexity and favors simple solutions evaluated through hypothesis testing. As a heuristic tool of discovery, it guides our search for the truth contained in our data using the inductive logic of Bayes' Theorem, in the spirit of Solomonoff's theory of inductive inference.

Ockham's Razor can be derived from basic probability theory and is grounded in Bayes' Theorem. Notable scientists such as Harold Jeffreys and E. T. Jaynes argue for the Bayesian basis of Ockham's Razor, as does David J. C. MacKay in his book Information Theory, Inference, and Learning Algorithms. William H. Jefferys and James O. Berger (1991) continue this argument and welcome the use of Bayesian inference and marginal, conditional, and posterior probabilities.
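For reference, the identity underlying this argument is Bayes' Theorem, which relates the posterior probability of a hypothesis H given evidence E to the prior and the likelihood:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad
P(E) = \sum_{i} P(E \mid H_i)\, P(H_i)
```

Here P(H) is the prior, P(E | H) the likelihood, P(E) the marginal probability of the evidence, and P(H | E) the posterior.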

Grover (2013) offers a literature review and continues the argument for the utility of Bayes' Theorem in the search for the truth.

The steps in the discrete Bayesian research methodology begin with two questions:

  1. Does the data exist?

  2. Does the data need to be collected?

CASE I: If the data already exist (#1), then the following research methodology applies:

  • Step 1. Cleanse the data
  • Step 2. Specify conditional (posterior) hypotheses
  • Step 3. Identify the events and elements of each event.
  • Step 4. Ensure the data are discrete; if not, perform discretization operations.
  • Step 5. Upload the data to a database manager.
  • Step 6. Conduct counting crosstab operations, present the data as prior and likelihood counts, and report prior and conditional counts in confusion matrices.
  • Step 7. Conduct Bayesian statistical operations using the count data, including the following:

  1. Compute prior probabilities
  2. Compute likelihood probabilities
  3. Compute joint probabilities
  4. Compute marginal probabilities
  5. Compute posterior probabilities

  • Step 8. Revise the initial hypotheses based on the conditional evidence determined with the posterior probabilities.
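As an illustration, the counting and probability operations of Steps 6 and 7 can be sketched in Python. The hypotheses (H1, H2), events (E1, E2), and confusion-matrix counts below are hypothetical placeholders, not data from our research.

```python
# A minimal sketch of Steps 6-7, assuming a 2x2 confusion matrix of
# hypothetical counts. Rows are hypotheses (H1, H2); columns are
# observed events (E1, E2).
counts = {
    ("H1", "E1"): 40, ("H1", "E2"): 10,
    ("H2", "E1"): 20, ("H2", "E2"): 30,
}
hypotheses = ["H1", "H2"]
events = ["E1", "E2"]
total = sum(counts.values())

# Row totals: number of observations under each hypothesis
row_total = {h: sum(counts[(h, e)] for e in events) for h in hypotheses}

# 1. Prior probabilities: P(H) = row total / grand total
prior = {h: row_total[h] / total for h in hypotheses}

# 2. Likelihood probabilities: P(E | H) = cell count / row total
likelihood = {(h, e): counts[(h, e)] / row_total[h]
              for h in hypotheses for e in events}

# 3. Joint probabilities: P(H, E) = P(E | H) * P(H)
joint = {(h, e): likelihood[(h, e)] * prior[h]
         for h in hypotheses for e in events}

# 4. Marginal probabilities: P(E) = sum over H of P(H, E)
marginal = {e: sum(joint[(h, e)] for h in hypotheses) for e in events}

# 5. Posterior probabilities: P(H | E) = P(H, E) / P(E)  (Bayes' Theorem)
posterior = {(h, e): joint[(h, e)] / marginal[e]
             for h in hypotheses for e in events}

print(posterior[("H1", "E1")])  # about 0.667: evidence E1 favors H1
```

In Step 8, a posterior such as P(H1 | E1) would then be compared against the prior P(H1) to decide whether the conditional evidence supports revising the initial hypothesis.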
CASE II: If the data must be collected (#2), then the following applies:

  • Step 1. Specify the hypotheses of interest.
  • Step 2. Collect the data, ensuring there is a common unit of measurement.
  • Step 3. Proceed with Step 1 of CASE I and evaluate the hypotheses.