Some Thoughts on the Distinction between Protoscience and Modern Science

All modern sciences began as natural philosophy and gradually matured into the powerful, reliable, and vibrant disciplines that we have today.  I want to identify the primary epistemic dimensions that characterize fully legitimate science.  I think we can differentiate between the protoscience of centuries ago and the mature science of today, but I’m searching for a word that better indicates the crux of this distinction than “modern”.  This distinction does not hinge upon the notion of new vs. old science.  Rather, it comes down to how refined the practice of a science is.  If a science is practiced with a clearly understood methodology and well-established practitioners who produce reliable results, then we can say it is systematic.  Such sciences are driven by a well-accepted standard model that features frameworks, theories, and paradigms.  On the other hand, if a so-called science (or natural philosophy) is practiced in an arbitrary, speculative, and makeshift manner, then we can say it is proto, as in protoscience, proto-psychology, proto-chemistry, and even proto-phenomenology.  Some, but not all, protosciences develop into systematic sciences.  Cutting-edge developments in science, in which the methodology has not yet been clearly defined and the paradigms are still under development, are likewise protosciences.

Protosciences tend to be conducted ad hoc, with experimental, observational, and interpretive processes improvised on the fly, whereas systematic sciences apply uniformly to similar types of known phenomena and are reproducible, or at least are driven largely by reproducible processes.  Indeed, many sciences study phenomena that are not themselves reproducible, but there has to be an interpretive association with phenomena that are reproducible, and this has to be connected to the greater scientific knowledge base and explanatory scheme through consilience.  For example, astrophysics might involve the study of the formation of galaxies that occurred several billion years ago, but this study relies in part on the repeatable observation of the slight movements of stars and planets.

The concept of protoscience is related to the notion of folk science, which refers to ways of understanding the world using common wisdom within a given culture and without the use of rigorous methodologies.  There is some conceptual overlap between these two, but we might say that protosciences are in the process of emerging into mature disciplines and are semi-sophisticated, whereas folk science is practiced in a rather ignorant and haphazard way by people who are oblivious to the ineffectiveness of its enterprise.

One central theme upon which this epistemic dimension hinges is the extent to which there is a coherent approach to understanding the subject matter in question.  In order to be systematic, a science would need to have a developed strategy that brings together the tactics for working toward high levels of mutual understanding and consensus.  If a science lacks such a strategy, which usually takes the form of a rigorous and coherent methodology, then it is proto.

This epistemic dimension is also related to the distinction between so-called “hard” and “soft” sciences.  The hard/soft distinction in science has never been clearly defined and universally accepted, but we can roughly say that the hard sciences are those that are more systematic and quantitative, and the soft sciences are those that lean closer to the proto and the qualitative.  Physics and chemistry are the hardest of sciences, while sociology and economics are often thought of as soft sciences, with biology being mostly hard and psychology falling somewhere in the middle of this spectrum.  All of these sciences have been established as systematic, some more than others, so we can say that none of them are proto.

Some Thoughts on the Distinction between Legitimate Science and Pseudoscience

Some disciplines and systems of methods that have a large number of followers and practitioners might seem to be scientific, but they are actually pseudoscientific.  This includes any belief system that in some way takes into account empirical evidence and that has rules for using it to purportedly better understand the world, but that is based on theories that are not epistemically justified.  The fact that these systems do involve empirical observation, along with rules for interpreting the findings and making predictions, often tricks people into thinking that one can gain useful and reliable knowledge from them.

Twentieth-century philosopher of science Karl Popper identified pseudoscience as any system that involves theories that are not falsifiable.  A theory is falsifiable if it is conceivable that empirical data could be found that would show the theory to be false.  Theories that are not falsifiable are formulated in such a way that any possible data can be interpreted as a corroboration of the theory.  Popper identified Freudian psychoanalysis as an example of a theory that is not falsifiable: within this theory, any possible behavior can be interpreted as being in line with its assumptions, and there is no possible observable data that could allow a practitioner to challenge these basic assumptions.

Popper also identified astrology as pseudoscience because he figured that one can interpret star charts and personal behavioral data in any way one wants in order to match the observed evidence.  Actually, in this analysis, Popper is not exactly correct.  For one thing, astrology actually does make predictions, however vague, about the movement of the planets.  For another, many predictions made by astrology regarding personal behavior can be outright falsified.  What makes astrology pseudoscientific is that it takes into account some empirical evidence, but it interprets this data and tries to make predictions from it using rules that are quite unreasonable.

Any set of rules that an episteme uses to make predictions is supposed to be based somehow on natural laws or to have some sort of indirect connection to nature.  If the rules that practitioners use to drive their investigations are entirely disconnected from nature, then there probably isn’t anything about the natural universe that will be understood through these investigations.  These methods and procedures need to be understood as somehow controlling or bracketing certain aspects of the natural processes under investigation; otherwise, the practitioners are only using their own fantasies and imaginations to interpret the world.  The rules and methods employed probably need to be based on theories with regard to the laws of nature and should be formulated and refined through empirical observation.

As far as astrology goes, it is based on a fixed set of rules that never get updated on the basis of new observations.  The most significant explanations and predictions supposedly made by astrology pertain to human personalities and behaviors.  There simply is no evidence that human personality is linked to the movement of the planets, as the theories of astrology claim.  These ideas could only have originated in someone’s imagination some 2500 years ago and were then made into inflexible dogma.  Astrology maintained popularity for centuries leading up to the Age of Enlightenment because science had not yet developed a better understanding of the movement of the heavenly bodies or of the inner workings of the human mind that drive our behavior.  Even though astrology is still popular in some circles to this day, we can now say with confidence that this discipline has been quite thoroughly falsified.  Its enduring popularity in our highly science-based contemporary world might be partially due to the fact that, as a pseudoscience, it has the ability to trick some people into believing in its validity.

It is not difficult to create a theory from one’s imagination, and it is apparently not that difficult to formulate an imagined theory that interprets empirical data and provides unjustified explanations for how things work.  It is orders of magnitude more difficult to come up with a theory that is justified in being a model of a small slice of reality, which is what a scientific theory should be.  Though it is easier to practice pseudoscience than real science, and though some people have a tendency to believe unjustified theories if they seem scientific, over time only systems that use a genuine scientific method will be able to make reliable predictions; they will thus grow more credible to people, and pseudoscience will become less so.  As Carl Sagan said, “Science is a self-correcting process.  To be accepted, new ideas must survive the most rigorous standards of evidence and scrutiny.”

The following is a commonly cited list of criteria for addressing the “demarcation problem”, the problem of drawing the line between legitimate science and pseudoscience:

  • Reproducible: Makes predictions that can be tested by any observer, with trials extending indefinitely into the future.
  • Testable: Empirical tests can be conducted and results can be gathered that might or might not be in line with the theory.
  • Falsifiable: One can at least conceive of some empirical data eventually coming to light that would falsify the theory.
  • Consistent: Generates no obvious logical contradictions and is consistent with observations and the data that was directly gathered.
  • Pertinent: Describes and explains the observed phenomena.
  • Correctable and dynamic: Is subject to modification as new observations are made.
  • Integrative, statistically stable, and corrigible: Subsumes previous theories as approximations, and allows possible subsumption by future theories.
  • Parsimonious: Economical in the number of assumptions and hypothetical entities.
  • Provisional or tentative: Does not assert the absolute certainty of the theory.

If a science is driven by evidence and reason and its methods are refined over time through critical thinking, then we can say it is legitimate, since we would be legitimately using the word “science” to refer to it.  However, if a so-called science involves rigid rule-based analysis built upon an unchallengeable dogmatic edifice, then it is pseudo, since this fundamental dogmatism would mean that it is not a legitimate science.

Pseudosciences are presented in a way that might seem scientific to many people unless they have developed the skill to differentiate that which is legitimate from that which is pseudo.  Any episteme that is empirical and is driven by some sort of rules for what to observe and for what explanations and conclusions might follow from these observations is going to seem like a good source of knowledge to some people, simply because the presentation would seem to connect observations to explanations and conclusions and because the practitioners would seem to know what they are doing.  This dynamic often tricks people into thinking that pseudoscientific practices are reliable sources of knowledge, even though a closer examination would show that they are not.

Ultimately, this distinction hinges upon whether a methodology has a feedback loop built into it such that the paradigms and theories are adaptable based on the results of observations.  If it meets this condition, then it is legitimate.  Otherwise, it is pseudo.  In the end, there has to be accountability and peer review for any findings and conclusions that are generated through any legitimate science.

Both falsification and verification are important to any legitimate science.  Falsification occurs when observed results are reliably and entirely inconsistent with what a hypothesis would predict, and verification occurs when observed results consistently corroborate a hypothesis such that it can rationally be considered to be grounded in some fundamental reality.  Only when there is a strong and consistent connection between observed results and theory, with an interplay between these two processes such that the theories and frameworks can adjust accordingly, can a science be considered legitimate.
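To make this feedback loop concrete, here is a minimal sketch in Python.  It is only an illustration, not a claim about how any particular science operates: it models the interplay of corroboration and disconfirmation as a simple Bayesian update, and the likelihoods and numbers are invented for the example.

```python
# Minimal sketch: a hypothesis assigns probabilities to observable
# outcomes, and its credibility is adjusted as corroborating or
# disconfirming observations arrive.  All numbers are illustrative.

def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_rival: float) -> float:
    """Posterior credibility of the hypothesis after one observation."""
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_rival
    return prior * p_obs_given_h / evidence

credence = 0.5  # initial credibility of the hypothesis
# Each pair: P(observation | hypothesis), P(observation | rival view).
observations = [(0.9, 0.3),   # corroborating result
                (0.8, 0.4),   # another corroborating result
                (0.05, 0.6)]  # a result the hypothesis nearly forbids

for p_h, p_rival in observations:
    credence = bayes_update(credence, p_h, p_rival)
    print(f"updated credence: {credence:.3f}")  # rises, rises, then falls sharply
```

The point of the sketch is structural: a legitimate methodology leaves the credence variable open to revision in both directions, whereas a dogmatic system would effectively pin it at 1 no matter what is observed.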

Legitimate sciences, generally speaking, occasionally go through paradigm shifts and revolutions in the face of new evidence that can’t easily be made to conform to the old paradigms.  The need to parsimoniously interpret data can lead to adjustments of the foundational assumptions and methods of the paradigm, especially over years and decades of normal science.  Eventually, this leads to a reassessment of the paradigm.  If a science doesn’t adapt so as to best accommodate anomalous data and to optimally conform to the results, it would end up having to take on the character of pseudoscience in order to continue to be practiced at all.

Some Thoughts on the Distinction between Qualitative and Quantitative Research

Today I want to offer some thoughts on the distinction between qualitative and quantitative research.  This is important to me, since a lot of people seem to think that legitimate and reliable science has to be quantitative.  I do consider the qualitative vs. quantitative distinction to be one of the fundamental epistemic dimensions, which means that every principled way of knowing or coming to believe something would essentially have to be oriented toward one of these two poles, although I also acknowledge that some epistemes involve a mixture of the two.

If an episteme involves quantities, which are things that can be counted and/or measured in some way, then it is quantitative.  On the other hand, if it involves descriptions of unique circumstances that cannot easily be quantified, then it is qualitative.  We can reliably measure a diverse array of phenomena such as space, time, mass, light frequency, brightness, wavelength, shape, location, velocity, and acceleration.  We can also simply count individual units of any type of thing that is similar, or that at least has some sort of identifiable similarity.

The most reliable forms of science make extensive use of quantification.  Science operates best with numbers because numbers allow us to cross-check the senses, any of which can deceive us.  We can, for example, see, touch, and hear the same numeral, the same numeric representation of the quantity of the phenomenon that is going on.  Scientific research works best when it is driven by numbers, but this is not because of any overarching effort to reduce the world to quantity, nor to eliminate any genuine qualitative distinctions and categories.  Rather, it is because quantification helps us to develop, foster, and sustain the trustworthiness of our information gathering.  Numbers allow us to reduce the personal noise that we might be introducing into our observations and to focus more accurately on what we are modeling, so that our understanding is driven by mind-independent facts rather than by self-deception.
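As a small illustration of that last point, here is a sketch in Python, with invented numbers, of how quantification tames personal noise: each individual reading of the same quantity carries an observer’s error, but averaging many independent measurements lets the mind-independent value dominate.

```python
# Sketch with invented numbers: repeated quantitative measurement
# averages out each observer's personal noise.
import random

TRUE_VALUE = 9.81      # the mind-independent quantity (illustrative)
OBSERVER_NOISE = 0.5   # spread of any single observer's error

def measure() -> float:
    """One noisy observation of the true value."""
    return random.gauss(TRUE_VALUE, OBSERVER_NOISE)

for n in (1, 10, 1000):
    readings = [measure() for _ in range(n)]
    mean = sum(readings) / n
    print(f"n={n:5d}  mean={mean:.3f}  error={abs(mean - TRUE_VALUE):.3f}")
```

A single reading can be badly off, but the error of the mean shrinks roughly with the square root of the number of measurements, which is one concrete sense in which numbers sustain the trustworthiness of observation.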

Quantification might seem quite natural to us, but from certain perspectives, nothing is ever exactly the same.  In the real world, things are so often unique and complex and mixed up with all kinds of other things, some of which are similar.  At a basic level, we can understand and describe each moment, each place in the world, and the quality of everything as it is and as it changes.  We can recognize similarities, classify them, and differentiate, organize, and categorize them.  We can notice cross-similarities and differences and correlations.  We can identify specific substances, entities, events, causes, and structures.  Qualitative research is essential and often involves categorization, association, recursive hierarchical sorting, and the identification of relations, properties, wholes and parts, and essential attributes.  We need to do this in order to count similar things and in order to measure things.  Thus qualification is often necessary for quantification.

In truth, all circumstances in reality are unique in their own right, but we try to find generalizations among the particular circumstances that seem similar in certain ways.  We have to first make categorical generalizations amid the complexity and ubiquitous uniqueness of the world before anything similar can be counted and before any measurements can take place.  Every point in space and every instant in time is unique, which means that if we try to measure space or time, we are imposing a generalization onto these unique circumstances.

In any observed phenomenon, lots of things might be moving and constantly changing, and the first thing that a researcher needs to figure out is what the different kinds of things are, what is changing over time, what is staying the same, and what the relations to other things are.  Somewhere down the line, these things might become countable and measurable.  Similar sorts of things, or dimensions of some sort, can be discerned, and then quantities can be computed.

Note that understanding this distinction does not imply that reality must be quantifiable, that we cannot know something unless it is quantifiable, or that all empirical knowledge must be quantifiable.  Sometimes it happens that detailed unique descriptions must be gathered, and from there it might be possible to find certain types of generalizations within this data, which could then be quantified.  Thus it is silly that some people think that science has to be quantitative.

Usually, quantitative data is objective, but there are circumstances where intersubjective data can be quantifiable as well.  This would probably include aspects of color such as brightness and saturation, the volume and tone of sounds, and degrees of pain, among other phenomena.  Anything that is intersubjective and quantifiable is very difficult to measure in clearly understood units, and the measurements will probably always be rough estimates, so social verification is difficult.  Verification is easier, though, if the goal is merely to rank and compare different subjective experiences that one might feel within one’s own body, or perhaps between oneself and one or more others.  If the only goal is to figure out which experience is higher or lower, or better or worse, then that is more doable.  We can know what bad pain is in comparison to not-so-bad pain and also in comparison to really bad pain.  Comparisons such as these rely on a certain kind of rough quantification that can be mutually understood to a large extent.
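A small sketch in Python may make this point about rough quantification clearer.  The pain scale and the reports below are invented for illustration: ordinal data of this kind supports ranking and comparison, but not the arithmetic that true unit-based measurement allows.

```python
# Sketch: ordinal reports of subjective experience can be ranked
# and compared, but the gaps between labels are not genuine units.
PAIN_SCALE = ["none", "mild", "moderate", "severe", "worst imaginable"]
RANK = {label: i for i, label in enumerate(PAIN_SCALE)}

def worse(report_a: str, report_b: str) -> bool:
    """Ranking is meaningful for ordinal data."""
    return RANK[report_a] > RANK[report_b]

print(worse("severe", "mild"))  # True: this comparison is mutually understandable

# But note: RANK["severe"] - RANK["mild"] == 2 does NOT mean that
# severe pain is "two units" or "twice" the mild pain; without
# clearly understood units, only the ordering carries information.
```

This is the sense in which we can agree about which pain is worse even though we cannot measure pain the way we measure mass or wavelength.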

The quantitative vs. qualitative distinction is closely related to the distinction between so-called hard and soft sciences.  This distinction hinges on whether findings and conclusions are truly reproducible or are dependent on particular circumstances and complexities that cannot be reliably reproduced and studied with the highest level of clarity and mutual understandability.  Physics and chemistry are considered hard sciences and biology is usually considered hard as well, although it sometimes extends into territory that is a bit soft.  Psychology can be both hard and soft, depending on the circumstances and what aspects of the mind and of animal or human behavior are being studied.  The social sciences, including sociology, anthropology, linguistics, economics, and political science, are very much in the realm of soft science because they necessarily operate within complex environments and nearly all research projects will require a significant amount of interpretation of qualitative data rather than simply making measurements and cold calculations.  This fact, however, does not de-legitimize any soft science because there is a huge difference between evidence-based and peer-reviewed qualitative research and pseudoscience.

Some Thoughts on the Relation between Empirical Knowledge and Metaphysical Understanding

Today I want to offer some thoughts on the relation between empirical knowledge and metaphysical understanding.  This builds on last week’s post on the analytic/synthetic distinction.  Analytic knowledge is entailed from existing knowledge, beliefs, and assumptions.  The structural and functional relations within information can be analyzed and entailments logically derived.  Synthetic knowledge is different in that it includes immediate observations as well as generalizations, abstractions, and insights into the reality behind appearances.  As such, we can unpack this to see it as a spectrum that starts with immediate and particular empirical knowledge and flows in stages into the deeper insights and wisdom of metaphysics.  This is also how we know details about the nature of time, space, substance, and causation in the universe, even though we can’t actually observe any of these things directly.  We can develop profound understandings of these aspects of reality, but it requires significant mental work.

We can see this as a process that begins with the unstructured raw information given in one’s perceptions.  Empirical knowledge is that which is dependent upon some sort of experience, either through an external sense such as sight, hearing, etc. or a conscious/mental information gathering process (internal sense) such as introspection, reflection, etc.  Raw sense data comes into the mind and is filtered in different ways through innate processing capabilities, which is the self-constituting dynamic described in Gestalt psychology and which imposes a structure of intelligibility onto what we perceive.  On one level, this takes the form of a rapid succession of particular impressions, but on another level, one can gradually identify patterns and correlations, and over time can develop insights into the overall structure of how things work in nature.

Raw sensory data does not inherently contain any meaning beyond a shade of color or a tone of sound, or something of this sort, and only for a fleeting instant in time.  Our computational abilities allow us to realize, discover, and understand meaning by finding patterns and inferring from this data.  Our minds distill, compress, consolidate, synthesize, and integrate the information that is given, in processes that are driven by the innate structures of the mind, assisted by analytic reasoning, and determined in significant ways by one’s developmental level.  Once one has discerned the most significant patterns of one’s experiences, the laws of nature and categories of being can then be understood and systematized, and one can begin to appreciate the overall complexity of how things work and the processes of change and interdependency.

The wisdom development process involves careful and perceptive discernment of patterns of similarity and difference across lots of data.  It also requires good memory, countless historical examples, and the ability to find correlations and then figure out not just correlations but deep causations, tendencies, and laws, or at least formulas that are reasonable approximations of natural laws.  Sometimes this can be misleading, because what you think might be a deep insight gained from carefully studying certain things could be undermined because you neglected to consider other things.  As such, these sorts of insights probably can’t be reliably developed by any one person operating in isolation.  Instead, they are usually developed collectively through certain social structures that are formed into institutions, though individual people, specific benchmarks, and rough algorithms that people can follow can definitely make a difference.

This process of course relies on inductive reasoning, which in my opinion is essentially based on inference to the best explanation.  Much of this is made possible through our innate mental capacities and intuition.  Some of the deep insights that people develop can be unwound and analyzed, and thus people can explain their justification for coming to the overarching conclusions that they do about the reality behind appearances and the deep truths of the world.  If the raw evidence is given and sound and parsimonious lines of reasoning are articulated, then the conclusions should be mutually understandable.  This process can be quite fruitful, but it sometimes can lead us down the wrong path.  There is a distinction between the straightforward empirical analysis of information on the one hand and the critical and speculative interpretation of information on the other.  The latter can be wise and can lead us to grasp the complexity of the world, but it can also be foolish if conducted improperly and irrationally.

This is one process through which we can develop epistemic justification, although only lesser degrees of certainty are possible the further one moves away from raw data.  Utter certainty is probably possible only for direct evidence and experience, but that lies only at the empirical end of this spectrum, and such data doesn’t tell you much by itself.  Moving toward the metaphysical side inevitably comes with lesser degrees of certainty.  It is the interplay with the analytic cognitive process that can offer epistemic justification for our insights, which can save us from having to rely on faith concerning the reality behind appearances.  We can’t live our lives with too much reliance on faith, but we probably can’t avoid it altogether.  We can’t live our lives avoiding the questions of the reality behind appearances, but we can’t pretend to have utter certainty about that stuff either.  Thus the notions of epistemic justification vs. faith and degrees of certainty vs. doubt are related to this epistemic dimension.

Some Thoughts on the Analytic vs. Synthetic Distinction

Today I want to offer some thoughts on the analytic/synthetic distinction.  I see this as one of the foundational epistemic dimensions, which means that every principled way of knowing or believing something would have to be either analytic or synthetic.  Here is the most basic and non-controversial way of defining these terms (that I’m aware of): when one uses one’s reasoning capacity to come to conclusions that are logically entailed by other knowledge, beliefs, and assumptions, we can call the process that produced them analytic.  Analytic knowledge is that which is logically derived from existing knowledge without having any new information come into consideration, either externally or internally, except through deduction or any processes that can be reduced to deduction.  All forms of philosophical logic and all types of mathematics are analytic epistemes.  This contrasts with synthetic, which includes all processes for deriving new knowledge, beliefs, and assumptions that come from some means other than logical entailment, and which requires that one synthesize some sort of meaning out of the bits and pieces of information that one perceives, thinks, or feels.

In some philosophical contexts, the term synthetic is understood to also include that which is innate to the mind.  However, since epistemes are processes for coming to know things, our usage here excludes anything innate to the mind.  There are likely aspects of consciousness that include innate capabilities and thought processes, and perhaps even innate beliefs, but these would not fall under the umbrella of the term episteme, since there is no way that one comes to know them other than by simply developing into a functional human being.  The processes of mental development from infancy through childhood and into adulthood and beyond are certainly interesting and relevant and should not be overlooked, but that is just not the focus of this section.  Instead, we’re focused on experiential and thinking processes that are not genetically determined.

One’s genes might predetermine one’s mind to be able to tell a sweet smell from a rotten smell and to sense that heights can be dangerous.  We might have DNA that constrains and determines the parameters of our thinking processes.  These are examples of innate knowledge, and they are not produced through anything we can call an episteme.  Obviously, most of what we know and believe is not innate to the mind, and it is produced through some sort of episteme.  Nobody has DNA that tells them how to soundly apply syllogistic logic or how to solve a quadratic equation, both of which are analytic.  And your DNA isn’t going to tell you what the ocean looks like or what it feels like to ride a bicycle, both of which are synthetic.  Thus, for our purposes, the term synthetic does not include anything innate to the mind and instead refers to the cognitive functions that can synthesize new ideas, thoughts, feelings, perceptions, and other experiences that can be remembered, recalled, and perhaps revised at a later time.

The analytic process always starts with things that are given, such as axioms, postulates, and firsthand experiences, which are all synthetic.  Even though you end up with nothing more than tautologies and logical identities, the process of analysis is important as an episteme.  Indeed, from an objective standpoint, the premises and the conclusions are semantically equivalent, so this distinction can be seen as dubious from that perspective.  But from an internal mental perspective, this distinction is important, since it highlights the process of knowledge development.

Analytic epistemes can be formalized, meaning that there is a system of symbols through which the answer can be deduced and entailments derived.  Synthetic epistemes cannot be fully formalized.  Analytic knowledge can be proven, which means that there is a clear and objective process for developing mutual understanding, wherein absolute consensus should be reached if everyone understands the premises and the rules for deriving conclusions.  You can show someone the answer to a problem with analytic epistemes, and you can be confident that the conclusion is accurate so long as it is based on sound reasoning.  If you were to explain an example of sound reasoning to someone and this person does not understand, or if they don’t agree with the conclusion, then it is not that they have an equally valid perspective; it is that they don’t understand the logic.
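To illustrate what formalization means here, consider a minimal sketch in Python (the example and names are mine, purely for illustration): an analytic entailment can be checked mechanically, case by case, so anyone who accepts the premises and the rules must accept the conclusion.

```python
# Sketch: brute-force truth-table check of a propositional entailment.
# Here we verify modus ponens: from P and (P -> Q), Q follows in
# every possible assignment of truth values.
from itertools import product

def entails(premises, conclusion) -> bool:
    """True if the conclusion holds in every case where all premises hold."""
    for p, q in product([False, True], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

premises = [lambda p, q: p,              # P
            lambda p, q: (not p) or q]   # P -> Q
print(entails(premises, lambda p, q: q)) # True: Q is entailed
```

The check is exhaustive and symbol-driven; nothing about it depends on anyone’s perspective, which is exactly why disagreement about the conclusion signals a misunderstanding of the logic rather than an equally valid viewpoint.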

It is possible that utter certainty can apply to some synthetic knowledge as well.  One form of synthetic knowledge is that which is clearly and directly apprehended, and this can in some circumstances come with the highest possible level of certainty, but life is rarely so clear and unambiguous.  Most of the time, synthetic knowledge can at best be conjectured, but not fully proven, and these conclusions would come with probabilities of certainty and a certain degree of doubt.

Formalized proofs are the most reliable knowledge that there could possibly be, but they don’t cover much of reality.  Logical entailment and formal proofs are one process through which we can develop epistemic justification and a high level of certainty.  However, in the complex world in which we live, this option is not often available, which means that we usually have to settle for other means of developing justification that come with degrees of uncertainty.  This also means that mutual understanding is usually more difficult for synthetic epistemes and consensus-building takes extra effort and might be limited in some cases by people’s individual perspectives.
