Scientific Research and the Information Ecology

Any legitimate scientific research, whether it is conducted within academia, governmental agencies, private corporations, or professional societies, should include certain self-correcting processes such as documentation, reproducibility, and peer review.  Science is essentially a multilateral social engagement, conducted both within research institutions and between them.  When different scientists and institutions compete with each other, this produces a dynamic wherein the theories with the best evidence, the greatest explanatory power, and the strongest reproducibility, those that best stand up to scrutiny, end up with the greatest propagation power.  Thus, there is a certain sociocultural evolution that favors the most evidence-based, reasonable, and parsimonious theories.

This is how the system is supposed to work, but it has recently started to break down because of the aforementioned crisis of sensemaking.  We are witnessing implausible and disproven ideas being propagated far and wide and enjoying widespread support among the general population.  We should be concerned about essentially nonscientific information being promoted as scientific.  We should be equally concerned that a significant percentage of the population does not accept scientific knowledge, since people quite often reject conclusions that enjoy near-unanimous support within the broader scientific community.  The root of the problem is that these people don’t seem to understand the self-correcting process through which bad science is expected to be exposed and debunked and through which good science tends to rise to the top.  The scientific consensus is rarely wholly wrong precisely because different scientists scrutinize and review each other’s work.  We believe the world is round because of an abundance of evidence, yet even in the Twenty-First Century, the intellectual tools upon which our modern world was built are being rejected by large segments of the population.  Many people don’t believe even well-attested and generally accepted science, and this widespread ignorance and delusion is causing huge social, environmental, and humanitarian crises.

As the science writer Michael Shermer has explained, most of us don’t understand the technical aspects of particular sciences such as epidemiology or climate science.  When people say that they believe in science, they are signaling that they accept science as a viable method of gaining reliable knowledge and that they therefore trust it.  This trust does not rely on faith, but on a reasonable confidence that science has self-correcting mechanisms.  These people are acknowledging that, most of the time, science works.  Then there are people who express skepticism of science.  This sentiment is somewhat understandable since, in general, most people don’t understand the complex ins and outs of science very well.  Those who accept consensus scientific conclusions, at least, are going with a reasonable trust.

The science skeptics, on the other hand, seem more often to be motivated by political commitments than by any legitimate intellectual reason.  If the consensus among climate scientists is that human activity is gradually warming the planet, threatening ecosystems and the future of human life, then those who find this conclusion inconvenient for their political ideology will often be driven to reject the science and to dress up their skepticism in faux-scientific language.

People who aren’t professional scientists face a very difficult dilemma regarding which scientific theories to believe.  How do we decide whom to trust?  Which sources?  In this information age, we have almost infinite sources, so we have to do some filtering.  The idea of scientific consensus therefore needs to be more properly understood.  It is not based on majority opinion.  It is not a democracy.  It is not based on hierarchical authority.  When we accept science, we are acknowledging that the scientific claims have already been vetted.  This is what Karl Popper described as the scientific method of conjecture and refutation.  Professionals in the relevant fields have already attempted to refute these conjectures; the claims that survive such attempts are the ones that come to be accepted.

By the time this information is summarized for us in some mainstream publication, it has already been filtered.  In such situations, one can believe that what is being printed is probably true, not based on faith or authority, but because one knows that the claims being made have already been tested, debated, and disputed, and that attempts to refute them have already been made.  One who understands this process can have confidence that the system of scientific research works.

How the Information Ecology Works

Most educated and open-minded people understand that claims found in science books, history books, and credible news reports are mostly worth believing and that sensationalized propaganda and strange superstitions are not, but they might not be able to easily explain why they make this distinction.  Indeed, what some people see as credible news reports, others see as propaganda and “fake news”.  Likewise, what some people accept as legitimate evidence-based science, others see as baseless claims fabricated as part of a conspiracy.

Some people will say that we should question everything that anyone might say to us and never accept anything at face value, but if we are rational, then we do not question everything equally.  We have some well-known facts that we don’t need to question and some things we know pretty well, and these are the basis for further evaluation.  There are some people, some media organizations, and some institutions that we largely trust and empathize with, and others of which we are suspicious and unlikely to trust.  But how do we know whom to trust?  How do we know what type of information to accept or to reject?  While it is necessary to accept some of the claims that one comes across, it is often quite difficult to figure out which claims to believe.  There has to be a process through which one can judge whether a given claim is epistemically justified, but what should the criteria be?  Any reasonable evaluation process would need to take into account not only the content of the information being conveyed, but also the social structures that produced and propagated it.

We can develop increasingly accurate conceptions of reality through mental processes, methods, and frameworks.  This is something one can do partially within one’s own mind, but it also depends to some extent on being cognizant of and working within certain social structures.  We need to be connected somehow with social structures, such as those of scientists, journalists, and historians.  We consume the products of their work, and we use our minds and mental processes to scrutinize them.  These organizations can also, in some cases, learn from our own work.  Thus, we defeat both skepticism and hubris through our own mental capabilities and through social structures.

We can call this process the information ecology because it has certain similarities to the natural ecology, wherein different species and organisms both compete with each other and strategically cooperate to achieve their goals.  A healthy and vibrant natural ecosystem has many organisms sharing resources, with none of them using its power advantage to monopolize access to those resources, since doing so would threaten the ecosystem as a whole and thereby the existence of the powerful and powerless alike.

The key to developing and maintaining a healthy and vibrant information ecosystem lies in the type and form of the social structures that exist within it, including governmental agencies, media organizations, knowledge-based institutions, and privately owned business entities.  We need to be cognizant of how any such entities are structured internally and how they are interrelated within the greater information ecosystem.

Some social structures can amplify individual intelligence if they are organized scientifically, using principles of peer review, adherence to certain norms and rules, and a high degree of independence.  Other social structures have the effect of hampering individual intelligence, such as those that are hierarchically structured, where certain overarching assumptions cannot be challenged, and where there is a charismatic leader, either living or dead, whom the rank-and-file membership assumes to be inherently great.  It is necessary to avoid social structures that are too loose and anarchic as well as those that are overly rigid and strict.  In our quest for a fuller and deeper understanding of the facts, both extremes are undesirable because they usually end up hindering our efforts.

We can probably say that everyone needs to believe in something beyond their own self, or else they will lose their purpose in life and become extremely skeptical, cynical, nihilistic, and depressed.  Each of us needs foundational beliefs in order to live a happy and healthy life.  For most of us, these are our beliefs in our close family and in romantic love.  Of course, we should believe most strongly in the people we know well and in the things we directly experience, but we also need a certain reasonable trust in institutions, even though we do not personally know the people who work within them.

It simply does not make sense for us to have broad skepticism of all governmental, media, and professional scientific organizations.  They are all certainly fallible, but if we give them due scrutiny and develop some understanding of how these institutions work and how the people who work for them do their jobs, then we can retain a sufficient degree of belief in many or most of the claims that are put out or propagated by these institutions, as appropriate given the contextual factors.  Sometimes we have to trust experts.  Sometimes we have to offload some of our sensemaking to external organizations whose purpose is to be well informed on certain matters, such as current events, science, and complex rational decision making for economic well-being and social stability.  But each of us can and should have a certain rough methodology for scrutinizing the credibility of these institutions and judging their trustworthiness.

There is no person who is in a position to understand the truth better than everyone else.  We all have blind spots.  Of course, some people are better informed and better educated than others in certain regards, but no single person is in the best position to understand the full and unadulterated truth.  Unfortunately, many people act as if they were that person.  Even experts in certain fields rely on other people for knowledge outside their areas of expertise.  The smartest, most accurate, most rational, and best-informed entities are not individual, independently operating persons.  Rather, this honor can be bestowed upon organizations that are structured optimally, so that their internal functioning encourages accurate information to be uncovered, scrutinized, critiqued, and tested.  Moreover, this honor would only go to organizations that promote and propagate information only after it has gone through all of those phases and has stood up to such challenges.  Individual persons are indeed the ones who work for such organizations, but it is their relation to each other within the organizational structure that makes the entire thing so smart and so accurate at understanding the truth.

Some scientific and journalistic institutions are structured this way.  Unfortunately, they are often not believed by large swaths of the population.  That is a huge problem in our society.  We need to shed light on how these organizations operate.  We need to educate people on how this works.  Then they might come to trust science and journalism a little more.  The point would not be to encourage blind faith in such organizations, but we certainly need to counteract this broad skepticism, cynicism, and nihilism, which are very bad for society.  Rather than thinking that we each have the ability to independently tell fact from fiction, we should admit that sensemaking is both an individual and a social process.

Faith vs. Evidence in the Evaluation of Claims

Since life is so often filled with uncertainty and confusion, many people turn to faith in order to provide more fulfilling purpose and meaning to their lives.  There are things that we don’t know and can’t know, and some people fill these gaps through leaps of faith.  It is necessary to clarify what is meant by “faith” in this context.  In some contexts, faith refers to an ongoing relationship between people in which each party has certain obligations to the other, and breaking such obligations is therefore being “unfaithful”.  Faith may also refer to certain types of community involvement or rituals.  In this context, it simply means dogmatically believing in an idea despite a lack of evidence or even in the face of counterevidence.  This is also referred to as “blind faith” to distinguish it from the other senses of the word.

If we take account of the things that we can know with utter certainty on the basis of evidence alone, we would not honestly come up with a very long list.  The truth is that we directly perceive only bits and pieces of information and that our minds bring them together to construct our conception of reality.  Some will argue that this entails that it is inevitably one’s faith that binds together the otherwise meaningless and disconnected bits of perceived information into a full and meaningful picture of reality.  Such reasoning is mistaken, because it is possible to go through the mental process of conceptualizing reality on the basis of evidence without resorting to blind faith.  We can acknowledge that there isn’t always an abundance of evidence tying our sensory information together, and it isn’t always clear how the laws of nature work within the greater universe.  However, we do at least have sufficient evidence to come to reasonable conclusions about these matters, and this process does not require a leap of faith.

There is a spectrum of degrees of epistemic justifiability that can apply to any conclusion that we might come to.  At the low end of the spectrum, where evidence is not considered, is blind faith.  At the other end are situations where an abundance of evidence provides undeniable proof and where there is no possibility of reasonable doubt.  The latter is not often possible in life, but we can nonetheless utilize the evidence available to us to draw reasonable conclusions.  If one believes in a claim, account, or narrative that cannot be proven but nonetheless has some evidence in its favor, then this has a higher level of epistemic justification than scenarios wherein one simply accepts such things on blind faith.  Anytime someone comes across a claim, account, or narrative, they can consider the evidence and determine if anything can be found to support it.  If none can be found, then the most reasonable option is to simply not accept the information.

Although there are many people in the world who believe in some sort of so-called holy scriptures, the reason such people give usually rests on their own faith rather than on evidence.  A sober assessment would have to classify such belief as dogmatic.  It is possible for some holy scriptures to be supported by strong historical or scientific evidence.  Indeed, there are portions of the foundational scriptural books of every major world religion that do stand up to such scrutiny.  There are actual historical events recorded in the Hebrew Bible (Tanakh), the Christian New Testament, the Islamic Qur’an, and some of the central books of Hinduism and Buddhism as well.  However, the firm belief that everything written in a book is entirely true is unreasonable and dogmatic.

An example of having dogmatic faith in an idea would be believing in the afterlife because of a claim someone else made or something written in a book, despite never having seen any evidence of someone living on after death.  It is still conceivable that a person could logically conclude that there is some sort of life after death, but this conclusion would have to be logically derived from empirical data.  If one simply believes in the afterlife without any empirical or logical evidence in its favor, then this belief is the product of blind faith.  An example of faith in the face of counterevidence would be believing that the world was created in seven days, despite the overwhelming scientific evidence that this process took much, much longer.

Although some people talk about having “strong faith” or “insufficient faith”, often in religious contexts, a better way of thinking about faith is to consider each individual claim on its own merits, along with how different claims depend upon each other.  It should be possible to isolate anything one believes and to determine whether or not that idea, by itself, has evidence in its favor.  Either a given idea has evidence in favor of it, and thus does not require faith in order to be believed, or it lacks evidence, and thus does require faith.  Most beliefs, though, are dependent on other beliefs.  For example, if a person believes that the god Thor is responsible for the lightning bolts that come from the clouds, then this depends on the person also believing that Thor exists, and neither of these two ideas has evidence in its favor.  Therefore, in order to believe that Thor sends lightning bolts from the clouds, one must have faith not only in this idea but also in all of the ideas on which it depends.

One must have faith in order to believe in a supernatural claim, unless it is somehow directly experienced.  It seems doubtful, but at least possible, that there are supernatural occurrences happening in the universe.  At the very least, we can say that anyone believing a claim or account of supposed supernatural events would have to be relying on faith if they did not observe such phenomena themselves.  The track record of the person or organization that produced the information is also relevant when assessing how reasonable the claims coming from that source are.  If a claim comes from a source that lacks credibility, because it has provided misinformation in the past or because it is seen as lacking credibility among its peers, then there is less reason to believe information coming from that source.  In such situations, to accept those claims, one must have faith that this person or organization is telling the truth.

The Importance of Developing Better Mental Sensemaking Tools

In order for someone to gain knowledge from what is written or what is said, they need to have cognitive tools to reasonably evaluate information.  One’s ability to gain knowledge of things in the world beyond the range of firsthand observation depends on one’s mental skills for rationally filtering and scrutinizing claims, accounts, and narratives.  This is why knowing the means of correctly identifying facts, including the frameworks we can use for sorting fact from fiction, is just as important as knowing the facts themselves.

In other words, it is not enough just to identify facts.  It is not sufficient to merely point out the truth and to reject falsehood.  Our focus should also be on how to develop better sensemaking tools within our minds and within the minds of others.  We need to learn how to think more reasonably, how to develop critical thinking skills, how to avoid propagating false information, how to set the record straight, and how to help other people recognize the truth as well.  In this day and age, this is increasingly difficult, so we need to carefully calibrate the disinformation detector (also called a “BS meter”) within our minds so that we can avoid being manipulated by crafty propaganda artists and prevent ourselves from being brainwashed by the relentless lies concocted by certain wealthy and powerful organizations.

We need to understand the contextual information associated with claims so that we can construct a more accurate mental representation of reality.  None of us will ever get a full understanding of reality, and each of us will inevitably always have some biases and falsehoods in our minds, but there is a huge difference between believing complete fiction and being well-educated and responsible.  Sure, the latter is still a bit biased and will always involve imperfect knowledge, but it is far better than believing in and propagating contrived and dangerous falsehoods.

Indeed, everyone has some bias.  Everyone has an imperfect perspective on things.  Nobody sees and understands reality as it actually is.  Anyone accusing another of bias should acknowledge that they also have some bias of their own.  It is important, however, to recognize that not everyone is biased to the same degree or in the same way.  We have to acknowledge that we have certain innate cognitive distortions, but that does not mean that our conception of reality is always hopelessly distorted.  Those who are more educated in particular subjects, meaning that they have taken the time and effort to learn how certain things in life work, are probably less biased in those areas than someone who hasn’t gone through such training.  People might also be more or less biased than others with regard to certain subjects based on their backgrounds and past experiences.  It is possible, and in some cases entirely reasonable, for some people to recognize that others have greater bias than they do in some specific area, but they should probably only do so if they can benchmark this judgment against objective and largely indisputable facts.

We can, for example, imagine a plumber or an electrician with extensive experience in their field who would consequently be far more accurate and less biased in their assessment of how the plumbing or electrical systems of a building should be constructed and maintained.  People without knowledge or experience in these areas might have ideas or opinions about how to wire up a building or how to install a new water line, but they cannot be well-informed on these matters.  Similarly, we would expect that someone who has studied political science and sociology and whose career has involved working in the public sphere would be in a better position to understand these matters, with less bias, than people without any special training or experience in these areas.  Just as the plumber and the electrician would be expected to understand the ins and outs of their respective fields, and how to get a building’s infrastructure in working order, better than any layperson or outsider, so too would trained and experienced political scientists, sociologists, and economists be expected to know what needs to be done to hold society together better than anyone with no legitimate training or experience in these fields.

Even those of us who lack formal training or experience in political or economic issues, or in topics involving scientific research and discoveries, can at least be well-informed enough to identify reasonable claims that are supported by evidence and to filter out those that don’t seem plausible.  This requires curiosity and a willingness to spend time learning.  If one makes enough effort to inquire, to study, and to scrutinize, then eventually one will have the mental skills to identify the significant details embedded within claims, accounts, and narratives that indicate how plausible they are and whether they are supported by sufficient evidence to be accepted as likely true.  One need not be an expert in any specific field to have this ability.

Conventional Empiricism vs. Radical Empiricism

Empiricism is the theory that knowledge comes primarily from sensory experience.  This means that a person comes to know facts about the world by use of their senses.  This theory makes the most sense given the prevalence of objective evidence in support of it, although there are situations where knowledge can come from other sources as well.  Evidence shows that some knowledge is inherent to the brain.  We also know that reason can allow one to form knowledge that is logically implied by existing knowledge.  Other than these situations, though, new knowledge can only be formed through experience of some sort.

The traditional understanding of empiricism includes forming knowledge through the five senses.  Some neuroscientists have argued that the traditional list of senses should be amended to include other ways that humans can observe phenomena, such as balance, acceleration, and so on.  Regardless of what this list includes, what is common to all of these senses is that they can be studied objectively.  This means that any bodily function that is considered a sense must allow one to gain knowledge of the external world and must also be amenable to scientific testing, so that there can be an objective understanding of how the sense works.  Although some will argue that there can be senses that allow them to understand phenomena that are internal, meaning specific to their own conscious experience, the traditional understanding of empiricism understands the internal in terms of the external world.

This understanding of what constitutes empiricism is widely accepted among modern scientists, but it has a potential problem: it might be so strict that it excludes certain experiences that people commonly have and that lead to the formation of knowledge.  It is possible that the common definition of empiricism needs to be supplemented to allow for other types of observation that are not usually considered empirical.  In a previous post, I explained why I think that emotions and self-knowledge could perhaps be considered empirical.  There are other inner experiences that might qualify as well.  Nearly all people, it seems, have beliefs regarding the intrinsic value of certain things and hold certain ethical beliefs that derive from these values.  While some people’s values and ethics are largely determined by what they are told to believe when they are young or by what their social group tends to believe, there are others whose beliefs in these areas seem to be the product of mature thinking that derives from their experiences in life.  One can almost say that the latter group’s values and ethical beliefs are the product of their perceptions of the world.

On the one hand, we can simply reduce any beliefs one might have regarding values or ethics to that which can be studied objectively.  For example, if someone believes that some object has value based on their experience with it, then we could reduce these experiences to the objective senses such as sight and sound (they see the object, they hear the object), and we could reduce the experience of value to a feeling that is somehow determined by these more immediate sensory experiences.  Under this objective interpretation, values are nothing more than feelings determined by one’s sensory experience or by the memory of a sensory experience.  Values can therefore be understood as nothing more than brain functions and can, in theory, be studied objectively.

On the other hand, it is conceivable that people’s values are partially determined by experiences that cannot be studied objectively.  Perhaps, if one is able to gain knowledge from an experience that is not reducible to any objectively studiable sense, this should be considered a distinct sense.  The idea that one’s values and other subjective experiences are observed from a first-person point of view, and thus should be considered a type of sensory experience, is a version of radical empiricism, first formulated by the American philosopher William James, who summarized the central idea of the theory as follows: “To be radical, an empiricism must neither admit into its constructions any element that is not directly experienced, nor exclude from them any element that is directly experienced”.  In other words, any philosophical worldview is flawed if it stops at the objective, physical level and fails to explain how directly experienced phenomena such as meaning, values, and lived thoughts and feelings can arise from it.  This notion is also quite similar to what Edmund Husserl called “Evidenz”: awareness of a matter itself as disclosed in the most clear, distinct, and adequate way for something of its kind.

While the theory of radical empiricism comes from the pragmatist tradition, it is not conceptually dependent on any other theories that are commonly associated with pragmatism, such as instrumentalism (the idea that absolute truth is unimportant or unattainable and that the only thing that is important is that a theory works in practice), verificationism (the idea that statements only have meaning if there is some way of determining if the statement is true or false), or fallibilism (the idea that any belief could conceivably be false and that absolute certainty is impossible).

Those who have a more conventional understanding of empiricism, restricted only to senses that can be studied objectively, argue that people often misunderstand their own experiences and are unreliable in interpreting the origins of their own beliefs.  Subjectivity, according to this line of thinking, is inherently unreliable, and sensory experience is therefore only possible through brain functions and sensory organs that can be understood objectively.  This sentiment is known as positivism, which holds that valid knowledge is found only in verified data (positive facts) received from the senses, and that introspective and intuitive knowledge does not count as such.  The most extreme version of this view is called scientism, which holds that objective science provides the best way of investigating, understanding, and predicting everything that can possibly be known.  A more moderate version, known as naturalized epistemology, has as its central tenet that the formation of knowledge must occur through natural, physical processes.

While naturalized epistemology allows for ways of forming knowledge outside of science, such as common sense, it shares with scientism the belief that nothing can be known subjectively (through direct first-person conscious experience) while remaining outside the realm of objective study.  Yet a coherent epistemology could, in theory, be completely natural and still be incompatible with this notion of naturalized epistemology.  This is because it is conceivable that humans have a purely natural way of forming knowledge (following natural laws) that nonetheless cannot be studied objectively.  This would be possible if humans had a distinct sense through which they can form knowledge that is too elusive to be studied in any way that can be called objective, but that would nonetheless be naturalizable because it would be governed by certain laws of nature that are as yet unknown.  If such a sense did exist, then naturally there would be people who would claim to have gained certain knowledge through its use, but those who believe in naturalized epistemology, as defined above, would not accept this as knowledge, because the sense cannot be studied objectively.

Naturalized epistemology and scientism may have some significant differences, but both rely on science to discount the theory of radical empiricism.  Quite simply, since radical empiricism involves taking at face value certain observations that cannot be known objectively, it admits knowledge that is outside the realm of objective science.  Radical empiricism is incompatible with the traditional notion of positivism; it is instead a form of post-positivism, a family of epistemological positions that admit knowledge beyond what can be studied objectively.  Believers in some form of traditional positivism, whether naturalized epistemology or scientism, would argue that any subjective experiences either must be reducible to phenomena that can be known objectively or else would inevitably end up being no better than extremely vague concepts.  Positivists would contend that anything of the sort is simply not worth discussing unless there is some objective basis for it.

The problem with this argument is that it should be obvious to anyone that people discuss their values, and the ethics that derive from them, quite frequently, so it must be false to say that this subject is not worth discussing.  As for the idea that values and ethics can be reduced to brain functions (and other phenomena that can, in theory, be understood objectively), this is based on the overarching assumption of naturalism, which is on one side of The Great Dilemma.  I would argue that we should remain open-minded and agnostic as to which side of The Great Dilemma is the most reasonable and the most accurate.  It seems theoretically possible for one to gain genuine knowledge from subjective experiences that are beyond the reach of objectivity.  This would include the notion of qualia, the supposed qualitative aspects of conscious experience.  If anything of this sort exists, it might be immaterial and nonphysical, which entails that the idealism side of The Great Dilemma is at least plausible.

Nobody is going to deny that there is the redness of red, the distinct sound of a trumpet, and the distinct emotional experience of being in love, but many people will argue that these have purely physical and natural explanations.  Perhaps they do, and perhaps they do not.  The main point is that it is plausible that some aspect of conscious experience is immaterial and nonphysical, and that both sides of this question are worth open-minded consideration.

This discounting of subjective experiences seems to be partially driven by the success of the modern mainstream physical, biological, psychological, and social sciences, which have, for the most part, relied on methods of objective study.  These sciences have certainly demonstrated their ability to help us understand the nature of the universe, the earth, and life in many ways.  However, this fact alone does not necessarily imply that nothing exists in the universe that is completely outside the reach of objective science yet not entirely beyond the grasp of our minds.  The restriction of legitimate knowledge to that which is objective makes much sense within the contemporary scientific community, but it does not work as well if one tries to apply the same restrictions on knowledge formation to understanding life as a whole, including the many aspects of first-person conscious experience.

For example, in our lived experience we cannot help but express opinions that, by all accounts, seem to derive from subjective experiences such as value judgments.  It seems that at least some of our value judgments would have to come from our first-person experience, the direct study of which is incompatible with positivism but is compatible with radical empiricism.  In light of this, if a person expresses belief in positivism and then engages in some form of moral advocacy, then their worldview does not seem to be fully coherent.  It appears that for this person, the position that purely subjective knowledge is impossible, which is implied by positivism, might amount to a kind of self-defeating skepticism.

The term self-defeating skepticism might sound harsh, but the usage here refers to situations where a person’s explicitly stated views do not appear to be coherent with how that same person acts.  For example, if someone makes clear that they do not believe that any normative moral statements can be true regardless of anyone’s point of view, but then later tries to convince others to believe in certain moral statements that, by all appearances, are normative in nature, then a valid interpretation is that this individual is self-defeating on the issue of whether or not normative moral statements can be mind-independent facts.  This is because the only conceivable way that normative moral statements can be absolute truths is if they derive from direct personal experiences of value that are not reducible to any objective senses.

This is not in any way intended as an attack against people who say these things.  It is instead meant to make the case that when someone explicitly states their beliefs and then acts in a way that suggests they believe something else, there might be an underlying motive of which the speaker has not become introspectively aware.  For example, imagine a person, whom we will call Jenny, who explicitly states that she does not believe that X exists, where X represents an idea that some people believe in and others do not.  Although Jenny makes clear that she does not believe in X, she later says things for which the most straightforward interpretation is that she does believe in X.

It might be the case that Jenny really does not believe in X and that such statements are taken out of context.  However, it might also be the case that when Jenny says she doesn’t believe in X, she is expressing beliefs that stem from a worldview formed in an effort to understand reality in the simplest terms possible, despite the existence of reliable evidence in favor of X that is not coherent with this worldview.  Although Jenny knows that this evidence exists, she chooses to ignore it when forming her worldview.  And although Jenny says she doesn’t believe in X, her knowledge of the evidence in favor of X inevitably contributes to her behavior, and other people can recognize this.  In this situation, the most rational thing for her to do would be to acknowledge that X exists and to construct a worldview that incorporates X along with all her other knowledge.
