We want to explore the following problem: where exactly is the science in computational social systems, and how can we differentiate it from pseudoscience?
Let us first examine the methodological problems implicit in the agent-based modeling of complex social systems and in the multi-agent computer simulations widely used to generate the necessary quantitative data.
Next, we will describe the consequences of (non)linear relations and complexity for social science research. Afterward, we turn to abductive reasoning as an alternative way of proposing new hypotheses.
Finally, we will turn this into a general framework for any research design based on agent-based modeling and multi-agent simulations within social sciences.
Let us first clearly distinguish the available methods. There are, of course, other possible methods. Still, multi-agent simulations are the best way to put agent-based models of complex systems into practice.
Quantitative methods explain as many cases as possible with the help of the fewest possible assumptions. Furthermore, they attempt to identify the critical variables behind a linear relationship between explanans and explanandum.
On the other hand, we generally consider traditional qualitative methods and case study approaches well-suited for explaining complex phenomena. Furthermore, they provide a holistic picture of various problems in the social sciences. In such research designs, one often looks at only a few cases with many variables considered.
Agent-based modeling and multi-agent simulations in social science are unique quantitative methods for analyzing complex social systems and emergent phenomena. However, this method does not necessarily rely only upon simple linear relations of given variables.
A system is complex when its outcomes and functioning cannot be determined by pure logic, deduction, or mathematical inference alone.
Repeated interactions of autonomous agents often lead to emergent properties at the higher systemic level of the model concerned. That's also why one can often draw a clear-cut distinction between equation-based approaches to scientific modeling and their agent-based counterparts.
By repeated program runs with varied input parameters, computer simulations enable an experiment-like research design that is unprecedented in the social sciences. Moreover, the inclusion of feedback loops facilitates a better understanding of the links between micro- and macro-level properties.
Precisely because of the character of complex social systems, the experiments mentioned above rarely lead to the discovery of simple solutions and knowledge as accurate as that of the natural sciences. Of course, we are not saying there are no feedback loops in the problems that natural sciences deal with. But the social world, composed of interacting human beings, functions thanks to mind-dependent (i.e., language-dependent, i.e., socially dependent) non-observables. This fact makes the social sciences much more interested in exploring constitutive rather than causal relations, and in analyzing the mutual interdependence of agents and structures.
Under normal atmospheric pressure, 100 °C will be the boiling point of water regardless of whether we have concepts to describe or use such a discovery. On the other hand, the last witch in Slovakia was arguably burned in 1741. There are no witches anymore, and this is not a case of human-caused extinction like that of the moa or the dodo. People simply ceased to act accordingly. Similarly, anarchy among states can favor either cooperative or defective patterns of interaction.
Logical consistency as understood in deductive reasoning would be fine. Still, unlike in the natural sciences, you cannot take the social world for granted. You can rarely deductively infer predictions from general assumptions and expect to be able to test them under the same conditions whenever you want. One does not influence mind-independent natural phenomena. Yet the very existence of the objects of inquiry in the social sciences necessarily depends on human (i.e., the observer's) action.
Knowledge of some complex social system may mean a better capability to predict its future development. But unfortunately, we don't usually have the opportunity to collect data via real-world experiments and compare them with simulation outcomes to calibrate the model.
The best we can do is to test whether there is a reasonable likelihood that the observed behavior of the target could have been drawn from the distribution of the model's outputs. Unfortunately, this is a relatively weak test.
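To make this weak test concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `run_model` stands in for a full multi-agent simulation and simply draws a summary statistic from a Gaussian, and the observed value 0.57 is invented for illustration.

```python
import random

# Hypothetical stand-in for a full simulation: each run returns one summary
# statistic of the simulated system (e.g., the share of cooperative interactions).
def run_model(seed):
    rng = random.Random(seed)
    return rng.gauss(0.6, 0.05)

outputs = [run_model(s) for s in range(1000)]
observed = 0.57  # the single real-world observation of the target (invented)

# The 'weak test': what fraction of simulation runs lie at least as far from
# the simulated mean as the real-world observation does?
mean = sum(outputs) / len(outputs)
extreme = sum(abs(o - mean) >= abs(observed - mean) for o in outputs)
p_like = extreme / len(outputs)
print(p_like > 0.05)  # True: the observation is not unusual under the model
```

Even when the target passes this check, it shows only that the model could have produced the observation, not that the model's assumptions are correct.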
This idea starts to resemble interpretivist research methodology.
For example, in the natural sciences, you can simulate the airflow past an aircraft's wing, build the airplane itself, and finally test whether it flies. Similarly, you can model avalanche occurrence in a given valley, arrange avalanche barriers, and see if they work. And if real-world events do not correspond to the simulation results, you can calibrate the input variables and try again.
There is usually no such option in the complex social world. You can model the international relations system, but you cannot experiment with the real one, nor look for another comparable example.
Your options are limited when your simulation results differ from the target's actual behavior. Especially since there is no large N of comparable real-world cases, we can hardly know whether our model is flawed or whether what we observe in the real world is an accidental development of a system that would usually evolve entirely differently.
Prediction based on the outcomes of simulations is possible only if the model offers a very accurate picture of the real-world target. Unfortunately, the dynamic and complex character of the social world prevents us from acquiring the comprehensive knowledge needed to achieve this level of accuracy.
Hence, if testable predictions, causality, and accuracy are unattainable, we must choose other options with a greater level of abstraction. But we come across another peculiar issue in the social sciences, namely objective knowledge: observer and observed are often inseparable here.
There are two plausible stories to tell, one from outside about the human part of the natural world and the other from inside a separate social realm. One seeks to explain, the other to understand.
The explanation makes sense only if we accept the crucial role of causal reasoning, the possibility of objective knowledge, and the ability to identify (in)dependent variables. That is unproblematic for rationalist thinking but somewhat challenging to achieve when modeling complex social systems.
However, there remains the particular epistemological position that emphasizes a subjective understanding of the social world instead of objective causal explanations. In contrast with rationalists, proponents of this approach are often labeled interpretivists or reflectivists. The problem here is that it would be a stretch to include multi-agent simulations among qualitative interpretive methods, even though theories focused on constitutive relations often employ these methods as well.
Choosing 'understanding' as the only alternative available after rejecting explanation and prediction is somewhat misleading, despite all the problems with multi-agent simulations mentioned above. Moreover, it reveals little awareness of the debate on epistemological issues, and perhaps also of the still-unsettled terminology of the research program dealing with agent-based modeling.
Thus exploration, rather than understanding, is a better term for what multi-agent simulations can achieve. Suppose your goal is to explore the functioning of a complex social system. In that case, your research design must occupy the middle ground between unattainable explanation and inadequate understanding.
Regarding epistemology, multi-agent simulations can take the middle position between explanation and understanding to explore how complex systems work. This position is possible when agent-based modeling embraces parts of both individualist and structuralist theories. At the same time, this particular computer simulation method is not associated with any specific theoretical position or school.
Any agent-based enterprise starts in the usual deductive manner: by constructing a model with the help of general assumptions about the composition of the target and the principles governing it. For example, which states interact with each other, how often, and with what possible outcomes?
Yet research in the next step does not proceed to compare predictions inferred from these general assumptions with data acquired by observing the real world. Instead, computer simulations generate our data, from which we inductively generalize upon the functioning of the modeled target itself.
Thus, if our model of international relations leads to cooperative behavior of states, we infer that the real-world system favors cooperation too.
For some, agent-based models and multi-agent simulations naturally represent the "third way of doing science" besides widely accepted deductive and inductive reasoning.
Suppose we find some complex phenomenon (e.g., peace or segregation) that needs to be understood better, but we cannot do so because of missing data or lacking analytical tools. In this case, we construct an agent-based model and run the associated multi-agent simulations.
Abduction should be at the center of our methodological efforts, while deduction and induction are essential but auxiliary tools. Abduction rests on the following premise: social science is a more conscious and systematic version of how humans have learned to solve problems and generate knowledge in their everyday lives.
According to Peirce, abduction generates a new hypothesis, deduction draws predictions, and induction puts them under the test. He understood the scientific process in terms of the unity of all three types of inferential reasoning.
However, abduction raised severe doubts among scholars, primarily because of how Peirce described it. On the one hand, he declared that "abduction is, after all, nothing but guessing." He also states, "[n]o reason whatsoever can be given for it, as far as I can discover; and it needs no reason, since it merely offers suggestions." Yet, on the other hand, he insisted that abduction "is logical inference, asserting its conclusion only problematically or conjecturally, it is true, but nevertheless having a perfectly definite logical form." He defined abduction in the following way:
The surprising fact C is observed;
But if A were true, C would be a matter of course;
Hence, there is reason to suspect that A is true.
Now there is the problem of which presumption we choose and why we choose it. The relationship between two alternative hypotheses is the most contested issue, and Peirce remains rather vague here. How do we discriminate between A and B if both claim to account for C? For him, suggesting a hypothesis is a matter of insight and some background knowledge. Understandably, this is not enough to put abduction on par with deduction and induction: insight does not prevent us from proposing the most foolish hypotheses.
Even though Peirce tried to introduce some rules for coming up with new hypotheses, his attempts were considered insufficient, even counterproductive. Objections thus remained.
How do we infer A from C, and why should we prefer it if there are other available hypotheses?
Yet despite all this skepticism, people regularly do make abductive inferences. You do not have to watch Dr. House to realize that medical diagnosis is an abductive process par excellence: physicians try to find an explanation for a patient's problems. The reasoning of William of Baskerville in Umberto Eco's The Name of the Rose is a perfect example of inference irreducible to deductive or inductive logic. Furthermore, people regularly propose hypotheses explaining observed data and then proceed to test them after drawing predictions, just as Peirce demanded.
While trying to determine what a good abduction requires, Peirce turned his attention to the problem of what makes a good explanation. His implicit emphasis on explanatory power later enabled the refinement of the concept of abductive inference.
Peirce's abduction is now generally identified with a more advanced and refined version called inference to the best explanation. This version solves the problem of what hypothesis we draw from available data and why we prefer that particular hypothesis.
We will shed some light on problems of description (what hypothesis) and preference (why this hypothesis) as articulated by Kapitan and Lipton. Let's start with "the idea that explanatory considerations are an important guide to [abductive] inference, that we work out what to infer from our evidence by thinking about what would explain that evidence."
One can already see the connection with Peirce via an emphasis on explanation and inference of hypothesis from given evidence. However, to understand Lipton's theory, it is essential to notice the difference between inference and explanation.
Take four seasons as an example. After observing winter, spring, summer, and autumn in Central Europe each year, we use inductive generalization to infer that they will continue to alternate also in the future.
Yet this inference does not explain why seasons change. To do so, one needs to know the Earth's obliquity and its orbital motion around the Sun. Bearing in mind this difference, Lipton defines abduction (or inference to the best explanation, if you want) in the following way:
We infer the explanations precisely because they would, if true, explain the phenomena. Of course, there is always more than one possible explanation for any phenomenon, so we cannot infer something simply because it is a possible explanation. It must be the best of the competing explanations. Given our data and background beliefs, we infer what would, if true, provide the best of the competing explanations we can generate from those data. Far from explanation only coming on the scene after the inferential work is done, the core idea of Inference to the Best Explanation is that explanatory considerations are a guide to inference.
This definition is a much more advanced version of abduction than Peirce's. First, we infer the best possible explanation given available information (the 'what' question). Then we assume this inference is true because it is the best explanation (the 'why' question).
But there remain some questions, especially concerning the quality of being the best explanation. Lipton tried to clarify his ideas by defining 'best' in terms of the loveliest potential explanation. First, he stressed the so-called contrastive explanation as a tool that helps us find the best available options by comparing alternative causal stories:
"To explain why P rather than Q, we must cite a causal difference between P and not-Q, consisting of a cause of P and the absence of a corresponding event in the case of not-Q." And second, to be considered 'lovely,' an explanation must also demonstrate theoretical elegance, simplicity, and unification.
Also for Harman, being more plausible, simpler, and able to explain more in a less ad hoc manner were the criteria for judging one hypothesis better than another.
Many established criteria exist for evaluating the relevancy and rigor of scientific theories and associated research in social sciences.
But the problem is that most scholars dealing with scientific criteria for assessing various research designs have adopted a rationalist view that promotes a hypothetico-deductive model of science. This model requires empirical falsifiability of the examined theories (i.e., Popper).
Thus Walt also demands empirical validity and deductive logical consistency, besides naturally requiring originality.
The hypothetico-deductive model of inquiry is not always suitable for the social science of complex systems. Recall the discussion above about the observer-observed relationship, the (im)possibility of real-world experiments, and causality.
Taking the world of the natural sciences for granted, one can begin deductively by making theoretical assumptions, drawing predictions, and finally falsifying or corroborating them against the evidence. Certain social phenomena are more stable than others, so we are not saying that deductive reasoning is foreign to the social sciences. Yet as far as multi-agent simulations of complex social systems are concerned, real-world (experimental) testing of predictions is hardly achievable or even expressed as a goal.
Natural phenomena are independent of the human mind; social ones are not. So there must be a difference in how the social and natural sciences do their jobs. The hypothetico-deductive code of conduct can be a bonus rather than the core of the research design here.
We cannot take the world of the social sciences for granted. Thus we have to start with accurate observation, ensuring that the unexplored object of our research actually exists. Then, given that the phenomenon is present and significant, one can build a model. Finally, the agent-based model's plausible and empirically valid assumptions, together with replicable multi-agent simulations, should lead to the growth of the modeled phenomenon.
If we achieve this, one can finally conclude that it is reasonable to regard the model's assumptions and their consequences as correct, and that we have successfully explored the functioning of the complex system. There has been no need for, nor possibility of, real-world experiments or deductive testing so far.
Only after that, we may inductively (from simulation results) or deductively (from the model's assumptions) infer some predictions concerning the real world. Finally, manipulating particular target features will enable us to examine whether these predictions hold.
For example, suppose realists want to explore cooperation among states in an anarchic international environment, which for them leads only to war. We can propose a model with a few assumptions based on available information. The fundamental units would be states with different rules of behavior, interacting in a Prisoner's Dilemma with the frequency of encounters dependent on their power and distance. In addition, friendship, enmity, and various mistakes would be possible. If such a model favors cooperative behavior of agents, we can draw some conclusions about the real-world target, which thus becomes better explored.
Moreover, from the results received, we can make further inferences about the impact of different variables (e.g., noise, power) on the functioning of our target. For instance, one can test the simulated effect of noise by improving access to information, which is a typical role of international regimes and institutions. Yet we must still bear in mind the difficulty of conducting real-world experiments with complex social systems and the possibility of multiple alternative outcomes.
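As a rough illustration (a sketch under textbook assumptions, not the model actually used here), the following Python snippet lets two hypothetical state strategies interact in an iterated Prisoner's Dilemma, with a noise parameter standing in for mistakes:

```python
import random

# Standard Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200, noise=0.0, rng=None):
    """Iterated PD; 'noise' flips a move with the given probability (a mistake)."""
    rng = rng or random.Random(42)
    seen_by_a, seen_by_b = [], []  # what each side has observed the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        if rng.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

coop, _ = play(tit_for_tat, tit_for_tat)                 # sustained cooperation
exploited, defector = play(tit_for_tat, always_defect)   # one betrayal, then mutual defection
print(coop, exploited, defector)  # 600 199 204
```

Power- and distance-dependent encounter frequencies, friendship, and enmity from the text would enter as additional parameters of `play`; the point here is only the shape of such a computational experiment.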
We observe an unexplored emergent phenomenon of some complex social system and construct an agent-based model of the corresponding complex system. If the multi-agent simulations lead to the growth of the emergent phenomenon, there is reason to suspect that the model's assumptions are correct.
As one can see, we proceed from the given evidence or observations to the formulation of a new hypothesis (model), which justifies abductive inference regarding its validity in the case of successful simulations.
Yet as with the original definition, this modified version of abduction fitted to agent-based modeling needs a more detailed description of how we move between the premises of the argument; more specifically, an explanation of how we get from the observation to the particular form of the hypothesis. Similarly, more has to be said about how to discriminate between different available models and about the place of subsequent empirical testing.
We tried to show above that generally accepted criteria for evaluating research in social sciences based on a hypothetico-deductive framework are not very helpful here. Nevertheless, we can still draw some inspiration from them and refine these rules according to the logic of inference to the best explanation. Thus, any good agent-based research design must include real-world and scientific significance, (intuitive) plausibility of assumptions, and replicable data.
The first characteristic of every sound agent-based model using multi-agent computer simulations is its significance both for the real world and for the scientific community. Significance means not only that your research must have a tangible impact on human life (e.g., the robustness of peaceful interactions among states), but also that it must enhance scientific knowledge, so that your model differs from the others in a meaningful way.
The second feature of an excellent agent-based research design is the plausibility of its assumptions. Plausibility is also a shortcut for deciding which of the possible models are the best (or loveliest) exploratory tools for a given complex system. Plausibility is achieved theoretically or empirically, but ideally in both ways simultaneously. The necessity to defend different features of the model in both ways limits the number of possible alternatives. For example, one can hardly justify actors moving on a playing grid if one models sovereign states.
However, one can still include many additional features or modify present ones so that the model corresponds better to the real-world target. What to include and what to leave out is essential to the model's plausibility. To get the loveliest solution, one has to strike a compromise between maximizing accuracy and simplicity. On the one hand, there are highly complicated models with many details, such as territorial extent or the political system. They commonly represent what Lipton called 'likeliness': by producing an accurate copy of the modeled complex system, they try to increase the probability of the desired results (emergent phenomena). The problem is that many features of such models can be omitted, as they have minimal impact on simulation results. For example, the allocation of resources, territorial growth, and the ethnic composition of the population are all probably redundant if you model the system of states. Notice also that loveliness and likeliness are not disjunctive properties in the case of inference to the best explanation: a lovely model can also be the one most likely to produce the desired results.
On the other hand, there are highly abstract models with only very few assumptions. They can offer exploratory qualities of unmatched loveliness if they successfully grow the emergent phenomena. One can keep reducing the number of assumptions as long as the model intuitively makes sense and resembles the target; one can achieve this even without a contrastive explanation. But abstraction often comes at the expense of empirical and theoretical plausibility. For example, you can hardly justify round-robin encounters in a model of international relations: even if it is the simplest possible pattern of interactions, states do not actually meet every other state in turn. Deciding what to include in any agent-based model thus requires striking a balance between simplicity and accuracy by asking, in Lipton's way, 'Why this feature rather than the other?' One always has to keep overall empirical and theoretical plausibility in mind to maximize the loveliness of the model in the form of elegance, unification, and simplicity.
Finally, for every sound research design, the model built upon plausible assumptions must lead to the successful growth of some phenomenon. Following this, the simulation results must also be readily replicable. But the problem of replicable data has several aspects besides the ability of other members of the scientific community to reproduce the outcomes. For example, to prove that the results are not artifacts, one must first and foremost be sure that they are not caused by some bug in the source code. Nevertheless, debugging is only the fourth of six steps in the modeling research design.
The problem of demarcation, however, remains unexplained.
There is also the possibility of refuting the hypothesis, i.e., the model. Imagine a situation in which two sufficiently different models explain the same phenomenon (both lovely). They would contradict and falsify each other (in Popper's terms). Technically, then, the formulation of a hypothesis is still an educated guess: someone can always guess better, and competing guesses can contradict each other.
CSS thus finishes with exploratory sensitivity analysis, and we are left without hypotheses, only conjectures.
Now we want to explore the problem of where exactly the science is in computational social systems and how to differentiate it from pseudoscience (the problem of demarcation). Let us now observe the limitations of scientific disciplines based on machine learning, deep learning, and AI. Through these limitations, we can navigate our way through the problems mentioned.
We will give evidence that machine learning and deep learning use evolutionary algorithms in their inferences, not inductive inference as is often believed. Their search algorithms, such as gradient descent used in linear regression, instantiate a generalized form of universal Darwinism. Therefore, machine learning and deep learning methods, being evolutionary, can be refuted and can generate new knowledge.
The scientific reasoning is related to constructing models and searching for explanations to obtain insight into a phenomenon. The derivation of hypotheses from these models and their application in empirical investigations allows the evaluation of the phenomenon.
All inference falls into three forms of scientific reasoning and inquiry: abduction, deduction, and induction. Together, they may be the key to the logic of science.
In this context, abduction is about generating a cause as the best explanation for an observed phenomenon based on existing rules or theoretical knowledge: inference to the best explanation, or an 'educated guess.' This kind of reasoning is knowledge-expanding, leads to creative ideas, and thus forms new theoretical assumptions.
In contrast, induction derives a general rule from repeated observations of a phenomenon. This inference is knowledge-expanding but does not provide any principally new ideas. Francis Bacon popularized the idea that science was based on a process of induction by which repeated observations are generalized into theories. This idea was criticized by Hume and others as logically untenable, leading to the famous 'problem of induction,' whereby science was assumed to rely on a logically invalid process. It wasn't until Karl Popper's landmark work that the problem of induction was resolved. Popper showed that induction was not the basis of science by showing how science advances via falsification rather than confirmation.
In deduction, a general rule serving as a theoretical basis and a cause are used to predict the result of a particular case. If the rule is true, each case will fit this rule. Thus, deductive reasoning is truth-preserving and logically flawless. However, as in the case of inductive reasoning, it does not generate principally 'new ideas.'
Deduction proves that something must be; induction shows that something actually is operative; abduction merely suggests that something may be.
The concept of induction dates as far back as the 15th century and signifies the idea of specific instances being generalized to universal laws. To use the canonical example, suppose we see a specific swan S1 and see that it is colored white or in other words:
S1 -> White
If we later see a bird that is not white, deductive logic allows us to find that that bird is not S1:
S1 -> White
¬White
∴ ¬S1
However, deductive logic does not allow us to generalize from a specific statement like this. For example, the fact that S1 is white does not let us assume that an S2 - a different swan - will also be colored white. But what if we see hundreds or even thousands of swans, and all of them are white? Is there some point at which we can rightly assume that we can logically reason that all swans are white? In other words, is it valid to reason:
S1 -> White, . . ., S1000 -> White
∴ Sx -> White for all Sx
The supposed ability to reason from specific statements to universal statements is the method of induction. Francis Bacon popularized the idea that the scientific method was based on this 'inductive method' of reasoning from specific statements to universal statements.
However, Hume pointed out that no matter how many specific statements we observe, we are never justified in reasoning to a universal statement, because it is logically invalid ever to reason from particular statements to a universal one. And in fact, the discovery of actual black swans showed this to be the case.
This problem raised the question: if the inductive method is logically invalid, how can it be the basis (as Bacon supposed) for scientific discovery? If Bacon was correct, would that imply that science is 'unjustified' and therefore no better than myths and dogmas? These questions soon became known as 'the problem of induction.'
While statistical induction has some utility, it also has its problems. For example, a person living in Europe during the 16th century would have believed that all swans must be white; consequently, 'black swan' came to refer to something impossible. Even a seemingly valid random frequentist sample would have found 100% white swans, because black swans existed only on the yet-to-be-discovered Australian continent.
We notice that even a statistical inductive model only makes good predictions if we have a correct prior theory about which variables to factor over (i.e., location). This is because statistical models are inherently parochial: they have no 'reach' beyond the domain where the sampling occurs. Models which only reach what is known cannot stake out new claims (conjectures) and thus do not expose themselves to refutation.
Statistical induction thus cannot replace the conjecture and refutation process of knowledge creation and instead is just a tool that is sometimes useful when building theories that do have reach.
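A tiny Python illustration of this parochiality, with invented population numbers: a perfectly valid random sample, drawn only where sampling is possible, still generalizes wrongly.

```python
import random

# Invented populations: every swan observable in 16th-century Europe is white;
# black swans exist only on the yet-to-be-discovered Australian continent.
europe = ["white"] * 10_000
australia = ["white"] * 8_000 + ["black"] * 2_000

rng = random.Random(0)

# A valid random sample -- but drawn only from the reachable domain.
sample = rng.sample(europe, 1_000)
white_share = sample.count("white") / len(sample)
print(white_share)  # 1.0 -> 'all swans are white'

# The model has no reach beyond the sampled domain.
world = europe + australia
print(world.count("white") / len(world))  # 0.9
```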
We usually consider machine learning as rooted in induction.
In statistics and machine learning, we often refer to a proposed model as a 'hypothesis' as if it is a scientific theory. But statistical/machine learning models rarely take the form of an explanatory theory. Instead, they are generally simple predictive heuristics (exploratory investigations).
We construct a model from the theoretical background and abductive or inductive reasoning, both forms of logical inferences.
Deductive reasoning is the third logical inference. We use it in a model application. Such an application starts with deriving hypotheses deductively from the model, usually followed by empirical testing.
We typically perform linear regression using gradient descent, which we have already demonstrated to be an evolutionary algorithm.
But linear regression can also be performed using the normal equation, which does not involve an evolutionary algorithm. We prefer gradient descent for linear regression because the normal equation quickly becomes intractable as the number of features grows. This suggests that one advantage of utilizing universal Darwinism is tractability.
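A minimal sketch of the contrast in Python, with toy data invented for illustration: both routes recover the same line, but only gradient descent proceeds by the iterative try-a-variant-and-keep-the-better-one loop described above, while the closed-form least-squares solution involves no variation at all.

```python
# Fit y = w*x + b to toy data two ways: gradient descent and the closed-form
# least-squares solution. A sketch, not a production implementation.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # exactly y = 2x + 1

def gradient_descent(xs, ys, lr=0.05, steps=5000):
    w, b = 0.0, 0.0  # start from an arbitrary 'variant'
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error; the slope selects which nearby
        # variant of (w, b) survives into the next step.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

def closed_form(xs, ys):
    # Ordinary least squares for one feature: no iteration, no variation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

w_gd, b_gd = gradient_descent(xs, ys)
w_cf, b_cf = closed_form(xs, ys)
print(round(w_gd, 3), round(b_gd, 3))  # 2.0 1.0
print(w_cf, b_cf)                      # 2.0 1.0
```

For a single feature the closed form is trivial; the tractability argument in the text concerns the many-feature case, where solving the normal equations costs roughly the cube of the number of features.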
A more concrete formulation of induction, Solomonoff induction, has been proposed as a basis for AI, and the notion that AI systems approximate an idealized Bayesian agent has been quite popular. Unfortunately, Solomonoff induction suffers from several problems, a few of which we believe are fatal, even setting aside that it is incomputable and hard to approximate. A complete enumeration and study of these problems is beyond the scope of this work. But two particularly fatal issues arise from the inability to construct an appropriate prior beforehand: the 'grain of truth' problem and the 'problem of old evidence.'
What about understanding deep learning as approximating Bayesian statistical modeling?
Gelman and Yao elaborated on how Bayesian statistical modeling suffers from several holes and pitfalls that practitioners must grapple with through trial and error.
Nielson argues that the universal Darwinian framework provides a better foundation for understanding AI systems.
Outer Loop:
(a) Initialize network weights randomly
(b) Try a set of hyperparameters
(c) Inner Loop:
    i. Measure the loss function for the current weights
    ii. Calculate the slope at the current weights
    iii. Use the slope to try to move to a better set of weights
    iv. Go to step i until improvements in the loss function stop for some time
(d) Go back to step b
Most people may not think of deep learning as an evolutionary algorithm. Still, a careful look at the algorithm above reveals that deep learning is two nested evolutionary algorithms, each trying variant solutions.
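The nested structure can be made concrete with a toy sketch. Here the "network" is a single weight and the loss a simple quadratic; the function names, the candidate learning rates, and the task are all illustrative stand-ins, not a real training pipeline:

```python
import random

def loss(w):
    return (w - 3.0) ** 2  # pretend training loss, minimized at w = 3

def inner_loop(lr, steps=100):
    """Gradient descent: vary the weights, keep the improvement."""
    w = random.uniform(-10, 10)       # (a) random initialization
    for _ in range(steps):
        slope = 2 * (w - 3.0)         # ii. slope of the loss at w
        w = w - lr * slope            # iii. move toward a better weight
    return w, loss(w)

def outer_loop(candidate_lrs):
    """Hyperparameter search: vary the learning rate, keep the best run."""
    best = None
    for lr in candidate_lrs:          # (b) try a set of hyperparameters
        w, final_loss = inner_loop(lr)
        if best is None or final_loss < best[2]:
            best = (lr, w, final_loss)
    return best

random.seed(0)
lr, w, final_loss = outer_loop([1e-4, 1e-2, 1e-1])
```

The inner loop generates and selects variant weights; the outer loop generates and selects variant hyperparameter configurations, keeping whichever run survived with the lowest loss.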
There remains the problem of how humans develop a new hypothesis to test. Popper argued that science would progress best using deductive reasoning as its primary emphasis, a position known as critical rationalism. He held that induction has no place in the logic of science (see the problem of induction). However, deduction also suffers from the issues above, and we are left with abduction.
An established theory of abductive reasoning from cognitive psychology describes seven components of abductive reasoning. It describes a continuous, implicit process whose steps need not be run through in a strict order, and which can lead to a consistent type of explanation free from redundancies.
Ideally, abductive reasoning begins with the perception of a phenomenon, for which data collection takes place in an exploratory or theory-based manner. Subsequently, we incorporate these data into an existing mental model, leading to a preliminary comprehension. Next, we check whether the new data contradict the previous model or demand further understanding; these considerations lead to the step of resolving the anomaly, and if an anomaly occurs, we collect new data. If there are several possible explanations, we refine plausible alternative explanations, and it becomes necessary to discriminate by selecting one potentially plausible explanation. In checking for consistency, we include both likely and unlikely explanations. This process of decision-making may lead to the collection of new data. If checking for consistency is not successful, we discriminate among the other potentially plausible explanations.
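A loose control-flow sketch of this cycle may help; all names and the toy domain below are hypothetical illustrations, not part of Johnson and Krems' theory:

```python
# A control-flow sketch of the abduction cycle: discriminate among
# candidate explanations, check consistency against the data, and
# collect new data as part of decision-making.

def abduce(initial_data, collect_more, candidates):
    data = list(initial_data)            # perception and data collection
    explanations = list(candidates)      # plausible alternative explanations
    while explanations:
        # Discriminate: select one potentially plausible explanation.
        chosen = explanations.pop(0)
        # Check for consistency with everything observed so far.
        if all(chosen(d) for d in data):
            # Decision-making may lead to collecting new data.
            data += collect_more()
            if all(chosen(d) for d in data):
                return chosen            # a consistent explanation
        # Otherwise resolve the anomaly by discriminating another one.
    return None

# Toy run: the observations are even numbers; the first candidate
# explanation is contradicted, so the second is selected.
result = abduce(
    initial_data=[2, 4],
    collect_more=lambda: [6],
    candidates=[lambda d: d > 3, lambda d: d % 2 == 0],
)
```

The sketch deliberately compresses the seven components into one loop; the theory's point that the steps need not run in strict order is lost in any linear program, so this is only a first approximation.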
Model testing in the theory of Johnson and Krems is about eliminating uncertainty over improbable explanations. We can extend this step to an abductively developed model: when it comes to the application of this model, hypotheses are derived deductively to be tested ("abductive model evaluation").
Philosopher Karl Popper solved the problem of induction by reframing the question entirely. First, Popper threw out the idea that any sort of 'justification' (in the sense of certainty, near-certainty, or even just being probable) was possible.
Instead, he believed that the scientific method had nothing to do with induction. Therefore, he based the scientific method on a Darwinian epistemology (theory of knowledge). In his view, scientists started with some problem they wanted to solve (e.g., why the perihelion of Mercury didn't follow Newton's laws of physics), and they 'conjectured' (guessed) possible solutions to the problem (e.g., Einstein's special or general relativity). Subsequently, they subjected both the old and new explanations to tests designed to 'refute' either or both theories (e.g., Arthur Eddington's expedition to test the positions of stars near the Sun during an eclipse).
Suppose these critical tests or any criticism refute one of the theories but not the other (e.g., refuting Newton in favor of relativity). In that case, it is rational to follow the surviving theory rather than the refuted theory, regardless of any need for certainty, justification, or probability.
Popper argued that this evolutionary 'survival of the fittest' theory of knowledge was the fundamental basis for science and that induction was unnecessary to explain the process and only created unnecessary problems when shoehorned into the scientific method. Popper summarized his view of science as 'conjecture and refutation'.
I now want to critique the Popperian way of blindly accepting the refutation process as a way of gaining knowledge. Consider that if we prefer one theory over another through refutation, we are merely moving to a better selection in evolutionary terms, with no certainty that the new theory is a universal law. How, then, can we be confident that the sole process of refutation lets us gain knowledge and apply it to the selection choice? Certainly not through the feedback loop of choosing 'better' theories, since carried on ad infinitum it would render all of these theories false and, in that regard, infinitesimally close to each other in falsity, so that we might not be able to discern which one is truer or more false. At this point, I argue that using Aristotelian logic to gain knowledge fails due to the human condition of our senses (percepts, aggregates). Therefore, no knowledge can serve as the basis for the evolutionary selection of the refutation process of a theory or theories.
Popper proposes the hypothetico-deductive model, which considers the hypothesis an 'educated guess.' However, when the formation of a hypothesis is viewed as the result of a process, it becomes clear that this 'guess' has already been tried and made more robust in acquiring its status of hypothesis. Indeed, many abductions are rejected or heavily modified by subsequent abductions before they ever reach this stage.
One example of an algorithmic statement of the hypothetico-deductive method is as follows:
1. Use your experience: consider the problem and try to make sense of it; gather data and look for previous explanations.
2. Form a conjecture (hypothesis): when nothing else is yet known, try to state an explanation.
3. Deduce predictions from the hypothesis: if you assume 2 is true, what consequences follow?
4. Test (or experiment): look for evidence (observations) that conflict with these predictions in order to disprove 2.
One possible sequence in this model would be 1, 2, 3, 4, but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth.
Note that this method can never conclusively verify (prove the truth of) 2; it can only falsify 2. (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong.")
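The asymmetry between falsification and verification can be rendered as a toy loop. The "experiments" below are a stand-in list of observations and the "hypotheses" simple predicates, all hypothetical; the point is only that a single conflicting observation refutes a conjecture, while no amount of agreement proves it:

```python
def hypothetico_deductive(hypotheses, experiments):
    """Keep a conjecture only while no experiment falsifies it.
    Returns the surviving hypothesis: corroborated, never proven."""
    for hypothesis in hypotheses:              # step 2: form a conjecture
        refuted = False
        for observation in experiments:        # steps 3-4: deduce and test
            if not hypothesis(observation):    # a single failure refutes
                refuted = True                 # back to step 2: a new conjecture
                break
        if not refuted:
            return hypothesis
    return None

# Hypothetical example: conjectures about a hidden number sequence.
data = [2, 4, 8, 16]
h_bounded = lambda x: x % 2 == 0 and x <= 10   # refuted by the observation 16
h_even    = lambda x: x % 2 == 0               # survives all tests so far
survivor = hypothetico_deductive([h_bounded, h_even], data)
```

Note that `survivor` passing every test in `data` does not make it true; the next observation could still refute it, which is exactly the method's asymmetry.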
But we did not need to observe the perihelion of Mercury thousands of times before we realized something was amiss with Newtonian physics. Often a single observation is sufficient to start the conjecture process, so long as the observation poses a problem needing a solution. Only a special kind of observation starts the search for a new general law: one that reveals a problem at odds with present theories.
Multiple observations are unnecessary.
But the most crucial philosophical mistake introduced by Baconian induction was the idea that science ever needed justification or certainty in the first place. Popper pointed out that the mere fact that we can compare two theories via a critical test and demonstrate that one theory/explanation was better than the other was sufficient reason to prefer one theory over the other without ever needing to claim certainty that the theory in question was correct. The mere fact that it is the sole surviving theory currently available is reason enough to adopt it. In other words, theories are never confirmed but only falsified, and that's OK. Popper argues that if we can live without confirmation, we do not need Baconian induction.
There is no need for certainty.
Now, we would like to discuss the so-called 'problem of induction'; more specifically, whether we can justify inductive reasoning and under what conditions. Popper points out the problem with induction in one of the appendices of The Logic of Scientific Discovery by referring to Hume's criticism of induction:
Hume argues, 'even after the observation of the frequent constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience.' Suppose anybody should suggest that our experience entitles us to draw inferences from observed to unobserved objects. In that case, Hume says, 'I would renew my question, why from this experience we form any conclusion beyond those past instances, of which we have had experience.' In other words, Hume points out that we get involved in an infinite regress if we appeal to experience to justify any conclusion concerning unobserved instances.
Popper held that seeking theories with a high probability of being true was a false goal that conflicted with the search for knowledge. Instead, science should seek theories that are improbable (which is the same as saying that they are highly falsifiable, so that there are many ways they could turn out to be wrong) but for which all attempts at falsification have so far failed (that is, theories that are highly corroborated).
We can extend the problem of induction to deduction as well.
Popper found that the growth of scientific knowledge followed the same principles as biological evolution, leading to the field of evolutionary epistemology. Popper claimed to have refuted the idea that induction provides a foundation for knowledge. Years later, many scientists still believe some version of induction (for instance, Bayesianism) is the basis for science.
Peirce provides various justifications for the knowledge-enhancing role of abduction, which resort to the conceptual exploitation of evolutionary and metaphysical ideas. Nevertheless, this explanation clearly shows that abduction is constitutively akin to truth, even if certainly always ignorance-preserving or mitigating in the sense that the "absolute truth" is never reached through abduction.
Campbell claims that we base science on an evolutionary process of 'survival of the fittest (idea)' and all knowledge creation on evolutionary epistemology. Popper later strongly endorsed this generalization of his theory. We refer to this generalization as "universal Darwinism," and we can think of the universal Darwinian meta-algorithm as a generalization of biological evolution.
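Stripped to its core, the universal Darwinian meta-algorithm is just "vary and select." The following sketch is a minimal, hypothetical instance (the fitness function, mutation operator, and constants are all illustrative), showing that nothing in the loop is specific to biology:

```python
import random

def universal_darwinism(fitness, mutate, seed_candidate, generations=500):
    """Repeatedly produce variants and retain whichever survives criticism."""
    best = seed_candidate
    for _ in range(generations):
        variant = mutate(best)                # variation: a blind conjecture
        if fitness(variant) > fitness(best):  # selection: criticism/refutation
            best = variant                    # the survivor carries on
    return best

# Toy instance: evolve a number toward maximizing f(x) = -(x - 7)^2.
random.seed(2)
result = universal_darwinism(
    fitness=lambda x: -(x - 7.0) ** 2,
    mutate=lambda x: x + random.gauss(0.0, 0.5),
    seed_candidate=0.0,
)
```

Substituting genomes, weights, or theories for the candidate, and reproduction, gradient noise, or conjecture for the mutation operator, recovers biological evolution, the deep-learning loops above, and Popper's conjecture-and-refutation respectively.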
Halas, M. (2011). Abductive reasoning as the logic of agent-based modelling. Proceedings - 25th European Conference on Modelling and Simulation, ECMS 2011, 105–111. https://doi.org/10.7148/2011-0105-0111
Upmeier zu Belzen, A.; Engelschalt, P.; Krüger, D. (2021). Modeling as scientific reasoning—the role of abductive reasoning for modeling competence. Education Sciences, 11(9), 495. https://doi.org/10.3390/educsci11090495
Nielson, B.; Elton, D. C. (2021). Induction, Popper, and machine learning. arXiv preprint arXiv:2110.00840. http://arxiv.org/abs/2110.00840