Validity & Liability
Basic Olicognograph: Only Relative Truth
Truth ?
For first-order traditional logic, "a statement is true only if it is logically true. Logically true statements are to be distinguished from empirical truths, which are contingent truths – that is, an empirical statement is true only if it logically follows from other true empirical statements. The paraphernalia of the pursuit of logical truths include the following: ‘axiom’, ‘proposition’, ‘theorem’, ‘lemma’, ‘proof’, ‘corollary’, ‘hypothesis’, ‘condition’, and ‘definition’. Years ago, there was a small set of mathematical theorems which would be invoked in almost every book devoted to the mathematical structure of neoclassical economics. The most frequently used theorems had names such as Kakutani, Lyapunov, Brouwer, and Frobenius. For a while, until quite recently, this game had been transformed into one of referring to theorems named after economists, such as Arrow’s possibility theorem, Shephard’s Lemma, the Stolper-Samuelson theorem, etc. Today, it is somewhat curious that theorists refer to very few named theorems. Thus the current fashion in economic ‘theory’ methodology is to incorporate all givens in the ‘universe of discourse’ and provide a proof for anything else that is introduced".
"Since ‘pure’ economic theory takes formal logic as a given for the purpose of providing proofs of propositions, the only question of concern here is what constitutes a minimally acceptable statement to be included in the logical argument. For example, Aristotle stated three minimum conditions: 1) the axiom of identity – in the process of forming or stating an argument the definition of any term which appears in more than one statement cannot vary; 2) the axiom of the excluded middle – admissible statements can only be true or false (i.e., ‘maybe’ is not allowed), and, more important, if a statement is not false, the only other possible status it may have is that it is true; 3) the axiom of non-contradiction – no admissible statement can be both true and false simultaneously in the same argument".
The truth status of the compound statement ‘If P, then Q’ is decisive in only one of the four possible combinations of the truth states of P and Q: the statement is logically decisive only when it is false, that is, when P is true and Q is false. Whenever P is false, the conditional is vacuously true, so observing that P is false settles nothing about the connection it asserts.
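The four combinations can be enumerated directly. A minimal sketch, using the standard equivalence of the material conditional P → Q with (not P) or Q:

```python
# Truth table for the material conditional 'If P, then Q'.
# It is false in exactly one of the four rows: P=True, Q=False.
# Whenever P is false, the conditional is vacuously true.
for P in (True, False):
    for Q in (True, False):
        implies = (not P) or Q  # material conditional P -> Q
        print(P, Q, implies)
```

Reading the output confirms the point in the text: only the row (True, False) makes the conditional false, so only that observation is decisive.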
L. A. Boland proposes:
- We are wrong about the problems of the universal applicability of ‘if ... then’ statements; thus analytical economics is a successful program to establish logical facts. Furthermore, the ultimate objective of this program is the ‘generalization’ of neoclassical economics – that is, an inductive proof of its universal truth proposition.
- We are correct and thus analytical economics cannot provide proofs of universal propositions. It can only provide analytical refutations of contingent propositions. A successful generalization of neoclassical economics is thus an impossibility for the same reason that inductive proofs of universal statements are an impossibility.
"Once one has put the relationship between the quantity of information and the probability of the truth of knowledge into the context of an input-output relation, the way is open to apply economic analysis to the status and acquisition of knowledge. Unlike consumer theory, in which absolute utility maxima may be allowed for finite quantities of any good, the Conventionalist theory of learning specifically denies a maximum probability in the real world. Probability 1.00 is reached only with an infinity of information, which would require an infinity of time. Thus the marginal productivity of information, so to speak, is always positive, although it approaches zero as the size of the information set grows larger".
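As a purely illustrative sketch (the learning curve p(n) = n/(n+1) is an assumption introduced here, not part of the source), the always-positive but vanishing marginal productivity of information can be seen numerically:

```python
# Assumed learning curve: probability of truth after n units of
# information is p(n) = n / (n + 1).  Then p -> 1 only as n -> infinity,
# while each extra unit of information still helps a little.
def p(n):
    return n / (n + 1)

# marginal gain of one more unit of information, for n = 0..4
gains = [p(n + 1) - p(n) for n in range(5)]
print(gains)  # every gain is positive, and the gains strictly shrink
```

Under this assumed curve the marginal gain never reaches zero for finite n, yet p(n) never reaches 1.00, matching the quoted argument.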
A tautology being a proposition true by construction, "if the theory is not a tautology (i.e., an argument which for logical reasons cannot be false), then to prove it true we would have to provide a potentially infinite series of models. That is, no finite set of confirmed models will do, since there will always be the logical possibility of a situation which cannot be modeled or fitted. Critics are often a bit confused about the nature of tautologies. They tend to think that any argument involving definitions and logic must be purely analytical, resulting only in tautologies. Although by their nature tautologies make the meaning of non-logical terms irrelevant, tautologies are not just a matter of definitions".
"We are not facing up to a fundamental question: why not seek tautologies, since they are always true statements? In other words, why are tautologies unacceptable as explanations? The point of all this complexity and perversity is that a statement which some might consider to be a tautology may only be a statement for which the hypothetical world has been designed logically to rule out all counter-examples. In fact, in economics there are very few pure tautologies (statements which are true regardless of definitions)".
Convention & Revelation
A convention being a proposition agreed to be observed even if not taken as true for sure: "The Problem of Conventions is the problem of finding generally acceptable criteria upon which to base any contingent, deductive proof of any claim to empirical ‘knowledge’. On the ‘liberal’ view, any inductive ‘proof’ cannot be complete because every reported ‘fact’ will require a proof too. Hence, we will begin an infinite regress unless we have already accepted ‘conventions’ concerning the ‘truth’ of the ‘facts’. In other words, the most we could ever expect to achieve is a logically consistent, deductive proof based on accepted conventions."
In some sense the only difference between the ‘liberal’ and ‘conservative’ positions is that only the latter holds out for a long-run solution to the Problem of Induction. In the short run – that is, for day-to-day methodological concerns – the positions are identical. Both positions require that the Problem of Conventions be solved in the short run. The ‘conservative’ methodologists thus have two viewpoints: they adopt Conventionalism in the short run and hold out for Inductivism in the long run. Given this split, discussing methodology with economists is often rather difficult because it is not always clear which viewpoint is operative. Besides, as L. A. Boland notes, "although the hidden agenda is necessary for the explanation of neoclassical methodology, it is not necessary for neoclassical economic theories".
Now, what could, for example, change a convention? "One of the best known and most applied (groups of) results of mechanism theory is the Revelation Principle. It is represented by formal theorems asserting, for various notions of equilibrium under varying assumptions, versions of the replacement of implementing mechanisms by direct mechanisms. The general value of this principle lies in the fact that the planner, when looking for a suitable mechanism, may restrict his search to the much smaller family of direct mechanisms. The problem with the Revelation Principle lies in the fact that in the class of situations which allows its strongest and most satisfying version it leads to general impossibility results, while in cases where it would be most helpful only such versions hold true which have considerable drawbacks. This dichotomy concerns dominance equilibria versus Nash or Bayesian Nash equilibria. The basic idea of the Revelation Principle is to replace some game form that implements a social choice rule, i.e. one which has only socially desired equilibria, by a direct game form, where truth telling constitutes an equilibrium".
False ?
"Modern Pragmatism, like Conventionalism, has its roots in our inability to solve the classic Problem of Induction, the alleged problem of providing a factual proof for every scientific statement. Conventionalism and Pragmatism are based on the acceptance of the necessity of dealing with the Problem of Induction. The former deals with the problem by denying its original objective, which was to establish the truth of scientific theories. The latter deals with the problem by accepting a weak criterion of truth, namely, ‘usefulness’."
"Instrumentalism considers the truth status of theories, hypotheses, or assumptions to be irrelevant to any practical purposes, as long as the conclusions logically derived from them are successful. For adherents of Instrumentalism, who think they have solved the Problem of Induction by ignoring truth, modus ponens will necessarily be seen to be irrelevant. This is because they begin their analysis with a search not for true assumptions but rather for true or useful (i.e., successful) conclusions. Modus tollens is likewise irrelevant because its use can only begin with false conclusions. Instrumentalism, as presented by some, does not seek a truth substitute. Instead, the Problem of Induction is dismissed. In fact, all such philosophical problems (and solutions such as Pragmatism) are dismissed. The only question at issue concerns which method is appropriate for success in choosing theories as guides for practical policies. To a certain extent requiring falsifiability is ad hoc, since falsifiability is not necessary for the avoidance of tautologies. All that is necessary for the avoidance of a tautology is that the statement in question be conceivably false. Some statements which are conceivably false are not falsifiable." Ideologically, whatever the school of economic inspiration, one finds quite similar concepts, covering the issues differently though not incompatibly.
Setting aside non-logically formalized theories, it seems to us that ideological differences are theoretical but often turn out, in practice, to be complementary. From one epistemological system of economics you may find answers to the limits of your own system in the opposite one, despite antagonisms claimed to be irreducible and imposed by geopolitics. Antagonisms are more often nurtured by dogmatism, dialectic, and rhetoric (and by statutory positions in the markets of intelligence) than by any incompatibility of management. There are enough levels, fragmentation, and confused complexities to make complementary, flattened contradictions work together. Alas, ambitious human positivism in social affairs still predominates, and the empirical sense of execution (as applied by executives) may not have very different results from Cartesian logical vanity: vain and dangerous.
Model ?
Schematically, in model-building we traditionally start with a set of autonomous conjectures as to basic behavioral relationships, which must include an indication of the relevant variables and of which of them are exogenous and which are not. To these we add specifying or simplifying assumptions, whose nature depends on what is being simplified or specified (i.e., on the behavioral assumptions). One reason why we must add these extra assumptions is that no one would want to make the behavioral assumptions of our neoclassical theory of the consumer (or producer) as specific as would be required to make them (or predictions deduced from them) directly observable. Applied models add another set of assumptions designed to deal with the values of the parameters, either directly specifying them or indirectly providing criteria to measure them. This gives us the following schema for any model (in our engineering sense):
- A set of behavioral assumptions about people and/or institutions. This set might include, for example, the behavioral proposition Q = f(P), where dQ/dP is negative. The conjunction of all the behavioral assumptions is what traditionally constitutes a ‘theory’.
- A set of simplifying assumptions about the relationships contained in the above set. For example, the demand function stated in the theory might be specified as a linear function, Q = a + bP, where ‘a’ is positive and ‘b’ is negative.
- A set of assumed parametric specifications about the values of those parameters created in the second set above.
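The three layers of the schema can be sketched as code. The parameter values a = 10 and b = -2 are illustrative assumptions introduced here, not values from the source:

```python
# Layer 3: assumed parametric specification (illustrative numbers only)
a, b = 10.0, -2.0

# Layers 1-2: behavioral assumption Q = f(P) with dQ/dP < 0,
# simplified to the linear form Q = a + b*P with a > 0, b < 0
def demand(P):
    return a + b * P

print(demand(1.0), demand(2.0))  # quantity falls as price rises
```

The point of the schema is visible in the code's structure: the function's existence and downward slope come from the theory, its linear shape from a simplifying assumption, and its numbers from a separate parametric layer that an applied model must supply.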
"Virtually every applied neoclassical model today is a stochastic model. The reason for this is simple. Stochastic models are the primary means of accommodating the dictates of Conventionalism and at the same time solving the Problem of Conventions externally, by appealing to universally accepted statistical testing conventions. One does not need to build stochastic models to satisfy Conventionalism, but it certainly helps. Some economists are fond of claiming that the world is ‘a stochastic environment’; thus technically no model is ever refuted or verified, and hence there could not be any chance of our construing one as a refutation or a verification of a theory. This concept of the world can be very misleading".
Other types of models are provided as strategies between actors in sorts of games, formalized by the theory of games. There are many kinds of games and simple classifications. For example, "a Cooperative Game Without Side Payments, or for short an NTU-Game, is a triple (I, P, V). Here, I is the set of players (frequently assumed to be finite, e.g., I = {1, ..., n}). P is a system of subsets of I which is interpreted as the collection of feasible coalitions. Finally, V : P → P(R^n) is the coalitional function. This function assigns to every coalition a set of utility vectors. Certain regularity assumptions are imposed upon the function V in order to render it feasible for a ‘game’. For instance, as a coalition S usually can assign utilities only to its members, it makes sense to assume that V(S) depends only on the coordinates belonging to S. Also it is assumed that V(S) is comprehensive, that is, utility can be freely disposed of (formally: every vector dominated by an element of V(S) belongs to V(S)). In addition, some version of boundedness from above ensures that utility is not available without limit. Convexity assumptions also are quite common. Solution concepts should exhibit certain appealing properties expressed by conditions or axioms. Ideally, they are uniquely defined by an appropriate set of axioms."
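A minimal sketch of such a triple (I, P, V) for two players, with finite illustrative sets standing in for the continuous, comprehensive sets of the formal definition (all payoff numbers are assumptions made up for the example):

```python
# Toy NTU-game (I, P, V): I is the player set, P the feasible
# coalitions, V maps each coalition to a set of attainable utility
# vectors (here finite grids instead of comprehensive subsets of R^S).
I = (1, 2)
P = [frozenset({1}), frozenset({2}), frozenset({1, 2})]
V = {
    frozenset({1}): {(0.0,)},                 # what player 1 gets alone
    frozenset({2}): {(0.0,)},                 # what player 2 gets alone
    frozenset({1, 2}): {(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)},
}
# the grand coalition offers jointly better vectors than acting alone
print(max(u[0] + u[1] for u in V[frozenset(I)]))
```

Even this toy version shows why solution concepts are needed: V picks out a whole set of attainable vectors for the grand coalition, and the axioms of a solution concept are what single out one of them.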
One must also consider functions of reward or gain, criteria of decision, and ways to reach goals. For example, "a bargaining solution obeys standard axioms: as a mapping on (a subclass of) bargaining problems it commutes with permutations of the players. It commutes with positive affine transformations of R^I, i.e., it follows a transformation of the scale. Frequently it is Pareto efficient, that is, at the evaluation of a fixed game no player can strictly improve his outcome unless another player suffers. The historically first and basic approach is provided by the Nash solution. The decisive axiom is called ‘Independence of Irrelevant Alternatives’. It determines an outcome on the Pareto surface of the feasible set maximizing the coordinate product (relative to the status quo point's coordinates). Further solutions are the Kalai-Smorodinsky solution (uniquely characterized by a weak monotonicity axiom) and the Maschler-Perles solution (defined by superadditivity). Harsanyi's idea is to model strategic scenarios under incomplete information as Bayesian games with consistent beliefs, i.e., the players' probability distributions on all players' type spaces arise as marginals of the same a priori distribution".
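The Nash solution's coordinate-product rule can be sketched on a discretized problem. The feasible frontier u1 + u2 = 1 and the status quo point (0, 0) are assumptions chosen for illustration:

```python
# Nash bargaining solution on a discretized frontier (illustrative):
# feasible Pareto frontier u1 + u2 = 1 with u1, u2 >= 0,
# status quo d = (0, 0).  The solution maximizes the 'Nash product'
# (u1 - d1) * (u2 - d2) over the frontier.
d = (0.0, 0.0)
frontier = [(i / 1000, 1 - i / 1000) for i in range(1001)]
solution = max(frontier, key=lambda u: (u[0] - d[0]) * (u[1] - d[1]))
print(solution)  # symmetric problem, so the product is maximized at (0.5, 0.5)
```

Because the problem is symmetric in the two players, the product-maximizing point splits the surplus evenly, which is exactly what the symmetry axiom demands of any bargaining solution.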
The general stochastic game with infinite horizon offers a host of new problems. Convergence of the payoffs can be established either by discounting or by averaging. The transition between discounted versions and average versions is handled technically by a ‘Tauberian Theorem’, which links the behavior of the coefficients of a power series with the limiting behavior of the corresponding holomorphic function. For the non-zero-sum case the situation is more complicated. A first approach is established by what is known as a ‘Folk Theorem’. According to this theorem one considers a bimatrix game infinitely often repeated with average evaluation (the supergame). It is not hard to see that the attainable payoffs constitute the convex hull of the bimatrix game's payoffs, meaning that the repeated game yields the same payoffs as correlated strategies in the one-shot game. The payoffs resulting in the latter game can be interpreted as a bargaining problem.
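The Tauberian link between discounted and average evaluation can be illustrated numerically; the payoff stream 0, 1, 0, 1, ... used here is an assumed example, not taken from the source:

```python
# For a bounded payoff stream a(0), a(1), ..., the discounted value
# (1 - d) * sum_t d^t * a(t) approaches the long-run average of the
# stream as the discount factor d -> 1 (an Abel/Tauberian relation).
def discounted_value(payoff, delta, horizon=100_000):
    # truncated Abel mean; the tail beyond `horizon` is negligible here
    return (1 - delta) * sum(delta**t * payoff(t) for t in range(horizon))

payoff = lambda t: t % 2  # alternating stream 0, 1, 0, 1, ...; average 1/2
print(discounted_value(payoff, 0.999))  # close to 0.5
```

As the discount factor moves toward 1, the discounted evaluation of the stream converges to its Cesàro average, which is the technical bridge between the discounted and average versions mentioned above.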
Now, in a broader sense, for a design inspired by the exact sciences, one could follow a modernized version of Galileo's model concept and proceed as follows:
- Identify the entry sequence (formal) or the subject (or analytical object); start the sequence at 0 to convey the idea of the emergence of a unit (from a zero which is not an absolute one),
- Specify basic characteristics of a system,
- Define the language to quantify and describe,
- Choose a way to review, deductive (from previous knowledge) or inductive (following what is happening),
- Specify the dynamic of your system (moving, fixed, restructured, etc.),
- Explain your experimental (empirical) conditions,
- Suggest the evolution, issues, possible states of transition,
- Qualify phenomenologically the kind of formal behavior of your object,
- Record and control or follow the general parameters of your experiment,
- Establish, justify, or calibrate your system of measure,
- Frame the mathematical treatment of your measures (in general or specifically),
- Characterize the equilibrium of your system, especially thermodynamic and systemic constraints,
- Analyze the dynamic of your system and its trajectory (mathematical formalism applied to trajectories),
- Assess the generality of your system and the universe where the laws you established are valid, or put this first, if the laws are true enough to be given from the beginning.