3. The Unknowable Is the Ground of Whatever Is Known
Jorge Luis Borges has a famous short story called "On Exactitude in Science," in which a cartographer makes a map as large as the territory it covers. What can this tell us about the relation between models and the reality that they purport to disclose?
Causal Explanation Doesn’t Disclose Everything That There Is
There is a common joke in the sciences that you’d need a computer the size of the Universe to explain and/or describe it completely, which not only indicates the mismatch between sheer amounts of information and computational capacities but also identifies an essential distinction between “explaining” and “describing.” Explaining requires generalizations about causes and conditions, which means a reduction of the totality of information to discern underlying networks of relations, often called “probability spaces.” “Describing” can be more of a one-to-one mapping of the positive bits of information to whatever there is, but even this reduction of the general interpretive framework of knowing requires some relation between the particular and the general, and thus some interpretation beyond what is given as a “raw” fact or state of affairs. There is no reduction to the raw facts of the Universe because facts require an interpretive framework, either mathematical or linguistic, to frame what counts as a positive bit of information. There is no knowing without generalizations. The quests for “presuppositionless” knowledge in phenomenology, or to rid scientific observation of its “theory-laden” biases, have mostly been abandoned at this point in the history of human knowing.
It might be possible to describe the Universe without understanding it, but even the most basic description includes concepts that may be hidden but are the necessary generalizations that organize and categorize the details, or mere facts, of the Universe. For example, someone perceiving a patch of red may not have a full grasp of the larger situation in which the color is situated, but this basic perception requires the concept of color to organize her sensory data into the experience of red. She is relating a general concept to a particular occurrence. There may be something like pure sensation without the mediation of concepts, especially for human infants and for non-symbolic forms of plant and animal life, but sensation still requires the mediation of a sensory system. A sensory system relates percepts in much the same manner as a generalization but without the abstraction of a concept. However, a sensory system isn’t exactly immediate either because it still requires a relation of difference, and once a language user has entered symbolic mediation through language, even “pure” sensations have already been copulated with the Symbolic.
Alfred North Whitehead wrote that the simplest concept is the joining of two percepts according to a rule. A sensory system also joins at least two different percepts according to a sensory rule. For example, as discussed above, in the constant temperature of the womb, the infant does not perceive temperature. It is only the different temperature of the delivery room that gives the infant its first experience of temperature, without a concept but not without a relation of consistency to difference. In this way, knowing is a generalization of some sort because it relates the absence of the perception of temperature in the consistent temperature of the womb to the perception of temperature in the difference of the delivery room through the nervous system, specifically the thermoreceptors in the skin and the hypothalamus in the brain. The term “knowing” is often reserved for the generalizations given by concepts, but the relations of difference given by sensory systems also offer a kind of generalization to a past condition, so that sensation is also a kind of very rudimentary knowing, or bodily conception, which might be thought of as a sensual object because it is the object of sensation.
But it is also a non-object because it isn’t closed or fully determined, which means that knowing at the most basic level of bodily perception includes unknowing. This is the unknowing that creates the desire for knowing according to the psychoanalytical model of an obstacle that is the cause of desire, which is also the unknowing that motivates the infant’s entrance into the Symbolic for Lacan. The infant learns the language of its caregivers to symbolize and communicate its desire. However, in whatever ways symbolic conceptualization satisfies the desire to know what sensual conceptualization couldn’t understand, symbolic conceptualization frustrates understanding by forming its own barrier to complete knowledge. Any conceptualization, from a basic sensation to a complex mapping of a situation, is formed by a closure of some sort, which might be understood as the closure of identity. A sensory system identifies the situation by joining past percepts to present ones via biological processes of difference detection. A symbolic system of difference, or Derridean “différance,” identifies the situation by joining past percepts and concepts to present ones. A symbolic intention makes this identification by copulating signifiers with the present situation. An identification is a generalization of what is essential to the situation according to some given intention, but this conceptual closure must disavow what is different from or counter to its completion, or to its intention.
This disavowal of what doesn’t fit the general explanation, which is necessary for the closure of identity, is why there is currently no unified “Theory of Everything.” Any disavowal, as psychoanalysis has taught us, is subject to a “return of the repressed,” in the ominous words of Freud. All theories thus far that have claimed to disclose the Universe according to its basic structuring principles have been so plagued by the return of their repressed counterexamples that many thinkers have come to assume that any possible Theory of Everything is only a temporary best guess that will eventually have to be overturned as counterexamples to that new, grand theory accumulate in the mode of Thomas Kuhn’s The Structure of Scientific Revolutions, best known through the overused phrase “paradigm shift” in popular culture. A “Post-Modern” formulation of the concept of paradigm shifts might be something like “There is no meta-paradigm.”
A Theory of Everything would not have to cover every fact about the Universe, like the temperature of every delivery room, plus a whole lot of other specific details, to be complete, but it would have to be able to explain the “conditions of possibility” for any possible fact. Counterexamples are a problem, however, because they “falsify” a theory, rendering it no longer a credible explanation of the Universe. Falsifiability is the cornerstone of scientific knowledge because what isn’t falsifiable isn’t testable, and therefore is neither objective nor “true” according to the scientific definition of truth as correspondence to the “facts” of the Universe, or to the states of affairs of the “real” world. A Theory of Everything wouldn’t have to account for every fact of the Universe because it would be a generalization about the Universe that accounted for how any possible fact was given by its conditions of possibility, or how any given possibility was related to every other possibility. In other words, it would give a causal description of whatever there was and whatever there could possibly be without having to determine or observe each fact. This sort of not knowing everything is still a positive view of knowing because this general knowledge of the conditions of possibility could potentially determine any possible situation, so that there are no truly novel situations, even though most of them have never been realized.
A Closed Totality or an Open Infinity?
There isn’t any lack of determination in a merely undetermined Universe, as there would be if there were any true indeterminacy in it, something like what Emmanuel Levinas distinguishes as an “open infinity” rather than a “closed totality.” Any possible Theory of Everything would require a closed totality, even if it “contained” the determinate infinities of Douglas Hofstadter’s “strange,” recursive loops, to disclose whatever there is. Again, a closed totality is like Hegel’s vertical infinity, which is like the additive infinity of “plus one.” However, if the sort of infinite paradigm shifts of Thomas Kuhn’s scientific revolutions describe an open infinity, then there is no possible final or absolute Theory of Everything. Hegel’s horizontal infinity is an open infinity because it isn’t additive but simultaneous, which relates to Hegel’s unique use of the term “absolute,” which doesn’t mean ultimate or final as it does in most contemporary uses of the term.
An open infinity is the always present relation of determinate being to indeterminate non-being, which “resolves” as the continual process of becoming. In whatever way Hegel’s dialectic of the Absolute is final or complete, it isn’t a synthesis, or it doesn’t determine being without a remainder of irreducible indeterminacy. “Absolute” for Hegel means something like “absolve” in English: to vindicate, acquit, or liberate. Hegel’s Absolute absolves without the total solution, or dissolution, of a synthesis because in whatever way nonbeing negates being, becoming is never complete. Therefore, any present situation is composed of what can be determined and what can’t, and what can’t is the absolute resistance to determination that allows being’s becoming to be open, because indeterminate, and horizontal, because of the simultaneity of finitude and infinity in the present.
For those who believe in an eventual, unified, physical description of the Universe, there is no actual indeterminacy in it, only “unrealized” possibility, which is another way to think about incomputable problems as nonetheless determinate. In this view, typical of scientific materialism, all the Universe’s possibilities have already been actualized, even if not realized, which is the “predetermined” possibility of Hofstadter’s recursive loops, or of Levinas’s closed totality. Determined possibility is without choice or chance because a free choice is grounded in the freedom of chance. Each determinate choice must relate its determination to indeterminacy, or else it is the illusory choice of predetermined possibility. For most determinists, the work of the sciences is to uncover the determined “fields” of the predetermined possibilities of the Universe. The problem has been that indeterminacy seems like a permanent feature of the Universe rather than a bug, so much so that at the level of quantum fields, “quantum indeterminacy” appears to be structural rather than epiphenomenal, which has meant that quantum fields, the most basic building blocks of the Universe’s possibilities, aren’t describable in terms of determinate causality but rather in terms of indeterminate probabilities.
Chaos provides an illustrative counterpoint to indeterminate probabilities because it is a determinate field or possibility space. Chaos is potentially computable given the right inputs and formulations, even though it is notoriously hard to predict, because chaos describes the behavior of a system that is extremely sensitive to seemingly minor variables and also extremely resistant to seemingly large ones. The weather is a chaotic system that is increasingly predictable because these variables have been increasingly accounted for and put into more accurate mathematical relations with each other. Hypothetically, any remaining uncertainty about the weather can be addressed by better inputs and subtler formulations. Chaos is a complex system, and because of its characteristic mismatch between inputs and results, it has been used as an example of how complexity is determinate even when it appears to be “chaotic,” not chaotic in the formal sense but chaotic in the more colloquial sense of indeterminate or without reasons. The concept of “emergence” is evoked whenever there is an apparent non-coincidence between inputs and outputs in complex systems. Complexity has become a sort of catch-all explanation for whatever appears to be without determination but nonetheless is determined, especially such emergent phenomena as the illusion of a free choice, or the even grander illusion of consciousness.
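The determinate-but-unpredictable character of chaos can be illustrated with the logistic map, a standard textbook example that is not drawn from this text: the rule is fully deterministic, yet two trajectories starting a billionth apart soon diverge completely.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). At r = 4.0 the map is chaotic:
# fully deterministic, yet extremely sensitive to its initial condition.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # one starting point
b = logistic_trajectory(0.2 + 1e-9)   # perturbed by one part in a billion

# The tiny initial difference is amplified until the trajectories decorrelate.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(f"initial gap: 1e-09, largest later gap: {max_gap:.3f}")
```

Nothing indeterminate happens here: the same inputs always yield the same outputs, and prediction fails only because any finite precision in the inputs is eventually amplified. That is the sense in which chaos is determinate even when it appears, colloquially, “chaotic.”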
Incomputable problems demonstrate a mismatch between computation and information that is different from the reducible indeterminacy of chaos, although the indeterminacy of both isn’t indeterminate in itself but merely undetermined. The un-determinacy of incomputable problems is not reducible, but it is determinable, even if incomputable problems produce too much undeterminable information to compute. Incomputable problems can’t reduce situations to the general formulas of solvable problems because these general formulas produce more undetermined particularity, or difference, than can be resolved within the set defined by their computations, which creates an undeterminable, but closed and determined, recursive loop without any actual indeterminacy or open possibility.
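The canonical incomputable problem is Turing’s halting problem, and its diagonal argument makes this recursive loop concrete: any total “decider” can be stepped outside of and defeated by a program built from the decider itself. The sketch below is my illustration, not the author’s; the stand-in decider and the names are hypothetical, and the same construction defeats any possible implementation.

```python
def claimed_halts(program):
    """Stand-in for any alleged total halting decider.
    This one always guesses True; the diagonal construction
    below defeats every possible implementation the same way."""
    return True

def diagonal():
    # Do the opposite of whatever the decider predicts about diagonal itself.
    if claimed_halts(diagonal):
        while True:      # decider said "halts" -> loop forever (decider wrong)
            pass
    # decider said "loops" -> halt immediately (decider wrong the other way)

# We can inspect the prediction without running diagonal():
print(claimed_halts(diagonal))  # True, yet diagonal() would never return
```

Whatever claimed_halts computes, diagonal is constructed one step “outside” it: the decider’s generalization produces a particular case it cannot resolve, yet nothing here is indeterminate; every step is fully determined.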
There is another related thought experiment about knowing the Universe in a complete way, associated with Jorge Luis Borges’s story “On Exactitude in Science.” Borges demonstrates that a map of everything is useless unless it reduces information to a more general accounting of the causes and conditions of that information. The higher-order ability to generalize is what prediction machines, forged by both evolution and the Symbolic, were designed to do. This is the material cause of the Symbolic, which is the relation between the Symbolic and natural selection according to evolutionary biology. Whatever virtuality concepts allow human prediction machines to access, it is ultimately a determined virtuality. Imagined possibilities could only ever be uncovered or discovered possibilities, which means that all possibilities have already been actualized by the determinate initial conditions of the Universe and only realized as the determined relation of space-time with matter-energy according to the physical laws.
A complete account of every detail of the Universe would be worse than useless, as Borges’s map of the totality of the territory shows, because pattern recognition is what matters for uncertainty reduction. The right ratio of particularity to generality is what makes knowing either advantageous or useless. An intention outside of any set of mere facts must generalize about them to “know” the situation and make accurate predictions about what might happen next, which is another formulation of how what is outside the set or situation determines the inside according to criteria that are not inside the situation itself. As has already been mentioned, causal structures are not “in” the situations that they determine but are imposed on them from an outside intention. The possibility space that gave rise to the present situation must be inferred or deduced from past situations, so that generalizations about the present are always from the past. A system of generalizations, like mathematics and language, relates the present to the past, so that if we only measured the situation’s quantities without generalizing about its causes and conditions as repetitions of past structures, we would not reduce any uncertainty about the situation or really be said to understand it in any way. What’s more, even the mere measurements of the present situation that appear to be without generalization are themselves generalizations, and so are not actually in the present, but tools, or concepts, brought into the present to generalize about it, so that it can be known. All knowing is this relation between the novel particularity of the present and the Whiteheadian “generals” of the past.
The particulars of a given situation must be related to each other via generalizations from an intention outside of the particular situation, which is the relation between the outside definition of a set and its inside operations. The general criteria for any given intention are evaluated from the outside of that intention, so the outside is the recursive vantage point that is always one step outside of itself. There is no ultimate outside, or no ultimate intention, which can evaluate itself according to its own criteria, but always one step back “into” the outside of each intention’s intention. Every present situation is composed of what can be determined about it with the given concepts of the Symbolic and what can’t. This latter portion of any possible present situation is unintentional because it can’t be objectified in the phenomenological intention nor conceptualized by the symbolic intention.
The Universe is then composed of repetition and difference, as Deleuze so famously put it, or of “generals” and particulars, as Whitehead put it. The material determinist holds that whatever difference or particularity there is in the Universe, it can ultimately be reduced to determinate repetitions or general causes, according to discoverable, physical necessity. The material indeterminist holds that whatever difference is left after the physical determination of a given situation, it is the particular indeterminacy that makes each situation singular. This absolute remainder of indeterminacy, which Jacques Lacan called “the Real,” is irreducibly ambiguous, so that this ambiguity is the infinity that opens itself up to the multiplicity of symbolic interpretation beyond the univocality of scientific identification or determination.
This is not a rejection of science nor of materialism per se, although most scientists see any assertion of irreducible ambiguity that way. Pessimist positions with regard to complete knowing, usually associated with some flavor of Panpsychism, but also with various newer formulations like Material Idealism, Idealist Materialism, Pan-experientialism, Mysterianism, and certain types of dual-aspect monism, all hold that there is some as yet undetected, more “spiritual,” in the sense of ineffable, aspect of matter-energy beyond the determinations of the physical laws, usually something like consciousness, or qualia, that is as real as matter’s measurable quantities. For the material determinist, quantities and their causal relations are all that are needed to explain and describe the Universe in full. For the material indeterminist, a full description of the Universe includes what cannot be determined about it, which is its irreducible indeterminacy or openness to symbolic interpretation. This is the optimism of these pessimists about knowing mentioned in the introduction: there are other sorts of valid knowing beyond those of scientific measurements and the mathematics of algorithms. If there is an undeterminable, or “absolute,” indeterminacy, there cannot be a Theory of Everything because the Universe cannot be fully known, but this absolute lack of determination is an excess of interpretability, or the multiplicity of being given by the indeterminacy of nonbeing.
Thinking, as well as the consciousness that is “epiphenomenal” to it, is thought of as a material process by the modern sciences, so it is largely assumed that both must be structured like algorithms, which is why both neuroscience and evolutionary psychology understand thinking and consciousness under the rubric of a “prediction machine” for the “reduction of uncertainty.” In whatever way we are algorithms, or our intentions are algorithmic, we are also able to step outside of any given one of these algorithmic intentions to evaluate and generalize about them. This is the sense in which we are Douglas Hofstadter’s “strange loops,” but we seem able to do this indefinitely in a way that other assemblages of algorithms can’t. The algorithms of artificial intelligence are now able to generalize and “self-evaluate” by recognizing patterns, making predictions, and reprogramming themselves better than we can. But we are still their ultimate outside because we are still the ones who generalize their generalizations relative to our recursive intentions, which are the processes described as “relevance realization” by John Vervaeke. Algorithms realize relevance in accordance with their given intention, which is like the relation between a definition and the set that it defines. Once the intention has been established, the program becomes recursive within itself, as it finds patterns and procedures relevant to the defined set of its intention. It assembles and reassembles the sets of algorithms that define it according to the intention given to it by its programmer. A defined set of algorithms might be thought of metaphorically as a “self.” In whatever way programs become recursive sets, or selves, they can’t step outside of their given intentions, or step outside of “themselves,” in the infinitely recursive way that human selves can.
Human programmers often say that their programs have all sorts of unintended consequences, which would seem to demonstrate that their programs transgress the boundaries of themselves by getting outside of the programmers’ original intentions. But whatever surprising solutions to the programmers’ requests these programs find, they still find them from the “inside” of the programmers’ intentions, including from within the apparently “open” intention of Large Language Models to respond appropriately to human communication.