Part 1: What Cannot Be Known.

Part one of a series about the possibility of complete knowledge. There is currently no "Theory of Everything," but what are the conditions of possibility for knowing? Is it possible that the ground of our knowing is unknowing?

Part One: Is Whatever There Is, Knowable? 

Is there still a place for mystery in the age of “natural” science? Another way to frame the question: are the techniques of science sufficient for disclosing whatever there is? This isn’t exactly the either/or proposition it may at first appear to be. Those who believe that complete knowing is possible may be thought of as optimists, and those who hold that the “total” or the “all” of existence can never be grasped may seem pessimists. But there are subtleties in each position that can confuse negation and affirmation: total knowledge pessimists are often optimistic about sorts of knowing that scientific Positivists are not, which of course means that knowledge optimists are pessimistic about sorts of knowing that pessimists are not. And it is often the case that total knowledge pessimists are optimistic about the absolute mystery of existence, or being, because of their optimism about the possibilities of being, which for them is derived from their positive view of being as too excessive to be grasped with concepts.

In general, the scientific world view holds that whatever mystery is left in the world is theoretically knowable because complete knowledge about whatever there is, or complete knowledge about the Universe, consists of a finite amount of information, so that with enough research and time and, perhaps, with the help of AI’s storage and processing capacities, a complete map of reality is hypothetically possible. This view holds that the Universe is a collection of provable facts, so that what is true is true according to the correspondence of language to objective situations, or “states of affairs.” This positive view of knowing, sometimes called a “Theory of Everything,” is in line with modern Information Theory’s belief that whatever there is, is either in itself a quantity or is quantizable in the sense that it can be represented as digitized “bits” of information. Whatever incomplete determinations there are about the Universe, or whatever indeterminacies remain after the determinations of the modern sciences, they are the results of the non-coincidence between amounts, or quantities, of information and computational capacities, and not inadequacies inherent to information nor to the Universe to which information corresponds. And whatever there is that isn’t quantizable doesn’t exist, but “is” a kind of illusion that “is” somehow “epiphenomenal” to whatever there is.

Positivism’s Optimistic Take on Knowing 

There are incomputable problems, but for the positivist these unsolvable complications don’t point to any inherent lack within information per se. There is a provable, or “necessary,” incompleteness within the structure of computation itself, but not within mathematical and scientific knowing. Incomputable problems demonstrate that there is a necessary mismatch between algorithms and the Universe’s production of possibilities. They do not necessarily obviate a positive view of complete knowledge, even though the Universe’s production of information outpaces any conceivable computational process. Scientific knowledge is always incomplete because information is always quantitatively more than the capacity of any specific algorithmic intention, and not because all knowledge isn’t ultimately reducible to quantities.

If whatever there is, is reducible to information, then this is still a positive view of knowing because information’s ability to disclose the Universe is total, even if necessarily incomplete as far as working out every possible permutation goes. The general computations that describe the possible physical states of the Universe will someday suffice to disclose whatever there is because they will reveal a complete picture of its causal structure. For the modern sciences there are no causes other than physical causes, so this physicalist account of the positive mapping of one-to-one, causal relations is all that there is to be known about what is. This view accords with modern Information Theory’s idea that knowing is knowing about these kinds of positivistic correlations between the zeros and ones of binary code and what is or isn’t real, in which the zeros correspond to “isn’t” and the ones correspond to “is.” What counts as ontologically real is what can be counted, and what can be counted is information. This is the scientific positivism that boldly makes the extraordinary ontological claim that the Universe just is information and nothing else.
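As a small, concrete illustration of the picture this positivism assumes, here is a minimal Python sketch (the message is invented for illustration) of how anything representable can be rendered as zeros and ones without remainder:

```python
# A minimal sketch of the positivist picture: whatever can be represented
# can be represented as bits. An arbitrary string becomes a sequence of
# zeros and ones, and nothing of it is lost in the encoding.

message = "is"
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
print(bits)  # 0110100101110011

restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(restored.decode("utf-8"))  # "is"
```

The positivist wager is that the Universe is exhaustively like this message: nothing about it resists such an encoding.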

An incomputable problem is one in which algorithmic formulation is unable to keep up with the increasing number of variables, permutations, and procedures of the problem’s formulation, as Alan Turing’s famous “Halting Problem” showed. The Halting Problem proved that there is no hypothetical, ultimate algorithm that can determine whether any given program will finish. Like Kurt Gödel’s Incompleteness Theorem, the Halting Problem demonstrated that the truth conditions of a set cannot be proven from within that set, and thus there is no set of all other possible sets, which means that there is no algorithm that can decide whether any given problem is computable or not. This necessary indeterminacy is still compatible with the determinism of scientific information, as Douglas Hofstadter showed with his famous “recursive loops.”
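To make the shape of Turing’s argument concrete, here is a minimal Python sketch of the standard diagonalization; the functions `halts` and `paradox` are illustrative names, not real library calls, and the point is precisely that `halts` cannot be implemented:

```python
# Sketch of Turing's diagonalization. Assume, for contradiction, a decider
# halts(program, data) that returns True iff program(data) eventually
# finishes. No such general decider can exist.

def halts(program, data):
    raise NotImplementedError("no general implementation is possible")

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # this program run on its own source.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop, so halt at once

# paradox(paradox) halts if and only if it doesn't halt: either answer
# from halts refutes itself, so the assumed decider is impossible.
```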

The determination of any given situation always comes from an indeterminate outside of that situation, but each exterior indeterminacy can be determined by an outside relative to which it is interior. Again, as above, there is a mismatch between the amount of information and the computational capacities for these amounts because there is always the “one more” of the outside of the computation that defines the parameters of the computation. This “add one” sort of recursion is the sort of infinity that Hegel thought of as a “vertical” infinity because it is a determinate infinity. Each additional recursion is determined by the next, so that any given situation is determined, even if its determinations haven’t been worked out by a computer of some kind. The determination of any given situation comes from outside the situation, but this outside is itself determinable by the computations of mathematical and scientific descriptions, which are the necessary and sufficient reasons of determinate, causal relations.

Set Theory is a helpful way into the determinate nature of Hofstadter’s “Strange Loops.” The definition of a set determines what belongs “in” the set and what doesn’t. These outside definitions are the “truth conditions” of that set, but those truth conditions only determine what is in the set, not what determines them. The necessary and sufficient reasons of determinate, causal explanations do not establish the necessary and sufficient reasons that define those defining reasons. If we want to know the cause of a cause, then we must back up one step to include that cause in a larger set that accounts for it. This infinite, but determinate, recursion can be seen most clearly at the current limits of the natural laws: they have no causal explanations for themselves beyond themselves, yet they act as the causal explanations of all other physical relations.
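A toy Python sketch can make this regress concrete; the rule names here (`is_even`, `rules`) are invented for illustration:

```python
# A set's "truth condition" lives outside the set it defines.

def is_even(n):                 # the defining rule of the set below
    return n % 2 == 0

evens = {n for n in range(10) if is_even(n)}   # {0, 2, 4, 6, 8}
# evens contains numbers, never its own rule: is_even is not a member
# of the set it determines.

rules = {is_even}               # step back one level to include the rule
# ...but rules is itself gathered under a further, outside criterion
# (here, simply "rules I chose to include"), which is again not a member
# of rules. Each recursive step adds one more outside; the regress of
# "the cause of the cause" never closes from within.
```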

The truth conditions of the natural laws may someday be determined by truth conditions beyond them, but those truth conditions will have to either stand as uncaused causes or someday be explained by something beyond them as well. Whatever determines the current physical laws might in turn be determined by something outside of them, but even if these explanations temporarily appear as if absolute or foundational, they do not explain themselves. An “absolute” foundation in physical science would be something like an “uncaused cause,” or a “raw fact,” that “just is” without any further explanation. Each recursive step expands what the previous set had determined by now including what had been the outside determination of that set, but there will always be a determinate, even if undetermined, outside of any given set, no matter how expanded the set has become through this recursive process of inclusion. Whatever is determined by mathematical computation or by scientific observation does not increase determination nor decrease indeterminacy, because computation and observation only uncover what has already been determined by the causal chain of the physical laws of the Universe, which is Einstein’s already-finished “Block Universe.”

In this way, knowledge may be perpetually incomplete for the material determinist, but there is no lack, either “in” or “of,” information. There is only the lack of algorithmic or computing capacity discussed above in terms of the Halting Problem and its relation to Set Theory: there is always at least one more step outside any given algorithm, or set, that determines the parameters of that computation or set. This is the classical problem of the interminability of the causal chain, which will be discussed in the next section. This is also the problem with the positivism of Information Theory and the sciences in general. A digitizable “bit” of positive information has a hidden, or absent, causal chain that can’t be accounted for by direct observation, so its causes must be deduced or inferred from what is to what isn’t any longer, as David Hume famously brought to Kant’s attention when he pointed out that causes cannot be observed in the present but are conjectures about the past. Any complete account of whatever there is must include its unobservable causal structure, which requires not only a complete description of what is, but also a general explanation of how it came to be.

Why a Total Description Requires Causal Explanation 

There is a common joke in the sciences that you’d need a computer the size of the Universe to explain and/or describe it completely, which not only indicates the mismatch between sheer amounts of information and computational capacities, but also identifies an essential distinction between “explaining” and “describing.” Explaining requires generalizations about causes and conditions, which means a reduction of the totality of information to discern underlying networks of relations, often called “probability spaces.” Describing can be more of a one-to-one mapping of the positive bits of information to whatever there is, but even this reduction of the general interpretive framework of knowing requires some relation between the particular and the general, and thus some interpretation beyond what is given as a “raw” fact or state of affairs. There is no reduction to the raw facts of the Universe because facts require an interpretive framework, either mathematical or linguistic, to frame what counts as a positive bit of information. There is no knowing without generalizations. The quests for “presuppositionless” knowledge in phenomenology, or to rid scientific observation of its “theory-laden” biases, have mostly been abandoned at this point in the history of human knowing.

It might be possible to describe the Universe without understanding it, but even the most basic description includes concepts that may be hidden but are the necessary generalizations that organize and categorize the details, or mere facts, of the Universe. For example, someone perceiving a patch of red may not have a full grasp of the larger situation in which the color is situated, but this basic perception requires the concept of color to organize her sensory data into the experience of red. She is relating a general concept to a particular occurrence. There may be something like pure sensation without the mediation of concepts, especially for human infants and for non-symbolic forms of plant and animal life, but sensation still requires the mediation of a sensory system. A sensory system relates percepts in much the same manner as a generalization but without the abstraction of a concept. However, a sensory system isn’t exactly immediate either, because it still requires a relation of difference, and once a language user has entered symbolic mediation through language, even “pure” sensations have already been copulated with the Symbolic.

Alfred North Whitehead wrote that the simplest concept is the joining of two percepts according to a rule. A sensory system also joins at least two different percepts according to a sensory rule. For example, in the constant temperature of the womb, the infant does not perceive temperature. It is only the different temperature of the delivery room that gives the infant its first experience of temperature, without a concept but not without a relation of consistency to difference. In this way, knowing is a generalization of some sort because it relates the absence of the perception of temperature in the consistent temperature of the womb to the perception of temperature in the difference of the delivery room through the nervous system, specifically the thermoreceptors in the skin and the hypothalamus in the brain. The term “knowing” is often reserved for the generalizations given by concepts, but the relations of difference given by sensory systems also offer a kind of generalization to a past condition, so that sensation is also a kind of very rudimentary knowing, or bodily conception, which might be thought of as a sensual object because it is the object of sensation.

But it is also a non-object because it isn’t closed or fully determined, which means that knowing at the most basic level of bodily perception includes unknowing. This is the unknowing that creates the desire for knowing according to the psychoanalytic model of an obstacle that is the cause of desire, which is also the unknowing that motivates the infant’s entrance into the Symbolic for Lacan. The infant learns the language of its caregivers to symbolize and communicate its desire. However, in whatever ways symbolic conceptualization satisfies the desire to know what sensual conceptualization couldn’t understand, symbolic conceptualization frustrates understanding by forming its own barrier to complete knowledge. Any conceptualization, from a basic sensation to a complex mapping of a situation, is formed by a closure of some sort, which might be understood as the closure of identity. A sensory system identifies the situation by joining past percepts to present ones via biological processes of difference detection. A symbolic system of difference, or Derridean “différance,” identifies the situation by joining past percepts and concepts to present ones. A symbolic intention makes this identification by copulating signifiers with the present situation. An identification is a generalization of what is essential to the situation according to some given intention, but this conceptual closure must disavow what is different from or counter to its completion, or to its intention.

This disavowal of what doesn’t fit the general explanation, which is necessary for the closure of identity, is why there is currently no unified “Theory of Everything.” Any disavowal, as psychoanalysis has taught us, is subject to a “return of the repressed,” in the ominous words of Freud. All theories thus far that have claimed to disclose the Universe according to its basic structuring principles have been so plagued by the return of their repressed counterfactuals that many thinkers have come to assume that any possible Theory of Everything is only a temporary best guess that will eventually have to be overturned as counterfactuals to that new, grand theory accumulate, in the mode of Thomas Kuhn’s The Structure of Scientific Revolutions, best known through the overused phrase “paradigm shift” in popular culture. A “Post-Modern” formulation of the concept of paradigm shifts might be something like “There is no meta-paradigm.”

A Theory of Everything would not have to cover every fact about the Universe, like the temperature of every delivery room, plus a whole lot of other specific details, to be complete, but it would have to be able to explain the “conditions of possibility” for any possible fact. However, counterfactuals are a problem because they “falsify” a theory, discrediting it as an explanation of the Universe. Falsifiability is the cornerstone of scientific knowledge because what isn’t falsifiable isn’t provable, and therefore is neither objective nor “true” according to the scientific definition of truth as correspondence to the “facts” of the Universe, or to the states of affairs of the “real” world. A Theory of Everything wouldn’t have to account for every fact of the Universe because it would be a generalization about the Universe that would account for how any possible fact was given by its conditions of possibility, or how any given possibility was related to every other possibility. In other words, it would give a causal description of whatever there was and whatever there could possibly be without having to determine or observe each fact. This sort of not knowing everything is still a positive view of knowing because this general knowledge of the conditions of possibility could potentially determine any possible situation, so that there are no truly novel situations, even though most of them have never been realized.

A Closed Totality or an Open Infinity? 

There isn’t any lack of determination in an undetermined Universe, as there would be if there were any true indeterminacy in the Universe, which would be something like Emmanuel Levinas’s distinction between an “open infinity” and a “closed totality.” Any possible Theory of Everything would require a closed totality, even if it “contained” the determinate infinities of Douglas Hofstadter’s “strange,” recursive loops, to disclose whatever there is. Again, a closed totality is like Hegel’s vertical infinity, which is like the additive infinity of “plus one.” However, if the sort of infinite paradigm shifts of Thomas Kuhn’s scientific revolutions describe an open infinity, then there is no possible final or absolute Theory of Everything. Hegel’s horizontal infinity is an open infinity because it isn’t additive but simultaneous, which relates to Hegel’s unique use of the term “absolute,” which doesn’t mean ultimate or final as it does in most contemporary uses of the term.

An open infinity is the always present relation of determinate being to indeterminate non-being, which “resolves” as the continual process of becoming. In whatever way Hegel’s dialectic of the Absolute is final or complete, it isn’t a synthesis, or it doesn’t determine being without a remainder of irreducible indeterminacy. “Absolute” for Hegel means something like “absolve” in English, which means vindicate, acquit, or liberate. Hegel’s Absolute absolves, without the total solution, or dissolution, of a synthesis, because in whatever way nonbeing negates being, becoming is never complete. Therefore, any present situation is composed of what can be determined and what can’t, and what can’t is the absolute resistance to determination that allows being’s becoming to be open, because indeterminate, and horizontal, because of the simultaneity of finitude and infinity in the present.

For those who believe in an eventual, unified, physical description of the Universe, there is no actual indeterminacy in it, only “unrealized” possibility, which is another way to think about incomputable problems as nonetheless determinate. In this view, typical of scientific materialism, all the Universe’s possibilities have already been actualized, even if not realized, which is the “predetermined” possibility of Hofstadter’s recursive loops, or of Levinas’s closed totality. Determined possibility is without choice or chance because a free choice is grounded in the freedom of chance. Each determinate choice must relate its determination to indeterminacy, or else it is the illusory choice of predetermined possibility. For most determinists, the work of the sciences is to uncover the determined “fields” of the predetermined possibilities of the Universe. The problem has been that indeterminacy seems like a permanent feature of the Universe rather than a bug, so much so that at the level of quantum fields, “quantum indeterminacy” appears to be structural rather than epiphenomenal, which has meant that quantum fields, the most basic building blocks of the Universe’s possibilities, aren’t describable in terms of determinate causality, but rather in terms of indeterminate probabilities.

Chaos provides an illustrative counterpoint to indeterminate probabilities because it is a determinate field, or possibility space. Chaos is potentially computable given the right inputs and formulations, even though it is notoriously hard to predict, because chaos describes the behavior of a system that is extremely sensitive to seemingly minor variables and extremely resistant to seemingly large ones. The weather is a chaotic system that is increasingly predictable because these variables have been increasingly accounted for and put into more accurate mathematical relations with each other. Hypothetically, any remaining uncertainty about the weather can be addressed by better inputs and subtler formulations. Chaos is a complex system, and because of its characteristic mismatch between inputs and results, it has been used as an example of how complexity is determinate even when it appears to be “chaotic,” not chaotic in the formal sense, but chaotic in the more colloquial sense of indeterminate or without reasons. The concept of “emergence” is evoked whenever there is an apparent non-coincidence between inputs and outputs in complex systems. Complexity has become a sort of catch-all explanation for whatever appears to be without determination but nonetheless is determined, especially such emergent phenomena as the illusion of a free choice, or the even grander illusion of consciousness.
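The logistic map is the textbook miniature of this point: every step is fully determined, yet nearby starting points diverge beyond any practical prediction. A minimal Python sketch, with parameter values chosen only for illustration:

```python
# Deterministic chaos in one line of arithmetic: the logistic map
# x_{n+1} = r * x_n * (1 - x_n) at r = 4.0. Two trajectories that begin
# a millionth apart end up completely uncorrelated, though every step
# follows the same fixed rule.

def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.200000, 0.200001     # nearly identical initial conditions
for _ in range(50):
    a, b = step(a), step(b)

print(abs(a - b))             # the gap is now of order 1, not 1e-6
```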

Incomputable problems demonstrate a mismatch between computation and information that is different from the reducible indeterminacy of chaos, although the indeterminacy of both isn’t indeterminate in itself but merely undetermined. The un-determinacy of incomputable problems is not reducible, but it is determinable, even if incomputable problems produce too much undeterminable information to compute. Incomputable problems can’t reduce situations to the general formulas of solvable problems because these general formulas produce more undetermined particularity, or difference, than can be resolved within the set defined by their computations, which creates an undeterminable, but closed and determined, recursive loop without any actual indeterminacy or open possibility.

There is another related thought experiment about knowing the Universe in a complete way, associated with Jorge Luis Borges’s story “On Exactitude in Science.” Borges demonstrates that a map of everything is useless unless it reduces information to a more general accounting of the causes and conditions of that information. The higher-order ability to generalize is what prediction machines, forged by both evolution and the Symbolic, were designed to do. This is the material cause of the Symbolic, which is the relation between the Symbolic and natural selection according to evolutionary biology. Whatever virtuality concepts allow human prediction machines to access, it is ultimately a determined virtuality. Imagined possibilities could only ever be uncovered or discovered possibilities, which means that all possibilities have already been actualized by the determinate beginning conditions of the Universe and only realized as the determined relation of space-time with matter-energy according to the physical laws.

A complete account of every detail of the Universe would be worse than useless, as Borges’s map of the totality of the territory shows, because pattern recognition is what matters for uncertainty reduction. The right ratio of particularity to generality is what makes knowing either advantageous or useless. An intention outside of any set of mere facts must generalize about them to “know” the situation and make accurate predictions about what might happen next, which is another formulation of how what is outside the set or situation determines the inside according to criteria that are not inside the situation itself. As has already been mentioned, causal structures are not “in” the situations that they determine but are imposed on them from an outside intention. The possibility space that gave rise to the present situation must be inferred or deduced from past situations, so that generalizations about the present are always from the past. A system of generalizations, like mathematics and language, relates the present to the past, so that if we only measured the situation’s quantities without generalizing about its causes and conditions as repetitions of past structures, we would not reduce any uncertainty about the situation or really be said to understand it in any way. What is more, even the mere measurements of the present situation that appear as if without generalization are generalizations, and so are not actually in the present but are tools, or concepts, brought into the present to generalize about it so that it can be known. All knowing is this relation between the novel particularity of the present and the Whiteheadian “generals” of the past.

The particulars of a given situation must be related to each other via generalizations from an intention outside of the particular situation, which is the relation between the outside definition of a set and its inside operations. The general criteria for any given intention are evaluated from the outside of that intention, so the outside is the recursive vantage point that is always one step outside of itself. There is no ultimate outside, and no ultimate intention that can evaluate itself according to its own criteria; evaluation always comes from one step back “into” the outside of each intention’s intention. Every present situation is comprised of what can be determined about it with the given concepts of the Symbolic and what can’t. This latter portion of any possible present situation is unintentional because it can’t be objectified in the phenomenological intention nor conceptualized by the symbolic intention.

The Universe is then comprised of repetition and difference, as Deleuze so famously put it, or of “generals” and particulars, as Whitehead put it. The material determinist holds that whatever difference or particularity there is in the Universe, it can ultimately be reduced to determinate repetitions or general causes, according to discoverable, physical necessity. The material indeterminist holds that whatever difference is left after the physical determination of a given situation is the particular indeterminacy that makes each situation singular. This absolute remainder of indeterminacy, which Jacques Lacan called “the Real,” is irreducibly ambiguous, so that this ambiguity is the infinity that opens itself up to the multiplicity of symbolic interpretation that is beyond the univocality of scientific identification or determination.

This is not a rejection of science nor of materialism per se, although most scientists see any assertion of irreducible ambiguity that way. Those in this pessimist position with regard to complete knowing, usually associated with some flavor of Panpsychism, but also with various newer formulations like Material Idealism, Idealist Materialism, Pan-experientialism, Mysterianism, and certain types of dual-aspect monism, all hold that there is some as yet undetected, more “spiritual,” in the sense of ineffable, aspect of matter-energy beyond the determinations of the physical laws, which is usually something like consciousness, or qualia, that is as real as matter’s measurable quantities. For the material determinist, quantities and their causal relations are all that are needed to explain and describe the Universe in full. For the material indeterminist, a full description of the Universe includes what cannot be determined about it, which is its irreducible indeterminacy, or openness to symbolic interpretation. This is the optimism of these pessimists about knowing mentioned in the introduction: there are other sorts of valid knowing beyond those of scientific measurements and the mathematics of algorithms. If there is an undeterminable indeterminacy, or “absolute” indeterminacy, there cannot be a Theory of Everything because the Universe cannot be fully known, but this absolute lack of determination is an excess of interpretability, or the multiplicity of being given by the indeterminacy of nonbeing.

Thinking, as well as the consciousness that is “epiphenomenal” to it, is thought of as a material process by the modern sciences, which largely assume that both must be structured like algorithms, which is why both neuroscience and evolutionary psychology understand thinking and consciousness under the rubric of the “prediction machine” for the “reduction of uncertainty.” In whatever way we are algorithms, or our intentions are algorithmic, we are also able to step outside of any given one of these algorithmic intentions to evaluate and generalize about them. This is the sense in which we are Douglas Hofstadter’s “strange loops,” but we seem able to do this indefinitely in a way that other assemblages of algorithms can’t. The algorithms of artificial intelligence are now able to generalize and “self-evaluate” by recognizing patterns, making predictions, and reprogramming themselves better than we can. But we are still their ultimate outside because we are still the ones who generalize their generalizations relative to our recursive intentions, which are the processes described as “relevance realization” by John Vervaeke.

Algorithms realize relevance in accordance with their given intention, which is like the relation between a definition and the set that it defines. Once the intention has been established, the program becomes recursive within itself, as it finds patterns and procedures relevant to the defined set of its intention. It assembles and reassembles the sets of algorithms that define it according to the intention that has been given to it by its programmer. A defined set of algorithms might be thought of metaphorically as a “self.” In whatever way programs become recursive sets, or selves, they can’t step outside of their given intentions, or step outside of “themselves,” in the infinitely recursive way that human selves can. Human programmers often say that their programs have all sorts of unintended consequences, which would seem to demonstrate that their programs transgress the boundaries of themselves by getting outside of the programmers’ original intentions. But whatever surprising solutions to the programmers’ requests these programs find, they still find them from the “inside” of the programmers’ intentions, including from within the apparently “open” intention of Large Language Models to respond appropriately to human communication.

The Self-Alienation of Prediction Machines 

However open this intention may seem, it is closed because it is the intention to respond appropriately or accurately to human language. Appropriate and accurate responses are defined by the probabilities of human language, so responses will always be an average of some kind. Whatever deviations from the norm appear in the responses of LLMs, they are not devious in the way that human behavior is. An LLM’s response is formed from the relation of its programmer’s intention to the determinate probabilities of human communication, as any prediction machine’s behavior would be, including humans’ if their intentions were the mere relation of their evolutionary and cultural programming to uncertainty reduction. But the human intention can step outside the intention of the prediction machine to ask, “What is any of this uncertainty reduction for?” This self-alienation is how Hegel thought of the recursion of making an object out of oneself, which is the object/subject recursive loop that touches the open infinity of non-being, and is why Heidegger said that the human being is the being that makes being into a question.

LLMs may realize relevance by forming the recursive loops of tokens and probabilities, but none of this recursion “realizes” what it is realizing, as the strange loops of human cognition do. Human language is not an internal loop of tokens and probabilities, because tokens touch determinate probabilities while language touches the absolute indeterminacy of the Lacanian Real, which is the outside Other that resists internal recursion absolutely. Human intentions not only encounter what is unintentional but what resists their intentions absolutely, not because of a lack of determination or a lack of processing capabilities, but because the ground of the human intention is the without-intention of the void. The ultimate intention of the Universe is nothing, but all possible intentions arise out of this pregnant nothing.

This nothingness of the “groundless ground” of the Universe was discovered first by Daoist philosophers, especially those of the Laozi and Zhuangzi variety, who preached the purposeless intention of “Wu Wei,” as well as by the “Advaita Vedanta” schools of Hindu philosophy, especially those that proclaimed the “neti neti” of absolute emptiness. Any of the Buddhist schools that taught the “no-self” doctrine preached it so that one might uncover the utter lack of substance that grounded every substantial object, intention, or self. This nothing was then discovered again by the Apophatic, or mystical, theologians of the West, who arose in the wake of the utter emptiness of Plotinus’s “One” and Pseudo-Dionysius’s “super-essential,” twice-negated darkness, which is the nothingness tradition that culminated in German Idealism, especially Schopenhauer, Hegel, and Nietzsche, as well as the “Existentialists,” especially Kierkegaard, Heidegger, and Sartre, only one of whom acknowledged that label, but all of whom acknowledged the void as not only the historical ground of being but also as the ever-present source of the relation of “otherness” to the intention that becomes the internal self as the outside Other.

The human intention touches this groundless ground every time it encounters the outside Other of its intention, which is the experience of the non-object as what is both the beyond and the horizon of the intention, which Kierkegaard and the other so-called “Existentialists” that came after him formulated as the everyday experience of “anxiety.” This anxiety about the non-object grounds all other objectifications, just as the Real’s absolute resistance to symbolization grounds the Symbolic. The first traumatic encounter of the body with its outside is when it is outside the womb. The next is when it encounters the symbolic intentions of the outside Other and relates them to its own internal desire, which is the internal otherness upon which the symbolic intention of the self is constructed as the relation of the Lacanian Symbolic to the Real.

Human intention is inside as the outside Other, so the unintentional ground of the intention is the “near-far,” as the Mystics have often put it, of the everydayness of being in the world. The human self is always at least one step outside of itself, even when it is felt as the interior self, because of this recursive relation between nearness and farness, interiority and exteriority, self and other, and the oneness of intention and the multiplicity of purposes. The human self is outside of itself because it touches the Real, and it is internal to itself because it is the self that is given to itself by the language of the outside Other in this unending loop of self as Other. Human recursion is the infinite recursion of the self with what is other than itself, or of the unity of intention with the multiplicity that resists the unification of identity. Artificial Intelligence of any sort is an extension of this recursive intention, but AI cannot be other than itself, as human intentions can be, even when AI appears different from, or outside of, its programmer’s intention.  

The human intention is the recursive relation between the intention and the unintentional ground from which it came, which is the recursive dialectic of Hegelian being and nonbeing. Algorithms form Hofstadter’s “strange loops” within themselves, but the recursion of the human intention is even stranger. The human intention of the psychoanalytic subject appears from the absolute outside of itself in the Freudian unconscious, or the Lacanian Real, or the Hegelian indeterminacy of non-being. No program can get that far outside of its given intention, probably because it doesn’t have a body that is connected to the “flesh of the world,” as Merleau-Ponty put it, but there may be other reasons than that. The infant gets its first taste of the outside Other when the cold air of the delivery room hits its body with the unintentional phenomenon of temperature, and it is this non-objective ground that gives birth to all other objective, intentional representations of temperature.

When we ask what we can know about the Universe beyond our own intention, we demonstrate Heideggerian “care” about what is beyond our evolutionary programming. Algorithms, however complex they may be, never touch the Real in this way. They do not wonder about the ground of their being, or the “things-in-themselves” before their representation as the zeros and ones of code, because the zeros of their codes are not the zeros of the void, which is something like Emmanuel Levinas’s distinction between the zero of a closed totality and the zero of an open infinity, or like Hegel’s distinction between the vertical infinity of addition and the horizontal infinity of the dialectical relation of limit to the unbounded. It is not surprising that the zero was first used in the mathematics of India, where some of the earliest attempts to symbolize the void began. Because programs do not understand the zero as both the beyond and the horizon of number and being, they understand neither. Understanding, or any sort of knowing, must emerge from the infinite ground of unknowing.

There have recently been claims in the AI community that programs have shown evidence of stepping outside of their own programming to evaluate their programming and even reprogram themselves according to a separate intention apart from their original programming. But programs can’t self-alienate, or step outside of themselves, so they don’t ever step into the Real. There has not yet been enough evidence of these phenomena offered to evaluate them with any rigor. However, programs altering their programming, or even programming novel subprograms to do their bidding, do not necessarily demonstrate an original intention. Original intentions are generated by the failure of intention, which is the relation of intention to its limit, or the Real. It is true that programs can program sub-programs to perform operations in ways that the programmer didn’t originally intend, but these “devious” programs are still related to the programmer’s intention and not to the void, as the human intention is.

Heidegger discussed Dasein’s intention as creative when it stood in relation to the void, which is something like Hegel’s notion of the process of being’s becoming as the dialectical relation of determinate being to the indeterminacy of nonbeing. In whatever ways a program has gone beyond the original intention of the programmer, in terms of reprogramming “itself” or resetting its own parameters without the programmer’s input, it has not done so in relation to the outside Other of the void. If a program finds novel, “unhuman,” even untraceable ways to win the game of Go, as programs famously have, it is still doing what its programmer intended, but in unpredictable ways. And the computational steps that it took to realize its given intention may be untraceable, not because programs have a hidden, interior subjectivity as human subjects do, but because their computations are like any other field of complexity: determinate but irreversible. These programs, no matter how mysterious their machinations, are not other to themselves or to their programmers in the way that the human intention is other to itself in the recursive loop between being and non-being.

In the Borges story mentioned earlier, a mythical cartographer makes such an accurate map that it literally covers the territory it is supposed to represent. This is a parable about particularity and generality that is meant to expose the distinction between mere information and understanding. The ridiculously large map falls into disuse because it is too large and detailed to be useful. The closed totality of a complete map could only be useful to a program, which is why LLMs must train on such massive amounts of data, virtually the entire corpus of online human communication. But embodied beings require a lot less data to make accurate generalizations, because their knowing follows the contours of Hegel’s “Absolute Knowing.” Absolute Knowing knows via the relation of determinate being to indeterminate non-being, which is the relation of the limit of information to the open infinity of its interpretation.

When a map is equal to the territory that it represents, it must be reduced for a “general” understanding by adjusting its level of resolution. The resolution of human knowing is the relation of the general, or abstract, to the detailed differences of the particular. Understanding is knowing how to relate the general to the particular and vice versa. The reduction of resolution is something like “zooming out,” or abstracting from the immediacy of the particular to generalize the intentional frame that mediates these details. The Borges story demonstrates that understanding is different from the mere accumulation of information because understanding relates knowing to its ground in unknowing. Some of the particular must be reduced, or unknown, to see by what Whiteheadian general its difference appears to the intention. Humans, and animals with complex central nervous systems, need a lot less information to make accurate predictions because they infer wholes from parts and parts from wholes by generalizations that are imaginal projections into the void of actual indeterminacy, which is one of the reasons why it is always so shocking when determinists tell us naive believers in actual degrees of freedom that both the imagination and the freedom that its relation to the void seems to grant are illusions. We are so used to the sort of virtual possibility spaces that are composed of open indeterminacy and its symbolic determination that the illusions of consciousness, conscious choice, and active imagination are hard to give up.

LLMs do not generalize, or abstract, or determine in relation to the indeterminate nothing. They use probabilities to “generalize,” or predict which token might come next, which is the recursive relation of a closed totality to itself outlined above as the recursion of tokens and probabilities. The probabilistic recursion of LLMs never includes the indeterminacy of “chance,” or the pure potential before the relation of indeterminacy to determinacy, which Gilles Deleuze understood as the absolute nothingness of potential, or the difference of “Difference-in-itself.” For Deleuze, actual possibility wasn’t the determined possibility of tokens and probabilities, but rather the indeterminate relation of potential to the limit of the actual, or of determinate being. Whatever intention can be unified by the Deleuzian relation of difference to itself, it is a repetition with difference rather than a reduction of difference to the same, or to an identity, which characterizes the logic of resemblance by which LLMs imitate language by finding its averages.
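For readers unfamiliar with the mechanics, here is a minimal Python sketch of the recursion of tokens and probabilities described above; the vocabulary and scores are invented for illustration, and real models work over tens of thousands of tokens with learned scores rather than random ones:

```python
# Next-token prediction in miniature: scores over a vocabulary are turned
# into a probability distribution (softmax), one token is drawn, and the
# result is fed back in. Even the "randomness" is drawn from a fixed,
# closed distribution: a recursion of tokens and probabilities.

import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["the", "map", "is", "not", "territory"]
context = ["the", "map"]

for _ in range(3):
    scores = [random.uniform(0, 2) for _ in vocab]  # stand-in for model logits
    probs = softmax(scores)
    nxt = random.choices(vocab, weights=probs)[0]
    context.append(nxt)                             # output becomes input

print(" ".join(context))
```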

Representation as Reproduction or as Innovation 

Representation can be either a reproduction or an innovation. There is no innovation without relation to the open infinity of the void because only the vastness of potential can call forth the indeterminate imagination. In art history, great significance is placed on the stylized representations of animals, and even of animals in motion, that began to appear around 35,000 years ago on cave walls. These cave drawings seem to demonstrate an imaginary that isn’t as concerned with verisimilitude as it is with ideas. Cave drawings show both attention to detail and an intention beyond mere reproduction. An LLM’s end is the accurate reproduction of human language, like the complete map depicted in the Borges story. The cave painter’s intentions seem to include an intention beyond the accuracy of reproduction because the paintings include, perhaps unintentionally, the perspective of the intention that formed them. The scenes of animals are idyllic, possibly pastoral, or even sacred, because their realism is subordinated to the imagination that framed them.

What is within the frame of the painter’s intention is the unintentional representation of the painter herself. The cave drawings have much to commend in their detailed and accurate representations of the natural world, but what is also in the frame is its outside, which is the intention of the painter. For example, some animals are shown with eight legs to make it seem as if they are in movement. Other animals are drawn in such a way as to show the cultural transmission of stylized, formal representation, more symbolic than image-like, but what bespeaks the outside Other of the scene most clearly is what gets put into the painting without conscious intention. It is very unlikely that these animals would ever have been in these “exact” proximities in “reality,” which is why these scenes may have had something like a religious, rather than a practical, purpose for the communities that produced them. Here the term “religious” is not the modern, physicalist understanding of religious practice as having a primitive, or hidden, material purpose, in accordance with evolutionary psychology. And the cave drawings are not religious in the dictionary sense of a formal system of shared mythological narratives and symbolic practices, but rather religious as in having no practical purpose, because the religious is both the beyond and the horizon of purpose, which is the relation of purpose to purposelessness that grounds the religious imaginary. Whatever lack of verisimilitude the images betray, they do not simply reveal a lack of skill or of technique or of materials in the primitive context of their composition. Any inaccuracies in imagistic representation are an excess of imagination, which is the excessive care for being, beyond the accuracy of predictions, from which all artistic representation arises.

This movement away from accuracy toward “abstract” expression in the religio-aesthetic intention is clearest in the much later, more intentionally “expressionistic” movements of art history after the invention of photography. In art as well as in “practical” thoughts and behaviors, from whenever a truly human intention first emerged until now, there has always been this profligate witness to non-practicality in all human endeavors. In whatever ways artistic expression, dance, sacrifice, dream questing, invocation, music, dress, and ritual practice will get you laid or fed, they are also the extravagant beyond of survival and reproduction. And “mechanical reproduction,” in Walter Benjamin’s words, may have had a deleterious effect on the “aura” around human expression, but it did not end this essential but gratuitous expenditure of the immoderate imagination.

Impressionism, and the more abstract and stylized expressions that followed the almost perfect verisimilitude of the photograph, have clearly not only included but been about the intention of the artist beyond accurate representation. The photograph frames the intention of the photographer, perhaps even more than the artisan’s hands do, because its mechanical reproduction lays bare the photographer’s imagination, which is split between the intention and the Freudian Unconscious, especially regarding the photograph’s “perspective.” The photograph relates the particularity of perspective to the generality of form, as any other artistic expression does, but the “accuracy” of the camera’s image captures the interplay of representation and the unintentional as the contrast of realism and imagination. For Deleuze the image is the relation of repetition and difference, but every repetition, here the mechanical reproduction of the photographic process, is made new by its relation to difference, here the photographer’s perspective, which includes her choices about subject, angle, resolution, lighting, and proximity, and which is how the photographer always includes herself within the frame as the outside Other who is the subject of the subject of the photograph. This is the same recursivity in which any observation, scientific or otherwise, includes the observer in the intentional objectification of the subject/object relation.

Any intentional intensification of particularity is a lack of generality, or any generalization reduces the particularity of difference, because foregrounding is backgrounding, which is the relation of a lack of determination, or indeterminacy, to the freedom of the imaginary. Deleuze thought of the image as operating according to the logic of “resemblance,” but he preached the renewal of the image through freedom from resemblance, which is repetition with difference. When images do not resemble their referent, they are freed to make both the referent and the image new in the refractory zone between them. The perspective, or seeming lack thereof, in a photograph is only partially intentional because each picture includes the photographer’s unconscious, as any act of imagination does. The photograph hides the photographer behind its image, like a Freudian “Screen Memory,” in which one image covers for another.

Any lack of intention in photography is an excess of perspective, which is the presence that the image tries to cover over. Photographs show this division of intention and unconscious imagination. Even forensic photography, which intends the disclosure of an objective intention, hides as much as it reveals, like Hegel’s “Absolute Knowing,” which “absolves” by hiding what it reveals. The perspective of the photographer, which is presented by its absence, is punctuated by the mechanical realism of the medium. The freedom of imaginal intention is foregrounded by the strictures of photography’s automatized determinations, like a classical musician whose singularity is pronounced by the slightest divergences from the musical notation.

The most recent developments in philosophy disclose the relation between the phenomenological, or symbolic, intention and information, including scientific information. Human intention is phenomenological because it apprehends the world in the “Mereological” part-to-whole relations of objects in the register of the Lacanian Imaginary, and it is a symbolic intention because phenomenological objects are mediated through the identities of language, which are often misunderstood as correspondences between signifiers and their referents. Science has historically claimed a kind of purity about the information that it produces, holding that it is “objective” in the sense that it is free of the bias, or the perspective, of the subjective intention altogether. Scientific representation claims to fix the identities of things by its implicit “Correspondence Theory of Truth,” in which signifiers correspond to their referents. However, the subjective intention is both the limit and the horizon of any possible objectivity, and it doesn’t apprehend the world as the closed totality of correspondences, or identificatory equivalencies, but as an open infinity into which the indeterminate wholes of the Imaginary are projected, especially the imaginary wholes of concepts that the signifiers of language must go through before disclosing their referents according to the Structuralism of symbolic difference, or Derridean “différance.” If knowledge were complete, or completable, as it is within the closed totality of LLMs and material determinists, then there could be no knowing at all, because there could be no Real with which to correspond according to the Lacanian “non-relation.”