Do you cook your own food? My own meals tend to be boring: I optimize for speed and efficiency when cooking for myself, but I become passionate when I cook for a loved one. I'm occasionally creative, but I prefer to base my efforts on a recipe. It can be passed down from my parents, taken from a cooking blog, or even generated by AI, but I like to have some concrete list of ingredients and steps that guides the process. If I ruin the dish, I can at least compare my actions with the instructions to get a clue about where things went wrong.

Recipes, like all abstractions, aim to distill complex processes into manageable steps, but they are never perfect reflections of reality. A recipe is a virtual analogue: an information unit that uses a system of information to represent something about the physical world. By following its steps, I can change objective reality to match the intended design.

Having a good recipe does not guarantee a good meal. Even if I follow the recipe perfectly, I might still accidentally make inedible slop. There are countless variables that go unaccounted for, such as ambient temperature, humidity, or even my current mood. Measuring by volume instead of weight, for instance, introduces variability that weight-based measurement aims to reduce.

Approximating reality

Like recipes, all systems of information simplify reality, capturing some aspects while inevitably losing others. The 'cleanliness' of a system's information determines how faithfully it reflects the original: the more ways something can be interpreted, the less reliable the information becomes. In other words, the recipe becomes an abstraction of experience, and the original act of cooking is reduced to words on a page. The abstraction is composed of information units, such as the ingredients, the steps, the temperature, and the time. If the recipe's information is clean enough, it will result in only one very specific flavor experience: the meal intended by the original chef. Most of the time, however, it is an approximation of an experience, and the actual outcome varies significantly from the original vision. This degree of deviation between abstract information and objective reality is the focus of the 'cleanliness' metric in blob theory.

Although 'clean information' is my preferred personal terminology, I'm not the only one to investigate it. Scientists and logicians throughout history have been concerned with connecting abstract information to the physical world, and the methods of precision have evolved over the ages. Yet, as we'll see, this pursuit of cleaner information systems reveals a surprising paradox: the very tools we use to define clarity are themselves imperfect.

Chasing precision

Recipes aren't very scientific. They use some standards of measurement, like cups or grams, but they are ultimately written in plain English. English itself is a system of information units: it is composed of patterns of sound that mirror the objects and phenomena of objective reality. As systems of information go, it's a relatively informal one. It isn't consistent or permanent enough to produce high-purity information: words change meaning over time and can mean different things depending on context. Consider how "gay" meaning happy and "mad" meaning insane have fallen out of fashion. This inconsistency over time limits English's ability to function as a clean system.
Words can also change meaning depending on context: 'mad' can mean insane, but it more often means angry and occasionally means smitten. This inconsistency does not disqualify language from being useful; language is obviously the dominant strategy of information transfer we have. That said, English does not optimize for precision, because words are naturally ambiguous.

These inconsistencies in natural language, its evolving meanings and contextual ambiguity, highlight the need for tools that reduce ambiguity and bring consistency to reasoning. One such tool can be traced back to Aristotle, whose formal logical systems are still present in our modern technology. In order to clean up the system of plain language, his syllogisms make statements with high standards for truth and cleanliness. The syllogistic format is straightforward: it proposes premises and then draws conclusions based on the laws of logic. You can think of it as an if/then statement. As an example, consider the following: if I have a brother and his mother is alive, then my mother is alive. The premise does not explicitly state that I have a mother or that she is alive, but it provides enough information to reach the logical conclusion easily. This ability to draw conclusions relies on clean information that satisfies Aristotle's three laws: the concepts of 'mother', 'brother', and 'alive' are all well defined, consistent, unambiguous, and non-contradictory. By avoiding most of the ambiguity found in natural language, the syllogism revolutionized how information could be made more reliable and consistent. Aristotle's syllogisms were a groundbreaking tool for cleaning information, and their influence persists in everything from formal education to computer algorithms.

The path to clean information didn't stop there. The evolution of cleanliness took a fundamental leap with Boolean logic, which fused logical principles with symbols and equations. Instead of expressing a statement in the natural language of syllogisms, you can use mathematical signs to manipulate information with greater rigor and precision. This means using operators, like '·' for 'and' or '+' for 'or', so that reasoning can be carried out as calculation.
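As a loose illustration (my own sketch, not part of Aristotle's or Boole's formalism), the brother-and-mother syllogism above can be encoded as a handful of clean true/false units, with the conclusion following mechanically once the premises are combined:

```python
# A minimal, illustrative sketch: the syllogism as Boolean logic.
# Each premise is a "clean" unit, unambiguously true or false.
i_have_a_brother = True
my_brothers_mother_is_alive = True

# Shared, well-defined concept: my brother's mother is also my mother.
# With both premises true, the conclusion is forced by the rules of inference
# (in Boolean notation, 'and' is often written '·' and 'or' is written '+').
my_mother_is_alive = i_have_a_brother and my_brothers_mother_is_alive

print(my_mother_is_alive)  # True: the conclusion follows from the premises
```

Nothing here is computationally deep; the point is only that once the premises are clean, the conclusion requires no interpretation at all.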
This symbolism gave rise to statements such as the following symbolic representation of the first law, the law of identity:

A = A: units have consistent identities that do not change over time.

This equation effectively says that something is equal to itself, using the notation of formal logic. It is one of many forms the symbols took, and they continued to evolve across disciplines and across history, from Boole to Gödel to Turing. The trend toward cleaner and cleaner information units has continued through the modern age: the code that powers the content of your digital screen is built from binary bits of logic that have passed through this evolutionary pipeline of information purification. Yet along the way it was proven that perfectly clean information units can't be achieved. Gödel demonstrated that no sufficiently powerful system of logic can be both complete and consistent: even the cleanest systems contain truths they cannot prove, revealing the limits of absolute precision.

Perfectly Clean

Just as recipes aim to standardize cooking, syllogisms and Boolean logic aim to standardize reasoning. But like recipes, they simplify reality and can never fully eliminate ambiguity. Despite being relatively cleaner than English, they all sit on the same spectrum of degrees of cleanliness. There is no binary distinction between 'clean' systems and 'dirty' ones; all systems require some blurring or editing to translate objective reality into information units. As an example, imagine building a perfect map of a territory, a map so detailed it includes every blade of grass. At some point, the map becomes indistinguishable from the territory it represents, defeating its purpose as an abstraction. The closer you get to perfect detail, the less practical the map becomes. Similarly, logic seeks perfect cleanliness but collapses under its own weight when pressed to its limits.

If information can't be described in binary terms, then information is in some sense illogical: it doesn't satisfy Aristotle's three laws, despite being built upon them. After all, logic fundamentally relies on binary validation. If cleanliness itself is a gradient, then it fails classical logic's rigid requirements. This contradiction suggests that even the foundation of logic rests on imperfect ground, a flaw baked into the very tools we use to understand the world. The paradox is more than an intellectual curiosity: it challenges the foundations of how we process, categorize, and act on the information that shapes our lives. Despite being the foundation of the modern Western world, logic is a system that somehow doesn't abide by its own rules. The concept of units themselves, or categories themselves, is contradictory. How does this make sense?

Rejecting Logic

This episode, I've explored how we use clean information units to reflect and capture objective reality, whether through recipes, language, or formal logic. I've also shown that, despite forming the guidelines for logical thought, the idea of 'cleanliness' itself can't exist as a binary category. This contradiction aligns with insights from Eastern philosophies like Taoism and Zen Buddhism, which reject binary thinking as part of their core beliefs. That rejection of binary thought and categorical thinking can feel deeply unintuitive to the Western mind; after all, our default intuitive model is built on clean information units.
Still, in the next episode, I'll introduce my own model of inverted logic: a conceptual system that uses gradients of concentration and solidity to describe reality without relying on classical logic. This approach has guided my decisions, and I hope to make its value accessible to you.

An analysis of the building blocks that power our intuition

Picture standing in the middle of a bustling marketplace: a swirl of colors, voices, and movement clamoring for your attention. This is life, a constant flood of sights, sounds, and sensations. Our brains collect these stimuli in the form of electrical impulses and organize them in a way that gives us a sense of the world. We categorize experiences based on how they match with the past: consistent patterns are recognized as 'things' and are turned into meaning. As a result, life is not experienced as a blurred blend of sights and sounds but as a world of concrete physical objects set in a shared space.

This process is part of default intuition: no intentional thought is required to generate these basic assumptions about objective reality. I don't need to work out every day what food or danger is; my organism is the product of millions of years of evolution and effortlessly recognizes the important stimuli, responding to them automatically. Our bodies have survival instincts engraved in our DNA, and our brains continue to shape our understanding of the world as we learn from new experiences.

To understand and dissect this process, we can use models to categorize and organize the factors that constitute different parts of the world. Different models focus on different parts of reality: scientific models like Newtonian Mechanics simplify and predict physical bodies in motion, while social models like Game Theory are used to understand negotiation, cooperation, and economics. In Blob Theory, I use the term 'default intuitive model' to describe the sorting algorithm that our brain uses to process and understand reality itself. For example, when you see a chair, you don't need to analyze its parts to recognize it as a chair: your mind automatically identifies its shape, purpose, and context based on prior experience. This mental shortcut allows you to navigate the world efficiently without constant re-evaluation. It generates the sensation of objective reality by assessing the consistency of patterns, and it does so by using building blocks that I call 'information units'.

Information Units

An information unit is a mental building block used to perceive, categorize, or reason about the world. It is a distinct and stable conceptual "chunk" that can be used to predict or calculate reality. Its role is to be a reliable, condensed summary of the external world that consciousness can use to calculate or predict complex truths. The units themselves, once established, go unquestioned: they are effectively the bedrock upon which further conclusions are built.

As a simple example, consider a "chair" as a unit of information. It exists as a concrete concept in the mind, even if different chairs look different. A more abstract example is "1" in binary code, where "1" and "0" correspond to the concepts of presence and absence. As a more complex example, consider gravity: it's the word we have for the phenomenon of things being pulled downwards. As a rule of thumb, the fact that 'things are pulled downwards' is extremely consistent and broadly applicable.
In an industrial context, this simple truth can be used to sort objects with different properties, relying on the heavier pieces to separate from the lighter ones. It can be seen in mining, where heavy metals like gold are separated from lighter silt, or in agriculture, where grains are sorted by how dense they are relative to the less valuable parts of the plant. It doesn't matter where you are in the world: the law of gravity behaves the same regardless of your specific conditions. Gravity is a convenient example because it is very consistent and represents a physical phenomenon with observable effects. Information units, in contrast, are an abstract concept: they have no physical form but are instead measured by attributes like permanence, reliability, and truthfulness within mental or conceptual models.

Clean Units

In Blob Theory, information units have degrees of cleanliness, a measurement of how stable and precise a unit is. The default intuitive model relies on units that are as clean as possible, so as to provide conscious thought with a simplified foundation. The measurement of a unit's cleanliness uses a system that has its origins in Aristotle's rules for logic and dates back nearly 2,500 years. His 'laws of thought' established fundamental principles for reasoning, which directly align with the criteria for clean information units. By defining rules for consistent identity, non-contradiction, and binary existence, Aristotle's framework mirrors the qualities that make an information unit 'clean' in Blob Theory. These ancient philosophical principles continue to underpin modern science, mathematics, and computational theory. The laws look something like this:

1. The law of identity: a thing is identical to itself (A = A); its identity does not change over time.
2. The law of non-contradiction: a thing cannot both be and not be in the same respect at the same time.
3. The law of the excluded middle: a thing either exists or it does not; there is no middle option.
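To make these criteria concrete, here is a rough sketch of my own, not a canonical Blob Theory implementation, that treats a unit as a series of recorded states and checks them against the three laws; the function name, encoding, and examples are all hypothetical:

```python
# A rough, illustrative sketch: a unit is "clean" if its recorded states
# satisfy the three laws of thought. Names and encodings are hypothetical.

def is_clean(states: list) -> bool:
    # Law of the excluded middle: every state is a definite True or False,
    # never a partial value like 0.7 or None.
    excluded_middle = all(state is True or state is False for state in states)

    # Law of identity: the unit keeps the same identity over time,
    # i.e. every recorded state agrees with the others.
    identity = len(set(states)) <= 1

    # Law of non-contradiction: the record never asserts both True and False.
    non_contradiction = not (True in states and False in states)

    return excluded_middle and identity and non_contradiction

# "Things are pulled downwards" reads as a clean unit:
print(is_clean([True, True, True]))   # True
# A graded, partial reading is not a clean unit:
print(is_clean([0.3, 0.7, 1.0]))      # False: violates the excluded middle
```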
The original purpose of these laws was to measure the degree of something's logic or reason, but in the context of the default intuitive model, I use them to refer to the 'cleanliness' of a unit of information. A clean unit is one that follows these laws and can be relied upon for further calculation: it is permanent, consistent, regular, and solid. By comparison, a dirty unit has an inconsistent identity, contradicts itself, or only partially exists. This isn't to say that a clean unit is objectively superior to a dirty one! Clean units have their flaws; nevertheless, they are the basis of the default intuitive model's operating system.

The limits of cleanliness

When comparing physics with game theory, it's clear that physics has higher standards of cleanliness. Clean information is associated with binary thinking and very strict laws, and it assumes that problems have only one correct solution. Binary thinking views things in absolute terms (such as true or false, present or absent) with no gray area in between. In the real world, social models like game theory or economics carry an implicit messiness: the inconsistency of human behavior muddies the waters.

The hard sciences are cleaner, but still imperfect. If you consider the earlier example of gravity, at first glance it seems to fit the criteria. It has a consistent identity that doesn't change over time: it accelerates objects near Earth's surface at a rate of 9.8 m/s². It doesn't contradict itself: solid objects move down, not up. It clearly exists: there is no place on Earth where gravity fails to pull. That being said, Newtonian gravity is just an approximation of reality, and perfectly clean units don't truly reflect how the world works. Newton's description of clean gravity was upset by Einstein's magnum opus, the general theory of relativity. His model updated our understanding by pointing out the inaccuracy of calling gravity a force, describing it instead as a condition of spacetime. From this perspective, gravity shouldn't be seen as having binary existence, but rather as a continuous gradient of varying degrees of spacetime curvature. This violates the third law of thought, because under a relativistic model, gravity occupies a 'middle option' between concrete existence and nonexistence. Our modern understanding of gravity isn't classically "clean", yet it is a more accurate depiction of the way things work.

This illustrates a key flaw in our default intuitive model, which relies on dividing reality into clear, binary states. We intuit things to either exist or not exist; it's necessary for our most basic conceptualizations of the world. Objective reality is rooted in these terms, and our default assumptions are built on this foundational bedrock. Still, there are plenty of examples in modern science where this simply doesn't hold. We live in a relativistic world, where light and gravity are no longer described in absolute terms. And it extends far beyond gravity: at every level of reality (physical, conceptual, and mathematical), our pursuit of clean, stable information units has run into unavoidable limits. These facts prompt the need to amend the default intuitive model and to propose an alternative to how we perceive objective reality. Doing so requires a deeper understanding of how units of information operate, in order to build and describe an alternate model of the way things work.
Next time, we'll trace the centuries-long pursuit of clean information, starting with Greek syllogisms and the symbolic logic that followed. But this story is not just about refinement; it's about the limits of refinement itself. The clean, bounded units of classical logic have driven science and computation, but they have also boxed us into a worldview of fixed categories. By revisiting the origins of symbolic reasoning, we'll lay the groundwork for an alternative approach, one that embraces categorylessness, boundarylessness, and the fluid nature of reality, echoing ideas found in Taoism, Zen Buddhism, and the philosophies of the East. If classical logic gave us order, what lies beyond it?

"Do you think that's air you're breathing?" Morpheus asked this in The Matrix to challenge Neo's assumptions about reality. But if you stopped to ask yourself the same question, how sure would you be about the answer?

You probably believe in objective reality. This reality feels like it exists independently of us, composed of the space and objects in our environment. The objects in objective reality all follow consistent rules, and everybody in objective reality partakes in the same universal stage. This all seems obvious and intuitive, but history shows us that our understanding of reality changes over time as science evolves through the ages. Is sickness the result of God's wrath, an imbalance of the body's humors, or the workings of hormones and cells? Does the universe revolve around the Earth, or do the planets orbit the Sun? Different periods have had their own ways of depicting objective reality, and although the modern age has an evolved understanding, I'm comfortable assuming that we are mistaken about reality in much the same way that past ages were.

Even within our own era, objective reality looks different depending on which discipline you consult. Physics describes objective reality as mechanical motion, measuring the world's objects and manipulating them within its system of mass and energy. The discipline of history, on the other hand, describes objective reality as a story, with points of consensus appearing across connected documents, telling the tale of reality as a reconstruction of evidence left by human behavior. Neither presents a whole picture of what objective reality truly is, but each provides its own angle of illumination on the way things work.

The Default Intuitive Model

I have my own model for objective reality, my own angle of illumination that I use to depict what's there. The discipline of Blob Theory describes objective reality based on the default intuitive model. The model shares a common goal with disciplines like history or physics: it describes the world in a way that seeks to be consistent and non-contradictory. Objective reality, as told by the default intuitive model, is the gateway to understanding Blob Theory. Like phenomenology or solipsism, the model uses subjective experience as the basis for understanding the world at large. I explained the model in greater detail last episode, but as a quick recap: the model outlines the underlying cognitive processing system that we all share. It assumes that we exist as divided selves, with internal divisions that have distinct motivations and can cooperate or compete with each other, such as the automatic, reactive part of ourselves and the reflective, deliberate part.
In the model, the intuitive, automatic part of you is responsible for supplying information to the conscious part of you. The conscious self, in part, makes decisions based on the foundational data provided to it.

Illustration: the three major components of the default intuitive model

In other words, the model is effectively a sorting algorithm that provides a quick layer of processing to the incoming data absorbed by the senses. Our default intuition presents information to consciousness in a way that is reliable and digestible, providing basic assumptions about the world. The conscious part of ourselves, in turn, can draw more complex conclusions by using that intuitive information as its foundation. The most basic assumption drawn by the default intuitive model is that we live in a material world, in a shared space filled with physical objects. This assumption simplifies the broad spectrum of chaotic sights and sounds into concrete units that are easy to understand.

Physical Objects

"Our whole knowledge of the world is, in one sense, self-knowledge. For knowing is a translation of external events into bodily processes, and especially into states of the nervous system and the brain: we know the world in terms of the body, and in accordance with its structure." — Alan Watts

Our intuitive model of reality transforms raw sensory data into familiar objects. No two people experience the world in exactly the same way, but our default model assumes that a single shared, material world exists independently of us. To manage this complexity, the model recognizes patterns in sensory input and identifies "objects" as consistent groupings of sights, sounds, and other sensations. Take, for example, an apple. When our senses detect a pattern close enough to past "apple" experiences, the mind codes it as an "apple" with minimal effort. This isn't limited to vision: the smell, taste, and feel of an apple reinforce the pattern, allowing us to quickly sort it into a stable mental category. The idea of "appleness" emerges from our familiarity with the pattern, and our mind does not need to deduce it from scratch each time. This process reveals the core aim of the default intuitive model: to convert raw sensation into stable 'information units' that are clear, consistent, and easy for consciousness to process. It's a system designed to prioritize simplicity and stability.

'This is not an apple' by Magritte: a visual representation of an apple, but not exactly the apple itself.
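As a loose analogy for the sorting described above, and emphatically not a claim about how the brain actually computes, the pattern-matching step can be pictured as comparing incoming sensory features against stored prototypes and accepting the closest match once it is 'close enough'. Everything here (the feature scales, prototypes, and threshold) is hypothetical:

```python
# A loose, illustrative analogy for the "sorting algorithm" of default
# intuition: compare incoming features against stored prototypes and label
# the input with the closest familiar category. Not a model of real neural
# processing; names, values, and the threshold are hypothetical.
import math

PROTOTYPES = {
    # category: (roundness, redness, sweetness) on a 0-1 scale
    "apple":  (0.9, 0.8, 0.7),
    "banana": (0.2, 0.1, 0.8),
    "chair":  (0.3, 0.2, 0.0),
}

def categorize(features, threshold=0.35):
    """Return the closest familiar category, or None if nothing is close."""
    best_label, best_dist = min(
        ((label, math.dist(features, proto)) for label, proto in PROTOTYPES.items()),
        key=lambda pair: pair[1],
    )
    # Only "code" the input as a known object if the pattern is close enough.
    return best_label if best_dist <= threshold else None

print(categorize((0.85, 0.75, 0.65)))  # 'apple': matches the stored pattern
print(categorize((0.5, 0.5, 0.5)))     # None: too ambiguous to label
```

The second call returns None because no stored pattern is close enough: the input stays a blur of sensation rather than being coded as an object, which is roughly the distinction the model draws between recognized 'things' and unsorted stimulation.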
Next Week: Information Units

Next week, we'll trace how this same system shapes more than just our sensory world. It shapes how we think, how we measure, and even how we define knowledge itself. We'll explore the 'clean information units' at the heart of Western science, and the limits they impose on how we see reality.