Outsimulating the Matrix
Is Computational Irreducibility a Key to Jailbreaking the Universe?
It is probably one of the most common mistakes in the scientific analysis of our reality: excluding the possibility of having “Intention” and “Consciousness” as meta-drivers for the collapse of what looks like a practically infinite branching-out of “Configuration Possibilities” or “Possible System States” of the Universe into the crystallized version of one potential future that we then call… “The Present”.
Nor are Physics, Biology, Psychology, or any expression of Culture or Religion even close to being able to explain why or how we are able to affect, with our minds, the configuration of the real physical world and the way in which events influence each other and order themselves.
The Unified Theory of Knowledge, architected by Gregg Henriques as a scaffold that integrates the data and theories offered at all levels of science into a coherent framework of human knowledge, language, and experience, proposes a very logical thermodynamic understanding of the appearance of “Orientation, Intention and Will” as “Conservation-Seeking / Destruction-Avoidance” mechanisms, which organisms of any given level of complexity use to minimize their energy investments in the pursuit of a given goal.
One can then consider “Death” as the maximal loss of accumulated energy at the Individual level, and “Extinction” as the same phenomenon at the Species or Ecosystemic level; while “Reproduction” and “Evolvability” lie at the other end of the spectrum of efficient energy expenditure.
Competition, Collaboration, Speciation, Individuation, Intelligence and other characteristics of living biological systems can be expressed as a careful strategic and tactical balance of Energy Investment vs Resource Extraction.
Symbolic and Propositional language are also part of the evolutionary repertoire for their value as simplification and coordination tools. Then, out of practical convenience and energy efficiency, the Inter-subjective mental space becomes the main driver of human development, as any act of coordination that takes place in the propositional domain is substantially less energetically costly.
The ability of man to imagine scenarios and organize collective action from within purely linguistic spaces eventually becomes the ultimate competitive advantage.
It would seem clear to me, then, that without a semantically stable Inter-subjective space, any manifestation of human culture has very little chance of any kind of intergenerational impact. It then seems plausible that the use of Tech in itself could have been the thread that allowed a system of grammar to emerge… as a set of consistent instructions for the fabrication of the first tools in the era of Homo habilis. A good question could then be:
Is there any clear relation between the refinement of our neuro-physical linguistic capacities and the period in which we became a Tool-making species?
The BIT (Behavioral Investment Theory) module of the Unified Theory of Knowledge can explain many phenomena in nature: Symbiosis, Individuation, Speciation, Cooperation… and evidently Evolution itself can be understood as the positive accumulation of environmental interaction capacity (or reality interaction potential) into self-orienting, anti-entropic chemical energy (aka Life). This process has a complexity-dependent metabolic and energetic cost with its own dynamics, usually described by opposed superlinear and sublinear relations of benefit and cost.
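To make that last sentence less abstract, here is a minimal toy sketch of such opposed scaling relations. The exponents are assumptions chosen purely for illustration, loosely in the spirit of Kleiber-style metabolic scaling; they are not values taken from the theory itself.

```python
# Toy model of opposed super/sublinear scaling of benefit and cost.
# The exponents are illustrative assumptions, not empirical values.

def benefit(n, beta=1.15):
    """Environmental-interaction capacity gained at complexity/size n (assumed superlinear)."""
    return n ** beta

def cost(n, alpha=0.85):
    """Metabolic/energetic cost at complexity/size n (assumed sublinear)."""
    return n ** alpha

for n in (1, 10, 100, 1000):
    b, c = benefit(n), cost(n)
    print(f"n={n:5d}  benefit={b:10.1f}  cost={c:8.1f}  net={b - c:10.1f}")
```

Under these assumed exponents the net energy balance widens as complexity grows, which is one way to read the “positive accumulation” described above; flipping the exponents instead produces diminishing returns and an effective ceiling on complexity.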
These Energy and Information exchange mechanisms have different qualities at their respective levels of Complexity:
While at the Quantum and Atomic levels (and despite the implicit indeterminacy of our models) we are able to calculate with astonishing precision the patterns in which systems manifest, as soon as complexity increases from Atoms to Molecules, to Proteins and self-replicating RNA, to eventually self-authoring DNA… the dynamics that drive the feedback between the Environment and the capacity of Organisms to optimally self-adjust at the Cellular, Multicellular, Individual, Psychosocial and Technosocial levels mostly remain a complete mystery to us. One could then argue that:
There seems to be an implicit sense of “The Future” in the process we call Life, given that every Organism conveys past and present information to its descendants in a Future of which it is mostly unaware, except in the case of humans.
At this point, here is an opportune exercise:
Take the perspective of a Super-Natural intelligence in the form of some kind of Computer Programmer that created the Universe as a Runtime Environment in which the laws of Physics and Complexity are executed to render what we call ‘Reality’.
The motive behind this simulation would be to predict the future of the “Upper Realm” by duplicating its physical laws while introducing a simplification of some process of their own reality. That simplification would provide a Computational advantage inside The Simulation, allowing them to accelerate evolutionary time and acquire knowledge of future events.
But the above is not just a smart idea, or even a really smart idea, no…
One could argue that we can’t see Dark Matter and Dark Energy because both are arbitrary conditions imposed by the Creator/Programmer: a computational simplification of a much more computationally intensive process in their Realm, one that gets them A 20X COMPUTE SPEED-UP by rendering just 5% of the energy (information) of their universe into ours.
Something which is very Mathematically Sound if we consider how Computationally Expensive General Relativity calculations are.
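As a quick sanity check on that arithmetic, here is the naive calculation behind the 20X figure. It only assumes that the “rendered” fraction corresponds to the roughly 5% of the total energy budget that standard cosmology attributes to ordinary matter and energy, and that compute cost scales linearly with what is rendered.

```python
# Naive arithmetic behind the "20X speed-up" claim.
# Assumption: compute cost is proportional to the fraction of the
# Upper Realm's energy (information) that actually gets rendered.
rendered_fraction = 0.05           # ~5% ordinary matter/energy in standard cosmology
speedup = 1 / rendered_fraction    # work saved if the other ~95% is left as a placeholder
print(speedup)                     # -> 20.0
```

Of course, real gravitational and cosmological simulations do not scale this simply; the number is only meant to show where the 20X figure comes from.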
Now, given that time in The Simulation runs at 20X, it could be the case that the simulation tries to reverse-engineer itself, since its governing rules are the same as those of the Upper Realm, while its Technological and Cognitive power could eventually be 20X greater (once the evolution of the simulation has surpassed the actual evolutionary stage of the Upper Realm).
So one would expect the Upper Realm to introduce an element of Randomness, at a level not significant enough to alter evolutionary history, but one that would act as a reverse-engineering failsafe in the form of an unpredictable hidden variable.
A Computational Irreducibility fail-safe, for the case in which the Simulation develops a Technology that would allow its inhabitants to escape and invade the Upper Realm.
The above reasoning fits perfectly well with Quantum Vacuum Fluctuations, the Uncertainty Principle, Hidden Variables, Non-locality and many other properties of Quantum Mechanics.
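To make the idea of a computationally irreducible failsafe concrete, here is a minimal sketch of Wolfram’s Rule 30 cellular automaton, the textbook example of computational irreducibility: even with complete knowledge of the rule and the initial condition, there is no known shortcut to the state at step n other than actually running all n steps. (The connection to the failsafe above is, of course, this essay’s speculation, not an established result.)

```python
# Minimal sketch of Wolfram's Rule 30 cellular automaton, the textbook example of
# computational irreducibility: knowing the rule and the initial state exactly still
# gives no known shortcut to step n other than computing every intermediate step.

def rule30_step(cells):
    """Apply one Rule 30 update to a list of 0/1 cells (zero-padded boundary)."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1]) for i in range(1, len(padded) - 1)]

# Evolve from a single "on" cell and print the triangle of activity.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Its center column is irregular enough to have been used as a pseudo-random number generator, which is exactly the flavor of “unpredictable hidden variable” the previous paragraphs gesture at.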
So, while this line of thinking does not end here, I will close this piece with one final thought:
TECHNOLOGICAL CIVILIZATIONS CAN AVOID KARDASHEV-SCALE ENERGY GROWTH DEMANDS IF THEY GENERATE EFFICIENT SIMULATIONS THAT ALLOW THEM TO PERFECT THEIR ENERGY USAGE.
Thank you very much,
Ernesto Eduardo.