I built the first draft of MURST when life felt like a pressure vessel. The only way to stay sane was to trace patterns I kept seeing repeat across contexts: human body, teams, markets, code, even my own mind.

The pattern was boringly consistent: whatever a system is doing at time t, it’s referencing what it did at t-1, adjusting, and repeating. Over and over. The universe moves forward, entropy creeps up, and everything that wants to keep existing learns to fold its past into its next move.

MURST in one line

In an entropically increasing universe, systems are recursively self-referential over time and inherently incomplete; they stabilize by growing scale-invariant structure that balances order and chaos.

Mandelbrot’s coastline paradox, where you measure the length of a coastline with a smaller ruler and it “grows,” is the cleanest intuition here. Systems with detail at every scale don’t compress to a single neat metric without loss.
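The effect is easy to reproduce. A minimal sketch (the jagged curve and ruler sizes are mine, purely illustrative): build a Koch-style curve as a stand-in coastline, then measure it with the classic divider method at shrinking ruler lengths and watch the total grow.

```python
import math

def koch(points, depth):
    """Recursively replace each segment with the 4-segment Koch motif."""
    if depth == 0:
        return points
    out = [points[0]]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
        a = (x1 + dx, y1 + dy)
        b = (x1 + 2 * dx, y1 + 2 * dy)
        # apex of the bump: middle third rotated 60 degrees
        px = a[0] + dx * 0.5 - dy * math.sin(math.pi / 3)
        py = a[1] + dy * 0.5 + dx * math.sin(math.pi / 3)
        out += [a, (px, py), b, (x2, y2)]
    return koch(out, depth - 1)

def divider_length(points, ruler):
    """Walk the curve with a fixed-length ruler and sum the steps."""
    total, i, pos = 0.0, 0, points[0]
    while i < len(points) - 1:
        j = i + 1
        # advance until we are at least one ruler-length from pos
        while j < len(points) - 1 and math.dist(pos, points[j]) < ruler:
            j += 1
        total += math.dist(pos, points[j])
        pos, i = points[j], j
    return total

coast = koch([(0.0, 0.0), (1.0, 0.0)], depth=6)
for ruler in (0.3, 0.1, 0.03, 0.01):
    print(f"ruler={ruler:<5} length={divider_length(coast, ruler):.3f}")
```

Every halving of the ruler finds wiggles the longer ruler stepped over, so the measured length never settles; that refusal to converge is the signature of detail at every scale.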

The same holds for behavior. When you zoom in on a person, a team, or a market, the texture doesn’t smooth out; it reveals more structure. Self-similarity shows up as repeated “motifs” across scales: feedback, thresholds, bottlenecks, runaways, resets. If you’re hunting for levers, assume the motif repeats and look for the smallest version you can actually touch.

MURSTQ is my extension: it takes that loop seriously for prediction itself.

The AI box thought experiment:

Hypothetically, put a predictive AI that’s omniscient (think Laplace’s demon) inside the system it’s trying to predict. Every prediction writes to memory, every write costs energy, every bit of heat perturbs the environment a hair, and now the next prediction has to model the world plus its own effect.

Do that long enough and the prediction error you’re fighting is partly, and increasingly, you. MURSTQ’s point: for embedded observers, some fraction of apparent randomness is a side effect of being in the loop. Outside demons can fantasize about perfect foresight; inside agents pay rent in heat and drift. That’s enough to matter for real systems.
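A toy simulation makes the back-action concrete. Everything here is made up for illustration (the 0.9 drift, the noise level, the size of the “heat kick”): the observer models the dynamics perfectly, but when its own act of predicting nudges the state, it eats error it caused itself.

```python
import random

def run(steps, kick):
    """Predict next state each step. `kick` is the observer's
    side effect on the environment (the 'heat' of the write)."""
    random.seed(0)  # same noise stream for a fair comparison
    x, err = 1.0, 0.0
    for _ in range(steps):
        pred = 0.9 * x                       # perfect model of the drift
        x = 0.9 * x + random.gauss(0, 0.01)  # environment evolves
        x += kick                            # ...plus the observer's back-action
        err += abs(pred - x)
    return err / steps

outside = run(1000, kick=0.0)   # demon outside the box: no back-action
inside = run(1000, kick=0.01)   # embedded observer: every write leaks heat
print(f"mean error outside: {outside:.4f}")
print(f"mean error inside:  {inside:.4f}")
```

The embedded run always loses, and note what that means: the gap isn’t a modeling failure, since both runs use the exact dynamics. The only fix for the inside agent is to start modeling itself, which is a bigger model, more writes, more heat.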

Where MURST touches actual life:

Human behavior runs on loops. Social loops, incentive loops, attention loops. We copy yesterday’s move with a tiny twist, then call it “choice.” Groups do the same—norms harden, break, re-form; ideas propagate; reputations compound or evaporate. Consensus isn’t an oracle; it’s a basin of attraction. Useful, often adaptive, occasionally completely dumb. MURST’s read is neutral: if a loop persists, it either dissipates pressure or hides it elsewhere. If you want to change an outcome, interrupt the loop at the cheapest scale where it still echoes upward. If you want to surf it, find the re-entry points and time your turns.

My projects (as of writing) have integrated MURST in practice more than in name. RSRC (Recursive Self-Referential Compression) is about models learning to carry more signal with less thermodynamic waste, the same “survive entropy by compressing” motif. CPI (Chain-Pattern Interrupt) is a metacognitive brake for agents that start to lock into a dumb path, the same “insert small friction to avoid large failure” motif. Vibe Check MCP operationalizes CPI so multi-agent workflows don’t drift, the same “close the loop at run time” motif. Different scales, same skeleton.
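To show the shape of the “small friction” motif (this is not the actual CPI code, just a hypothetical sketch with invented names and thresholds): watch an agent’s recent actions and trip a brake when it starts looping on the same move, so a cheap pause preempts an expensive runaway.

```python
from collections import deque

class ChainInterrupt:
    """Hypothetical sketch of the pattern-interrupt motif: track recent
    actions and trip a brake when the agent repeats itself too often."""

    def __init__(self, window=5, max_repeats=3):
        self.recent = deque(maxlen=window)   # sliding window of actions
        self.max_repeats = max_repeats

    def check(self, action):
        """Record the action; return True if the agent should pause
        and re-evaluate before continuing the chain."""
        self.recent.append(action)
        return self.recent.count(action) >= self.max_repeats

cpi = ChainInterrupt()
trace = ["plan", "search", "retry", "retry", "retry"]
flags = [cpi.check(a) for a in trace]
print(flags)  # → [False, False, False, False, True]
```

The brake fires on the third consecutive “retry”: the smallest touchable version of the loop, interrupted before it echoes upward.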

So is MURST done?

This isn't a finished scientific hypothesis. It’s a framework I find predictive enough to be useful and modest enough to be falsifiable.

Recent work out in the world keeps moving in the same direction. Thermodynamic accounts of computation keep tightening the link between information erasure and heat, which makes the “embedded prediction has a cost” part less hand-wavy. Quantum-flavored thermodynamics is getting more explicit about what observers know at different scales and what they pay to know it. And on the software side, agent frameworks with real control graphs and persistent state have made it normal to treat reasoning as a navigable, interruptible process rather than a single shot. None of that proves MURST; it just means the ground it’s standing on is hardening.

So why does it matter?

Because it's still a generative framework. It helps me choose where to push, where to wait, and where to re-architect. It keeps me honest about incompleteness. It gives me a portable way to move between disciplines without faking certainty. A map, not the terrain. I’ll keep refining it, building tools on top of it, and cashing out predictions where I can.

To summarize: MURST is a framework, not a law; MURSTQ is a conjecture about observers paying thermodynamic tax inside their own forecasts; fractal structure is the working intuition that glues it together.