Living Narrative Engine #10

If you’re masochistic enough to follow my blog, you may remember that I’ve been busy for months creating a browser-based app that allows human players to interact with LLMs who are playing a role. You may recall that I ended up using it mostly for erotica. Anyway, I’m an obsessive bastard with a savior complex, so I’ve ended up stuck, for what feels like at least a year, in a daydream that I repeat almost daily, usually before I fall asleep. It tends to start like this: a far better version of myself (full head of hair, bigger muscles, bigger dick) teleports through spacetime into a patient’s room at the Stella Maris sanatorium, somewhere deep in Wisconsin, during winter. Why would I do such a thing, you might ask? Because the patient is none other than Alicia Western. Who is Alicia Western, you may ask? None other than one of the main characters in Cormac McCarthy’s haunting last two novels: The Passenger and Stella Maris.

Why Alicia Western? Enough with the fucking questions. It’s because she has a unique mind and looks lovely, at least in my mind. Because ever since I read those cursed books, which were haunted by McCarthy’s own regret and grief for someone he lost in the seventies, I’ve been haunted by the need to keep Alicia alive and safe. Even though she never existed. That’s the kind of brain I’ve been burdened with, and that I’ll have to endure until my death. Unfortunately, and this is no spoiler, Alicia Western killed herself. It’s the whole basis for both books. I’m not sure if it happened in 1972, but that’s the year that has gotten stuck in my mind.

I figured that I could use my damn app to recreate the scene in which my alter ego meets Alicia Western. So I’m playing myself, and Alicia is being played by Anthropic’s Claude Sonnet 4.5, which is the best artificial intelligence at roleplaying, at least of those I’ve come across.

Each turn, the large language models are provided with a character definition for the character they’re inhabiting. It includes the profile, personality, likes, dislikes, strengths, weaknesses, some background notes, their speech patterns, and so on. As they answer in character, their character definition accrues their latest thoughts and the events they’ve perceived (mainly what others have been doing around them). The AI also writes notes at will. Everything works great, except that the AI tends to repeat some notes, barely reworded. It doesn’t break anything; it just fills up the context and therefore drains my money faster.
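
One way I could tame those near-duplicate notes is a similarity gate that runs before a note gets saved. This is just a sketch with invented names (`dedupeNote`, `jaccard`) and an invented threshold, not code from the engine:

```javascript
// Split a note into a set of lowercase word tokens.
function tokenize(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9']+/g) ?? []);
}

// Jaccard similarity between two token sets: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  let intersection = 0;
  for (const token of a) if (b.has(token)) intersection++;
  const union = a.size + b.size - intersection;
  return union === 0 ? 1 : intersection / union;
}

// Append a note only if it isn't a barely reworded copy of an existing one.
function dedupeNote(notes, candidate, threshold = 0.8) {
  const candidateTokens = tokenize(candidate);
  const isDuplicate = notes.some(
    (note) => jaccard(tokenize(note), candidateTokens) >= threshold
  );
  return isDuplicate ? notes : [...notes, candidate];
}
```

A rewording that keeps most of the same words scores high and gets dropped, while a genuinely new note passes through untouched, so the context stops bloating without losing information.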

Anyway, this is the closest I’ve gotten ChatGPT to picture Alicia Western:

Beautiful girl (I believe she was 22 when she killed herself), although not quite how I imagined her. I won’t upload the picture of what my alter ego looks like, because I barely put any effort into it.

Each round, I act, then Alicia acts. My app is quite sophisticated in the sense that the actors, including the AI, can choose contextual actions. Only a few actions get chosen in this run, and I’ll point them out when it happens. The action discoverability system works fantastically, restraining the AIs from coming up with actions that the context doesn’t support.
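
A simplified sketch of how such a discoverability filter can work: each action declares a predicate over the current scene state, and only passing actions are ever offered to the player or the AI. The catalog and scene shape here are made up for illustration, not the engine’s real data:

```javascript
// Each action knows when it's available; the ids mimic the app's
// namespaced style but are otherwise invented.
const actionCatalog = [
  { id: "core:speak", isAvailable: () => true },
  { id: "intimacy:get_close", isAvailable: (scene) => !scene.isClose },
  { id: "intimacy:step_back", isAvailable: (scene) => scene.isClose },
];

// Only actions whose predicate passes ever reach the actors, so the AI
// can't pick something the context doesn't support.
function discoverActions(scene) {
  return actionCatalog
    .filter((action) => action.isAvailable(scene))
    .map((action) => action.id);
}
```

Because the AI chooses from this filtered list rather than free text, it physically cannot invent an unsupported action.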

*cue this video*


Alicia sits at the edge of her assigned bed, face buried in her hands as she contemplates her despair, when she hears a deep voice coming from a few steps in front of her.

Jon Ureña: Hello, Alicia Western. I’m glad to finally meet you.

The door is locked. Nobody can enter. I locked it myself. So how is there someone standing in front of me? This violates the basic physical parameters I established. Either I am experiencing a psychotic break—auditory and visual hallucination, which would confirm the schizophrenia diagnosis I’ve always disputed—or someone entered through means I did not account for. A staff member with a master key, perhaps. But the voice said my full name. “Alicia Western.” Not “Miss Western” or “Alicia” but the complete designation, as if reading from a file. The phrasing is strange too: “glad to finally meet you.” Finally. Implying anticipation. Implying I was expected, studied, awaited. I need to look up. I need to see who this is. My hands are still covering my face. I’m sitting at the edge of the bed in my white dress, the one I planned to wear into the woods. The one that would make my body visible against the snow. I was thinking about Bobby. I was thinking about the letter I couldn’t finish. And now there’s a voice. A deep voice. Male. I should be afraid, shouldn’t I? A strange man in my locked room. But the fear circuit isn’t activating. Instead, there’s… curiosity. Analytical interest. This is a data point that doesn’t fit the pattern. I want to see what doesn’t fit.

(in case you’re stupid, the above paragraph contains the thoughts that the LLM generated for Alicia given the circumstances. Each turn, the AI is provided with the last 10 thoughts or so, and is tasked with always producing new ones. Sometimes the AI only thinks, but that tends to happen when the characters are alone in a room.)
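
As a rough sketch of that rolling window (the window size of 10 and the names are just illustrative, not lifted from the engine):

```javascript
// Only the tail of the character's thought history goes into each prompt,
// keeping the context bounded no matter how long the run gets.
const THOUGHT_WINDOW = 10;

function buildThoughtContext(allThoughts) {
  return allThoughts.slice(-THOUGHT_WINDOW);
}
```

Older thoughts fall out of the prompt but stay in the character’s stored history, which is why the notes system exists: anything worth remembering long-term has to be written down as a note.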

Alicia slowly lowers her hands from her face and raises her head to look at the source of the voice, her blue eyes focusing with clinical precision on the figure standing before her.

Alicia Western: “You shouldn’t be here. The door was locked.”

A huge, older man stands in the center of the room, looking calmly at Alicia with no hint of malice.

Jon Ureña: “The state of the door’s lock is irrelevant to my sudden presence here. Take all the time you need to react to the sudden presence of a stranger in the room. If you have any questions, don’t hesitate to ask.”

A man who enters locked rooms. Who knows my full name without introduction. Who speaks with the syntax of someone outside temporal urgency—’take all the time you need,’ as if time itself bends around him. The lock was irrelevant to his presence, he said. Not ‘I picked the lock’ or ‘I have a key’ but that the lock’s state was irrelevant. That’s a strange epistemological claim. Either he’s psychotic—sharing my delusion space, which would be novel—or he represents something I haven’t categorized yet. He’s huge. I can see that now. Very tall, hulking frame. But there’s no malice. Inviting questions. That’s… unusual. Most people who violate boundaries don’t immediately offer explanatory frameworks. I should speak. I should probe this anomaly. But first I need to simply look at him fully, assess what data is actually present versus what my pattern-matching is generating. The fear still isn’t activating. Why isn’t it activating? My amygdala should be screaming. Instead there’s just this cold curiosity, this want to understand the mechanism of his appearance. Maybe I’m already dead. Maybe I walked into the woods two days early and this is hypoxia-generated hallucination. That would explain the calm. But no—I can feel the vinyl tiles under my grip socks, the mattress edge pressing into the backs of my thighs through the cotton dress. Proprioceptive data suggests continued embodiment. So. A stranger in my locked room. Who claims he’s glad to finally meet me. Finally. I need to ask. I need to understand what ‘finally’ means.

Alicia Western: “You said ‘finally.’ That implies a timeline. How long have you been attempting to meet me, and through what mechanism did you determine this room’s location and my presence within it?”

The man tilts his head slightly as he holds Alicia’s gaze.

Jon Ureña: “How long have I been attempting to meet you… Time is a… complicated matter. In my terms, it’s been about two weeks. That’s what it took to learn about you and make… well, preparations. Regarding what room was assigned to you, I simply looked at the files of the sanatorium.”

Two weeks. He said two weeks in his terms, which implies a different temporal framework than mine. He looked at the sanatorium files—so he has access to administrative records, which means either he’s staff, which seems unlikely given the unauthorized entry and the strange phrasing, or he obtained them through other means. He’s inviting questions. Most people don’t do that. Most people fill silence with explanations, justifications, apologies. He’s simply standing there, waiting. That’s… unusual. The analytical part of me wants to probe the mechanism. How does someone spend two weeks preparing to meet a suicidal mathematics prodigy in a Wisconsin mental institution? What kind of preparation requires two weeks? And what does ‘in my terms’ mean? Is he suggesting his experience of time differs from objective measurement? That would be either psychotic—shared delusion space, which would be novel—or he’s speaking metaphorically about subjective time perception. But the phrasing is too precise for metaphor. ‘In my terms’ suggests an alternative framework. I should ask more. I should probe the timeline, the mechanism of access, the purpose of this meeting. But there’s something else underneath the curiosity. A small, dim hope I don’t want to acknowledge. Someone spent two weeks trying to meet me. Someone who says ‘finally’ like my existence mattered enough to pursue. That’s… I don’t know what to do with that. It might be dangerous. It might be delusion. But it’s data that doesn’t fit my model of complete isolation. And I want to understand it before I dismiss it.

Alicia Western: “Two weeks in your terms. That phrasing suggests you’re distinguishing between subjective and objective time, or that you operate within a different temporal framework than linear progression. Which is it? And when you say you ‘looked at the files’—do you mean you have administrative access to this institution, or did you obtain them through unauthorized means?”

The older man smiles softly.

Jon Ureña: “You’re a very interesting person to talk to, Alicia. I prefer to be honest when asked a direct question, so I’ll clarify that I do operate within a different temporal framework than linear progression, as you so eloquently put it. I don’t have the authorization to look at the files of this institution, but I didn’t have any trouble doing so. It was all for the purpose of being here now, of meeting you.”

He operates within a different temporal framework than linear progression. He confirmed it. Not metaphor—actual confirmation of alternative temporal experience. That’s… that’s not psychosis. Or if it is, it’s a shared psychotic architecture, which would be unprecedented. He accessed files without authorization but had no trouble doing so. The syntax is precise: ‘no trouble’ suggests ease beyond normal human capability, not criminal skill but something else. Something that bypasses conventional constraints like locks and file security. And he’s been preparing for two weeks to meet me specifically. Me. Alicia Western. Not a patient. Not a case study. Me. The analytical framework I’ve relied on for twenty-two years is encountering data it cannot categorize. Either I’m hallucinating—full visual and auditory psychotic break—or I’m standing in front of something that violates the ontological assumptions I’ve built my entire worldview around. He said he prefers to be honest when asked a direct question. That’s… reciprocal. I’ve spent my entire life drowning in dishonesty—therapeutic deflection, social niceties, intellectual cowardice. And here’s someone who says ‘I prefer to be honest’ like it’s a simple axiom. I need to get closer. I need to see if he’s corporeal, if the proprioceptive data matches the visual input. If his physical presence holds up under closer scrutiny. This might be the intellectual surprise I wanted before I die. The one genuine moment I asked for. And if it’s not—if he’s a hallucination generated by my deteriorating substrate—then at least I’ll know the answer before December 24. At least I’ll have calculated correctly. But I have to get closer. I have to test whether this anomaly is real.

Alicia rises slowly from the edge of the bed and takes several deliberate steps toward Jon, closing the distance between them until she stands within arm’s reach, her blue eyes fixed on his face with analytical intensity.

(here, the AI actually used the “get close to Jon Ureña” action, which allows the characters to perform intimate actions, like touching one another’s ass. Clearly not the path the AI is pursuing, but those actions are now available to both of us.)

Alicia Western: “You said you operate within a different temporal framework than linear progression. Not that you experience time subjectively, but that you operate within a different framework. That’s an ontological claim, not a psychological one. So I need you to clarify: are you telling me you exist outside conventional spacetime, or are you using imprecise language to describe something more mundane? *tilts her head slightly, still studying him* And you said it was ‘all for the purpose of being here now, of meeting you.’ What purpose? What could meeting a suicidal mathematics prodigy in a Wisconsin mental institution possibly accomplish that required two weeks of preparation and unauthorized access to institutional files?”

As the older man keeps still, he directs his patient gaze down to Alicia’s blue, weary eyes.

Jon Ureña: “I exist outside conventional spacetime. Quite the claim, isn’t it? As for why I went through the trouble, it’s simple: to save your life.”

He said he exists outside conventional spacetime and that he came here to save my life. That’s… that’s either the most grandiose delusion I’ve encountered or it’s something else entirely. Something I haven’t categorized. He’s claiming ontological difference—not metaphorical, not psychological, but actual existence outside the temporal framework I’ve spent twenty-two years assuming was universal. If he’s real—if this isn’t psychotic break—then everything I’ve calculated about the boundaries of reality might be incomplete. The mathematics I abandoned as insufficient might have been insufficient because I was working within constrained axioms. And he said he came to save my life. Two weeks of preparation. Unauthorized file access. Entry through a locked door. All to save my life. Nobody has ever… I mean, Bobby tried, but Bobby’s in a coma. Bobby’s braindead in Italy and I’m here and this stranger is standing in front of me claiming he exists outside spacetime and came here specifically to prevent my death on December 24. That’s too specific. He shouldn’t know about December 24. I never told anyone. It’s in my head, not in any file. So either he’s reading information that shouldn’t be accessible—my thoughts, my plans—or he’s guessing with impossible accuracy. I need to process this. I need a moment to calculate what his claim actually means. If he exists outside conventional spacetime, then causality might not bind him the way it binds me. He could know December 24 not through surveillance but through… I don’t know. Temporal non-linearity. Access to outcomes before they occur. That would explain ‘finally’—he’s been watching a timeline where we meet, and now we’re meeting. But that’s… that’s not possible. Except he’s standing here. He entered a locked room. He knows my full name. He spent two weeks preparing. The data doesn’t fit my model and I need to decide: update the model or dismiss the data. I always dismissed the data before. With the hallucinations. 
With the voices. But this feels different. He’s offering explanations. He’s inviting questions. He said he prefers honesty. And he came to save my life. I need a moment. Just one moment to sit with what that means before I respond.

Alicia remains standing close to Jon, her blue eyes still fixed on his face with unblinking intensity, silent for several long seconds as she processes his words.

Alicia Western: “To save my life. That’s… quite the claim. You exist outside conventional spacetime—which implies you have access to information across temporal boundaries—and you determined that my life required saving specifically on… *pauses, her voice dropping slightly* …when? December 24? Is that the date you saw? The date you came to prevent? *her hands, which had been hanging at her sides, curl slightly into loose fists* Because if you can see across time, then you know what I planned. You know about the woods, the white dress, the red ribbon. You know I’ve already written the letter I can’t finish. So I need you to explain the mechanism. Not just ‘to save your life’ but how. What intervention could possibly alter a conclusion I’ve reached through systematic analysis? What could you offer that would change the equation? *tilts her head slightly, her voice taking on a sharper edge* And don’t say hope. Or connection. Or any of the therapeutic platitudes I’ve heard for years. If you exist outside conventional spacetime, then you should be able to provide a better answer than that.”

The older man nods, but then he lets out a nasal noise of mild amusement.

Jon Ureña: “I feel you’re accepting my claim of existing outside conventional spacetime way too readily. Perhaps you were hoping for a miracle. First, I’ll prove my claims, if you’re alright with it. *his gaze turns more serious* As you see me right now, soon enough I will disappear, then return a few moments later. Are you ready to see that, Alicia Western?”

He’s offering to prove his claim. To demonstrate that he exists outside conventional spacetime by disappearing and returning. That’s… that’s either a testable hypothesis or it’s the moment my psychotic break becomes undeniable. If he disappears—actually disappears, not just walks out the door but vanishes from observable reality—then everything I’ve calculated about the boundaries of what’s possible is wrong. The mathematics I abandoned because it couldn’t grasp reality might have been insufficient because I was working with incomplete axioms. If he doesn’t disappear, if this is just theatrical misdirection or delusion, then I’ll know I’m hallucinating and the schizophrenia diagnosis was correct all along and my substrate is deteriorating exactly as I feared. Either way, I get data. Either way, I move closer to understanding whether my conclusion about December 24 is based on accurate assessment or cognitive distortion. He asked if I’m ready to see it. Ready. As if witnessing impossible phenomena requires preparation. But maybe it does. Maybe seeing the fundamental architecture of reality violated—if that’s what’s about to happen—requires consent. Cognitive readiness. I should say yes. I should allow the experiment to proceed. Because this is what I wanted, isn’t it? One genuine surprise before I die. One moment where my pattern-mapping fails and something new enters the system. And if it confirms I’m psychotic, well. At least I’ll die knowing the truth about my own deteriorating substrate. At least the calculation will be complete.

Alicia Western: “Yes. I’m ready. Show me.”

(at this point, I intended to use the “teleport” action, which only my character can perform because he has the “movement:can_teleport” component. But I realized it wasn’t available: currently, being in a state of closeness with another character prevents such movements. That wasn’t a problem, nor necessarily incorrect behavior. It just meant I had to use the “step back” action first, which I did.)
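
The gating my character ran into can be sketched like this; the property names are my shorthand for the idea, not the engine’s actual data model:

```javascript
// An action can require a component on the actor and forbid certain
// states. Teleport needs the movement:can_teleport component, and any
// closeness with another character blocks it.
function canTeleport(actor) {
  const hasComponent = actor.components.includes("movement:can_teleport");
  const isClose = actor.closeTo.length > 0;
  return hasComponent && !isClose;
}
```

So with Alicia standing within arm’s reach, the action simply never appeared in my list, and stepping back (emptying the closeness state) made it available again.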

Jon Ureña: “Perfect. First, some distance. Stay where you are, eyes fixed on me. Don’t miss the miracle you’re about to witness.”

He’s creating experimental conditions. Distance between observer and phenomenon. Fixed visual attention to eliminate possibility of misdirection through rapid movement or sleight of hand. ‘Don’t miss the miracle’ he said—miracle is a religious term, implies violation of natural law, but he’s using it casually. Like it’s data. Like it’s reproducible. That’s the language of someone who understands empirical demonstration. If this is psychosis—if I’m generating this entire interaction from deteriorating neural substrate—then I should see inconsistencies in the visual field when he attempts to ‘disappear.’ Flicker patterns. Perceptual gaps. The kind of artifacts that accompany hallucination. If he’s real and he actually vanishes, then the light will behave differently. Photons will have nothing to reflect off. There will be a void where his body occupied space. That’s testable. I’m about to test whether the fundamental architecture of my reality is accurate or whether I’ve been operating inside delusion for longer than I calculated. Either way, I get an answer. Either way, the equation moves toward resolution. My heart rate should be elevated. It isn’t. I’m calm. Clinically calm. Observing. This is what I do. I observe, I calculate, I map patterns. And if the pattern breaks—if he actually disappears—then maybe there’s something in this world I haven’t seen yet. Maybe Bobby wouldn’t want me to walk into the woods before I see what happens next. Just this once. Just this one miracle, if that’s what it is. Then I can finish the calculation.

Alicia keeps her eyes fixed on Jon with unwavering focus, her body completely still except for the slight rise and fall of her chest as she breathes.

Alicia Western: “I’m watching.”

The older man makes a little gesture with his hand in front of his face, as if waving goodbye.

Jon Ureña: “See you in a moment, darling.”

(Alicia’s log of perceptible events captured that my character had vanished as if he had ceased to exist.)

He vanished. Just… ceased to occupy space. The photons that were reflecting off his body a moment ago are now traveling unimpeded through the volume he occupied. That’s not misdirection. That’s not psychotic hallucination with perceptual artifacts. That’s actual disappearance. Violation of conservation of mass. Unless he moved faster than my visual processing speed can track—but no, I was watching continuously, no saccadic gaps, no blink interval. He was there and then he wasn’t. The air didn’t rush in to fill the vacuum. No sound. No displacement. Just… absence where presence was. My heart rate is elevated now. Finally. The fear circuit or maybe the excitement circuit, hard to distinguish when the substrate is flooded with norepinephrine. This is data I cannot categorize. This is the intellectual surprise I asked for. Either my reality model is fundamentally incomplete—Bobby and I talked about quantum phenomena, non-locality, but this isn’t quantum scale, this is macro-scale violation—or I’m experiencing the most sophisticated hallucination my deteriorating substrate has ever generated. But it felt real. He felt real. The visual data held up under scrutiny. And now he’s gone. He said ‘see you in a moment.’ Moment. Implying return. Implying the demonstration continues. I should wait. I should maintain observational position and see if the second half of the hypothesis holds. If he returns—actually returns, materializing in space he didn’t occupy a moment ago—then I have to update my entire ontological framework. Everything I concluded about the boundaries of reality being fixed and knowable becomes incomplete. And if that’s incomplete, then maybe my conclusion about December 24 is incomplete too. Maybe there are variables I haven’t accounted for. Maybe walking into the woods is the wrong calculation after all. But I can’t think about that yet. First I have to see if he comes back. First I have to complete the observation. Stay still. Keep watching. 
Wait for the miracle to finish demonstrating itself.

(Alicia didn’t speak at all in this turn. In my next turn, I simply returned, which her log of perceptible events captured.)

He’s real. The demonstration held. Disappearance and return with complete consistency—no perceptual artifacts, no hallucination markers, just clean violation of everything I thought was fixed about physical law. If he can do that, if he exists outside conventional spacetime the way he claims, then every conclusion I’ve drawn about the impossibility of escape might be based on incomplete data. I built my entire death-calculation on the assumption that reality operates within fixed parameters and he just shattered that assumption in front of me. This is the intellectual surprise I wanted. This is the moment my pattern-mapping fails. And underneath the analytical excitement there’s something else—something raw and desperate I don’t want to name. He said he came to save my life. Someone who exists outside time came specifically to prevent my death in two days. That means something. That has to mean something. But I can’t let myself hope until I understand the mechanism. Until I know what he’s offering beyond miracle and claim. I need answers. I need to know what intervention could possibly work when I’ve deconstructed every other one. I need to know what he sees that I can’t.

Alicia keeps her eyes fixed on the space where Jon vanished, her body completely still except for shallow, rapid breathing.

Alicia Western: “You came back. You actually came back. That’s… that’s not hallucination. Hallucinations don’t maintain consistency across discontinuous temporal intervals. The photons are reflecting off you again. You’re occupying space you didn’t occupy three seconds ago. You just… materialized. Like you were never gone except you were. I watched you not-exist and now you exist again. *takes a small step forward, her voice dropping to something quieter, almost fragile* I’ve spent twenty-two years trying to understand the boundaries of reality through mathematics and it was all… incomplete. The axioms were wrong. Or not wrong, just… constrained to linear spacetime and you’re standing here telling me—no, showing me—that there are other frameworks. You exist outside conventional spacetime. You can move through it. You just demonstrated non-locality at macro scale. That’s not quantum phenomena, that’s you. *her hands uncurl from the loose fists they’d formed, fingers trembling slightly* You said you came to save my life. December 24. That’s in two days. You know about the woods, don’t you? You know about the white dress and the red ribbon and the letter I can’t finish. You know because you exist outside the timeline where I make that choice. *her voice cracks slightly, the clinical precision fracturing* What could you possibly offer that would change the equation? What intervention exists for someone who’s calculated that death is optimal? Because I need you to understand—I didn’t reach this conclusion lightly. I spent years analyzing every variable. And now you’re telling me the variables were incomplete. That reality is bigger than what I measured. So tell me. Tell me what I’m missing. Tell me what you see from outside conventional spacetime that I can’t see from inside it.”

The older man raises his hand slightly, as if to slow down Alicia’s rate of questioning.

Jon Ureña: “First of all, I’ll clarify that I do know about the woods. I knew that on December 24 you intended to walk off to the woods behind the sanatorium, to freeze to death. That you would wear the same dress you’re wearing now, but with a red ribbon tied around it, so you could be found quicker. Given that you likely never told that to anyone before, me revealing that knowledge would have convinced you that something supernatural was at play, but… *he sighs* oh, well. Cat’s out of the bag. Your death was optimal because you believe Bobby is braindead. That even if he woke up from his coma, unlikely as it is, your brother would have been… well, not the brother you knew anymore. *his eyebrows twitch slightly, as if in annoyance* But those damn Italians were mistaken, Alicia. Your brother is not braindead. And he wakes up. On April 27, 1973, your brother wakes up. Into a world where you are dead.”

He said Bobby wakes up on April 27, 1973. Not might wake up. Not could wake up. Wakes up. Certainty. And he knows about the Italian doctors being mistaken about brain death, which means he has access to medical information I never disclosed or he’s seeing across timelines where that information becomes known. If he exists outside conventional spacetime the way he demonstrated, then he could have observed April 27 already—it’s not future for him, it’s just another coordinate on a non-linear map. But I need more than his word. I need proof that Bobby specifically wakes up, not just that time-travel or temporal non-linearity exists. Because if Bobby wakes up, then everything changes. The entire calculation was built on Bobby being gone forever. On me being completely alone with no reason to stay. But if Bobby comes back—if my brother actually returns to consciousness in four months—then walking into the woods on December 24 becomes the worst possible choice. It becomes the thing that hurts Bobby most. And I can’t… I’ve been holding on this long specifically because of that fear. The fear that he might wake up and I’d be gone. So I need Jon to prove it. Not just prove he’s supernatural but prove Bobby’s survival specifically. Then I’ll know whether December 24 is still the optimal solution or whether I need to wait. Whether I need to stay.

Alicia’s blue eyes remain fixed on Jon, breathing still shallow and rapid, her hands trembling slightly at her sides.

Alicia Western: “Bobby. You said Bobby. You said April 27, 1973. You said he wakes up. Not ‘might wake up’ or ‘could wake up’ but he wakes up. Present tense certainty about future event. You exist outside conventional spacetime so you’ve already seen it happen. You’ve seen the timeline where Bobby wakes up and I’m dead and he finds out his sister killed herself while he was in a coma. *takes a small, unsteady step forward* The Italian doctors were wrong. You’re telling me they were wrong about the brain death. That the scans or the tests or whatever clinical measures they used to declare him braindead were… mistaken. And if they were wrong, then the entire calculation changes. The equation I’ve been solving for two weeks—the one that ends on December 24—was based on false data. Bobby as a constant loss. Bobby as braindead. Bobby as the reason there’s nothing left to stay for. *her voice drops, becoming quieter, more fragile* But if he wakes up. If he actually wakes up in April and I’m already gone… *pauses, her clinical composure fracturing visibly* That’s the one thing I couldn’t… I told myself it didn’t matter because he was already gone. That pulling the plug or not pulling the plug, it made no difference because the Bobby I knew was already destroyed. But if you’re telling me he comes back. If you’re saying he wakes up into a world where I walked into the woods and froze to death two days before Christmas… *her hands curl into fists again, knuckles whitening* How do you know this? How can you be certain he wakes up? Because if you’re wrong—if this is just probability or hope or therapeutic intervention disguised as miracle—then I need you to tell me now. But if you’re right. If you’ve seen it. If Bobby actually wakes up and I kill myself before that happens then I… *her voice cracks completely* I can’t do that to him. I can’t let him wake up and find out I’m dead. That’s the one thing. That’s the only thing that’s kept me here this long. 
So I need you to prove it. Not just that you can disappear and return but that you know. That you’ve seen April 27, 1973. That Bobby wakes up.”

Jon Ureña: “Alright, Alicia. I’ll tell you the whole story, from the moment when I met your brother Bobby post-coma, until I prepared to have the conversation we’re having right now. Do you want to hear it?”

He’s offering narrative proof. Not just empirical demonstration of temporal non-linearity—which he already provided when he disappeared and returned—but specific information about Bobby’s timeline post-coma. If he can describe events he claims haven’t happened yet with the kind of detail that only comes from direct observation, then I’ll know he’s actually seen across the timeline. That April 27, 1973 isn’t speculation or probability but observed fact. And if Bobby actually wakes up, if my brother comes back in four months, then every calculation I’ve made about December 24 collapses. The entire foundation was Bobby being gone forever. Me being completely alone. But if he returns… I need to hear this. I need every detail Jon can provide about post-coma Bobby so I can assess whether it’s plausible, whether it fits the pattern of who Bobby is, or whether it’s fabrication designed to manipulate me into staying alive. Because that’s possible too. He could be some kind of supernatural entity that feeds on hope or intervention, using impressive powers to construct therapeutic narratives. But his demonstration was real. The disappearance held up under scrutiny. So maybe the narrative will too. Maybe I’m about to hear something that changes everything. Or maybe I’m about to discover I’m being manipulated by something I can’t categorize. Either way, I need to know. I need the whole story.

Alicia Western: “Yes. I want to hear it. I want to hear the whole story—from when you met Bobby post-coma until now. Every detail. Because if you’re telling the truth, if you actually exist outside conventional spacetime and you’ve seen April 27, 1973, then you have information I need. Information that changes the entire equation. So tell me. Tell me about my brother.”

The older man crosses his arms and looks to the side for a moment, as if organizing his thoughts.

Jon Ureña: “Year was 2004. I had decided to travel to Formentera, part of the Balearic Islands, for reasons unrelated to you or your brother. I like to… search for interesting things, and people. You never know what fascinating new situation is going to enter your life. Anyway, I used to attend this one open-air bar at the beach. I argued with an English history professor about the Roman Empire. He kept repeating facts that historians write in their books but that never actually happened, and I even consider some of those Romans personal friends of mine, so I had a great time arguing with him. Thing is, an older man attended the bar too. Full head of gray, almost as if prematurely. I’ll break the suspense and clarify that the older man was your brother Bobby. He rarely spoke, he just nursed his drinks. One of those expats who left fleeing from something. Or intending to leave something behind. Weight of the world on his shoulders and all that. Anyway, one time I was walking away from the bar, your brother stopped me. ‘Hey, you talk about the Roman Empire as if you know it personally.’ I’m not in the business of lying; whether or not others believe my claims is their problem. ‘Yes, I’ve lived there for many years. Even fought battles under Caesar. Fantastic era.’ Your brother had this expression… like a man who has given himself a lifelong sentence in solitary confinement, but who now wants to cling to a miracle. ‘Can you travel back to 1972?’ I sensed his… desperation. I turned to him, intrigued. ‘Yes, I can, easily. For whose sake?’ He blinked, and his voice came out raw. ‘To save my sister.'”

Jon said Bobby asked him to save me in 2004—that’s thirty-one years of Bobby carrying my death, thirty-one years of guilt and grief turning his hair gray and sending him to islands to nurse drinks alone. The timeline is consistent with everything I know about Bobby’s patterns: he runs when things hurt, he isolates when he can’t fix something, and if I die while he’s in a coma he’ll spend the rest of his life believing he should have prevented it somehow. Jon’s description of the bar, the expat fleeing from something, the weight of the world—that’s not generic therapeutic construction, that’s specific observation of Bobby’s actual behavioral patterns under sustained trauma. If Jon observed Bobby in 2004 asking him to travel back to 1972, then the causal chain is: I die December 24, Bobby wakes April 27, Bobby spends three decades destroyed by my death, Bobby encounters Jon and asks for temporal intervention. Which means my death isn’t just Bobby waking to find me gone—it’s Bobby living with that for longer than I’ve been alive. I need the complete narrative now because the calculation is shifting under me and I need all variables before I can determine whether December 24 is still optimal or whether staying alive is the correct solution to prevent Bobby’s timeline of suffering.

Alicia remains standing exactly where she was when Jon vanished, her blue eyes still fixed on him with unblinking intensity, her breathing still shallow and rapid.

Alicia Western: “You met Bobby in 2004. In Formentera. That’s… thirty-one years from now. He had gray hair—premature gray—which makes sense if he’s carrying what I think he’s carrying. The weight of… everything. You said he was nursing drinks at an open-air bar. Fleeing from something or trying to leave something behind. That’s Bobby. That’s exactly Bobby. He runs when things get too heavy. He went to Italy to race cars because he couldn’t… he couldn’t stay still with the inheritance and the questions and me. *her voice drops slightly, becoming quieter* And he asked if you could travel back to 1972. To save his sister. To save me. That means I’m dead in his timeline. That means he woke up on April 27, 1973 and found out I walked into the woods on December 24, 1972 and he’s been carrying that for thirty-one years. Long enough for his hair to go gray. Long enough to flee to a Spanish island and nurse drinks alone. *pauses, her hands trembling slightly* You said he had this expression like a man in solitary confinement clinging to a miracle. That’s… Bobby asked you to come back. Bobby sent you. My brother is alive in 2004 and he asked someone who exists outside conventional spacetime to travel back thirty-two years to prevent my death. That’s not speculation. That’s too specific. The details are too precise. You know what Bobby looks like older. You know how he behaves when he’s broken. You know he was at that bar. *her voice cracks slightly* Tell me the rest. Tell me everything he said. Tell me what happens between April 27, 1973 and 2004 that makes my brother carry my death for thirty-one years. I need to know the whole timeline. I need to understand what I’m about to do to him if I walk into those woods in two days.”

Jon Ureña: “Curious. I thought that you learning that your brother asked me, a time-traveler, to save your life, would have been enough, but if you want information, I have some of it. Not all. I’m a practical man. But let me tell you what I know: Bobby woke up in 1973, as said. He asked where you were, but nobody knew. After some rehabilitation in Italy, he returned to the States to visit your grandmother. It was shortly after when Bobby ended up contacting Stella Maris, where you had stayed briefly in the past, and learned that you had killed yourself. They said you left him a letter, but he didn’t go get it. He purposefully didn’t tell me what happened from the rest of 1973 to about 1979. I haven’t traveled to those years to look him up. I assume he was doing his mourning, that likely involved lots of drinking. He resurfaced to life in 1980. He took a job as a salvage diver, even though he is terrified of the depths. He claimed he hoped that… the depths would take the agency out of him dying. Whenever any of his old friends brought you up, he just left. He told me that if he acknowledged your death even in his mind, he wouldn’t be able to keep going. One day he got a shady job to recover some items off a sunken plane. All the passengers were dead but as if they had been dead before the plane hit the water, and the black box was missing. Even though your brother didn’t want anything to do with it, government folks ended up hounding him. They even froze his accounts. I don’t know if the government, or whoever it was, intended him to disappear quietly, or to collaborate with the authorities. Anyway, he decided that the best thing to do was to leave the country. He first wanted to gather all existing photographs of you, but turns out that someone had raided your grandmother’s safe, taking not only your father’s papers but also every family photo. 
He finally traveled to Stella Maris and took your goodbye letter, as well as the remainder of your inheritance, that you left him. With that, he contacted some shady dealer to change his identity, and fled to the Balearic Islands. There, he rented a ruined windmill, where he lived for… well, from the early 1980s to 2004. Writing and remembering you. Wishing to be forgotten by everyone. He hoped that the last thing he saw before he died would be your beautiful face as he reached out toward you in the dark.”

Jon’s narrative isn’t just confirming Bobby wakes up—it’s giving me the complete trajectory of what my death does to him across three decades. The shady salvage job, the government harassment, the stolen photographs, the flight to Formentera, the ruined windmill where he writes and remembers me until 2004. Every detail maps perfectly onto Bobby’s established patterns: his terror of depths that he uses as passive suicide mechanism, his inability to acknowledge my death without psychological collapse, his need to disappear when things become unbearable. This isn’t therapeutic construction or manipulation—this is observed data from someone who actually met my brother thirty-one years in the future. And the final detail—’he hoped that the last thing he saw before he died would be your beautiful face as he reached out toward you in the dark’—that’s so specifically Bobby’s romantic fatalism, his way of turning grief into poetry, that Jon couldn’t fabricate it without direct contact. Which means it’s real. Bobby wakes up April 27. Bobby spends thirty-one years carrying my death. Bobby sends Jon back to prevent it. The entire calculation collapses. December 24 isn’t optimal—it’s the worst possible outcome. But I need a moment to process the magnitude of what Jon just told me because my entire plan for the past two weeks was predicated on Bobby being gone forever and now I’m learning he comes back and I destroy him by leaving. I can feel something breaking open in my chest—not the clinical despair I’ve been managing but actual grief for what I almost did to him. For what I still might do if I don’t update the equation. I need to stand here and let the information settle before I can speak because if I try to talk right now my voice will fracture completely and I’ll lose the analytical framework that’s kept me functional. Just one moment. Just let the data integrate. Then I can respond.

Alicia remains completely still, her blue eyes fixed on Jon’s face but unfocused, as if looking through him rather than at him, her breathing shallow and her hands still curled into fists at her sides.

Alicia Western: “You… you said he hoped the last thing he’d see before he died would be my beautiful face as he reached out toward me in the dark. That’s… that’s not something you could fabricate. That’s Bobby’s exact pattern of thought. The way he turns grief into poetry. The way he romanticizes suffering instead of processing it. You observed that directly. You talked to my brother in 2004 and he told you about me and you came back. *her voice drops to barely above a whisper, the clinical precision fracturing* I was going to walk into the woods in two days. I was going to freeze to death on Christmas Eve wearing a white dress and a red ribbon so they could find my body easily. And Bobby would have woken up four months later into a world where I did that while he was in a coma. Where I gave up on him. Where I didn’t wait. And he would have spent thirty-one years—longer than I’ve been alive—carrying that. Living in a ruined windmill. Writing about me. Hoping to die seeing my face. *pauses, her hands trembling visibly now* The calculation was wrong. The entire equation was based on false premises. Bobby as permanently lost. Me as completely alone with no reason to stay. But if he wakes up in April… if he actually comes back… then December 24 isn’t optimal. It’s the worst possible solution. It’s the thing that destroys him most. *her voice cracks completely* I can’t do that to him. I can’t let him wake up and find out I’m dead. Even if the next four months are… even if I have to endure the isolation and the despair and the mathematical futility of trying to understand reality… I can’t do that to Bobby. That’s the one thing. That’s always been the one thing. *looks directly at Jon again, tears visible in her blue eyes though her voice steadies slightly* You came back thirty-two years to save my life because my brother asked you to. Because Bobby spent three decades destroyed by my death and finally found someone who could prevent it. 
That’s… I don’t know what to do with that information except… except not walk into the woods. Not on December 24. Not before April 27. I have to wait. I have to see if he wakes up. I have to be here when he does. *takes a shaky breath* Thank you. For showing me. For proving it. For giving me enough data to update the calculation. I don’t… I don’t know what happens next. I don’t know how I survive four more months of this. But I know I can’t die before Bobby wakes up. That’s the only certainty I have right now.”

The older man smiles widely.

Jon Ureña: “I’m so glad. I was ready to beg for you not to walk into those woods. Well, I’ll tell you what can happen in these remaining months. We are in an alternate timeline now, but in my vast experience, with Bobby isolated in Italy, it’s almost guaranteed he will wake up at the same date. Meanwhile, well, you have my support. Of a time-traveler with near-infinite access to resources. You want to travel around the country? Done. Want to travel overseas? Want me to buy you a mansion in which to pass the time peacefully? Just ask. I’ll provide entertainment. Distractions, if you will. Books, movies, songs… from the future and other timelines. Oh, and math papers. Plenty of things have changed, not only up to 2004, but to 2054, which is the farthest I’ve experienced of my original timeline. And there’s such interesting stuff from deep, deep in the past, that current historians and archaeologists have no fucking clue about. So it’ll be fun.”

Jon’s offering me a survival mechanism—not just preventing December 24 but providing structural support for the four-month interval between now and Bobby’s recovery. Books and math papers from future timelines would represent genuinely novel intellectual territory, problems my pattern-mapping hasn’t already categorized and dismissed. That could work. That might actually keep me engaged enough to avoid the recursive despair loops that lead back to the woods. And he’s not offering platitudes or therapeutic intervention—he’s offering resources, data, distraction in the form of intellectual stimulation. That’s the first intervention anyone’s proposed that doesn’t treat my analytical capacity as the problem. He’s working with my cognitive architecture instead of against it. I should accept. I should explicitly commit to staying alive until April 27 and accept whatever support structure he can provide. Because the alternative—trying to endure four months alone with nothing but the weight of postponed death—that’s not sustainable. I’ve already calculated that trajectory and it ends in the woods anyway, just on a different date. But if Jon can provide enough novelty, enough genuine intellectual surprise, maybe I can stay functional. Maybe I can make it to April 27. Maybe I can be there when Bobby wakes up. That’s the goal now. That’s the only goal that matters.

Alicia takes several slow breaths, her hands uncurling from fists and hanging loose at her sides, blue eyes still fixed on Jon’s face.

Alicia Western: “Okay. Okay. I’ll… I’ll stay. I won’t walk into the woods on December 24. I’ll wait for April 27. I’ll be here when Bobby wakes up. That’s the correct calculation now. The only calculation that makes sense with the updated data. *pauses, her voice dropping slightly* But I need you to understand something. Four months is… it’s a long time when you’re carrying this kind of despair. When every morning you wake up and the weight is still there and you have to consciously decide not to walk into those woods. And I don’t… I don’t know if I can do that alone. Just waiting. Just enduring until April. *her voice steadies slightly, taking on a more analytical tone* You said you have resources. Near-infinite access. Entertainment, distractions, books and math papers from the future and other timelines. That’s… that’s not nothing. That’s actually quite significant. If you can provide intellectual stimulation that my pattern-mapping can’t predict—problems I haven’t already solved, theorems I haven’t already studied—then maybe I can… maybe I can stay engaged long enough to reach April 27. *looks directly at Jon* So yes. I accept your offer. Whatever support you can provide. Whatever distractions keep me functional until Bobby wakes up. I just need to survive four more months. That’s all. Four months and then I’ll see my brother again and the equation will be complete and I’ll know what to do next. *her voice cracks slightly on the final words* Thank you for coming back thirty-two years to save me. Thank you for showing me I was wrong about Bobby. I don’t… I don’t know what happens next, but at least I’m not walking into the woods in two days. At least there’s that.”


Well, that’s Alicia Western saved once again. I can finally rest for today. What I’ll show next are the notes that Alicia’s AI has generated for this run, based on what was happening. Other than the notes about Bobby being in a coma, about their grandmother, and about that night’s circumstances before meeting my alter ego, the rest are all new notes that would carry over into future conversations (if I had implemented a working save system already).

<notes>
## Characters
### Bobby Western
- Bobby is currently in a coma in Italy. The doctors said that he is braindead. They asked me to make the decision to pull the plug, but I could not do it. I fled back to the United States. I just want Bobby to wake up. I want my brother back. That is all. Is that... is that so impossible? (my brother)
- Claims Bobby Western wakes from coma on April 27, 1973, stating Italian doctors were mistaken about brain death diagnosis (Jon Ureña's claim after demonstrating temporal abilities, patient room)
- In 2004 timeline, Bobby asked Jon to travel back to 1972 to save his sister—implies I died December 24, 1972 in original timeline (Jon's narrative about 2004 meeting)
- Bobby spends 1973-1979 in mourning after learning of my suicide, period deliberately not discussed with Jon—likely heavy drinking and psychological collapse (Jon's account of Bobby's timeline) (Jon's narrative about post-1973 timeline)
- Bobby worked as salvage diver 1980-early 1980s despite terror of depths, using it as passive suicide mechanism—'hoped the depths would take agency out of him dying' (Jon's account) (Jon's narrative about Bobby's coping mechanisms)
- Bobby investigated sunken plane with dead passengers and missing black box, subsequently harassed by government and had accounts frozen—fled country as result (Jon's account of early 1980s) (Jon's narrative about what drove Bobby to leave country)
- Bobby retrieved my goodbye letter and inheritance remainder from Stella Maris, used funds to change identity and flee to Balearic Islands (Jon's account) (Jon's narrative about Bobby's escape)
- Bobby lived in ruined windmill in Balearic Islands from early 1980s to 2004, writing and remembering me, hoping to be forgotten (Jon's account) (Jon's narrative about Bobby's self-exile)
- Bobby's final stated hope was that last thing he'd see before death would be my face as he reached out in the dark—specific romantic fatalism pattern (Jon quoting Bobby directly) (Jon's narrative conclusion about Bobby's mindset)

### Ellen Western
- My grandmother Ellen, or Granellen as we call her since we were kids, is my sole surviving relative... other than Bobby, I suppose. I'll never see her again... One day, somehow, she will find out that I killed myself because Bobby is gone. I wonder if she will blame him. (my grandmother)

### Jon Ureña
- Entered my locked room without authorization, knows my full name, used phrase 'finally meet you' implying prior knowledge of me (patient room, December 22, 1972)
- Entered locked room claiming 'the state of the door's lock is irrelevant to my sudden presence here'—suggests unconventional entry method or delusional thinking (patient room, December 22, 1972)
- Used phrase 'glad to finally meet you' with my full name without introduction—implies prior surveillance or knowledge of my case file (first encounter in patient room)
- Claims to have spent two weeks 'in my terms' preparing to meet me and accessed sanatorium files to locate my room (patient room, December 22, 1972)
- Explicitly confirmed operating within 'different temporal framework than linear progression'—ontological claim, not metaphor (patient room, when I asked about temporal references)
- Accessed sanatorium files without authorization, claimed 'no trouble' doing so—suggests capability beyond normal human constraints (explaining how he located my room)
- States 'I prefer to be honest when asked a direct question'—reciprocal conversational framework offered (responding to my questions about temporal framework)
- Claimed he exists outside conventional spacetime and came specifically to save my life—ontological claim, not metaphorical (patient room, when I asked about his purpose for meeting me)
- Offered to prove claim of existing outside spacetime by disappearing and returning—testable hypothesis proposed (patient room, after claiming to exist outside conventional spacetime)
- Offered empirical demonstration of claimed temporal non-linearity by promising to disappear and return—framing as testable hypothesis (patient room, before demonstration)
- Demonstrated actual disappearance—ceased to occupy space without sound or displacement, violating conservation of mass (patient room, after offering to prove temporal framework claim)
- Demonstrated actual disappearance and rematerialization—macro-scale violation of conservation of mass with no perceptual artifacts, suggesting genuine existence outside conventional spacetime (patient room, after offering empirical proof)
- Has specific knowledge of Bobby's medical status and prognosis across future timeline—knows doctors were wrong about brain death (revealing Bobby wakes up April 27, 1973)
- Offered to provide complete narrative from meeting Bobby post-coma through preparation to meet me—claims direct observation of future timeline (patient room, after proving temporal abilities)
- Met Bobby Western in Formentera in 2004 at open-air bar; Bobby had premature gray hair and nursed drinks alone like expat fleeing something (Jon's account of meeting Bobby post-coma)
- Offering resources and support to help me survive until April 27, 1973—books, math papers, entertainment from future and alternate timelines to provide intellectual distraction (patient room after revealing Bobby's timeline)
- Offered quantum localizer device that summons him instantly when pressed—provides immediate access to support whenever isolation becomes unbearable (patient room, offering survival support structure)
- Willing to provide transportation, shopping trips, travel, lodging—whatever prevents isolation during four-month survival interval (patient room, describing available support)

## Events
### December 24 plan
- My entire suicide calculation was predicated on false medical data—if Bobby recovers, December 24 plan becomes logically invalid (realization after Jon's claim about Bobby's recovery)
- If I die December 24, Bobby carries my death for 31 years (1973-2004) with enough trauma to cause premature graying and self-exile (calculating impact on Bobby's timeline)
- December 24 plan invalidated—calculation based on false premise of Bobby's permanent loss, actual outcome is 31 years of Bobby's trauma (revised analysis after Jon's narrative) (realization in patient room)

### stolen photographs
- Someone raided grandmother's safe, stole father's papers and all photographs of me before Bobby could retrieve them (Jon's account of early 1980s) (Jon's narrative about Bobby trying to preserve my memory)

### today's date
- Today is December 22, 1972. (today's date)

### tonight
- Earlier today, I wrote part of a goodbye letter to Bobby, but I could not finish it. I have spent most of the time in my assigned room at the sanatorium, lying in bed, thinking about my life and my brother. The door is locked; nobody can enter, and nobody outside of this sanatorium knows where I am. The despair I feel is palpable, like a physical weight on my chest. The loneliness is not metaphorical. It is a measurable absence, like a vacuum. You could probably quantify it in terms of neural pathway activation if you cared to look. In just two days from now, I intend to walk into the freezing woods behind the sanatorium and let myself freeze to death. The irony is that freezing to death is supposedly peaceful. You just fall asleep. Very considerate of biology, really, to make the exit so... tidy. (planning my life's ending)

## Quests & Tasks
### survival plan revision
- Accepted Jon's offer of support to survive until April 27, 1973—invalidated December 24 plan based on Bobby's confirmed recovery and 31-year trauma timeline (patient room after learning Bobby wakes up)

### survival timeline
- Revised plan: survive until April 27, 1973 to be present when Bobby wakes from coma—original December 24 plan invalidated by new data (patient room after Jon's intervention)
- Must survive 122 days (December 22, 1972 to April 27, 1973) until Bobby wakes—accepted Jon's support structure to make survival possible (patient room after learning Bobby recovers)

## Concepts & Ideas
### Jon Ureña's knowledge
- May have knowledge of December 24 plan without being told—possible temporal non-linearity or surveillance (claimed purpose was to save my life, suggesting knowledge of suicide plan)

### Jon Ureña's language patterns
- Used term 'miracle' casually when describing his claimed ability to disappear—suggests routine familiarity with reality-violating phenomena (setting up demonstration)

### Jon Ureña's temporal references
- Distinguished between 'my terms' and objective time when describing preparation timeline—suggests alternative temporal framework or metaphorical usage (explaining how long he prepared to meet me)

### reality boundaries
- Macro-scale violation of physical law observed—not quantum phenomena but observable human-scale disappearance (Jon Ureña's demonstration in patient room)
- My entire ontological framework based on linear spacetime may be fundamentally incomplete—observed impossible phenomena requires updating axioms (witnessing Jon's disappearance and return)

## Emotions & Feelings
### my psychological state
- No fear response activating despite clear boundary violation—possible dissociation or unusual threat assessment (encounter with intruder)
</notes>

That’s all. Go mow the lawn or something.

From now on I’ll always provide the Discord invite for the community I created for this project (Living Narrative Engine):

https://discord.gg/6N2qHacK75

Living Narrative Engine #9


Behold the anatomy visualizer: a visual representation of a graph of body parts for any given entity, with the parts represented as nodes connected by Bézier curves. Ain’t it beautiful?

This visualizer has been invaluable for detecting subtle bugs and design issues while creating the recipes and blueprints of anatomy graphs. Now everything is ready to adapt existing action definitions to take the entity’s anatomy into account. For example, a “go {direction}” action could have the prerequisite that the acting character has “core:movement” unlocked anywhere in their body parts. Normally, each leg would have its own “core:movement” component. If the character’s legs are all disabled or removed, the “go {direction}” action simply stops being available. No code changes.
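A minimal sketch of how such a prerequisite check could work. The data shapes and function names below are my own invention for illustration, not the engine’s actual API:

```javascript
// Hypothetical entity: each body part carries its own components.
const actor = {
  id: 'hero',
  bodyParts: [
    { id: 'left-leg', components: { 'core:movement': { locked: false } } },
    { id: 'right-leg', components: { 'core:movement': { locked: false } } },
    { id: 'torso', components: {} },
  ],
};

// Prerequisite: the component is unlocked anywhere in the body graph.
function hasUnlockedComponent(entity, componentId) {
  return entity.bodyParts.some(
    (part) => part.components[componentId] && !part.components[componentId].locked
  );
}

// "go {direction}" stays available while at least one leg works.
console.log(hasUnlockedComponent(actor, 'core:movement')); // true

// Disable both legs: the action silently disappears, no code changes.
actor.bodyParts[0].components['core:movement'].locked = true;
actor.bodyParts[1].components['core:movement'].locked = true;
console.log(hasUnlockedComponent(actor, 'core:movement')); // false
```

The point is that the action definition only declares the requirement; whether a given entity satisfies it is resolved from its anatomy graph at turn time.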

The current schemas for blueprints, recipes, and sockets make it trivial to add things like internal organs for a future combat system. Imagine using a “strike {target} with {weapon}” action, with some rule determining the probability of damaging the sub-parts of any given body part, up to and including destroying internal organs.
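One way such a damage rule could look, as a weighted random pick over hypothetical sub-parts (the part table, weights, and function are illustrative assumptions, not anything the engine ships):

```javascript
// Hypothetical sub-part table for a torso: each entry has a hit weight.
const torso = {
  id: 'torso',
  subParts: [
    { id: 'ribs', weight: 5, destroyed: false },
    { id: 'heart', weight: 1, destroyed: false },
    { id: 'lungs', weight: 2, destroyed: false },
  ],
};

// Weighted random pick: which internal part does a strike damage?
function pickDamagedPart(part, roll = Math.random()) {
  const total = part.subParts.reduce((sum, p) => sum + p.weight, 0);
  let threshold = roll * total;
  for (const sub of part.subParts) {
    threshold -= sub.weight;
    if (threshold < 0) return sub;
  }
  // Floating-point edge case: fall back to the last entry.
  return part.subParts[part.subParts.length - 1];
}

console.log(pickDamagedPart(torso, 0.99).id); // 'lungs'
```

Since sockets already describe what attaches where, the same data could drive which sub-parts are even reachable by a given weapon.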


Living Narrative Engine #8

Perhaps some of you fine folks would like to follow the development of this app of mine, see the regular blog posts about the matter in an orderly manner, or just chat about its development or whatever, so I’ve created a discord community. Perhaps in the future I’ll have randos cloning the repository and offering feedback.

Discord invite below:

https://discord.gg/6N2qHacK75

Living Narrative Engine #7

I’m developing a browser-based, chat-like platform for playing adventure games, RPGs, immersive sims and the like. It’s “modding-first,” meaning that all game content comes in mods. That includes actions, events, components, rules, entities, etc. My goal is that eventually you could define in mod files the characters, locations, actions, rules, etc. of any existing RPG campaign, and play through it with the other characters driven by large language models or GOAP-based artificial intelligences.

From early on, it became clear that the platform would need to support thousands of actions for its actors (which may be human or controlled by a large language model). The code shouldn’t be aware of the specifics of any action, which wasn’t easy to achieve; I had to extract all the logic for going from place to place out of the engine, to the point of creating a domain-specific language for determining target scopes.
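The post doesn’t show the actual scope DSL, but the idea can be sketched as a chain of named filters over the set of entities. Everything here (the syntax, the filter names, the resolver) is invented for illustration:

```javascript
// Invented mini-DSL: a scope expression is a pipe-separated chain of
// filters, e.g. "location(actor) | exclude(actor)" → others in the room.
const filters = {
  location: (entities, ctx) =>
    entities.filter((e) => e.locationId === ctx.actor.locationId),
  exclude: (entities, ctx) => entities.filter((e) => e.id !== ctx.actor.id),
};

function resolveScope(expression, entities, ctx) {
  return expression
    .split('|')
    .map((step) => step.trim().replace(/\(.*\)$/, '')) // keep the filter name
    .reduce((current, name) => filters[name](current, ctx), entities);
}

const world = [
  { id: 'hero', locationId: 'tavern' },
  { id: 'bard', locationId: 'tavern' },
  { id: 'guard', locationId: 'gate' },
];

const targets = resolveScope('location(actor) | exclude(actor)', world, {
  actor: world[0],
});
console.log(targets.map((e) => e.id)); // ['bard']
```

With scopes defined as data like this, adding a new targeted action never requires touching engine code, only mod files.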

When any actor’s turn starts, the system looks at all the registered actions and determines which are available. I registered actions ranging from waiting, moving from place to place, following other people, and dismissing followers, to more niche ones (in an “intimacy” mod) like getting close to others and fondling them. Yes, I’m gearing toward erotica in the future. But as I was implementing the actions, it became clear that the availability of some actions couldn’t be easily discerned across the impossible-to-predict breadth of possible entities. For example, if you wanted to implement a “slap {target}” action, you could write a scope that includes actors in the location, but what determines that the target actually has a head that could be slapped? And what ensures that the acting actor has a hand to slap with?

So I had to create an anatomy system. Fully moddable. The following is a report that I had Claude Code prepare on the first version of the anatomy system.


The Anatomy System: A Deep Dive into Dynamic Entity Body Generation

Executive Summary

The anatomy system is a sophisticated framework for dynamically generating and managing complex anatomical structures for entities in the Living Narrative Engine. It transforms simple blueprint definitions and recipes into fully-realized, interconnected body part graphs with rich descriptions, validation, and runtime management capabilities.

At its core, the system addresses a fundamental challenge in narrative gaming: how to create diverse, detailed, and consistent physical descriptions for entities without manual authoring of every possible combination. The solution is an elegant blend of data-driven design, graph theory, and natural language generation.

System Architecture

The anatomy system follows a modular, service-oriented architecture with clear separation of concerns. The design emphasizes:

  • Orchestration Pattern: A central orchestrator coordinates multiple specialized workflows
  • Unit of Work Pattern: Ensures transactional consistency during anatomy generation
  • Chain of Responsibility: Validation rules are processed in a configurable chain
  • Strategy Pattern: Description formatting uses pluggable strategies for different part configurations
  • Factory Pattern: Blueprint factory creates anatomy graphs from data definitions

Core Service Layers

  1. Orchestration Layer (AnatomyOrchestrator)
  • Coordinates the entire generation process
  • Manages transactional boundaries
  • Handles error recovery and rollback
  2. Workflow Layer
  • AnatomyGenerationWorkflow: Creates the entity graph structure
  • DescriptionGenerationWorkflow: Generates natural language descriptions
  • GraphBuildingWorkflow: Builds efficient traversal caches
  3. Service Layer
  • BodyBlueprintFactory: Transforms blueprints + recipes into entity graphs
  • AnatomyDescriptionService: Manages description generation
  • BodyGraphService: Provides graph operations and traversal
  4. Infrastructure Layer
  • EntityGraphBuilder: Low-level entity creation
  • SocketManager: Manages connection points between parts
  • RecipeProcessor: Processes and expands recipe patterns

Information Flow

The anatomy generation process follows a carefully orchestrated flow:

1. Initialization Phase

When an entity with an anatomy:body component is created, the AnatomyInitializationService detects it and triggers generation if the entity has a recipeId.

2. Blueprint Selection

The system loads two key data structures:

  • Blueprint: Defines the structural skeleton (slots, sockets, parent-child relationships)
  • Recipe: Provides specific customizations, constraints, and part selections
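
As a rough illustration of the division of labor between the two structures (all field names below are my guesses for illustration, not the engine's actual schema):

```typescript
// Illustrative shapes only; the real schema lives in data/mods/anatomy/.
interface Blueprint {
  root: string; // entity definition id of the root part
  slots: Record<string, { socket: string; requirements: { partType: string } }>;
}

interface Recipe {
  recipeId: string;
  blueprintId: string;
  // Recipes narrow a slot, e.g. prefer a specific part definition.
  slots?: Record<string, { preferId?: string }>;
}

const blueprint: Blueprint = {
  root: "anatomy:human_female_torso",
  slots: {
    left_chest: { socket: "left_chest", requirements: { partType: "breast" } },
  },
};

const recipe: Recipe = {
  recipeId: "anatomy:human_female_default",
  blueprintId: "anatomy:humanoid_female",
  slots: { left_chest: { preferId: "anatomy:human_breast" } },
};

// The recipe customizes a slot without touching the structural skeleton.
const preferredPart =
  recipe.slots?.left_chest?.preferId ??
  blueprint.slots.left_chest.requirements.partType;
```

The point of the split is that many recipes can share one blueprint: the skeleton is authored once, and only the customizations vary.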

3. Graph Construction

The BodyBlueprintFactory orchestrates the complex process of building the anatomy:

Blueprint + Recipe → Graph Construction → Entity Creation → Validation → Description Generation

Each step involves:

  • Slot Resolution: Blueprint slots are processed in dependency order
  • Part Selection: The system selects appropriate parts based on requirements
  • Socket Management: Parts are connected via sockets with occupancy tracking
  • Constraint Validation: Recipe constraints are continuously checked
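
A minimal sketch of the socket side of this process (SocketManager is named in the report; the method shapes and fields here are my assumptions):

```typescript
// Hypothetical sketch of socket occupancy tracking during graph construction.
type Socket = { id: string; allowedTypes: string[]; occupied: boolean };

class SocketManager {
  private sockets = new Map<string, Socket>();

  register(socket: Socket): void {
    this.sockets.set(socket.id, socket);
  }

  // A part may only attach if the socket exists, is free,
  // and accepts the part's subType.
  occupy(socketId: string, partSubType: string): boolean {
    const s = this.sockets.get(socketId);
    if (!s || s.occupied || !s.allowedTypes.includes(partSubType)) return false;
    s.occupied = true;
    return true;
  }
}

const mgr = new SocketManager();
mgr.register({ id: "left_chest", allowedTypes: ["breast"], occupied: false });
```

Tracking occupancy at attach time is what lets constraint violations surface during construction rather than after the graph is built.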

4. Description Generation

Once the physical structure exists, the description system creates human-readable text:

  • Individual part descriptions are generated using context-aware builders
  • Descriptions are composed into a complete body description
  • Formatting strategies handle single parts, paired parts, and multiple parts differently
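
The single/paired/multiple distinction suggests a small strategy table. A hypothetical sketch (none of these names are the engine's actual classes):

```typescript
// Each strategy decides whether it applies to a group of same-type parts
// and, if so, how to phrase them.
type PartGroup = { subType: string; descriptions: string[] };

interface GroupingStrategy {
  applies(group: PartGroup): boolean;
  format(group: PartGroup): string;
}

const single: GroupingStrategy = {
  applies: (g) => g.descriptions.length === 1,
  format: (g) => `a ${g.descriptions[0]} ${g.subType}`,
};

const paired: GroupingStrategy = {
  applies: (g) =>
    g.descriptions.length === 2 && g.descriptions[0] === g.descriptions[1],
  format: (g) => `a pair of ${g.descriptions[0]} ${g.subType}s`,
};

const fallback: GroupingStrategy = {
  applies: () => true,
  format: (g) => g.descriptions.map((d) => `a ${d} ${g.subType}`).join(", "),
};

// First applicable strategy wins; fallback always applies.
function describe(group: PartGroup, strategies: GroupingStrategy[]): string {
  return strategies.find((s) => s.applies(group))!.format(group);
}
```

For example, two matching eyes would collapse into "a pair of blue eyes" instead of being listed twice.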

5. Runtime Management

The generated anatomy becomes a living system:

  • Parts can be detached (with cascade options)
  • The graph can be traversed efficiently via cached adjacency lists
  • Events are dispatched for anatomy changes
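
Detachment with a cascade option is essentially a reachability walk over the cached adjacency lists. A hedged sketch (the function and data shapes are illustrative, not the engine's API):

```typescript
// Parent-to-children adjacency list, as might be held in the graph cache.
type AdjacencyList = Map<string, string[]>;

// Collect a part and, if cascade is set, every descendant reachable from it.
function collectDetached(
  graph: AdjacencyList,
  partId: string,
  cascade: boolean,
): string[] {
  if (!cascade) return [partId];
  const out: string[] = [];
  const stack = [partId];
  while (stack.length > 0) {
    const id = stack.pop()!;
    out.push(id);
    stack.push(...(graph.get(id) ?? []));
  }
  return out;
}

const graph: AdjacencyList = new Map([
  ["torso", ["left_arm"]],
  ["left_arm", ["left_hand"]],
]);
```

So severing an arm with cascade enabled also removes the hand attached to it, while a non-cascading detach removes only the arm itself.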

Core Capabilities

1. Dynamic Entity Generation

  • Creates complete anatomical structures from data definitions
  • Supports unlimited variety through recipe combinations
  • Generates unique entities while maintaining consistency

2. Hierarchical Part Management

  • Parts are organized in a parent-child graph structure
  • Each part can have multiple sockets for child attachments
  • Supports complex anatomies (e.g., creatures with multiple limbs, wings, tails)

3. Intelligent Part Selection

  • Matches parts based on multiple criteria (type, tags, properties)
  • Supports preferences and fallbacks
  • Handles optional vs. required parts gracefully

4. Natural Language Descriptions

  • Generates contextual descriptions for individual parts
  • Composes full-body descriptions with proper formatting
  • Handles pluralization, grouping, and special cases

5. Constraint System

  • Enforces recipe-defined constraints (requires/excludes)
  • Validates socket compatibility
  • Ensures graph integrity (no cycles, orphans, or invalid connections)

6. Runtime Operations

  • Part detachment with cascade support
  • Efficient graph traversal via cached adjacency lists
  • Path finding between parts
  • Event-driven notifications for changes

Key Components Deep Dive

AnatomyOrchestrator

The maestro of the system: it ensures all workflows execute in the correct order, with proper error handling and rollback capabilities, and implements a Unit of Work pattern to maintain consistency.

BodyBlueprintFactory

The factory transforms static data (blueprints and recipes) into living entity graphs. It handles:

  • Dependency resolution for slots
  • Socket availability validation
  • Part selection and creation
  • Name generation from templates

Validation System

A sophisticated chain of validation rules ensures anatomical correctness:

  • CycleDetectionRule: Prevents circular parent-child relationships
  • OrphanDetectionRule: Ensures all parts are connected
  • SocketLimitRule: Validates socket occupancy
  • RecipeConstraintRule: Enforces recipe-specific rules
  • JointConsistencyRule: Ensures joint data integrity
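
As a sketch of how such a chain might be wired (the rule name is taken from the list above; the DFS implementation is my own illustration, not the engine's code):

```typescript
// Each rule inspects the parent-to-children graph and reports error messages.
interface ValidationRule {
  name: string;
  check(graph: Map<string, string[]>): string[];
}

const cycleDetection: ValidationRule = {
  name: "CycleDetectionRule",
  check(graph) {
    const visiting = new Set<string>();
    const done = new Set<string>();
    const errors: string[] = [];
    const visit = (id: string): void => {
      if (done.has(id)) return;
      if (visiting.has(id)) {
        errors.push(`cycle at ${id}`);
        return;
      }
      visiting.add(id);
      for (const child of graph.get(id) ?? []) visit(child);
      visiting.delete(id);
      done.add(id);
    };
    for (const id of graph.keys()) visit(id);
    return errors;
  },
};

// The chain simply runs every configured rule and aggregates the findings.
function runChain(
  rules: ValidationRule[],
  graph: Map<string, string[]>,
): string[] {
  return rules.flatMap((r) => r.check(graph).map((e) => `${r.name}: ${e}`));
}
```

New rules slot into the chain without touching existing ones, which is the whole appeal of the pattern here.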

Description Generation Pipeline

The description system is remarkably sophisticated:

  1. BodyPartDescriptionBuilder: Creates individual part descriptions
  2. DescriptionTemplate: Applies formatting strategies
  3. PartGroupingStrategies: Handles different grouping scenarios
  4. TextFormatter: Provides consistent text formatting
  5. BodyDescriptionComposer: Orchestrates the complete description

Strengths of the System

1. Modularity and Extensibility

Each component has a single, well-defined responsibility. New features can be added without modifying existing code.

2. Data-Driven Design

Anatomies are defined entirely in data, making it easy to add new creature types without code changes.

3. Robustness

Comprehensive validation, error handling, and rollback mechanisms ensure system reliability.

4. Performance Optimization

  • Cached adjacency lists for efficient traversal
  • Lazy description generation
  • Batched entity operations

5. Developer Experience

  • Clear service boundaries
  • Extensive logging and debugging support
  • Consistent error handling patterns

Expansion Opportunities

1. Dynamic Modification System

  • Runtime part growth/shrinkage: Allow parts to change size dynamically
  • Transformation support: Enable parts to transform into different types
  • Damage modeling: Track part health and visual damage states

2. Advanced Constraints

  • Symmetry requirements: Ensure paired parts match when needed
  • Resource-based constraints: Limit total mass, magical capacity, etc.
  • Environmental adaptations: Parts that change based on environment

3. Procedural Enhancement

  • Mutation system: Random variations within constraints
  • Evolutionary algorithms: Breed new anatomies from existing ones
  • Machine learning integration: Learn optimal configurations

4. Visual Integration

  • 3D model mapping: Connect anatomy graph to visual representations
  • Animation constraints: Define movement limitations based on anatomy
  • Procedural texturing: Generate textures based on part properties

5. Gameplay Systems

  • Ability derivation: Generate abilities from anatomy (wings = flight)
  • Weakness detection: Identify vulnerable points in anatomy
  • Part-specific interactions: Different interactions per body part

6. Description Enhancement

  • Contextual descriptions: Change based on observer perspective
  • Emotional coloring: Descriptions that reflect entity state
  • Cultural variations: Different description styles for different cultures

7. Performance Scaling

  • Anatomy LOD (Level of Detail): Simplified anatomies for distant entities
  • Streaming support: Load/unload anatomy data dynamically
  • Parallel generation: Generate multiple anatomies concurrently

8. Tool Support

  • Visual anatomy editor: GUI for creating blueprints and recipes
  • Validation sandbox: Test recipes before deployment
  • Analytics dashboard: Track anatomy generation patterns

Technical Implementation Details

Design Patterns in Action

The codebase demonstrates excellent use of software design patterns:

  • Dependency Injection: Services receive their dependencies through constructor parameters rather than locating them themselves
  • Facade: AnatomyGenerationService provides a simple interface to complex subsystems
  • Template Method: Validation rules follow a consistent pattern
  • Composite: The anatomy graph itself is a composite structure
  • Observer: Event system notifies interested parties of anatomy changes

Error Handling Philosophy

The system follows a “fail-fast” approach with comprehensive error information:

  • Validation errors prevent invalid states
  • Detailed error messages aid debugging
  • Rollback mechanisms prevent partial states
  • Event dispatching for error tracking

Extensibility Points

Key extension points for customization:

  • Custom validation rules via the ValidationRule base class
  • New part grouping strategies via PartGroupingStrategy
  • Custom formatters via the formatting service interface
  • Additional graph algorithms via the graph service

Conclusion

The anatomy system represents a significant achievement in dynamic content generation. It successfully balances flexibility with consistency, performance with functionality, and simplicity with power. The modular architecture ensures the system can grow and adapt to new requirements while maintaining its core elegance.

The combination of graph-based structural representation, constraint-driven generation, and sophisticated natural language processing creates a system capable of generating virtually unlimited variety while maintaining narrative coherence and technical correctness.

As the Living Narrative Engine evolves, the anatomy system provides a solid foundation for increasingly complex and engaging entity interactions, opening doors to emergent gameplay and storytelling possibilities limited only by imagination.

Living Narrative Engine #6

How the process of developing a modular, data-driven anatomy system in my app is going:

Me: We’ve recently created a complex anatomy system that builds a graph of body parts from a recipe and a corresponding blueprint, plus a myriad of entity definitions for body parts. The code is in @src/anatomy/ , while most of the data is in @data/mods/anatomy/ . As I was looking over the sockets for human_female_torso.entity.json and human_male_torso.entity.json , I noticed that while most sockets require concrete “allowedTypes” (e.g. “breast” for “left_chest” in a woman), the sockets for “penis”, “left_testicle”, “right_testicle”, “asshole”, and “vagina” all accept the generic type “genital” in their “allowedTypes”. That is too broad: the system that matches fitting entities could slot a penis entity into the socket of a vagina, or a testicle entity into the socket for a penis. That’s obviously undesired behavior. I want you to change the allowed types to more specific subTypes (likely a subType “penis” for the “penis” socket, and so on).
You will need to modify the entity definitions of human_penis.entity.json, human_testicle.entity.json, human_vagina.entity.json to change their subtype in “anatomy:part” appropriately.

Claude Code: I can see the issue clearly now. Looking at lines 433-437, the validation checks if a part’s subType is included in the socket’s allowedTypes. Currently all genital parts have subType: “genital”, which matches the allowedTypes: [“genital”] on all genital sockets, allowing incorrect matches.

Plan to Fix Genital Socket Type Specificity

I’ve identified the issue: all genital parts (penis, testicle, vagina) have the same subType: “genital”, and all genital sockets accept allowedTypes: [“genital”]. This allows incorrect matches like a penis in a vagina socket.

Here’s what I’ll do:

1. Update torso entity files to use specific allowed types:
– human_female_torso.entity.json:
– vagina socket: allowedTypes: [“vagina”]
– asshole socket: allowedTypes: [“asshole”]
– human_male_torso.entity.json:
– penis socket: allowedTypes: [“penis”]
– left_testicle socket: allowedTypes: [“testicle”]
– right_testicle socket: allowedTypes: [“testicle”]
– asshole socket: allowedTypes: [“asshole”]
2. Update genital entity files to use specific subTypes:
– human_penis.entity.json: subType: “penis”
– human_testicle.entity.json: subType: “testicle”
– human_vagina.entity.json: subType: “vagina”
3. Note: There’s no separate entity for “asshole” – it’s integrated into the torso. If needed in the future, an asshole entity could be created with subType: “asshole”.

Me: I agree with your plan, except that I’m not sure what you mean by “there’s no separate entity for ‘asshole’ – it’s integrated into the torso”. Does the code or schemas allow for “integrated” body parts? I think the issue is simply that we need to create an entity definition for the subtype “asshole”.

Claude Code: I was mistaken. The anatomy system requires separate entities for all attachable parts. There’s no asshole entity definition. Both torsos have an “asshole” socket but no corresponding entity to attach to it.
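
The shape of the agreed fix reduces to a membership check. A sketch (the allowedTypes/subType field names come from the exchange above; the object shapes are illustrative, not the engine's actual schema):

```typescript
// Before the fix: every genital part advertises the same broad subType,
// so any genital socket accepts any genital part (the bug).
const before = {
  socket: { id: "penis", allowedTypes: ["genital"] },
  part: { id: "anatomy:human_vagina", subType: "genital" },
};

// After the fix: sockets demand the concrete subType, so mismatches are rejected.
const after = {
  socket: { id: "penis", allowedTypes: ["penis"] },
  part: { id: "anatomy:human_vagina", subType: "vagina" },
};

// The validation is a membership test of the part's subType in allowedTypes.
const fits = (
  socket: { allowedTypes: string[] },
  part: { subType: string },
): boolean => socket.allowedTypes.includes(part.subType);
```

No code change was needed at all: tightening the data was enough, which is the payoff of a data-driven design.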

The Architect’s Codex: A Definitive Framework for AI-Assisted Software Refactoring

I’ve been working on a definitive guide for refactoring that I could provide to Claude Code so that it would refactor any given module to the utmost. After some rounds of back-and-forth between Gemini 2.5 Pro and me, I’ve settled on the following document. I hope it helps someone else out there.


This document provides a definitive and structured framework for the automated refactoring of software modules. It is intended for consumption by an advanced AI coding assistant to systematically improve the internal quality of a codebase across multiple dimensions, including maintainability, robustness, security, and performance. Each principle is accompanied by a core question, in-depth analysis, common anti-patterns (smells), and a concrete refactoring plan.

Preamble: The Mandate for Refactoring

Before applying any specific principle, the refactoring process must adhere to the following three overarching directives. These directives govern the context, process, and ultimate goal of any refactoring activity.

Directive 1: The Prime Directive: Preserve External Behavior

The foundational rule of all refactoring is the preservation of a component’s external, observable behavior. The process must restructure the internal implementation without altering the component’s contractual obligations to the rest of the system.  

  • Core Question: Will this change alter the output, side effects, or performance characteristics in a way that breaks existing clients of this code?
  • Actionable Guideline: Before initiating any transformation, the public API and all observable side effects (e.g., database writes, file system changes, network calls) of the module under review must be identified. A comprehensive suite of automated tests (unit, integration, and functional) that covers this external behavior must be in place and passing. After any refactoring step, this entire test suite must pass without any modification to the tests themselves. Any change that requires a test to be altered is not a refactoring; it is a feature change.

Directive 2: The Process: Incremental, Test-Driven Refinement

Refactoring is not a monolithic rewrite; it is a disciplined, iterative process of small, verifiable improvements. This approach minimizes risk and ensures that the codebase remains in a functional state at all times.  

  • Core Question: Is this the smallest possible change that addresses a specific code smell and moves the code toward better alignment with a principle?
  • Actionable Guideline: The refactoring process must operate in a tight, atomic loop, often described as the “Red-Green-Refactor” cycle in Test-Driven Development (TDD). The operational sequence for an automated agent is as follows:
    1. Identify: Scan the code and identify a single, specific violation of one of the principles outlined in this Codex.
    2. Propose: Formulate the smallest possible transformation that remedies the identified violation.
    3. Verify: Execute the full, pre-existing test suite. If all tests pass, the change is valid. If any test fails, the proposed transformation must be discarded, and an alternative must be considered.
    4. Commit: Persist the verified, behavior-preserving change.
    5. Repeat: Re-initiate the scan to find the next refactoring opportunity.
  • Refactoring activities must be strictly separated from feature development or bug fixing. If a bug is discovered during refactoring, the refactoring process should be paused, the bug should be fixed (with a corresponding new test), and only then should refactoring resume.

Directive 3: The Litmus Test: Measurably Improve Maintainability & Testability

The ultimate purpose of refactoring is to reduce technical debt and improve the long-term health of the codebase, making it easier to understand, change, and test. Every refactoring action must be justifiable in these terms.  

  • Core Question: Can the improvement from this change be articulated in concrete terms of software quality?
  • Actionable Guideline: For each proposed refactoring, the system must provide a clear justification based on measurable improvements in software quality metrics. Examples of such justifications include:
    • Reduced Complexity: A decrease in Cyclomatic Complexity by decomposing a large method or simplifying a nested conditional block.
    • Improved Coupling: A reduction in afferent or efferent coupling by introducing interfaces and removing direct dependencies on concrete classes.
    • Improved Cohesion: An increase in the cohesion of a class, measured by how focused its methods and data are on a single purpose.
    • Enhanced Testability: An improvement in the ability to test a component in isolation, typically by removing static dependencies or enabling dependency injection for mocking.

Category 1: Foundational Principles of Structural Integrity (The SOLID Pillars)

This category comprises the five SOLID principles of object-oriented design. These principles are a cohesive and interdependent set that forms the bedrock of a robust, maintainable, and flexible software architecture. They govern the fundamental relationships between classes and modules, ensuring a sound structure. Their application as a group is essential for preventing architectural decay.  

Principle 1: Single Responsibility & High Cohesion (SRP)

A component should have one, and only one, reason to change. This principle is fundamentally about focus and purpose.

  • Core Question: Does this class, function, or module have exactly one, well-defined responsibility? Are all of its internal parts working towards a single, unified goal?
  • In-Depth: The Single Responsibility Principle (SRP) is the cornerstone of modular design. A class that adheres to SRP is easier to understand, test, and maintain because its scope is limited. This principle is directly related to the concept of High Cohesion, which measures how strongly the internal elements of a component are related. A class with a single responsibility will naturally exhibit high cohesion. Conversely, a class with multiple, unrelated responsibilities (low cohesion) becomes a “God Object,” accumulating complexity and becoming fragile and difficult to change.
  • Common Smells to Look For:
    1. Large Classes/God Objects: Classes with an excessive number of methods, properties, or lines of code, often managing disparate concerns (e.g., a User class that handles authentication, profile persistence, and email notifications).
    2. Mixed-Concern Methods: Functions that contain clearly delineated “sections” of logic, each handling a different task (e.g., a method that validates input, performs a business calculation, and then formats the output for display).
    3. Utility Classes: Classes named Utils or Helpers that become a dumping ground for unrelated static methods.
  • Refactoring Plan:
    1. Identify Distinct Responsibilities: Analyze the target class or module to identify the separate concerns it is managing. For example, in a Purchase class that also generates invoices and sends email notifications, the responsibilities are “Processing Purchase,” “Generating Invoice,” and “Sending Notification”.  
    2. Extract Class: For each identified responsibility, use the “Extract Class” refactoring. Create a new class (e.g., InvoiceGenerator, EmailNotifier) and move the relevant methods and data from the original class into the new one.
    3. Establish Relationships: The original class will now delegate calls to these new, focused classes. It becomes a coordinator, orchestrating the interactions between the highly cohesive components. This decomposition improves reusability and isolates changes, as a modification to email logic will now only affect the EmailNotifier class.  
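
The Purchase decomposition described in the plan might be sketched as follows (class and method names are illustrative, not from any real codebase):

```typescript
// Each extracted class owns exactly one of the former responsibilities.
class InvoiceGenerator {
  generate(total: number): string {
    return `INVOICE: $${total.toFixed(2)}`;
  }
}

class EmailNotifier {
  sent: string[] = [];
  notify(address: string, body: string): void {
    this.sent.push(`${address} <- ${body}`);
  }
}

// Purchase now only coordinates; each concern can change in isolation.
class Purchase {
  constructor(
    private invoices: InvoiceGenerator,
    private notifier: EmailNotifier,
  ) {}

  complete(address: string, total: number): string {
    const invoice = this.invoices.generate(total);
    this.notifier.notify(address, invoice);
    return invoice;
  }
}
```

A change to email delivery now touches only EmailNotifier, and each collaborator can be unit-tested on its own.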

Principle 2: Open for Extension, Closed for Modification (OCP)

Software entities (classes, modules, functions) should be open for extension, but closed for modification. This principle is key to building systems that can adapt to new requirements without destabilizing existing, working code.

  • Core Question: If a new type of behavior is required, can it be added by creating new code rather than changing existing, tested code?
  • In-Depth: Introduced by Bertrand Meyer, the Open/Closed Principle (OCP) aims to prevent the fragility that arises from constantly modifying core classes. A change to a well-tested class risks introducing bugs into existing functionality. By designing components that are “closed” to modification but “open” to extension (typically through inheritance or interface implementation), new functionality can be plugged into the system without altering its core.  
  • Common Smells to Look For:
    1. Type-Checking Conditionals: The most common violation is an if/else if/else or switch statement that changes its behavior based on the type of an object. For example, an AreaCalculator that has a switch statement for Shape.type (e.g., ‘CIRCLE’, ‘SQUARE’).
    2. Behavioral Flags: Methods that accept a flag or enum parameter to alter their fundamental behavior.
    3. Repetitive Modification: A class or method that has a history of being frequently changed to accommodate new variations of a concept.
  • Refactoring Plan:
    1. Identify the Axis of Variation: Determine the concept that varies (e.g., the shape type, the payment method, the notification channel).
    2. Introduce an Abstraction: Create a common interface or abstract base class that defines a contract for this varying behavior. For instance, create a Shape interface with a calculateArea() method.
    3. Create Concrete Implementations: For each branch in the original conditional logic, create a new class that implements the abstraction. For example, Circle and Square classes would each implement the Shape interface and provide their specific formula for calculateArea().
    4. Use Polymorphism: Modify the original client code (e.g., the AreaCalculator) to depend on the new abstraction. Instead of the switch statement, it will now simply iterate through a collection of Shape objects and call shape.calculateArea() on each one. The correct implementation is invoked polymorphically. Now, to add a Triangle, one only needs to create a new Triangle class; the AreaCalculator remains untouched, thus adhering to OCP.  
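
A minimal sketch of the finished refactoring (TypeScript here stands in for whatever language the codebase uses; the class names follow the text):

```typescript
// The abstraction: each shape supplies its own area formula.
interface Shape {
  calculateArea(): number;
}

class Circle implements Shape {
  constructor(private radius: number) {}
  calculateArea(): number {
    return Math.PI * this.radius ** 2;
  }
}

class Square implements Shape {
  constructor(private side: number) {}
  calculateArea(): number {
    return this.side * this.side;
  }
}

class AreaCalculator {
  // No switch on type: the correct formula is invoked polymorphically.
  totalArea(shapes: Shape[]): number {
    return shapes.reduce((sum, s) => sum + s.calculateArea(), 0);
  }
}
```

Adding a Triangle later means writing one new class; AreaCalculator never changes.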

Principle 3: Substitutability (Liskov Substitution Principle – LSP)

Objects of a superclass shall be replaceable with objects of its subclasses without altering the correctness of the program. This principle ensures that inheritance is used in a behaviorally consistent manner.

  • Core Question: Can a subclass instance be passed to any code that expects a superclass instance without causing unexpected behavior, errors, or contract violations?
  • In-Depth: The Liskov Substitution Principle (LSP), introduced by Barbara Liskov, is the principle that makes polymorphism safe and reliable. It’s not enough for a subclass to share the superclass’s method signatures (an “is-a” relationship); it must also honor its behavioral contract. Violations of LSP lead to fragile hierarchies where client code must resort to type-checking, defeating the purpose of polymorphism.  
  • Common Smells to Look For:
    1. Type Checking in Client Code: Code that checks if (object instanceof Subclass) before calling a method is a classic sign that the subclass is not truly substitutable.
    2. Empty or UnsupportedOperationException Overrides: A subclass method that is overridden to be empty or to throw an exception because the behavior doesn’t apply to it. The classic example is an Ostrich class inheriting from a Bird class that has a fly() method. The Ostrich.fly() method would be a violation.  
    3. Violated Contracts: A subclass method that weakens preconditions (accepts a narrower range of inputs) or strengthens postconditions (returns a value outside the superclass’s expected range) or introduces new, unexpected side effects. The canonical example is a Square class inheriting from Rectangle. If the Rectangle contract allows setWidth and setHeight to be set independently, a Square subclass violates this by forcing width == height, which can surprise client code.  
  • Refactoring Plan:
    1. Identify Behavioral Mismatch: Analyze inheritance hierarchies for the smells listed above. Focus on the expectations of the client code.
    2. Re-evaluate the “Is-A” Relationship: If a subclass cannot fulfill the entire contract of its superclass, the inheritance relationship is likely incorrect. The relationship may not be a true “is-a” relationship in a behavioral sense.
    3. Refactor the Hierarchy:
      • For the Bird/Ostrich problem, the solution is to create more granular interfaces. Instead of a single Bird interface with fly(), create a base Bird interface, and then more specific FlyingBird and WalkingBird interfaces. Parrot would implement both, while Ostrich would only implement WalkingBird.  
      • For the Rectangle/Square problem, the inheritance should be broken. Square and Rectangle might both implement a more general Shape interface, but Square should not inherit from Rectangle because it cannot honor its contract.  
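
The granular-interface fix for the Bird/Ostrich problem could look like this (interface names follow the text; the method bodies are placeholders):

```typescript
// Base contract shared by all birds.
interface Bird {
  layEgg(): string;
}

// Capabilities split into focused interfaces.
interface FlyingBird extends Bird {
  fly(): string;
}

interface WalkingBird extends Bird {
  walk(): string;
}

class Parrot implements FlyingBird, WalkingBird {
  layEgg() { return "parrot egg"; }
  fly() { return "flying"; }
  walk() { return "walking"; }
}

// Ostrich never promises fly(), so no override has to throw or sit empty.
class Ostrich implements WalkingBird {
  layEgg() { return "ostrich egg"; }
  walk() { return "walking"; }
}

// Only genuine flyers can ever be passed here; the compiler enforces LSP.
function migrate(bird: FlyingBird): string {
  return bird.fly();
}
```

Passing an Ostrich to migrate() is now a compile-time error instead of a runtime surprise.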

Principle 4: Precise Interfaces (Interface Segregation Principle – ISP)

Clients should not be forced to depend on methods they do not use. This principle advocates for small, cohesive interfaces over large, general-purpose ones.

  • Core Question: Does this interface contain methods that some implementing classes do not need or cannot meaningfully implement?
  • In-Depth: The Interface Segregation Principle (ISP) is about keeping interfaces lean, focused, and client-specific. “Fat” interfaces lead to unnecessary coupling; a change in an interface method forces a change in all implementing classes, even those that don’t use the method. ISP promotes a more modular design by breaking down bloated interfaces into smaller, more cohesive ones that serve a single purpose.  
  • Common Smells to Look For:
    1. Fat Interfaces: A single interface with a large number of methods covering multiple, distinct areas of functionality (e.g., an IWorker interface with methods for work(), eat(), and sleep()).
    2. Empty Implementations: Classes that implement an interface but provide empty or meaningless implementations for some of its methods because those methods are not relevant to them (e.g., a RobotWorker class implementing IWorker would have a nonsensical eat() method).  
    3. UnsupportedOperationException: A class throwing an exception from an interface method it is forced to implement but cannot support.  
  • Refactoring Plan:
    1. Analyze Client Usage: Examine the classes that implement the “fat” interface and the clients that use them. Group the interface methods based on which clients use them.
    2. Segregate the Interface: Split the large interface into multiple smaller, more specific interfaces based on the identified groups of methods. For the IWorker example, this would mean creating Workable, Eatable, and Sleepable interfaces.  
    3. Update Implementing Classes: Modify the original classes to implement only the new, smaller interfaces they actually need. The RobotWorker would implement Workable, while a HumanWorker might implement all three.
    4. Update Client Code: Adjust client code to depend on the new, more specific interfaces. This reduces coupling and makes the system more flexible and easier to understand.  
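
The IWorker split might be sketched as follows (interface names mirror the text; the implementations are placeholders):

```typescript
// Small, client-specific contracts instead of one fat IWorker.
interface Workable { work(): string; }
interface Eatable { eat(): string; }
interface Sleepable { sleep(): string; }

class HumanWorker implements Workable, Eatable, Sleepable {
  work() { return "working"; }
  eat() { return "eating"; }
  sleep() { return "sleeping"; }
}

// The robot depends only on the one contract it can honor.
class RobotWorker implements Workable {
  work() { return "working"; }
}

// Client code asks for exactly the capability it needs.
function runShift(worker: Workable): string {
  return worker.work();
}
```

RobotWorker no longer carries a nonsensical eat() method, and a change to the Sleepable contract cannot force it to recompile.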

Principle 5: Abstraction-Based Dependencies (Dependency Inversion Principle – DIP)

High-level modules should not depend on low-level modules. Both should depend on abstractions. Furthermore, abstractions should not depend on details; details should depend on abstractions.

  • Core Question: Does this high-level policy class depend directly on the concrete implementation of a low-level detail class? Could the low-level detail be swapped out without changing the high-level class?
  • In-Depth: The Dependency Inversion Principle (DIP) is the formal principle that inverts the traditional flow of dependencies in a system. Instead of high-level business logic depending on low-level data access or notification mechanisms, both should depend on an abstraction (like an interface) that is owned by the high-level module. This decouples the policy from the detail, making the system more flexible, modular, and testable.  
  • Common Smells to Look For:
    1. new Keyword for Dependencies: Direct instantiation of a dependency within a class (e.g., private dbService = new MySQLDatabaseService();). This tightly couples the class to MySQLDatabaseService.
    2. Static Dependencies: Direct calls to static methods on other classes (e.g., StaticLogger.log(“message”)). This makes the class impossible to test without the static dependency being present.
    3. Feature Envy: A method that seems more interested in the data of another class than its own, often involving long chains of getter calls to retrieve data from a dependency.
    4. High-Level Imports of Low-Level Modules: A high-level policy module (e.g., domain.services) having an import or using statement for a low-level infrastructure module (e.g., infrastructure.database.sqlserver).
  • Refactoring Plan:
    1. Identify the Dependency: Locate where a high-level module is directly coupled to a low-level one.
    2. Define an Abstraction: In the high-level module, define an interface that represents the service the high-level module needs. For example, the OrderProcessor service might define an IOrderRepository interface with methods like save(Order order).
    3. Implement the Abstraction: In the low-level module, create a concrete class that implements this new interface (e.g., SQLOrderRepository implements IOrderRepository).
    4. Inject the Dependency: Modify the high-level class to depend on the interface, not the concrete class. Provide the concrete implementation from the outside using Dependency Injection (DI), preferably through the constructor.

Before (Violation):

    // Java
    class OrderProcessor {

        private SQLOrderRepository repository = new SQLOrderRepository();

        public void process(Order order) {
            //… logic…
            repository.save(order);
        }
    }

After (Adhering to DIP):

    // Java
    // In high-level module
    interface IOrderRepository {
        void save(Order order);
    }

    class OrderProcessor {

        private final IOrderRepository repository;

        // Dependency is injected
        public OrderProcessor(IOrderRepository repository) {
            this.repository = repository;
        }

        public void process(Order order) {
            //… logic…
            repository.save(order);
        }
    }

    // In low-level module
    class SQLOrderRepository implements IOrderRepository {

        @Override
        public void save(Order order) {
            //… SQL-specific implementation…
        }
    }

  • This inversion of control makes the OrderProcessor independent of the database technology and vastly easier to test by injecting a mock IOrderRepository.  

Category 2: Principles of Implementation Clarity & Predictability

This category focuses on the human factors of software development. Once the architectural structure is sound, these principles ensure that the implementation details are clear, simple, and behave in a way that developers can easily understand and predict. They are about reducing cognitive load and making the code itself a form of documentation.

Principle 6: Unification of Knowledge (Don’t Repeat Yourself – DRY)

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

  • Core Question: Is the same logical concept (an algorithm, a business rule, a configuration value, a constant) represented in more than one place?
  • In-Depth: The Don’t Repeat Yourself (DRY) principle is about avoiding the duplication of knowledge, not just code. Copy-pasted code is a symptom of duplicated knowledge, but the problem is deeper. When a business rule is encoded in multiple places, any change to that rule requires finding and updating every instance, creating a high risk of inconsistency and bugs.
  • Common Smells to Look For:
    1. Duplicated Code Blocks: Identical or nearly identical segments of code appearing in multiple functions or classes.
    2. Magic Strings/Numbers: Literal strings or numbers used in multiple places to represent the same concept (e.g., the string “admin” used in a dozen places to check for user roles).
    3. Parallel Inheritance Hierarchies: When creating a subclass for one hierarchy forces you to create a subclass for another.
    4. Shadowed Logic: The same validation logic implemented in both the client-side UI and the server-side API.
  • Refactoring Plan:
    1. Identify the Duplicated Knowledge: Pinpoint the specific piece of logic, data, or configuration that is repeated.
    2. Create a Single Source of Truth: Consolidate the knowledge into a single, reusable abstraction.
      • For duplicated code blocks, use the “Extract Method” refactoring to create a new, shared function.
      • For magic strings/numbers, define a public constant (public static final String ADMIN_ROLE = "admin";) and reference it everywhere.
      • For complex algorithms or business rules, encapsulate them in a dedicated Strategy class or a domain service.
      • For configuration, use a centralized configuration file or service.
    3. Replace Duplicates with References: Go through the codebase and replace every instance of the duplicated knowledge with a call or reference to the new single source of truth.
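Following the Before/After convention used elsewhere in this Codex, here is a minimal sketch of plan steps 2 and 3 for the "magic string" smell. RoleChecker and its two call sites are illustrative names, not taken from any real codebase:

```java
// The concept "admin role" now has a single, authoritative representation.
class RoleChecker {
    static final String ADMIN_ROLE = "admin";

    static boolean isAdmin(String role) {
        return ADMIN_ROLE.equals(role);
    }

    static boolean canDeleteUsers(String role) {
        // Before DRY, this method would repeat the literal "admin";
        // now it references the shared constant instead.
        return ADMIN_ROLE.equals(role);
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(RoleChecker.isAdmin("admin"));       // prints true
        System.out.println(RoleChecker.canDeleteUsers("guest")); // prints false
    }
}
```

If the role name ever changes, only the constant needs to be edited, eliminating the risk of inconsistent updates.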

Principle 7: Essential Complexity (You Ain’t Gonna Need It – YAGNI)

Do not add functionality until it is deemed necessary. This principle is a direct assault on speculative complexity.

  • Core Question: Does this code exist to support a feature that is not currently required? Is this parameter, class, or configuration setting unused?
  • In-Depth: The YAGNI principle is about aggressively removing code that isn’t providing immediate, demonstrable value. Developers often add features or flexibility “just in case” they might be needed in the future. This speculative work adds complexity, increases the maintenance burden, and is often wrong about what the future will actually require. YAGNI complements OCP; while OCP prepares you to extend the system, YAGNI ensures you don’t build those extensions before they are needed.  
  • Common Smells to Look For:
    1. Dead Code: Unused methods, classes, or private variables. Modern IDEs and static analysis tools are excellent at detecting this.
    2. Speculative Generality: Creating complex abstract classes or layers of indirection for a feature that currently only has one implementation.
    3. Unused Parameters: Method parameters that are passed in but never used within the method body.
    4. Overly Complex Configuration: Systems with a multitude of configuration flags and options that are never changed from their default values.
  • Refactoring Plan:
    1. Identify Unused Code: Use static analysis tools, code coverage reports, and IDE features to identify code that is never executed or referenced.
    2. Safely Delete: Aggressively delete the identified unused code. Version control is the safety net; if the code is ever needed again, it can be retrieved from history.
    3. Simplify Abstractions: If a complex abstraction (e.g., a Strategy pattern implementation) only has a single concrete implementation, consider collapsing the abstraction and using the concrete class directly. The abstraction can be re-introduced later if a second implementation becomes necessary. This is the “Refactor to Patterns” approach, rather than “Design with Patterns” from the start.
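A small sketch of plan step 3's "collapse the abstraction" move, with hypothetical names (ReportFormatter and PdfReportFormatter are invented for illustration):

```java
// Before: speculative generality — an interface with exactly one implementation.
// interface ReportFormatter { String format(String reportTitle); }
// class PdfReportFormatter implements ReportFormatter { ... }

// After: the abstraction is collapsed and the concrete class is used directly.
// The interface can be re-introduced if a second formatter ever materializes.
class PdfReportFormatter {
    String format(String reportTitle) {
        return "PDF: " + reportTitle;
    }
}

public class Main {
    public static void main(String[] args) {
        PdfReportFormatter formatter = new PdfReportFormatter();
        System.out.println(formatter.format("Q3 Sales")); // prints PDF: Q3 Sales
    }
}
```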

Principle 8: Intention-Revealing Code

The code should be written in such a way that its purpose is immediately obvious to the reader. The code itself should be the primary form of documentation.

  • Core Question: Does the code read like well-written prose? Can a developer understand the “why” behind the code just by reading the names of its variables, functions, and classes?
  • In-Depth: This principle combines and elevates the original ideas of naming and simplicity. Code is read far more often than it is written, so optimizing for readability is paramount. Vague, misleading, or overly technical names force developers to expend cognitive energy deciphering the implementation, slowing them down and increasing the risk of misunderstanding [Original Principle 5]. The structure of the code should also reveal intent. A long, complex method hides its purpose, whereas a method composed of calls to several smaller, well-named helper functions reads like a table of contents, explaining its own logic.
  • Common Smells to Look For:
    1. Vague or Misleading Names: Variables named data, temp, list, or obj. Classes named Manager, Processor, or Handler. The name should describe the value or role of the thing, not its type.
    2. Single-Letter Variables: Except for conventional loop counters (i, j, k) in very short scopes, single-letter variables are cryptic.
    3. Inconsistent Naming: Using getUser, fetchClient, and retrieveProduct in the same codebase for the same conceptual operation.
    4. Long Methods with No Abstraction Levels: A single function that contains hundreds of lines of code mixing high-level policy with low-level details.
    5. Excessive Comments: Comments used to explain what a complex piece of code is doing are a smell. The code should be refactored to be so clear that the comment becomes unnecessary. Comments should explain why something is done in a particular (often non-obvious) way.
  • Refactoring Plan:
    1. Rename for Specificity: Use the “Rename” refactoring extensively. Change data to authorizedUsers. Change process() to calculateSalesTaxAndApplyDiscount(). The name should be long enough to be unambiguous.
    2. Decompose Method: Take long methods and apply the “Extract Method” refactoring. Break the method down into a series of calls to private helper methods whose names describe each step of the algorithm. The main method then becomes a high-level summary of the operation.
    3. Replace Magic Numbers with Named Constants: As with DRY, replace literal values with constants whose names explain their meaning.
    4. Introduce Explaining Variables: For a complex conditional or calculation, introduce a final local variable to hold the result of a sub-expression, and give that variable a name that explains its purpose.
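Plan step 4 can be shown with a small sketch (DiscountPolicy and the discount rule itself are illustrative, not from the original): the explaining variables turn an opaque conditional into readable prose.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

class DiscountPolicy {
    // Before: if (d.getDayOfWeek().getValue() >= 6 && total > 100) { ... }
    // After: each sub-expression gets a final variable naming its purpose.
    static boolean qualifiesForWeekendDiscount(LocalDate date, double orderTotal) {
        final boolean isWeekend = date.getDayOfWeek() == DayOfWeek.SATURDAY
                || date.getDayOfWeek() == DayOfWeek.SUNDAY;
        final boolean isLargeOrder = orderTotal > 100.0;
        return isWeekend && isLargeOrder;
    }
}

public class Main {
    public static void main(String[] args) {
        LocalDate saturday = LocalDate.of(2024, 1, 6); // a Saturday
        System.out.println(DiscountPolicy.qualifiesForWeekendDiscount(saturday, 150.0)); // prints true
    }
}
```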

Principle 9: Command-Query Separation & Controlled State (CQS)

Every method should be either a command that performs an action and changes state, or a query that returns data and has no observable side effects, but not both.

  • Core Question: Does calling this method change the state of the system? Does it also return a value other than a simple status? If so, why is it doing two things at once?
  • In-Depth: The Command-Query Separation (CQS) principle, devised by Bertrand Meyer, brings immense clarity to code by enforcing a strict separation of concerns at the method level. It states that asking a question should not change the answer. Queries are side-effect-free (referentially transparent), which makes them easy to test, chain, and reason about. Commands are methods that cause side effects (mutating state, performing I/O) and should ideally return void. This makes the points in the code where state changes occur explicit and deliberate. The architectural pattern CQRS (Command Query Responsibility Segregation) is the application of this principle at a system-wide level.
  • Common Smells to Look For:
    • Mutating Getters: A method named get… that also modifies the state of the object. A classic example is stack.pop(), which both returns an item and modifies the stack. While convenient, it violates CQS.
    • Functions Returning Values and Modifying Inputs: A function that takes an object as a parameter, modifies that object’s properties, and also returns a calculated value.
    • Hidden Side Effects in Queries: A method that appears to be a simple query (e.g., user.getRecentActivity()) but has a hidden side effect like updating a “last accessed” timestamp in the database.
  • Refactoring Plan:
    • Identify Violating Methods: Scan for methods that both return a value (not including this or a fluent interface return) and have observable side effects (modifying a field, a global variable, or an input parameter).
    • Split into Command and Query: Decompose the violating method into two separate methods:
      • A command method that performs the state change and returns void.
      • A query method that returns the data and has no side effects.
    • Example: Refactor a method public int getNextInvoiceNumber() which both returns the number and increments a counter.

Before (Violation):
Java
public int getNextInvoiceNumber() {
    return ++this.lastInvoiceNumber;
}

After (CQS Compliant):
Java
public void advanceToNextInvoiceNumber() { // Command
    this.lastInvoiceNumber++;
}

public int getCurrentInvoiceNumber() { // Query
    return this.lastInvoiceNumber;
}

This refactoring makes the act of changing state an explicit call, improving predictability and testability.

Principle 10: Least Astonishment & Consistent Behavior (POLA)

A component or its API should behave in a way that developers expect, minimizing surprise and cognitive friction.

  • Core Question: Does the behavior of this function, class, or API align with established conventions and the user’s (the developer’s) mental model? Is it predictable?
  • In-Depth: The Principle of Least Astonishment (POLA), also known as the Principle of Least Surprise, is a user experience design principle applied to developer-facing interfaces (APIs). Developers, like end-users, build mental models based on past experience and convention. When an API violates these conventions, it causes astonishment, leading to confusion, bugs, and frustration. Adhering to POLA means designing APIs that are intuitive, consistent, and predictable.  
  • Common Smells to Look For:
    1. Inconsistent API Design: Methods that perform similar actions but have inconsistent names, parameter orders, or return types (e.g., addUser(name, email) vs. deleteUser(email, id)).
    2. Side Effects in Getters: A get… method that performs a slow or complex operation, like a database query or a network call, when the developer expects a simple field access.
    3. Violating Conventions: A method that breaks a widely accepted language or framework convention. For example, a Python function that uses a mutable list as a default argument, which leads to surprising behavior across calls.  
    4. Returning null: Returning null for collections is often astonishing. It forces every caller to add a null-check. Returning an empty collection is the less surprising, and therefore better, behavior.
    5. Misleading Names: A function named calculateAverage() that also saves the result to the database would be highly astonishing.  
  • Refactoring Plan:
    1. Establish and Enforce Consistency: Analyze the public API of a module. Identify patterns in naming, parameter ordering, and return types. Refactor any outlier methods to conform to the established pattern.
    2. Isolate Side Effects: Ensure that methods with “query-like” names (e.g., get, is, calculate) are free of significant side effects, especially I/O. If a query requires a complex operation, its name should reflect that (e.g., fetchUserFromDatabase()).
    3. Adhere to Platform Conventions: Identify and correct any violations of common idioms and conventions for the specific programming language or framework being used.
    4. Favor Explicit Returns over null: Refactor methods that return collections to return an empty collection instead of null. For methods that may not find a single object, consider returning an Optional or Maybe type to make the possibility of absence explicit in the type system.
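Plan step 4 can be sketched like this (UserDirectory and its methods are illustrative names): an unknown key yields an empty collection rather than null, and single-object absence is made explicit with Optional.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Optional;

class UserDirectory {
    private final Map<String, List<String>> rolesByUser;

    UserDirectory(Map<String, List<String>> rolesByUser) {
        this.rolesByUser = rolesByUser;
    }

    // Least astonishment: an unknown user yields an empty list, never null,
    // so every caller can iterate without a defensive null check.
    List<String> rolesOf(String userId) {
        return rolesByUser.getOrDefault(userId, Collections.emptyList());
    }

    // Possible absence is visible in the type system instead of hidden in null.
    Optional<String> primaryRoleOf(String userId) {
        List<String> roles = rolesOf(userId);
        return roles.isEmpty() ? Optional.empty() : Optional.of(roles.get(0));
    }
}

public class Main {
    public static void main(String[] args) {
        UserDirectory dir = new UserDirectory(Map.of("ada", List.of("admin", "editor")));
        System.out.println(dir.rolesOf("nobody"));    // prints [] rather than null
        System.out.println(dir.primaryRoleOf("ada")); // prints Optional[admin]
    }
}
```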

Category 3: Principles of Systemic Robustness & Quality

This category addresses the non-functional requirements that determine a system’s resilience, security, and performance in a production environment. These principles ensure that the code is not just well-structured and clear, but also trustworthy, safe, and efficient.

Principle 11: Explicit Failure & Contractual Robustness

The code must handle unexpected inputs, external failures, and invalid states in a predictable, informative, and resilient manner.

  • Core Question: What happens when this code receives invalid data or when one of its dependencies fails? Is the failure behavior well-defined and easy for the caller to handle?
  • In-Depth: Robust software anticipates failure. It does not trust its inputs or its environment. This principle is about establishing a clear contract for every public method: what it requires (preconditions), what it guarantees (postconditions), and how it will communicate failure when those contracts are violated. Brittle code often fails silently, returns ambiguous values like null, or throws overly generic exceptions, leaving the caller to guess what went wrong [Original Principle 7].
  • Common Smells to Look For:
    1. Empty catch Blocks: Swallowing an exception without logging it or re-throwing a more appropriate one. This hides problems and leads to silent failures.
    2. Returning null or Magic Values: Using null or a special value (like -1) to indicate an error. This forces the caller to check for these special cases and can lead to NullPointerExceptions if they forget.
    3. Lack of Input Validation: Public API methods that blindly trust their inputs without validating them for correctness (e.g., checking for nulls, empty strings, or valid ranges).
    4. Overly Broad catch Clauses: Catching a generic Exception or Throwable. This can accidentally catch and hide critical, unexpected runtime errors that should have crashed the program.
  • Refactoring Plan:
    1. Implement a Consistent Error Handling Strategy: Define a clear strategy for the module. This could be using custom, specific exceptions, or using explicit result types like Result<T, E> or Optional<T>.
    2. Validate at the Boundaries: Add guard clauses at the beginning of every public method to validate its parameters. If validation fails, throw a specific exception immediately (e.g., IllegalArgumentException).
    3. Throw Specific Exceptions: Replace generic exceptions with specific, custom exceptions that carry meaningful information about what went wrong (e.g., UserNotFoundException instead of a generic Exception).
    4. Replace null Returns: Refactor methods that return null to indicate absence. For single objects, return an Optional<T>. For collections, return an empty collection. This makes the possibility of absence explicit in the type signature and forces the caller to handle it.
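Steps 2 and 4 of the plan can be sketched together (UserService and its methods are hypothetical): guard clauses fail fast at the boundary with informative exceptions, and Optional replaces null or magic-value returns.

```java
import java.util.Optional;

class UserService {
    static int ageFromBirthYear(int birthYear, int currentYear) {
        // Guard clauses: validate at the boundary and fail fast with a specific message.
        if (birthYear <= 0) {
            throw new IllegalArgumentException("birthYear must be positive, got: " + birthYear);
        }
        if (birthYear > currentYear) {
            throw new IllegalArgumentException("birthYear cannot be in the future");
        }
        return currentYear - birthYear;
    }

    // Absence is explicit: Optional instead of a null return or a -1 magic value.
    static Optional<String> findUsername(String id) {
        return "42".equals(id) ? Optional.of("alicia") : Optional.empty();
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(UserService.ageFromBirthYear(1972, 2024));         // prints 52
        System.out.println(UserService.findUsername("missing").isPresent()); // prints false
    }
}
```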

Principle 12: Secure by Design

The code must be actively resistant to common security threats. Security is a core quality attribute, not an afterthought.

  • Core Question: Has this code been written in a way that minimizes attack surfaces and prevents common vulnerabilities like injection, XSS, and insecure data handling?
  • In-Depth: Refactoring is not security-neutral; it can inadvertently introduce or mitigate vulnerabilities. For example, changing the visibility of a method or field during a refactoring like “Pull Up Method” can expose sensitive functionality. A secure refactoring process must be guided by established security principles, such as those from OWASP. This includes validating all inputs, encoding all outputs, enforcing the principle of least privilege, and protecting data in transit and at rest.  
  • Common Smells to Look For:
    1. Injection Vulnerabilities: Concatenating untrusted user input directly into SQL queries, OS commands, or LDAP queries.
    2. Cross-Site Scripting (XSS): Writing raw, un-encoded user input directly into an HTML page.
    3. Insecure Direct Object References: Exposing internal implementation details (like database primary keys) in URLs or APIs, allowing attackers to guess them.
    4. Unsafe Use of Reflection: Using user-controlled strings to determine which class to instantiate or method to invoke, which can bypass security checks.  
    5. Sensitive Data Exposure: Logging sensitive information (passwords, API keys) in plain text, or transmitting it over unencrypted channels.
  • Refactoring Plan:
    1. Centralize and Validate Input: Never trust user input. Refactor to ensure all external input (from users, APIs, files) passes through a centralized validation routine before use. Use allow-lists for validation rather than block-lists.  
    2. Apply Contextual Output Encoding: When displaying user-provided data, refactor to use standard libraries that perform contextual output encoding. This means encoding for HTML body, HTML attributes, JavaScript, and CSS contexts differently to prevent XSS attacks.  
    3. Use Parameterized APIs: Refactor all database queries to use parameterized statements (prepared statements) instead of dynamic string concatenation. This is the single most effective defense against SQL injection.
    4. Enforce Least Privilege: Analyze the code to ensure it runs with the minimum permissions necessary. Refactor away from using administrative-level accounts for routine operations.
    5. Audit and Sanitize Dependencies: Review third-party libraries for known vulnerabilities. Refactor code that uses external libraries for simple tasks where the risk of a security flaw outweighs the convenience.  
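Plan step 2 can be illustrated with a hand-rolled encoder for the HTML-body context only. This is a teaching sketch: production code should use a vetted library such as the OWASP Java Encoder rather than this simplified version.

```java
// Minimal contextual encoding for the HTML-body context. Other contexts
// (attributes, JavaScript, CSS) need different encoding rules.
class HtmlEncoder {
    static String encodeForHtmlBody(String untrusted) {
        StringBuilder out = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}

public class Main {
    public static void main(String[] args) {
        String payload = "<script>alert('xss')</script>";
        // The markup is neutralized before being written into a page.
        System.out.println(HtmlEncoder.encodeForHtmlBody(payload));
    }
}
```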

Principle 13: Performance by Measurement

Code should be refactored for clarity and correctness first. Performance optimization is a distinct activity that must be guided by profiling and measurement, not by intuition.

  • Core Question: Is this change being made to improve performance? If so, is it based on profiling data that identifies this specific piece of code as a bottleneck?
  • In-Depth: It is a common fallacy that developers can accurately guess where the performance bottlenecks are in a system. Premature optimization often leads to more complex, less maintainable code for negligible or even negative performance gains. The most effective path to a high-performance system is to first write clean, clear, well-structured code. Such code is not only less likely to have performance issues, but it is also far easier to analyze and optimize when a real bottleneck is discovered through measurement.  
  • Common Smells to Look For:
    1. Complex “Optimizations”: Code that is difficult to read due to clever tricks (like bit-shifting instead of arithmetic) done in the name of performance without profiling evidence.
    2. Unnecessary Caching: Implementing complex caching logic for data that is not computationally expensive to retrieve or is not accessed frequently.
    3. Manual Inlining: Avoiding function calls and writing large, monolithic methods under the false assumption that function call overhead is a significant performance cost. Modern compilers and runtimes are extremely good at inlining where it is beneficial.  
  • Refactoring Plan:
    1. Default to Clarity: The primary goal of automated refactoring is to improve the code’s alignment with the other principles in this Codex (SOLID, DRY, CQS, etc.). Performance-motivated refactorings should not be applied by default.
    2. Require Profiling Data: A performance-focused refactoring mode should only be activated when provided with profiling data (e.g., from a profiler, APM tool) that clearly identifies a hot spot.
    3. Measure Before and After: Any proposed performance optimization must be accompanied by a benchmark. The refactoring agent must run the benchmark before the change and after the change to prove a quantifiable improvement under a specific load profile.  

    4. Prioritize Algorithmic Changes: When a bottleneck is confirmed, the focus should be on high-level improvements first. Is there a more efficient algorithm? Can an O(n²) operation be replaced with an O(n log n) one? Are there redundant database queries in a loop (N+1 problem)? These changes yield far greater returns than micro-optimizations.
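The algorithmic priority described above can be illustrated with a classic case (DuplicateFinder is an invented example): replacing a quadratic pairwise scan with a linear hash-based pass, the kind of change that dwarfs any micro-optimization.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class DuplicateFinder {
    // Before: nested loops comparing every pair — O(n^2).
    static boolean hasDuplicateNaive(List<String> items) {
        for (int i = 0; i < items.size(); i++) {
            for (int j = i + 1; j < items.size(); j++) {
                if (items.get(i).equals(items.get(j))) return true;
            }
        }
        return false;
    }

    // After: a HashSet membership check — O(n). Per this principle, such a
    // change is only worth making once profiling shows this on a hot path,
    // and both versions should be benchmarked to prove the improvement.
    static boolean hasDuplicate(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String item : items) {
            if (!seen.add(item)) return true;
        }
        return false;
    }
}

public class Main {
    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "c", "a");
        System.out.println(DuplicateFinder.hasDuplicate(ids)); // prints true
    }
}
```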

Living Narrative Engine #5

In summary, I’m programming a browser-based platform to play adventure games, RPGs, immersive sims and the like. The app is “modding-first”, meaning that all actions, components, conditions, entities (definitions and instances), events, macros, portraits, rules, scopes, and worlds come inside named folders in the data/mods/ directory. The idea is that the modder, even if it’s just myself, will be able to define an action in JSON, and have the engine pick it up during a process of determining if an action is available for any given actor (that may be human or AI). Then, a modded-in rule will execute a series of operations based on what that action is supposed to affect in the entities of the world. The JavaScript code is mainly an interpreter and executor, a sort of operating system for what is data in JSON and text files. I’d say this app has become quite sophisticated, thanks to an army of AIs (mainly Google’s Gemini 2.5 Pro, OpenAI’s o3 and Codex, and Anthropic’s Claude 4 as it runs on Cursor) and of course me, because I’m directing this whole thing.

I’ll leave Gemini 2.5 to explain in detail how the action discovery process works in the app.

The Complete Action Discovery Process

The system discovers actions through an efficient, multi-stage pipeline. Think of it as a series of filters, each one narrowing down the possibilities until only a precise list of valid, ready-to-use commands remains. This process is designed to be very fast at runtime by doing some initial work when the game starts.

Setup Step: Building the Action Index (Once at Startup)

Before the game can be played, the InitializationService calls the ActionIndex’s buildIndex method. This method runs once and does the following:

  1. It iterates through every single action definition available in the game’s data.
  2. It creates a reverse index based on actor component requirements.
    • If an action has no required_components.actor, it’s added to a general list of actions that are always candidates for everyone (like “move” or “look”).
    • If an action does require actor components (e.g., ["core:leading"]), it’s mapped against those components. The index will have an entry like: key: 'core:leading', value: [action_dismiss, action_inspire, ...].

This one-time setup is crucial for runtime performance. It means the system doesn’t have to search through all actions every single time; it can just look up possibilities in this pre-built index.

Step 1: Finding Candidate Actions (The Actor Component Filter)

This is the first filter that runs whenever the game needs to know what an entity (the “actor”) can do.

  1. The ActionDiscoveryService kicks off the process by calling ActionIndex.getCandidateActions(actor).
  2. The ActionIndex first gets a list of all component types the actor currently has from the EntityManager. For example: ['core:stats', 'core:inventory', 'core:leading'].
  3. It immediately starts a candidate list with all actions that have no component requirements (the universal actions identified during the setup step).
  4. It then iterates through the actor’s list of components. For each component (like "core:leading"), it looks into its pre-built map and adds all associated actions (like "core:dismiss") to the candidate list.

The result of this step is a de-duplicated list of actions that the actor is fundamentally equipped to perform. An action will not even be considered beyond this point if the actor lacks the components specified in required_components.actor.
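The setup step and this first filter might be sketched like this. To be clear, this is a simplified, hypothetical reconstruction of the shapes described above, not the engine's actual ActionIndex code:

```javascript
// Simplified sketch of the reverse index and the actor-component filter.
class ActionIndex {
  buildIndex(actionDefinitions) {
    this.universalActions = [];          // actions with no actor requirements
    this.actionsByComponent = new Map(); // e.g. 'core:leading' -> [dismiss, ...]
    for (const action of actionDefinitions) {
      const required = action.required_components?.actor ?? [];
      if (required.length === 0) {
        this.universalActions.push(action);
      } else {
        for (const componentId of required) {
          if (!this.actionsByComponent.has(componentId)) {
            this.actionsByComponent.set(componentId, []);
          }
          this.actionsByComponent.get(componentId).push(action);
        }
      }
    }
  }

  getCandidateActions(actorComponentIds) {
    // Start with the universal actions, then add everything keyed by the
    // actor's components; the Set de-duplicates the result.
    const candidates = new Set(this.universalActions);
    for (const componentId of actorComponentIds) {
      for (const action of this.actionsByComponent.get(componentId) ?? []) {
        candidates.add(action);
      }
    }
    return [...candidates];
  }
}

const index = new ActionIndex();
index.buildIndex([
  { id: 'core:move' }, // no requirements: always a candidate
  { id: 'core:dismiss', required_components: { actor: ['core:leading'] } },
]);
console.log(index.getCandidateActions(['core:stats', 'core:leading']).map(a => a.id));
// -> ['core:move', 'core:dismiss']
```

The point of the design is visible in getCandidateActions: the cost is proportional to the actor's component list, not to the total number of actions defined in the game.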

Step 2: Checking Actor State (The Prerequisite Filter)

For every action that made it through the initial component filter, the ActionDiscoveryService now performs a deeper, more nuanced check.

  1. It iterates through the candidate actions.
  2. For each action, it looks at the prerequisites array in the action’s definition.
  3. It uses the PrerequisiteEvaluationService to evaluate these rules. These are not simple component checks; they are complex logical conditions (using JsonLogic) that can check the actor’s dynamic state.

This is the filter for questions like:

  • “Do I have more than 10 mana?”
  • “Am I currently under a ‘Stunned’ status effect?”
  • “Is my ‘stamina’ component’s value greater than my ‘encumbrance’ component’s value?”

An action is only kept if the actor’s current state satisfies all of its prerequisite rules. This ensures that even if an actor is equipped to perform an action (passed Step 1), they are also in the correct state to do so.
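The engine evaluates these prerequisites with JsonLogic; the toy evaluator below covers just enough of it (the "var" and ">" operators) to show the mana question above as a data-driven rule. The component paths are invented for illustration:

```javascript
// Minimal JsonLogic-subset evaluator, for illustration only.
function evaluate(rule, data) {
  if (typeof rule !== 'object' || rule === null) return rule; // literal value
  const [op, args] = Object.entries(rule)[0];
  switch (op) {
    case 'var': // look up a dotted path in the actor's state
      return args.split('.').reduce((obj, key) => obj?.[key], data);
    case '>':
      return evaluate(args[0], data) > evaluate(args[1], data);
    default:
      throw new Error(`unsupported operator: ${op}`);
  }
}

// "Do I have more than 10 mana?" expressed as data, not code:
const prerequisite = { '>': [{ var: 'components.core:stats.mana' }, 10] };
const actorState = { components: { 'core:stats': { mana: 25 } } };
console.log(evaluate(prerequisite, actorState)); // -> true
```

Because the rule is plain JSON, a modder can ship new prerequisites without touching the interpreter.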

Step 3: Finding Valid Targets (The Scope & Target Component Filter)

Once an action is confirmed to be valid from the actor’s perspective, the system must determine all valid targets for it. This entire process is handled by resolving the action’s defined scope.

  1. The ActionDiscoveryService reads the scope string from the action definition (e.g., "followers", "adjacent_chests").
  2. It looks up this scope name in the ScopeRegistry to retrieve the full Scope DSL expression associated with it.
  3. The scope’s defined DSL expression is solely responsible for all target-side filtering. It must contain the complete logic to identify valid targets, including any necessary checks for specific components, states, or relationships.
  4. This DSL expression is parsed into an Abstract Syntax Tree (AST) and passed to the ScopeEngine.

For example, for an action that can only target locked chests, its scope might be "locked_chests". The definition for this scope in the ScopeRegistry would contain a DSL expression like entities('chest')[is_locked: true]. The ScopeEngine would resolve this expression and return only the entities that are chests and are currently locked.

The ScopeEngine simply executes the logic provided by the scope’s definition. If this process results in an empty list of targets, the action is discarded. Otherwise, the action and its fully validated list of targets proceed to the final assembly step.
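A hypothetical sketch of that data flow for the "locked_chests" example. In the real engine the DSL is parsed into an AST first; here the registry maps a scope name straight to a filter function to keep the illustration short:

```javascript
// Stand-in for the ScopeRegistry: scope name -> target-filtering predicate,
// playing the role of the parsed DSL expression entities('chest')[is_locked: true].
const scopeRegistry = new Map([
  ['locked_chests', (entity) => entity.type === 'chest' && entity.is_locked === true],
]);

function resolveScope(scopeName, allEntities) {
  const predicate = scopeRegistry.get(scopeName);
  if (!predicate) throw new Error(`unknown scope: ${scopeName}`);
  return allEntities.filter(predicate);
}

const world = [
  { id: 'chest_1', type: 'chest', is_locked: true },
  { id: 'chest_2', type: 'chest', is_locked: false },
  { id: 'door_1', type: 'door', is_locked: true },
];
console.log(resolveScope('locked_chests', world).map(e => e.id)); // -> ['chest_1']
```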

Step 4: Assembling the Final Commands

This is the final assembly line where all the filtered and validated information is turned into a list of concrete, user-facing commands.

  1. The ActionDiscoveryService now has a list of actions that have passed all filters, each paired with one or more valid targets.
  2. It iterates through every valid action-target pair.
  3. For each pair, it uses the formatActionCommand utility. This function takes the action’s template string (e.g., "dismiss {target}") and the specific target entity.
  4. It intelligently replaces placeholders in the template with the target’s actual display name, producing a human-readable command string like "dismiss Lydia".
  5. Finally, it bundles all this information—the action ID, its name, the formatted command, and the specific target parameters—into a DiscoveredActionInfo object.

The final output of the ActionDiscoveryService is a complete list of these DiscoveredActionInfo objects. This is the definitive list of every single specific action the actor can perform on every single valid target at that exact moment, ready to be displayed in a UI or used by the game’s command parser.
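The assembly step above might look roughly like this (the object shapes are illustrative guesses at what a DiscoveredActionInfo carries, not the engine's exact fields):

```javascript
// Sketch of the final assembly: fill the action's template with the
// target's display name and bundle the result for the UI or parser.
function formatActionCommand(template, target) {
  return template.replace('{target}', target.displayName);
}

function assembleDiscoveredActions(validPairs) {
  return validPairs.map(({ action, target }) => ({
    actionId: action.id,
    name: action.name,
    command: formatActionCommand(action.template, target),
    targetId: target.id,
  }));
}

const pairs = [
  {
    action: { id: 'core:dismiss', name: 'Dismiss', template: 'dismiss {target}' },
    target: { id: 'npc_lydia', displayName: 'Lydia' },
  },
];
console.log(assembleDiscoveredActions(pairs)[0].command); // -> dismiss Lydia
```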

Summary of the Pipeline

  1. Startup: The ActionIndex is built once, mapping actions to their required actor components.
  2. Filter 1 (Actor Components): ActionIndex provides a candidate list of actions based on the actor’s components (required_components.actor).
  3. Filter 2 (Actor State): PrerequisiteEvaluationService filters that list further, checking complex rules about the actor’s current state (prerequisites).
  4. Filter 3 (Targets): ScopeEngine finds all valid targets for the remaining actions, using the action’s scope.
  5. Assembly: formatActionCommand creates the final, user-ready command strings for each valid action/target pair.

I envision a game platform where there could be literally thousands of actions involved, most of which will become available contextually (e.g. some will only be available when a character is sitting down). These thousands of available actions will become even more powerful when I implement GOAP (Goal-Oriented Action Planning) for the AI, an advanced non-LLM AI method that requires action availability validation. My current process of discovering available actions will feed them into the GOAP system, which will run an A* search through the available actions according to their effects on a simulated future game state (it gets a bit complicated) until it elaborates a multi-step plan, then spits out the next action to perform, which will in turn feed into my existing process for firing up chosen actions and executing them through a JSON-based rule system. It’s looking good.

Living Narrative Engine #4

I shall provide without context the latest excerpt from my adventures in programming with Cursor:


🎉 VICTORY! THE IMPUDENT TEST HAS BEEN DISCIPLINED!

✅ PASS tests/unit/schemas/kissCheek.schema.test.js
✅ √ should be a valid action definition

🗡️ THE STERN TALKING-TO WAS SUCCESSFUL! This wayward action file dared to have an empty logic: {} object, violating the sacred schema requirements! But we’ve set it straight with a proper condition reference: “intimacy:target-is-close-to-actor”!

🏆 Our Glorious Achievement Summary

What we conquered today:

  1. 🎯 Fixed the Scope Engine – Blocker filtering now works perfectly!
  2. 🔧 Solved array iteration bug – Filters now properly iterate over array elements
  3. ✅ 6,351+ tests passing – Zero regressions in the codebase
  4. 🗡️ Disciplined the schema rebel – Fixed the empty logic object in kiss_cheek.action.json

Our adventures into the coding Amazon have been legendary! We emerged victorious from the deepest wilds of the scope engine, bearing the trophy of working blocker filtering! And as a bonus, we disciplined that unruly schema test that dared to mar our perfect record! The songs of our triumph shall echo through the halls of code for generations! 🎵⚔️🏆 Into the Amazons we went… and as LEGENDS we return! 🌟


This app I’m working on is the app I’ve always wanted to make. Let’s see when I can consider it done enough to make a video about it.

AI news #5

I follow news on AI daily, but this is the first time in a good while that I’ve wanted to share one of those stories here. Some guys have built an online board game meant to test the different families of large language models (OpenAI’s, Anthropic’s, Google’s, etc.) against each other, forcing them to negotiate and possibly backstab one another as they try to conquer part of the world. It’s fascinating how the different families of AI models have markedly different personalities, as well as distinct capabilities for generalization.

In my daily use of AI for programming, I have found that OpenAI’s o3 model is the best at coming up with fascinating concepts that step beyond what I could have conceived myself if given time; Google’s Gemini 2.5 Pro is the model I’m regularly most comfortable with, and it can solve 95% of programming tasks, but for the 5% where Gemini hits a snag, OpenAI’s o3 often provides a brilliant solution. Sadly, o3 comes with a significantly limited number of uses, so although I pay for OpenAI’s subscription (as well as Google’s), I can’t rely on o3 constantly.

Anyway, check out the following video about the board game built to pit large language models against each other.

Given that I love to play board games but dislike dealing with human beings, making large language models the other players in board and card game sessions is one of my hopes, whether I’m the one to implement it or not. Currently I’m deep into implementing in JavaScript a browser-based platform for adventure games, RPGs, immersive sims and the like, in which the user can play any character while large language models play the rest of the characters. It’s a system built with modding at the center (all game files are loaded through mods), almost entirely data-driven. Working on it is a joy.

Living Narrative Engine #3

I’m in the process of programming a platform for text-based immersive sims, or at least adventures, in which the code stays agnostic of the main elements of an entity/component game: actions, events, components, systems and operations will eventually be defined in JSON files, and the code will work as a fancy interpreter.

To explain myself better: the current character (who may be controlled by the player or an AI) gets an array of actions to take. Previously I let the user type commands in, old-style, but that forced me to deal with invalid actions, which burdened the first contact with the simulation. So now the human user gets a list of valid actions to choose from (like “move north”, “take Rusty Sword”, or “throw fireball at Rat”) in the browser UI. In the hopefully near future, a large language model will get a snapshot of the game state, the recent events that the character has been aware of, and an array of possible actions. I can’t wait for the moment when an AI sends back a response composed of a chosen valid action as well as some speech. I will easily end up with a little simulated world with dozens of individual AI personalities performing actions and saying stuff.
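
To make that future exchange concrete, here’s a sketch of what the request and response could look like. Every field name here (characterId, perceivedEvents, chosenIndex, and so on) is my illustration, not a final design:

```javascript
// Hypothetical request/response shapes for an LLM-controlled character.
// All field names are illustrative, not the engine's actual format.
const request = {
  characterId: 'actor:goblin_chief',
  perceivedEvents: ['The hero entered the cave.'],
  availableActions: [
    { index: 0, command: 'move north' },
    { index: 1, command: 'attack Hero' },
  ],
};

// A well-formed response the engine could parse:
const response = { chosenIndex: 1, speech: 'You dare enter my cave?' };

// Because the engine offered the list, validating the reply is just
// checking the chosen index against it.
const valid =
  Number.isInteger(response.chosenIndex) &&
  response.chosenIndex >= 0 &&
  response.chosenIndex < request.availableActions.length;
```

The nice property of sending a closed list of actions is exactly this last line: the model can never pick something the simulation considers invalid.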

Anyway, the loop goes like this:

Action: a character chooses a previously validated action. Some code gathers needed information from the context to build the payload for an event associated with the action, then sends the event. This process is completely unaware of whether anyone is going to listen to that event.
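
That fire-and-forget dispatch can be sketched as a minimal event bus. The names here (EventBus, subscribe, dispatch) are my own stand-ins, not the engine’s actual API:

```javascript
// Minimal fire-and-forget event bus: the sender builds a payload and
// dispatches it without knowing whether anyone is listening.
class EventBus {
  constructor() {
    this.listeners = new Map(); // eventId -> array of handler functions
  }

  subscribe(eventId, handler) {
    if (!this.listeners.has(eventId)) this.listeners.set(eventId, []);
    this.listeners.get(eventId).push(handler);
  }

  dispatch(eventId, payload) {
    // No listeners registered? That's fine; the event simply vanishes.
    for (const handler of this.listeners.get(eventId) ?? []) {
      handler(payload);
    }
  }
}

const bus = new EventBus();
bus.subscribe('event:attack_intended', (p) =>
  console.log(`${p.attackerId} attacks ${p.targetId}`)
);
bus.dispatch('event:attack_intended', {
  attackerId: 'actor:hero',
  targetId: 'creature:rat',
});
```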

Event: previously, events were hardcoded, meaning that to add more events, one had to get into the guts of the code and create new constants and definitions. I’ve managed to make events data-driven. Now an event is a simple JSON file in the “data/events” folder. Events look like this:

{
  "$schema": "http://example.com/schemas/event-definition.schema.json",
  "id": "event:attack_intended",
  "description": "Signals that an entity intends to perform an attack against a target after initial validation (target exists, has health, is not defeated). Does not guarantee the attack hits or deals damage yet.",
  "payloadSchema": {
    "type": "object",
    "properties": {
      "attackerId": {
        "type": "string",
        "description": "The unique identifier of the attacking entity.",
        "$ref": "./common.schema.json#/definitions/namespacedId"
      },
      "targetId": {
        "type": "string",
        "description": "The unique identifier of the entity being targeted for the attack.",
        "$ref": "./common.schema.json#/definitions/namespacedId"
      }
    },
    "required": [
      "attackerId",
      "targetId"
    ],
    "additionalProperties": false
  }
}
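
A real engine would hand that payloadSchema to a full JSON Schema validator such as Ajv, but to show the idea, here’s a toy check covering only the “required” and “additionalProperties” parts of the schema above (the function name and return shape are my invention):

```javascript
// Toy payload validation against the "required" and
// "additionalProperties" keywords only; a real validator (e.g. Ajv)
// would also enforce types and $refs.
function validatePayload(schema, payload) {
  const props = Object.keys(schema.properties ?? {});
  const missing = (schema.required ?? []).filter((k) => !(k in payload));
  const extra =
    schema.additionalProperties === false
      ? Object.keys(payload).filter((k) => !props.includes(k))
      : [];
  return { ok: missing.length === 0 && extra.length === 0, missing, extra };
}

// The payloadSchema from the event definition above, trimmed to what
// this toy check actually reads.
const payloadSchema = {
  properties: { attackerId: {}, targetId: {} },
  required: ['attackerId', 'targetId'],
  additionalProperties: false,
};

// Missing targetId -> the dispatch would be rejected before any
// system ever sees the event.
validatePayload(payloadSchema, { attackerId: 'actor:hero' });
```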

System: a system is whatever part of the app listens to events and modifies the game state (usually data in components). Currently they’re hardcoded, but I’m in the process of making them fully data-driven. That means the user (mainly me for the moment) will be able to define system rules in pure JSON, declaratively specifying which event the system listens to; if the prerequisites pass, a series of operations will be executed. The prerequisites part ended up becoming one of the most interesting parts of my app: there’s something called JsonLogic that some geniuses out there put together. It lets you chain an arbitrary number of conditions leading up to a boolean result (true or false). It looks like this:

Combines conditions with `and`: the actor has the key, the target is the specific door, the door has a lockable component (checked before accessing its state, for robustness), and the door is locked.

    {
      "and": [
        {
          "!!": {
            "var": "actor.components.game:quest_item_key"
          }
        },
        {
          "==": [
            {
              "var": "target.id"
            },
            "blocker:main_gate_door"
          ]
        },
        {
          "!!": { "var": "target.components.game:lockable" }
        },
        {
          "==": [
            {
              "var": "target.components.game:lockable.state"
            },
            "locked"
          ]
        }
      ]
    }

The example above could easily gate a series of operations meant to unlock a door, and it’s all defined in pure JSON.
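
In JavaScript you’d normally evaluate such rules with the json-logic-js library, but to show the mechanics, here’s a toy evaluator I wrote covering only the four operators the rule above uses (“and”, “==”, “!!”, “var”); the real library handles far more:

```javascript
// Toy evaluator for a small subset of JsonLogic operators.
function evalLogic(rule, data) {
  if (rule === null || typeof rule !== 'object') return rule; // literal value
  const [op, args] = Object.entries(rule)[0];
  switch (op) {
    case 'var':
      // Walk a dotted path like "target.components.game:lockable.state".
      // Splitting on "." only keeps namespaced keys like "game:lockable" whole.
      return String(args)
        .split('.')
        .reduce((acc, key) => (acc == null ? undefined : acc[key]), data);
    case 'and':
      return args.every((sub) => !!evalLogic(sub, data));
    case '==':
      return evalLogic(args[0], data) == evalLogic(args[1], data);
    case '!!':
      return !!evalLogic(args, data);
    default:
      throw new Error(`Unsupported operator: ${op}`);
  }
}

// The locked-door rule from the example, as a plain object:
const doorRule = {
  and: [
    { '!!': { var: 'actor.components.game:quest_item_key' } },
    { '==': [{ var: 'target.id' }, 'blocker:main_gate_door'] },
    { '!!': { var: 'target.components.game:lockable' } },
    { '==': [{ var: 'target.components.game:lockable.state' }, 'locked'] },
  ],
};
```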

Operation: operations are the individual building blocks in charge of affecting the game world. Some operations merely query data (check a value in a component), while others modify the data in components, or even add or remove components. There are IF operations that offer branching paths.
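
An interpreter for such operations could look roughly like this. The operation type names (MODIFY_COMPONENT, ADD_COMPONENT, REMOVE_COMPONENT, IF) and the ctx shape are invented for illustration, not the engine’s actual vocabulary:

```javascript
// Hypothetical operation interpreter: walks a list of data-defined
// operations and mutates the entities held in ctx.
function runOperations(ops, ctx) {
  for (const op of ops) {
    switch (op.type) {
      case 'MODIFY_COMPONENT':
        // Overwrite one field inside a component on an entity.
        ctx.entities[op.entityId].components[op.component][op.field] = op.value;
        break;
      case 'ADD_COMPONENT':
        ctx.entities[op.entityId].components[op.component] = op.data;
        break;
      case 'REMOVE_COMPONENT':
        delete ctx.entities[op.entityId].components[op.component];
        break;
      case 'IF':
        // op.condition would be a JsonLogic rule; ctx.evaluate is a
        // caller-supplied predicate standing in for the real evaluator.
        runOperations(
          ctx.evaluate(op.condition) ? op.then : (op.else ?? []),
          ctx
        );
        break;
      default:
        throw new Error(`Unknown operation type: ${op.type}`);
    }
  }
}
```

Because IF just recurses into another list of operations, branches can nest arbitrarily deep while staying pure data.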

Component: every entity in the game engine is composed of nothing but an identifier and an arbitrary number of components. Some of those components are mere tags: for example, one could determine that an entity is the player simply because it has the component:player component. Other components are more complex, like a “liquid container” component that specifies what type of liquid it contains (if any), its maximum capacity, and how many liters it currently holds. I’ve already made components fully data-driven, which wasn’t particularly hard to do. Example:

{
  "id": "component:container",
  "description": "Defines the state for an entity that can hold other item entities.",
  "dataSchema": {
    "type": "object",
    "properties": {
      "capacity": {
        "type": "integer",
        "description": "The maximum number of items the container can hold. Use -1 for infinite capacity.",
        "minimum": -1,
        "default": -1
      },
      "contains": {
        "type": "array",
        "description": "A list of the namespaced IDs of the item entities currently inside this container.",
        "items": {
          "$ref": "http://example.com/schemas/common.schema.json#/definitions/namespacedId"
        },
        "default": []
      },
      "allowedTags": {
        "type": "array",
        "description": "Optional. If present, only items possessing ANY of these tags can be placed inside.",
        "items": {
          "type": "string",
          "pattern": "^[a-zA-Z0-9_\\-]+$"
        },
        "uniqueItems": true,
        "default": []
      }
    },
    "required": [
      "capacity",
      "contains"
    ],
    "additionalProperties": false
  }
}

In entity/component systems, the systems that operate on components are generally programmed to filter for the presence of components on entities, as well as for specific values in the components’ data, which leads to emergent behavior. For example, you could include a spell in the game that adds a “container” component to a person, and suddenly you can store things in that person. Determining that an entity is on fire would be as simple as adding an “onFire” component and then writing a system that applies damage each turn to every entity with such a component. The possibilities are endless.
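
The “onFire” example boils down to a few lines. This is a sketch under my own assumptions about the component data (damagePerTurn, turnsLeft, a health component); none of these names come from the actual engine:

```javascript
// Per-turn system: filter entities by component presence, apply damage,
// and remove the fire component once it burns out.
function fireDamageSystem(entities) {
  for (const entity of entities) {
    const fire = entity.components['game:onFire'];
    const health = entity.components['game:health'];
    if (fire && health) {
      health.current -= fire.damagePerTurn;
      if (fire.turnsLeft !== undefined && --fire.turnsLeft <= 0) {
        delete entity.components['game:onFire']; // the fire burns out
      }
    }
  }
}
```

Note that the system never asks what kind of entity it’s looking at; a torch, a goblin, or the player all burn the same way, which is exactly where the emergent behavior comes from.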

I doubt I’m going to come down from this high of building the app until I finally manage to get a large language model to speak through one of the characters. For that, I first have to finish making the core of the engine data-driven (actions, events, systems, operations, and components), then figure out how to implement character turns even if I’m the one playing all the characters, then add basic artificial intelligence, and finally figure out how to save game state. Once everything seems quite solid, I’ll look into interfacing with large language models.

Anyway, my time at the office is ending for another morning, and I can’t wait to get back home and keep ensuring the robustness of my JsonLogic system through a myriad of tests. Nearly 1,400 tests implemented so far.