Post-mortem for That Feathered Bastard

Read first the short story this post-mortem is about: That Feathered Bastard.

Through this cycle of fantasy stories, I’m exercising in tandem my two main passions in life: building systems and creating narratives. Every upcoming scenario, which turns into a short story, requires me to program new systems into my Living Narrative Engine, a browser-based platform for playing through immersive sims, RPGs, and the like. Long gone are the scenarios that solely required me to figure out how to move an actor from one location to another, pick up an item, or read a book. Programming the systems so I could play through the chicken coop ambush involved about five days of constant work on the codebase. I’ve forgotten much of what was necessary to add, but off the top of my head:

  • A completely new system for non-deterministic actions. Previously, all actions succeeded, since the code has a very robust system for action discoverability: unless the context for an action is right, no actor can execute it to begin with. I needed a way for an actor to see “I can hit this bird, but my chances are 55%. I may not want to do this.” Once you have non-deterministic actions in a scenario, it becomes unpredictable, with the actors constantly having to maneuver through a changing state, which reveals more of their character.
  • I implemented numerous non-deterministic actions:
    • Striking targets with blunt weapons, swinging at targets with slashing weapons, thrusting piercing weapons at targets. None of those ended up playing a part in this scenario, because the actors considered keeping the birds alive a priority, as Aldous intended.
    • Warding-related non-deterministic actions: drawing salt boundaries around corrupted targets (which Aldous originally said he was going to do, but the situation turned chaotic way too fast), and extracting spiritual corruption through an anchor, which Aldous did twice in the short.
    • Beak attacks, only available to entities whose body graphs have beak parts (so not only chickens, but griffins, krakens, etc.). This got plenty of use.
    • Throwing items at targets. Bertram relied on this one in a fury. I got clever with the code: when the damage type is not specified, the damage caused by a thrown weapon is determined logarithmically by the item’s weight. So a pipe produces 1 unit of blunt damage, while throwing Vespera’s instrument case at birds (which I did plenty during testing) would cause significant damage. Fun fact: throwing an item could have produced a fumble (a 96-100 result on a 1-100 roll), which would have hit a bystander. Humorous when throwing a pipe, not so much with an axe.
    • Restraining targets, as well as the chance for restrained targets to free themselves. Both of these got plenty of use.
    • A corrupting gaze. It was attempted thrice, if I remember correctly: once by the main vector of corruption and the rest by that creepy one with the crooked neck. If it had succeeded, it would have corrupted the human target, and Aldous would have had to extract the corruption out of them as well. That could have been interesting, but I doubt it would have happened in the middle of chickens flying all over.
  • Implementing actions that cause damage meant that I needed to implement two new systems: health and damage. Both would rely on the extensive anatomy system, which produces anatomy graphs out of recipes. What I mean by that is that we have recipes for roosters, hens, cat-girls, men, women. You specify in the recipe if you want strong legs, long hair, firm ass cheeks, and you end up with a literal graph of connected body parts. Noses, hands, vaginas exist as their own entities in this system. They can individually suffer damage. I could have gone insane with this, as Dwarf Fortress does, simulating even individual finger segments and non-vital internal organs. I may do something similar some day if I don’t have anything better to do.
    • Health system: individual body parts have their own health levels. They can suffer different tiers of damage. They can bleed, be fractured, poisoned, burned, etc. At an overall health level of 10%, actors enter a dying state. Suffering critical damage on a vital organ can kill creatures outright. During testing there were situations in which a head was destroyed, but the brain was still functioning well enough, so no death.
    • Damage system: weapons declare their own damage types and the status effects that could be applied. Vespera’s theatrical rapier can pierce but also slash, with specific amounts of damage. Rill’s practice stick only does low blunt damage, but can fracture.
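For the curious, here is a minimal sketch of how the d100 roll and the weight-based thrown damage might fit together. The post only fixes two specifics: fumbles land on a 96-100 result, and unspecified thrown damage scales logarithmically with the item’s weight. Everything else (the function names, the base-2 log, the rounding) is an assumption for illustration, not the engine’s actual code.

```javascript
// Hypothetical sketch of the non-deterministic action roll. A 96-100 result
// is a fumble regardless of the success chance; otherwise the roll is
// compared against the action's success percentage.
function rollAction(successChance, roll = 1 + Math.floor(Math.random() * 100)) {
  if (roll >= 96) return { outcome: 'fumble', roll }; // may strike a bystander
  return { outcome: roll <= successChance ? 'success' : 'failure', roll };
}

// Thrown damage when the item declares no damage type: logarithmic in the
// item's weight, floored at 1 so even a light pipe does some blunt damage.
function thrownDamage(weightKg) {
  return Math.max(1, Math.round(Math.log2(weightKg + 1)));
}

rollAction(55, 40); // → { outcome: 'success', roll: 40 }
rollAction(55, 98); // → { outcome: 'fumble', roll: 98 }
```

The logarithm keeps heavy items dangerous without making them absurd: doubling an item’s weight adds roughly one unit of damage rather than doubling it.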

Having a proper health and damage system, their initial versions anyway, revealed something troubling: simple non-armored combat with slashing weapons can slice off limbs and random body parts with realistic ease. Whenever I get to scenes involving more serious stakes than a bunch of chickens, stories are going to be terrifyingly unpredictable. Oh, and when body parts are dismembered, a corresponding body part entity gets spawned at the location. That means that any actor can pick up a detached limb and throw it at someone.

Why go through all this trouble, other than the fact that I enjoy doing it and that it distracts me from the ocean of despair that surrounds me, which I can only ignore when I’m absorbed in a passion of mine? Well, over the many years of producing stories, what ended up boring me was that I went into a scene knowing everything that was going to happen. Of course, I didn’t know the specifics of every paragraph, and most of the joy went into the execution of those sentences. But often I found myself looking ahead at the sequence of scenes to come, and it was like erecting a building when you already knew exactly how it was going to end up looking. You start to wonder why even bother, when you can see it clearly in your mind.

And I’m not talking about that “plotter vs. pantser” dichotomy. Pantsing means you don’t know where you’re going, and all pantser stories, as far as I can recall, devolve into messes that can’t be tied up neatly by the end. And of course their authors aren’t going to go back and revise them to the extent necessary to make something coherent out of them. As much as I respect Cormac McCarthy, Suttree, one of his best novels if not his best written, is that kind of mess, which turns the whole thing into an episodic affair. An extremely vivid one that left many compelling, some harrowing, images in my brain, but still.

I needed the structure, with a chance for deviation, but I also needed to be constantly surprised by the execution of a scene. I wanted to go into it with a plan, only for the plan to fail to survive contact with the enemy. That’s where my Living Narrative Engine comes in. Now, when I experience a scene, I don’t know what the conversations are going to entail. I didn’t even come up with Aldous myself: Copperplate brought him up in the first scene while making up the details of the chicken contract. It was like that whole “Lalo didn’t send you” moment from Breaking Bad, which ended up producing a whole series. From that mention of Aldous, after an iterative process of making the guy interesting to myself, he ended up becoming a potter-exorcist I can respect.

I went into that chicken coop not knowing anything about what was going to happen other than the plan the characters themselves had. Would they overpower the chickens and extract the corruption out of them methodically with little resistance? Would any of the extraction attempts succeed? Would any actor fly into a rage, wield their weapons and start chopping off chicken limbs while Aldous complained? Would any of the characters suffer a nasty wound like, let’s say, a beak to the eye? I didn’t know, and that made the process of producing this scene thrilling.

Also, Vespera constantly failing at everything she tried, including two rare fumbles that sent her to the straw, was pure chance. It made for a more compelling scene from her POV; at one point I considered making Aldous the POV, as he had very intriguing internal processes.

Well, the scene wasn’t all thrilling. You see, after the natural ending when that feathered bastard pecked Vespera’s ass, the original draft went on for damn near another three-fourths of its length. People constantly losing hold of chickens, the rooster pecking at anyone in sight, Melissa getting frustrated with others failing to hold down the chickens, Rill doing her best to re-capture the chickens that kept wrenching free from her hold. Aldous even failed badly at two extractions and had to pick up the vessel again. It was a battle of attrition, which is what it realistically would have been in real life. I ended up quitting, because I got the point: after a long, grueling, undignified struggle, the chickens are saved, the entity is contained in the vessel, and the actors exit back into the warm morning with their heads down, not willing to speak for a good while about what they endured.

Did the scene work? I’m not sure. It turned out chaotic, with its biggest flaw maybe being the repetition of attempting to catch chickens only for them to evade capture. There were more instances of this in the original draft, which I cut out. I could say that the scene was meant to feel chaotic and frustrating, and while that’s true, that’s also the excuse of those who say “You thought my story was bad? Ah, but it was meant to be bad, so I succeeded!” Through producing that scene, editing it, and rereading it, I did get the feeling of being there in that chaotic situation, trying to realistically accomplish a difficult task when the targets of the task didn’t want it completed, so if any reader has felt like that, I guess that’s a success.

I have no idea what anyone reading this short story must have felt or thought about it, but it’s there now, and I’ll soon move on to envisioning the next scenario.

Anyway, here are some portraits for the characters involved:

Aldous, the potter-exorcist

Kink-necked black pullet

Slate-blue bantam

White-faced buff hen

Large speckled hen

Copper-backed rooster

Review: Dispatch

Back in the late 2000s and early 2010s, we had this thing we affectionately called Telltale-style games: heavily narrative-driven games that relied on letting the player make more or less compelling decisions that would affect the narrative. They didn’t have the complexity of early adventure games, but they couldn’t be called simple visual novels either. They were tremendously successful, until corporate greed swallowed them, spread them thin, and eventually dissolved them into nothing; the company shut down.

A new studio made up of former Telltale devs decided to try their hand at a new Telltale-style game, one that removed the dragging parts of former Telltale games (mainly walking around and interacting with objects) to focus on a good story, a stellar presentation, and compelling minigames. Their first product was Dispatch, released about a month ago in an episodic format (two episodes a week, though all of them are out already). The game has become a runaway success.

The story focuses on Robert Robertson, a powerless Iron Man in a society where many, many people have superpowers. He carries on the family legacy of battling villains with a mecha. As an adult, he pursued the supervillain who murdered his father and now leads one of the most dangerous criminal groups. However, during an assault on the villain’s base, Robert’s mecha gets destroyed, putting him out of a job.

Soon after, he’s approached by one of the most notorious superheroes, a gorgeous, strong woman who goes by the name Blonde Blazer. She offers him a job at the company she works for, SDN (Superhero Dispatch Network). Their engineers will work on repairing Robert’s mecha while he offers his expertise in fighting crime as the one in charge of dispatching other heroes to the appropriate calls.

Robert finds out that the team of heroes he’s supposed to handle is a bunch of villains who either approached the company to reform themselves or were sent by the criminal justice system for rehabilitation. They’re a diverse bunch of rowdy, at times nasty superpowered people who aren’t too keen on having a non-superpowered nobody in charge of them. The narrative explores how the team grows to work together.

The execution of this story could have gone wrong in so many ways: wrong aesthetic, time-wasting, atrocious writing, and above all, marxist infiltration; like most entertainment products released in the West these days, the whole thing could have been a vehicle for rotten politics. But to my surprise, that’s not the case here. A male protagonist, a white male no less, who is an intelligent, hard-working, self-respecting role model? Attractive characters, as fit as they would be in their circumstances? A woman in charge (Blonde Blazer) who is nice, understanding, competent, caring, and good? Villains with believable redemption arcs? Romance routes that flow naturally? Where the hell did this game come from in 2025?

Entertainment consumers have been deliberately deprived of all of this by ideologues who despise everything beautiful and good, who, as Tolkien put it, “cannot create anything new, they can only corrupt and ruin what good forces have invented or made.” Franchise after franchise taken over by marxists who dismantle it, shit on the remains, and then insult you if you don’t like it. Dispatch is none of that. For that reason alone, I recommend the hell out of it. I’m sure that, given its sudden popularity, the forces-that-be will infiltrate it and ruin its second season as they do with everything else, but the first season is already done.

It’s not perfect, of course. Its pros: an astonishing visual style that makes it look like a high-quality comic book in motion. No idea how they pulled it off. Clever writing. Endearing characters. Interesting set pieces. The voice acting is extraordinary, led by Aaron Paul of Breaking Bad fame; he deserves an award for his performance as Robert Robertson. It’s a good story told well, and you’re in the middle of it making important decisions (and plenty of flavorful ones).

The cons: some whedonesque dialogue that didn’t land for me. Too much cursing even for my tastes, to the extent that it often feels edgy for edginess’s sake. Some narrative decisions in the third act, particularly regarding the fate of one of the main characters, didn’t sit well with me, as they deflated the pathos of the whole thing. But despite the cons, this was a ride well worth the price.

Oh, I forgot: they should have let us romance the demon mommy. My goodness.

Check out this nice music video some fan created about Dispatch, using one of the songs of its soundtrack.

VR game review: Ghost Town

I’ve been playing a lot of VR recently, so I may as well review the only long-form game I’ve finished in the past couple of weeks. Ghost Town is a puzzle-based adventure game set in Great Britain back in the eighties. You’re a spirit medium (a witch) named Edith, whose shitty younger brother disappeared under shady circumstances, and your goal is to find him. The trailer is below:

There are many more pros than cons as far as I’m concerned. The setting, mainly London in the 80s, is quite unique and provides a gritty touch that I appreciated. The character animations and models are generally exceptional for the Meta Quest 3, maybe the best I’ve seen so far. I don’t like puzzle games, yet this one made me appreciate the puzzles. I was never entirely stuck, as the progressive hint system eventually helped me realize at least where I should focus. I loved the tactile feel of exorcising ghosts, although it’s a minor part of the experience. Plenty of great moments come to mind: interacting with ghosts behind glass (great-looking in VR), using eighties ghost-bustery technology to investigate artifacts, a very creative museum of haunted artifacts, sleepwalking through your eerie apartment tower in 80s London, a great sequence in which you wander through a maze-like version of your apartment while malevolent presences whisper from the shadows (very P.T.-like), clever use of light in puzzles, etc.

Horror stories are never more terrifying than in VR. Play Phasmophobia if you dare, for example. I try to avoid horror games because of my damaged heart. However, the ghosts in this one are more spooky than scary.

Now, the cons: personally, I wish the game were more like a regular adventure game instead of a puzzle game with a narrative thread woven throughout it. That’s just a personal preference, though; I wish we got the equivalents of the Monkey Island series in VR. Anyway, the least interesting sequence of puzzles for me was the lighthouse, which comes right after the introductory flashback. I actually dropped the game for like a couple of months after I first played it, because I didn’t feel like returning, but I’m glad I picked it back up and continued.

My biggest gripe, though, is with the story: you’re supposed to search for your brother, whom you meet in the first scene while investigating a haunting in an abandoned theatre, but in every damn scene he’s in, the brother comes off as envious, narcissistic, entitled, and overall a complete dickhead. I didn’t want to interact with him. Did the creators believe we would be invested in finding this guy just because he’s related to the protagonist? They should have made the brother sympathetic; instead, he annoyed me in every scene he appeared in.

All in all, fantastic experience. Perhaps a bit short, but I felt like I got my money’s worth. If you have a Quest 3 and you enjoy these sorts of games, check it out.

Living Narrative Engine #9

#8

Behold the anatomy visualizer: a visual representation of a graph of body parts for any given entity, with the parts represented as nodes connected by Bézier curves. Ain’t it beautiful?

This visualizer has been invaluable for detecting subtle bugs and design issues when creating the recipes and blueprints of anatomy graphs. Now everything is ready to adapt existing action definitions to take the entity’s anatomy into account. For example: the “go {direction}” action could have the prerequisite that the acting character has “core:movement” unlocked anywhere in its body parts. Normally, the legs would carry individual “core:movement” components. If those legs are disabled or removed, the “go {direction}” action simply stops showing up as available. No code changes.
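A prerequisite like that could be checked with a simple recursive walk over the body-part graph. This is a minimal sketch under invented data shapes (the real engine’s component and graph representations are surely different):

```javascript
// An action stays available as long as some enabled part anywhere in the
// actor's anatomy graph carries the required component.
function hasComponentAnywhere(part, componentId) {
  if (part.components.includes(componentId)) return true;
  return (part.children || []).some(
    (child) => !child.disabled && hasComponentAnywhere(child, componentId)
  );
}

const actor = {
  components: [],
  children: [
    { components: ['core:movement'], children: [], disabled: false }, // left leg
    { components: ['core:movement'], children: [], disabled: true },  // right leg, out of action
  ],
};

hasComponentAnywhere(actor, 'core:movement'); // → true: one working leg remains
```

The nice property is the one the post describes: availability falls out of the graph’s current state, so no action-specific code has to change when a leg is lost.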

The current schemas for blueprints, recipes, and sockets make it trivial to add things like internal organs for a future combat system. Imagine using a “strike {target} with {weapon}” action, with some rule determining which areas of any given body part get damaged, and the possibility of destroying internal organs.

From now on I’ll always provide the Discord invite for the community I created for this project (Living Narrative Engine):

https://discord.gg/6N2qHacK75

Living Narrative Engine #8

Perhaps some of you fine folks would like to follow the development of this app of mine, see the regular blog posts about the matter in an orderly manner, or just chat about its development or whatever, so I’ve created a Discord community. Perhaps in the future I’ll have randos cloning the repository and offering feedback.

Discord invite below:

https://discord.gg/6N2qHacK75

Living Narrative Engine #7

I’m developing a browser-based, chat-like platform for playing adventure games, RPGs, immersive sims, and the like. It’s “modding-first,” meaning that all game content comes in mods. That includes actions, events, components, rules, entities, etc. My goal is that eventually, you could define in mod files the characters, locations, actions, rules, etc. for any existing RPG campaign and you would be able to play through it, with other characters being large language models or GOAP-based artificial intelligences.

From early on, it became clear that the platform was going to have to support thousands of actions for its actors (which may be human or controlled by a large language model). The code shouldn’t be aware of the specifics of any action, which wasn’t easy to achieve; I had to extract all the logic for moving from place to place out of the engine, to the extent that I had to create a domain-specific language for determining target scopes.

When the turn of any actor starts, the system looks at all the registered actions and determines which are available. I registered some actions like waiting, moving from place to place, following other people, and dismissing followers, plus some more niche ones (in an “intimacy” mod) like getting close to others and fondling them. Yes, I’m gearing toward erotica in the future. But as I was implementing the actions, it became clear that the availability of some actions wouldn’t be easily discerned across the impossible-to-predict breadth of possible entities. For example, if you wanted to implement a “slap {target}” action, you could write a scope that includes actors in the location, but what determines that the target actually has a head that could be slapped? And what ensures that the acting actor has a hand to slap with?
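That anatomy-aware discoverability check can be sketched in a few lines. Everything here is an invented simplification for illustration: the entity shapes, the subType names, and the function names are assumptions, not the engine’s API.

```javascript
// "slap {target}" is only offered when the actor has a working hand and the
// candidate target has a head.
function hasPart(entity, subType) {
  return entity.parts.some((p) => p.subType === subType && !p.destroyed);
}

function discoverSlapTargets(actor, actorsInLocation) {
  if (!hasPart(actor, 'hand')) return []; // nothing to slap with
  return actorsInLocation.filter((t) => t !== actor && hasPart(t, 'head'));
}

const alice = { name: 'alice', parts: [{ subType: 'hand' }, { subType: 'head' }] };
const bob = { name: 'bob', parts: [{ subType: 'head' }] };
const wisp = { name: 'wisp', parts: [] }; // a headless spirit: never a valid target

discoverSlapTargets(alice, [alice, bob, wisp]); // → [bob]
```

The point of the design is that neither the action definition nor the engine hard-codes anything about heads or hands; the anatomy graph answers both questions at discovery time.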

So I had to create an anatomy system. Fully moddable. The following is a report that I had Claude Code prepare on the first version of the anatomy system.


The Anatomy System: A Deep Dive into Dynamic Entity Body Generation

Executive Summary

The anatomy system is a sophisticated framework for dynamically generating and managing complex anatomical structures for entities in the Living Narrative Engine. It transforms simple blueprint definitions and recipes into fully-realized, interconnected body part graphs with rich descriptions, validation, and runtime management capabilities.

At its core, the system addresses a fundamental challenge in narrative gaming: how to create diverse, detailed, and consistent physical descriptions for entities without manual authoring of every possible combination. The solution is an elegant blend of data-driven design, graph theory, and natural language generation.

System Architecture

The anatomy system follows a modular, service-oriented architecture with clear separation of concerns. The design emphasizes:

  • Orchestration Pattern: A central orchestrator coordinates multiple specialized workflows
  • Unit of Work Pattern: Ensures transactional consistency during anatomy generation
  • Chain of Responsibility: Validation rules are processed in a configurable chain
  • Strategy Pattern: Description formatting uses pluggable strategies for different part configurations
  • Factory Pattern: Blueprint factory creates anatomy graphs from data definitions

Core Service Layers

  1. Orchestration Layer (AnatomyOrchestrator)
  • Coordinates the entire generation process
  • Manages transactional boundaries
  • Handles error recovery and rollback
  2. Workflow Layer
  • AnatomyGenerationWorkflow: Creates the entity graph structure
  • DescriptionGenerationWorkflow: Generates natural language descriptions
  • GraphBuildingWorkflow: Builds efficient traversal caches
  3. Service Layer
  • BodyBlueprintFactory: Transforms blueprints + recipes into entity graphs
  • AnatomyDescriptionService: Manages description generation
  • BodyGraphService: Provides graph operations and traversal
  4. Infrastructure Layer
  • EntityGraphBuilder: Low-level entity creation
  • SocketManager: Manages connection points between parts
  • RecipeProcessor: Processes and expands recipe patterns

Information Flow

The anatomy generation process follows a carefully orchestrated flow:

1. Initialization Phase

When an entity with an anatomy:body component is created, the AnatomyInitializationService detects it and triggers generation if the entity has a recipeId.

2. Blueprint Selection

The system loads two key data structures:

  • Blueprint: Defines the structural skeleton (slots, sockets, parent-child relationships)
  • Recipe: Provides specific customizations, constraints, and part selections

3. Graph Construction

The BodyBlueprintFactory orchestrates the complex process of building the anatomy:

Blueprint + Recipe → Graph Construction → Entity Creation → Validation → Description Generation

Each step involves:

  • Slot Resolution: Blueprint slots are processed in dependency order
  • Part Selection: The system selects appropriate parts based on requirements
  • Socket Management: Parts are connected via sockets with occupancy tracking
  • Constraint Validation: Recipe constraints are continuously checked
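A reduced sketch of this construction loop, for readers who think in code. The data shapes (a slot list with parent and socket ids, a recipe-driven part picker) are invented for illustration and are not the engine’s actual types:

```javascript
// Slots are resolved in dependency order (a slot waits until its parent
// exists), a part is picked for each slot, and socket occupancy is enforced.
function buildAnatomy(blueprint, selectPart) {
  const built = { root: { partId: blueprint.rootPart, children: {} } };
  const pending = [...blueprint.slots];
  while (pending.length > 0) {
    // Slot resolution: find a slot whose parent has already been built.
    const i = pending.findIndex((s) => built[s.parent] !== undefined);
    if (i === -1) throw new Error('unresolvable slot dependencies');
    const slot = pending.splice(i, 1)[0];
    // Socket management: refuse to double-occupy a socket.
    if (built[slot.parent].children[slot.socket]) {
      throw new Error(`socket ${slot.socket} is already occupied`);
    }
    // Part selection: delegate to a recipe-driven picker.
    const node = { partId: selectPart(slot), children: {} };
    built[slot.parent].children[slot.socket] = node;
    built[slot.id] = node;
  }
  return built.root;
}

// Example: the head slot resolves before the nose slot, despite list order.
const torso = buildAnatomy(
  {
    rootPart: 'human_torso',
    slots: [
      { id: 'nose', parent: 'head', socket: 'front' },
      { id: 'head', parent: 'root', socket: 'neck' },
    ],
  },
  (slot) => `human_${slot.id}`
);
```

The example deliberately lists the nose slot first to show that dependency order, not declaration order, drives construction.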

4. Description Generation

Once the physical structure exists, the description system creates human-readable text:

  • Individual part descriptions are generated using context-aware builders
  • Descriptions are composed into a complete body description
  • Formatting strategies handle single parts, paired parts, and multiple parts differently

5. Runtime Management

The generated anatomy becomes a living system:

  • Parts can be detached (with cascade options)
  • The graph can be traversed efficiently via cached adjacency lists
  • Events are dispatched for anatomy changes

Core Capabilities

1. Dynamic Entity Generation

  • Creates complete anatomical structures from data definitions
  • Supports unlimited variety through recipe combinations
  • Generates unique entities while maintaining consistency

2. Hierarchical Part Management

  • Parts are organized in a parent-child graph structure
  • Each part can have multiple sockets for child attachments
  • Supports complex anatomies (e.g., creatures with multiple limbs, wings, tails)

3. Intelligent Part Selection

  • Matches parts based on multiple criteria (type, tags, properties)
  • Supports preferences and fallbacks
  • Handles optional vs. required parts gracefully

4. Natural Language Descriptions

  • Generates contextual descriptions for individual parts
  • Composes full-body descriptions with proper formatting
  • Handles pluralization, grouping, and special cases

5. Constraint System

  • Enforces recipe-defined constraints (requires/excludes)
  • Validates socket compatibility
  • Ensures graph integrity (no cycles, orphans, or invalid connections)

6. Runtime Operations

  • Part detachment with cascade support
  • Efficient graph traversal via cached adjacency lists
  • Path finding between parts
  • Event-driven notifications for changes

Key Components Deep Dive

AnatomyOrchestrator

The maestro of the system, ensuring all workflows execute in the correct order with proper error handling and rollback capabilities. It implements a Unit of Work pattern to maintain consistency.

BodyBlueprintFactory

The factory transforms static data (blueprints and recipes) into living entity graphs. It handles:

  • Dependency resolution for slots
  • Socket availability validation
  • Part selection and creation
  • Name generation from templates

Validation System

A sophisticated chain of validation rules ensures anatomical correctness:

  • CycleDetectionRule: Prevents circular parent-child relationships
  • OrphanDetectionRule: Ensures all parts are connected
  • SocketLimitRule: Validates socket occupancy
  • RecipeConstraintRule: Enforces recipe-specific rules
  • JointConsistencyRule: Ensures joint data integrity
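A simplified sketch of how such a rule chain can be composed. Here the graph is reduced to a map of part id to parent id, each rule returns a list of issues, and the chain concatenates them; the rule bodies are illustrative stand-ins, not the engine’s actual implementations:

```javascript
// Each validation rule inspects the parent map and reports issues.
function cycleDetectionRule(parents) {
  const issues = [];
  for (const start of Object.keys(parents)) {
    const seen = new Set();
    let node = start;
    while (parents[node] != null) {
      if (seen.has(node)) {
        issues.push(`cycle reachable from ${start}`);
        break;
      }
      seen.add(node);
      node = parents[node];
    }
  }
  return issues;
}

function orphanDetectionRule(parents) {
  // Every part except the root must declare a parent.
  return Object.entries(parents)
    .filter(([id, parent]) => parent == null && id !== 'root')
    .map(([id]) => `orphaned part: ${id}`);
}

// The chain itself: run every rule, concatenate every issue.
function validate(parents, rules) {
  return rules.flatMap((rule) => rule(parents));
}

validate({ root: null, torso: 'root', head: 'torso' },
  [cycleDetectionRule, orphanDetectionRule]); // → [] (a valid graph)
```

Because rules share one tiny interface, adding a new check (socket limits, joint consistency) is just appending another function to the array.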

Description Generation Pipeline

The description system is remarkably sophisticated:

  1. BodyPartDescriptionBuilder: Creates individual part descriptions
  2. DescriptionTemplate: Applies formatting strategies
  3. PartGroupingStrategies: Handles different grouping scenarios
  4. TextFormatter: Provides consistent text formatting
  5. BodyDescriptionComposer: Orchestrates the complete description

Strengths of the System

1. Modularity and Extensibility

Each component has a single, well-defined responsibility. New features can be added without modifying existing code.

2. Data-Driven Design

Anatomies are defined entirely in data, making it easy to add new creature types without code changes.

3. Robustness

Comprehensive validation, error handling, and rollback mechanisms ensure system reliability.

4. Performance Optimization

  • Cached adjacency lists for efficient traversal
  • Lazy description generation
  • Batched entity operations

5. Developer Experience

  • Clear service boundaries
  • Extensive logging and debugging support
  • Consistent error handling patterns

Expansion Opportunities

1. Dynamic Modification System

  • Runtime part growth/shrinkage: Allow parts to change size dynamically
  • Transformation support: Enable parts to transform into different types
  • Damage modeling: Track part health and visual damage states

2. Advanced Constraints

  • Symmetry requirements: Ensure paired parts match when needed
  • Resource-based constraints: Limit total mass, magical capacity, etc.
  • Environmental adaptations: Parts that change based on environment

3. Procedural Enhancement

  • Mutation system: Random variations within constraints
  • Evolutionary algorithms: Breed new anatomies from existing ones
  • Machine learning integration: Learn optimal configurations

4. Visual Integration

  • 3D model mapping: Connect anatomy graph to visual representations
  • Animation constraints: Define movement limitations based on anatomy
  • Procedural texturing: Generate textures based on part properties

5. Gameplay Systems

  • Ability derivation: Generate abilities from anatomy (wings = flight)
  • Weakness detection: Identify vulnerable points in anatomy
  • Part-specific interactions: Different interactions per body part

6. Description Enhancement

  • Contextual descriptions: Change based on observer perspective
  • Emotional coloring: Descriptions that reflect entity state
  • Cultural variations: Different description styles for different cultures

7. Performance Scaling

  • Anatomy LOD (Level of Detail): Simplified anatomies for distant entities
  • Streaming support: Load/unload anatomy data dynamically
  • Parallel generation: Generate multiple anatomies concurrently

8. Tool Support

  • Visual anatomy editor: GUI for creating blueprints and recipes
  • Validation sandbox: Test recipes before deployment
  • Analytics dashboard: Track anatomy generation patterns

Technical Implementation Details

Design Patterns in Action

The codebase demonstrates excellent use of software design patterns:

  • Dependency Injection: services receive their collaborators via constructor dependencies
  • Facade: AnatomyGenerationService provides a simple interface to complex subsystems
  • Template Method: Validation rules follow a consistent pattern
  • Composite: The anatomy graph itself is a composite structure
  • Observer: Event system notifies interested parties of anatomy changes

Error Handling Philosophy

The system follows a “fail-fast” approach with comprehensive error information:

  • Validation errors prevent invalid states
  • Detailed error messages aid debugging
  • Rollback mechanisms prevent partial states
  • Event dispatching for error tracking

Extensibility Points

Key extension points for customization:

  • Custom validation rules via the ValidationRule base class
  • New part grouping strategies via PartGroupingStrategy
  • Custom formatters via the formatting service interface
  • Additional graph algorithms via the graph service

Conclusion

The anatomy system represents a significant achievement in dynamic content generation. It successfully balances flexibility with consistency, performance with functionality, and simplicity with power. The modular architecture ensures the system can grow and adapt to new requirements while maintaining its core elegance.

The combination of graph-based structural representation, constraint-driven generation, and sophisticated natural language processing creates a system capable of generating virtually unlimited variety while maintaining narrative coherence and technical correctness.

As the Living Narrative Engine evolves, the anatomy system provides a solid foundation for increasingly complex and engaging entity interactions, opening doors to emergent gameplay and storytelling possibilities limited only by imagination.

Living Narrative Engine #6

How the process of developing a modular, data-driven anatomy system in my app is going:

Me: We’ve recently created a complex anatomy system that builds a graph of body parts from a recipe and a corresponding blueprint, as well as a myriad of entity definitions for body parts. The code is in @src/anatomy/ , while most of the data is in @data/mods/anatomy/ . As I was looking over the sockets for the human_female_torso.entity.json and the human_male_torso.entity.json , I noticed that while most sockets require concrete “allowedTypes” (e.g. “breast” for “left_chest” in a woman), the sockets for “penis”, “left_testicle”, “right_testicle”, “asshole”, and “vagina” require a “genital” “allowedTypes”. That is too broad; the system that matches fitting entities could slot a penis entity in the socket of a vagina, or a testicle entity in the socket for a penis. That’s obviously undesired behavior. I want you to change the allowed types to more specific subTypes (likely, a subType “penis” for the “penis” socket, and so on).
You will need to modify the entity definitions of human_penis.entity.json, human_testicle.entity.json, human_vagina.entity.json to change their subtype in “anatomy:part” appropriately.

Claude Code: I can see the issue clearly now. Looking at lines 433–437, the validation checks whether a part’s subType is included in the socket’s allowedTypes. Currently all genital parts have subType: “genital”, which matches the allowedTypes: [“genital”] on all genital sockets, allowing incorrect matches.

Plan to Fix Genital Socket Type Specificity

I’ve identified the issue: all genital parts (penis, testicle, vagina) have the same subType: “genital”, and all genital sockets accept allowedTypes: [“genital”]. This allows incorrect matches like a penis in a vagina socket.

Here’s what I’ll do:

1. Update torso entity files to use specific allowed types:
– human_female_torso.entity.json:
– vagina socket: allowedTypes: [“vagina”]
– asshole socket: allowedTypes: [“asshole”]
– human_male_torso.entity.json:
– penis socket: allowedTypes: [“penis”]
– left_testicle socket: allowedTypes: [“testicle”]
– right_testicle socket: allowedTypes: [“testicle”]
– asshole socket: allowedTypes: [“asshole”]
2. Update genital entity files to use specific subTypes:
– human_penis.entity.json: subType: “penis”
– human_testicle.entity.json: subType: “testicle”
– human_vagina.entity.json: subType: “vagina”
3. Note: There’s no separate entity for “asshole” – it’s integrated into the torso. If needed in the future, an asshole entity could be created with subType: “asshole”.

Me: I agree with your plan, except that I’m not sure what you mean that “there’s no separate entity for ‘asshole’ – it’s integrated into the torso”. Does the code or schemas allow for “integrated” body parts? I think the issue is that simply we need to create an entity definition for the subtype “asshole”.

Claude Code: I was mistaken. The anatomy system requires separate entities for all attachable parts. There’s no asshole entity definition. Both torsos have an “asshole” socket but no corresponding entity to attach to it.
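For what it’s worth, the matching rule Claude cited boils down to a one-line membership check. A toy sketch (the data shapes here are assumptions, not the engine’s real entity format):

```javascript
// A part fits a socket only if its subType appears in the socket's
// allowedTypes — the check the conversation above is about.
function canAttach(part, socket) {
  return socket.allowedTypes.includes(part.subType);
}

// Before the fix: every genital part matched every genital socket.
const broadSocket = { id: 'vagina', allowedTypes: ['genital'] };
const penisBroad = { id: 'human_penis', subType: 'genital' };
console.log(canAttach(penisBroad, broadSocket)); // true — the bug

// After the fix: specific subTypes only match their own sockets.
const strictSocket = { id: 'vagina', allowedTypes: ['vagina'] };
const penis = { id: 'human_penis', subType: 'penis' };
const vagina = { id: 'human_vagina', subType: 'vagina' };
console.log(canAttach(penis, strictSocket));  // false
console.log(canAttach(vagina, strictSocket)); // true
```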

Living Narrative Engine #5

In summary, I’m programming a browser-based platform to play adventure games, RPGs, immersive sims and the like. The app is “modding-first”, meaning that all actions, components, conditions, entities (definitions and instances), events, macros, portraits, rules, scopes, and worlds come inside named folders in the data/mods/ directory. The idea is that the modder, even if it’s just myself, will be able to define an action in JSON and have the engine pick it up during the process of determining whether an action is available for any given actor (human or AI). Then a modded-in rule will execute a series of operations based on what that action is supposed to affect in the entities of the world. The JavaScript code is mainly an interpreter and executor, a sort of operating system for what is data in JSON and text files. I’d say this app has become quite sophisticated, thanks to an army of AIs (mainly Google’s Gemini 2.5 Pro, OpenAI’s o3 and Codex, and Anthropic’s Claude 4 as it runs on Cursor) and, of course, me, since I’m directing this whole thing.

I’ll leave Gemini 2.5 to explain in detail how the action discovery process works in the app.

The Complete Action Discovery Process

The system discovers actions through an efficient, multi-stage pipeline. Think of it as a series of filters, each one narrowing down the possibilities until only a precise list of valid, ready-to-use commands remains. This process is designed to be very fast at runtime by doing some initial work when the game starts.

Setup Step: Building the Action Index (Once at Startup)

Before the game can be played, the InitializationService calls the ActionIndex’s buildIndex method. This method runs once and does the following:

  1. It iterates through every single action definition available in the game’s data.
  2. It creates a reverse index based on actor component requirements.
    • If an action has no required_components.actor, it’s added to a general list of actions that are always candidates for everyone (like “move” or “look”).
    • If an action does require actor components (e.g., ["core:leading"]), it’s mapped against those components. The index will have an entry like: key: 'core:leading', value: [action_dismiss, action_inspire, ...].

This one-time setup is crucial for runtime performance. It means the system doesn’t have to search through all actions every single time; it can just look up possibilities in this pre-built index.

Step 1: Finding Candidate Actions (The Actor Component Filter)

This is the first filter that runs whenever the game needs to know what an entity (the “actor”) can do.

  1. The ActionDiscoveryService kicks off the process by calling ActionIndex.getCandidateActions(actor).
  2. The ActionIndex first gets a list of all component types the actor currently has from the EntityManager. For example: ['core:stats', 'core:inventory', 'core:leading'].
  3. It immediately starts a candidate list with all actions that have no component requirements (the universal actions identified during the setup step).
  4. It then iterates through the actor’s list of components. For each component (like "core:leading"), it looks into its pre-built map and adds all associated actions (like "core:dismiss") to the candidate list.

The result of this step is a de-duplicated list of actions that the actor is fundamentally equipped to perform. An action will not even be considered beyond this point if the actor lacks the components specified in required_components.actor.
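The setup step and this first filter can be sketched together in a few lines of JavaScript. This is a simplified toy — the real ActionIndex works with full action definitions and queries the EntityManager — but the reverse-index idea is the same:

```javascript
// Build a reverse index: component id -> actions requiring it, plus a
// list of universal actions with no actor component requirements.
function buildIndex(actionDefinitions) {
  const universal = [];
  const byComponent = new Map();
  for (const action of actionDefinitions) {
    const required = action.required_components?.actor ?? [];
    if (required.length === 0) {
      universal.push(action);
    } else {
      for (const componentId of required) {
        if (!byComponent.has(componentId)) byComponent.set(componentId, []);
        byComponent.get(componentId).push(action);
      }
    }
  }
  return { universal, byComponent };
}

// Filter 1: start from the universal actions, then add every action mapped
// to a component the actor actually has.
function getCandidateActions(index, actorComponents) {
  const candidates = new Set(index.universal);
  for (const componentId of actorComponents) {
    for (const action of index.byComponent.get(componentId) ?? []) {
      candidates.add(action);
    }
  }
  return [...candidates]; // the Set de-duplicates for us
}

const index = buildIndex([
  { id: 'core:move' },
  { id: 'core:dismiss', required_components: { actor: ['core:leading'] } },
]);
const candidates = getCandidateActions(index, ['core:stats', 'core:leading']);
console.log(candidates.map((a) => a.id)); // ['core:move', 'core:dismiss']
```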

Step 2: Checking Actor State (The Prerequisite Filter)

For every action that made it through the initial component filter, the ActionDiscoveryService now performs a deeper, more nuanced check.

  1. It iterates through the candidate actions.
  2. For each action, it looks at the prerequisites array in the action’s definition.
  3. It uses the PrerequisiteEvaluationService to evaluate these rules. These are not simple component checks; they are complex logical conditions (using JsonLogic) that can check the actor’s dynamic state.

This is the filter for questions like:

  • “Do I have more than 10 mana?”
  • “Am I currently under a ‘Stunned’ status effect?”
  • “Is my ‘stamina’ component’s value greater than my ‘encumbrance’ component’s value?”

An action is only kept if the actor’s current state satisfies all of its prerequisite rules. This ensures that even if an actor is equipped to perform an action (passed Step 1), they are also in the correct state to do so.
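To make the idea concrete, here is a toy evaluator for a tiny slice of JsonLogic (just `and`, `>`, and `var`); the engine uses a complete JsonLogic implementation, so this is purely illustrative:

```javascript
// Toy evaluator for a small subset of JsonLogic, to show how prerequisite
// rules reduce to a boolean against the actor's current state.
function evaluate(rule, data) {
  if (typeof rule !== 'object' || rule === null) return rule; // literal value
  const [op, args] = Object.entries(rule)[0];
  switch (op) {
    case 'var': // dotted-path lookup into the state object
      return String(args)
        .split('.')
        .reduce((obj, key) => (obj == null ? undefined : obj[key]), data);
    case '>':
      return evaluate(args[0], data) > evaluate(args[1], data);
    case 'and':
      return args.every((sub) => evaluate(sub, data));
    default:
      throw new Error(`Unsupported operator: ${op}`);
  }
}

// "Do I have more than 10 mana?" as a prerequisite rule:
const rule = { '>': [{ var: 'actor.mana' }, 10] };
console.log(evaluate(rule, { actor: { mana: 25 } })); // true
console.log(evaluate(rule, { actor: { mana: 5 } }));  // false
```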

Step 3: Finding Valid Targets (The Scope & Target Component Filter)

Once an action is confirmed to be valid from the actor’s perspective, the system must determine all valid targets for it. This entire process is handled by resolving the action’s defined scope.

  1. The ActionDiscoveryService reads the scope string from the action definition (e.g., "followers", "adjacent_chests").
  2. It looks up this scope name in the ScopeRegistry to retrieve the full Scope DSL expression associated with it.
  3. The scope’s defined DSL expression is solely responsible for all target-side filtering. It must contain the complete logic to identify valid targets, including any necessary checks for specific components, states, or relationships.
  4. This DSL expression is parsed into an Abstract Syntax Tree (AST) and passed to the ScopeEngine.

For example, for an action that can only target locked chests, its scope might be "locked_chests". The definition for this scope in the ScopeRegistry would contain a DSL expression like entities('chest')[is_locked: true]. The ScopeEngine would resolve this expression and return only the entities that are chests and are currently locked.

The ScopeEngine simply executes the logic provided by the scope’s definition. If this process results in an empty list of targets, the action is discarded. Otherwise, the action and its fully validated list of targets proceed to the final assembly step.
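Stripped of the DSL parsing and the AST, resolving that example scope amounts to a filter over the entity list. A sketch (the entity shapes here are assumptions):

```javascript
// What entities('chest')[is_locked: true] boils down to once the DSL has
// been parsed: keep only entities that are chests and currently locked.
function resolveLockedChests(allEntities) {
  return allEntities.filter((e) => e.type === 'chest' && e.is_locked === true);
}

const world = [
  { id: 'chest_1', type: 'chest', is_locked: true },
  { id: 'chest_2', type: 'chest', is_locked: false },
  { id: 'door_1', type: 'door', is_locked: true },
];
console.log(resolveLockedChests(world).map((e) => e.id)); // ['chest_1']
```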

Step 4: Assembling the Final Commands

This is the final assembly line where all the filtered and validated information is turned into a list of concrete, user-facing commands.

  1. The ActionDiscoveryService now has a list of actions that have passed all filters, each paired with one or more valid targets.
  2. It iterates through every valid action-target pair.
  3. For each pair, it uses the formatActionCommand utility. This function takes the action’s template string (e.g., "dismiss {target}") and the specific target entity.
  4. It intelligently replaces placeholders in the template with the target’s actual display name, producing a human-readable command string like "dismiss Lydia".
  5. Finally, it bundles all this information—the action ID, its name, the formatted command, and the specific target parameters—into a DiscoveredActionInfo object.

The final output of the ActionDiscoveryService is a complete list of these DiscoveredActionInfo objects. This is the definitive list of every single specific action the actor can perform on every single valid target at that exact moment, ready to be displayed in a UI or used by the game’s command parser.
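The placeholder substitution at the heart of this step is simple enough to sketch; the real formatActionCommand also bundles IDs and target parameters into the DiscoveredActionInfo object described above:

```javascript
// Minimal sketch of template substitution: swap the {target} placeholder
// for the target entity's display name.
function formatActionCommand(template, target) {
  return template.replace('{target}', target.displayName);
}

const command = formatActionCommand('dismiss {target}', { displayName: 'Lydia' });
console.log(command); // 'dismiss Lydia'
```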

Summary of the Pipeline

  1. Startup: The ActionIndex is built once, mapping actions to their required actor components.
  2. Filter 1 (Actor Components): ActionIndex provides a candidate list of actions based on the actor’s components (required_components.actor).
  3. Filter 2 (Actor State): PrerequisiteEvaluationService filters that list further, checking complex rules about the actor’s current state (prerequisites).
  4. Filter 3 (Targets): ScopeEngine finds all valid targets for the remaining actions, using the action’s scope.
  5. Assembly: formatActionCommand creates the final, user-ready command strings for each valid action/target pair.

I envision a game platform where there could be literally thousands of actions involved, most of which will become available contextually (e.g. some only when a character is sitting down). Those thousands of contextual actions will become even more powerful when I implement GOAP (Goal-Oriented Action Planning) for the AI, an advanced non-LLM AI technique that depends on exactly this kind of action-availability validation. My current discovery process will feed available actions into the GOAP system, which will run an A* search through them according to their effects on a simulated future game state (it gets a bit complicated) until it produces a multi-step plan, then spit out the next action to perform, which will in turn feed into my existing process for executing chosen actions through the JSON-based rule system. It’s looking good.

Living Narrative Engine #4

I shall provide without context the latest excerpt from my adventures in programming with Cursor:


🎉 VICTORY! THE IMPUDENT TEST HAS BEEN DISCIPLINED!

✅ PASS tests/unit/schemas/kissCheek.schema.test.js
✅ √ should be a valid action definition

🗡️ THE STERN TALKING-TO WAS SUCCESSFUL! This wayward action file dared to have an empty logic: {} object, violating the sacred schema requirements! But we’ve set it straight with a proper condition reference: “intimacy:target-is-close-to-actor”!

🏆 Our Glorious Achievement Summary

What we conquered today:

  1. 🎯 Fixed the Scope Engine – Blocker filtering now works perfectly!
  2. 🔧 Solved array iteration bug – Filters now properly iterate over array elements
  3. ✅ 6,351+ tests passing – Zero regressions in the codebase
  4. 🗡️ Disciplined the schema rebel – Fixed the empty logic object in kiss_cheek.action.json

Our adventures into the coding Amazon have been legendary! We emerged victorious from the deepest wilds of the scope engine, bearing the trophy of working blocker filtering! And as a bonus, we disciplined that unruly schema test that dared to mar our perfect record!

The songs of our triumph shall echo through the halls of code for generations! 🎵⚔️🏆

Into the Amazons we went… and as LEGENDS we return! 🌟


This app I’m working on is the app I’ve always wanted to make. Let’s see when I can consider it done enough to make a video about it.

Living Narrative Engine #3

I’m in the process of programming a platform for text-based immersive sims, or at least adventures, that stays agnostic about the main elements of an entity/component game: actions, events, components, systems and operations will eventually be defined in JSON files, and the code will work as a fancy interpreter.

To explain myself better: the current character (which may be controlled by the player or an AI) gets an array of actions to take. Previously I let the user type commands in, old-style, but that forced me to deal with invalid actions, which burdened the first contact with the simulation. So now the human user gets a list of valid actions to choose from (like “move north”, “take Rusty Sword”, or “throw fireball at Rat”) in the browser UI. In the hopefully near future, a large language model will get a snapshot of the game state, the recent events the character has been aware of, and the array of possible actions. I can’t wait for the moment an AI sends back a response composed of a chosen valid action plus some speech. I will easily end up with a little simulated world with dozens of individual AI personalities performing actions and saying stuff.

Anyway, the loop goes like this:

Action: a character chooses a previously validated action. Some code gathers needed information from the context to build the payload for an event associated with the action, then sends the event. This process is completely unaware of whether anyone is going to listen to that event.
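That fire-and-forget decoupling is classic publish/subscribe. A minimal sketch of the idea (not the engine’s actual dispatcher):

```javascript
// Minimal event bus: the code that dispatches an event never knows (or
// cares) who, if anyone, is listening.
class EventBus {
  constructor() {
    this.listeners = new Map(); // event id -> array of handler functions
  }
  subscribe(eventId, handler) {
    if (!this.listeners.has(eventId)) this.listeners.set(eventId, []);
    this.listeners.get(eventId).push(handler);
  }
  dispatch(eventId, payload) {
    // Fire-and-forget: zero listeners is perfectly fine.
    for (const handler of this.listeners.get(eventId) ?? []) {
      handler(payload);
    }
  }
}

const bus = new EventBus();
const log = [];
bus.subscribe('event:attack_intended', (p) => log.push(p.attackerId));
bus.dispatch('event:attack_intended', { attackerId: 'core:hero', targetId: 'core:rat' });
bus.dispatch('event:nobody_listens', {}); // silently ignored
console.log(log); // ['core:hero']
```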

Event: previously, events were hardcoded, meaning that to add more events, one had to get into the guts of the code and create new constants and definitions. I’ve managed to make events data-driven. Now an event is a simple JSON file in the “data/events” folder. Events look like this:

{
  "$schema": "http://example.com/schemas/event-definition.schema.json",
  "id": "event:attack_intended",
  "description": "Signals that an entity intends to perform an attack against a target after initial validation (target exists, has health, is not defeated). Does not guarantee the attack hits or deals damage yet.",
  "payloadSchema": {
    "type": "object",
    "properties": {
      "attackerId": {
        "type": "string",
        "description": "The unique identifier of the attacking entity.",
        "$ref": "./common.schema.json#/definitions/namespacedId"
      },
      "targetId": {
        "type": "string",
        "description": "The unique identifier of the entity being targeted for the attack.",
        "$ref": "./common.schema.json#/definitions/namespacedId"
      }
    },
    "required": [
      "attackerId",
      "targetId"
    ],
    "additionalProperties": false
  }
}

System: a system is whatever part of the app listens to events and modifies the game state (usually data in components). Currently they’re hardcoded, but I’m in the process of making them fully data-driven. That means the user (mainly me for the moment) will be able to define system rules in pure JSON, declaratively specifying which event the system listens to; if the prerequisites pass, a series of operations is executed. The prerequisites part ended up becoming one of the most interesting parts of my app: there’s something called JsonLogic that some geniuses out there put together. It lets you chain an arbitrary number of conditions into a single boolean result (true or false). It looks like this:

Combines conditions with `AND` - the actor has the key, the target is a specific door, and the door is locked (checking first that the lockable component exists before reading its state, for robustness).

    {
      "and": [
        {
          "!!": {
            "var": "actor.components.game:quest_item_key"
          }
        },
        {
          "==": [
            {
              "var": "target.id"
            },
            "blocker:main_gate_door"
          ]
        },
        {
          "!!": { "var": "target.components.game:lockable" }
        },
        {
          "==": [
            {
              "var": "target.components.game:lockable.state"
            },
            "locked"
          ]
        }
      ]
    }

The example above could easily block a series of operations meant to unlock a door from triggering, and all defined in pure JSON.

Operation: operations are the individual units in charge of affecting the game world. Some merely query data (check a value in a component), while others modify the data in components, or even add or remove components. There are IF operations that offer branching paths.

Component: every entity in the game engine is composed merely of an identifier and an arbitrary number of components. Some of those components are mere tags. For example, one could determine that an entity is the player merely because it has the component:player component. Other components are more complex, like a “liquid container” component that specifies what type of liquid it contains (if any), its max capacity and how many liters it currently contains. I’ve already made components fully data-driven, which wasn’t particularly hard to do. Example:

{
  "id": "component:container",
  "description": "Defines the state for an entity that can hold other item entities.",
  "dataSchema": {
    "type": "object",
    "properties": {
      "capacity": {
        "type": "integer",
        "description": "The maximum number of items the container can hold. Use -1 for infinite capacity.",
        "minimum": -1,
        "default": -1
      },
      "contains": {
        "type": "array",
        "description": "A list of the namespaced IDs of the item entities currently inside this container.",
        "items": {
          "$ref": "http://example.com/schemas/common.schema.json#/definitions/namespacedId"
        },
        "default": []
      },
      "allowedTags": {
        "type": "array",
        "description": "Optional. If present, only items possessing ANY of these tags can be placed inside.",
        "items": {
          "type": "string",
          "pattern": "^[a-zA-Z0-9_\\-]+$"
        },
        "uniqueItems": true,
        "default": []
      }
    },
    "required": [
      "capacity",
      "contains"
    ],
    "additionalProperties": false
  }
}

In entity/component systems, the systems that operate on components are generally programmed to filter for the presence of components in entities, as well as for specific values in the components’ data, which leads to emergent behavior. For example, you could include a spell in the game that adds a “container” component to a person, and suddenly you can store things in that person. Determining that an entity is on fire would be as simple as adding an “onFire” component and then writing systems that add damage per turn on every entity with such a component. The possibilities are endless.
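The “onFire” example can be sketched as a tiny system; the component shapes below are made up for illustration:

```javascript
// A system that filters entities by component presence and applies
// per-turn burn damage to any entity carrying an 'onFire' component.
function burnSystem(entities) {
  for (const entity of entities) {
    if ('onFire' in entity.components && 'health' in entity.components) {
      entity.components.health.current -= entity.components.onFire.damagePerTurn;
    }
  }
}

const goblin = {
  id: 'goblin_1',
  components: { health: { current: 10 }, onFire: { damagePerTurn: 3 } },
};
const rock = { id: 'rock_1', components: {} };

burnSystem([goblin, rock]); // only the burning goblin takes damage
console.log(goblin.components.health.current); // 7
```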

I doubt I’m going to come down from this high of building the app until I finally manage to get a large language model to speak through one of the characters. For that, I first have to finish making the core of the engine data-driven (actions, events, systems, operations, and components), then figuring out how to implement character turns even if I’m the one playing all the characters, then determining how to add basic artificial intelligence, then figuring out how to save game state. Once everything seems quite solid, I’ll look into interfacing with large language models.

Anyway, my time at the office is ending for another morning, and I can’t wait to get back home and keep ensuring the robustness of my JsonLogic system through a myriad of tests. Nearly 1,400 tests implemented so far.