Post-mortem for Custody of the Rot

If you’re reading these words without having read the story mentioned in the title, don’t be a fucking moronski; read it first.

I assume you’ve read some of my previous posts on my ongoing fantasy cycle, so you may remember that I’m producing these stories in tandem with improvements to my app, named Living Narrative Engine. It’s a browser-based system for playing scenarios like immersive sims, RPGs, etc. I’m compelled by the mutual pulls of adding more features to my engine and experiencing new scenarios; sometimes I come up with the scenario first, sometimes with the mechanics. That keeps my brain in constant “solve this puzzle” mode, which is the ideal way to live for me.

Anyway, the upcoming scenarios involving a brave bunch of dredgers in a fantasy world, tasked with extracting a dangerous arcane artifact from some gods-forsaken hole, will require me to develop the following new mechanics:

  1. Lighting mechanics. Currently, every location is considered constantly lit. Given that we’re going underground and that the narrative itself requires using lanterns, I have to implement mechanics for recognizing when a location is naturally dark, and whether there are light sources active. Other mechanics already provide information about the location and the actors in it, so from now on, when a location is naturally dark and nobody has switched on a flashlight, we have to block the descriptions of the location and of the other actors in it, and instead display text like “You can’t see shit.” (There’s a rough sketch of this check after the list.)
  2. Once lighting mechanics exist, we need actions for lighting up and snuffing out lanterns and lantern-like entities. By far the easiest part.
  3. Currently, when an actor speaks in a location, the speech is only received by actors in that location. At the same time, I consider an entity a location when it has defined exits. Now we find ourselves in a situation in which we have a thirty-foot-long underground corridor separated by grates. That would make each segment between grates a location (which would be correct, given the boundary), but an actor could step from one segment into the next and suddenly not hear a character on the other side of a grate’s bars. Obviously idiotic. So I need to implement a mechanical system for declaring “if an actor speaks here, the voice will be heard in these other places too.” That will need to extend to actions too: if you have eyes, you can see someone scratching his ass on the other side of bars.
  4. No other scenario has featured water sources that could play a part. And by play a part I mean that actors could get in or fall in, exit them, struggle in the water, and drown. I really don’t want to see my characters drowning, but that’s part of the stakes, so the mechanics need to exist. Given that stretches of water tend to connect to other locations outside the regular exits, I will need some way of allowing “I’m in the water, so I want to swim upstream or downstream to a connected stretch of this water source.” This whole water system will be arduous.
  5. Line-tending mechanics. Until I started researching matters for this story, I doubt the notion of line-tending had ever entered my mind. Now we need mechanics for: 1) making an owned rope available to others; 2) clipping and unclipping oneself from the available rope; 3) pulling on the rope to draw back someone clipped in who’s wandering off; 4) possibly other cool line-tending-related mechanics. I can see line-tending reappearing in future scenarios such as traditional dungeon delves (for example, to avoid falls in Moria-like environments). Admittedly, though, this whole thing is quite niche.
  6. Blocker-breaking mechanics. Basically: this door is made of bars, so a hacksaw can saw through them. I don’t want to make it a single action, but a progressive one (e.g. each success only advances the job a step toward completion).
  7. Mechanics related to mind control. To even use those actions, I will need to create a new type of actor for the scenarios: a dungeon master of sorts. Basically a human player that isn’t accessible to others, as if it were invisible, but that can act on the actors present. I would give that dungeon master for this run the can_mind_control component, then allow actions such as putting people into trances, making them walk off, dive into water, etc. This means there would need to be opposing actions, with the victims fighting to snap out of the trance. It will be fun to find out what happens when the scenario plays out. In the future, this dungeon master could be controlled by a large language model without excessive difficulty: for example, by feeding it what has happened in the story so far and the general notions about what should happen, and giving it actions such as “spawn a hundred murder dragons.”
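To make item 1 a bit more concrete, here’s a rough sketch of how I picture the lighting check working. Everything in it is hypothetical: the component IDs (core:naturally_dark, core:light_source) and the helper functions don’t exist in the engine yet; they just illustrate the shape of the mechanic.

```js
// Hypothetical sketch of the lighting check. The component IDs and helpers
// (core:naturally_dark, core:light_source, buildNormalDescription) are
// placeholders for this post, not actual Living Narrative Engine identifiers.
function isLocationLit(location, actorsPresent) {
  // Locations without the "naturally dark" component behave as they do today: always lit.
  if (!location.components['core:naturally_dark']) return true;

  // Otherwise, any active light source present in the location counts,
  // whether it's carried by an actor or lying on the floor.
  const entitiesHere = [location, ...actorsPresent, ...(location.containedItems ?? [])];
  return entitiesHere.some((entity) => {
    const light = entity.components?.['core:light_source'];
    return light && light.lit;
  });
}

function describeLocationForActor(location, actorsPresent) {
  if (!isLocationLit(location, actorsPresent)) {
    return 'You can’t see shit.';
  }
  return buildNormalDescription(location, actorsPresent); // the existing description pipeline
}
```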

That’s all that comes to mind now regarding the mechanics to add.

About the story: so far, it seems I want magic to be treated in this fantasy world as if it were toxic material. That’s not a decision I’ve made about worldbuilding, but a natural consequence of the stories I’ve felt like telling. I actually don’t believe in the kind of worldbuilding in which you come up with imaginary words for the warts on an invented race’s ass. I’m all about use and tools. My mind always goes for “what can I build with this.” I’m very rarely interested in a subject if I can’t see myself creating a system out of it. It also doesn’t help that due to autism, abstractions tend to slip through my fingers, so I need to feel like I’m sensing something to understand it.

In a way, I wanted to create a story about specialists working through a problem that needs to be solved. Jorren Weir, Kestrel Brune, Saffi Two-Tides, Pitch… these people don’t have superpowers. Most of them are glad they can keep a job. There is no grand evil here, just people’s self-interest. I want them to do well so that they can return home at the end of the ordeal. But given that we’re dealing with chance-based tests, that’s not a guarantee. And that tension alone makes it exciting for me to experience these scenarios.

As usual, if you’re enjoying these stories, then great. Otherwise, fuck off.

Review: Dispatch

Back in the late 2000s and early 2010s, we had this thing we affectionately called Telltale-style games: heavily narrative-driven games that relied on letting the player make more or less compelling decisions that would affect the narrative. They didn’t have the complexity of early adventure games, but they couldn’t be called simple visual novels either. They were tremendously successful, until corporate greed swallowed them, spread them thin, and eventually dissolved them into nothing. The company shut down.

A new studio made of former Telltale devs decided to try their hand at a new Telltale-style game that removed the dragging parts of former Telltale games (mainly walking around and interacting with objects) to focus on a good story, a stellar presentation, and compelling minigames. Their first product was the game Dispatch, released about a month ago in an episodic format (two episodes a week, but all of them are out already). The game has become a runaway success.

The story focuses on Robert Robertson, a powerless Iron Man in a society where many, many people have superpowers. He carries the family legacy of battling villains with a mecha. As an adult, he pursues the supervillain who murdered his father and who now leads one of the most dangerous criminal groups. However, during an assault on the villain’s base, Robert’s mecha gets destroyed, which puts him out of a job.

Soon after, he’s approached by one of the most notorious superheroes, a gorgeous, strong woman who goes by the name Blonde Blazer. She offers him a job at the company she works for, SDN (Superhero Dispatch Network). Their engineers will work on repairing Robert’s mecha, while he offers his expertise on fighting crime as the one in charge of dispatching other heroes to the appropriate calls.

Robert finds out that the team of heroes he’s supposed to handle is a bunch of villains who have either approached the company to reform themselves or were sent there by the criminal justice system for rehabilitation. They’re a diverse bunch of rowdy, at times nasty superpowered people who aren’t too keen on having a non-superpowered nobody in charge of them. The narrative explores how the team grows to work together better.

The execution of this story could have gone wrong in so many ways: wrong aesthetic, time-wasting, atrocious writing, and above all, marxist infiltration; like most entertainment products released in the West these days, the whole thing could have been a vehicle for rotten politics. But to my surprise, that’s not the case here. A male protagonist, a white male no less, who is an intelligent, hard-working, self-respecting role model? Attractive characters, as fit as they would be in their circumstances? A woman in charge (Blonde Blazer) who is nice, understanding, competent, caring, and good? Villains with believable redemption arcs? Romance routes that flow naturally? Where the hell did this game come from in 2025?

Entertainment consumers have been deliberately deprived of all of this by ideologues who despise everything beautiful and good, who, as Tolkien put it, “cannot create anything new, they can only corrupt and ruin what good forces have invented or made.” Franchise after franchise taken over by marxists who dismantle it, shit on the remains, and then insult you if you don’t like it. Dispatch is none of that. For that reason alone, I recommend the hell out of it. I’m sure that given its sudden popularity, the forces-that-be will infiltrate it and ruin its second season as they do with everything else, but the first season is already done.

It’s not perfect, of course. Its pros: an astonishing visual style that makes it look like a high-quality comic book in motion. No idea how they pulled it off. Clever writing. Endearing characters. Interesting set pieces. The voice acting is extraordinary, led by Aaron Paul of Breaking Bad fame. He deserves an award for his performance as Robert Robertson. It’s a good story told well, and you’re in the middle of it making important decisions (and also plenty of flavorful ones).

The cons: some whedonesque dialogue that didn’t land for me. Too much cursing even for my tastes, to the extent that it often feels edgy for edginess’s sake. Some narrative decisions taken during the third act, particularly regarding the fate of one of the main characters, didn’t sit well with me, as they deflated the pathos of the whole thing. But despite the cons, this was a ride well worth the price.

Oh, I forgot: they should have let us romance the demon mommy. My goodness.

Check out this nice music video some fan created about Dispatch, using one of the songs from its soundtrack.

Living Narrative Engine #12

Ever since I started developing my app (Living Narrative Engine repo) months ago, I knew I wanted to reach a state in which any scenario could have three types of actors: human, LLM, and GOAP. LLMs are mostly understood these days; ChatGPT is one of them. You send them a prompt, they respond like a person would. They can also return JSON content, which is easily processed by programs. Ironically, implementing AI that resembles real intelligence (and that, as some recent papers have demonstrated, has achieved emergent introspective awareness) in an app these days is actually much easier than implementing an algorithmic intelligence, the kind used in complex simulations. But if you want to populate a scenario with sentient beings and beasts, having LLMs control beasts is potentially counterproductive; they could have them making too-intelligent decisions. For that, GOAP, or Goal-Oriented Action Planning, comes in.

Later, I will copy-paste the report I made Claude Sonnet 4.5 write about the system as it’s currently implemented in my app. The point I want to make is that implementing GOAP was, for me, the holy grail of this app, and its absence gated off properly complex scenarios. For example, I couldn’t create scenarios involving a dungeon run, or even a simple house cat, because I would have to handle the monsters myself. But with GOAP present, I could populate a whole multi-level dungeon with GOAP-controlled creatures and they would live their lives naturally, seeking food, pursuing targets, resting, etc. Even better, the action discoverability system that I implemented early on means that actions already come filtered to those available, so the GOAP system only needs to consider the acting actor’s goals to determine what action to take.

This morning, while reviewing my goals for this app, I considered that maybe it was mature enough to handle implementing a Phase 1 of the GOAP system. I asked Sonnet to produce a brainstorming document regarding how we could implement it, and to my surprise, it wouldn’t be particularly hard at this point. Hours later, I’m already validating the entire system through end-to-end tests, and it all looks fantastic so far. However, given the complexity of this system, I won’t try using it in practice until it’s 100% covered by e2e tests. I know very well the kind of strange bugs that can pop up otherwise.

I can hardly wait to implement a medieval-fantasy scenario in which a group of adventurers go into a cave to exterminate some goblins, only for GOAP-controlled, lore-accurate goblins to consistently seek fondling-related actions towards anything that has boobs or a bubbly butt.

Anyway, without further ado, here’s the report about GOAP in my app, the proudly-named Living Narrative Engine.


# The GOAP System: Teaching NPCs to Think (and Tell Better Stories)

## A Blog Report on Living Narrative Engine’s New AI Decision-Making System

*Written for readers interested in AI, storytelling, and game development*

## What is GOAP? (In Human Terms)

Imagine you’re watching a character in a story who’s hungry. They don’t just magically teleport to the nearest restaurant. They think: “I need food. There’s a sandwich in the kitchen. But first, I need to get up from this chair, walk to the kitchen, and open the fridge.” That’s essentially what GOAP (Goal-Oriented Action Planning) does for NPCs (non-player characters) in games.

**GOAP is a system that lets AI characters figure out how to achieve their goals by planning a series of actions**, much like how you or I would solve a problem. Instead of following pre-programmed scripts, characters can reason about what they want and figure out the steps to get there.

In the Living Narrative Engine, this system is now fully implemented and working. After months of development, all three architectural tiers are complete, tested, and ready to transform how characters behave in narrative games.

## The Problem Before GOAP

Before GOAP, creating believable AI behavior was like writing a gigantic flowchart of “if this, then that” rules. Want an NPC to find food when hungry?

**Old way:**

```
IF hungry AND food_nearby:
  → walk to food
  → pick up food
  → eat food
```

Seems simple, right? But what if:

– The food is in a locked container?

– The character needs to pick up a key first?

– The character is sitting and needs to stand up?

– There are multiple ways to get food?

You’d need dozens of rules for every possible situation. It quickly becomes a nightmare to maintain, and characters feel robotic because they can only do exactly what you programmed, nothing more.

## What GOAP Changes: Characters That Think

With GOAP, you don’t tell characters *how* to do things—you tell them *what* they can do, and they figure out the rest.

**The GOAP Way:**

**You define:**

**Goals**: “I want to have food” or “I want to rest safely”

**Actions**: “Pick up item,” “Open container,” “Stand up,” “Move to location”

**Effects**: What each action changes in the world

**The AI figures out:**

– Which actions will help achieve the goal

– The correct order to perform them

– Alternative paths if the first plan doesn’t work
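To make the split concrete, here is a minimal sketch of what a goal and an action could look like as data. The field names are purely illustrative, not the engine’s actual schema:

```js
// Illustrative shapes only; the field names are assumptions, not the real schema.
const acquireFoodGoal = {
  id: 'example:acquire_food',
  priority: 80,
  relevant:  (state) => state.actor.hunger > 0.6, // does this goal matter right now?
  satisfied: (state) => state.actor.hasFood,      // has it been achieved?
};

const pickUpFoodAction = {
  id: 'example:pick_up_food',
  available: (state) => state.foodWithinReach && !state.actor.hasFood,
  effects:   (state) => ({ ...state, actor: { ...state.actor, hasFood: true } }),
};

// The planner's whole job: find a chain of available actions whose combined
// effects turn a state where `satisfied` is false into one where it is true.
```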

### A Real Example: The Hungry Cat

One of the end-to-end tests in the system demonstrates a cat NPC with a “find food” goal. The cat:

1. **Recognizes** it’s hungry (goal becomes relevant)

2. **Evaluates** available actions (pick up food, search container, etc.)

3. **Plans** which action brings it closer to having food

4. **Acts** by picking up a nearby food item

If the food were locked in a container, the cat would automatically:

1. Check if it can open the container

2. Open the container first

3. Then take the food

**You didn’t program this specific sequence**. The cat figured it out based on understanding what actions are possible and what effects they have.

## What This Means for Modders

If you’re creating content (or “mods”) for the Living Narrative Engine, GOAP gives you superpowers:

### 1. **Define Actions, Not Scripts**

Instead of writing complex scripts for every situation, you define simple actions:

**Example: “Sit Down” Action**

**What it does**: Removes “standing” state, adds “sitting” state

**When it’s available**: When the character is standing and near a chair

GOAP handles everything else. The character will automatically:

– Consider sitting when tired

– Stand up before walking if they’re sitting

– Chain actions together naturally
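As data, the “Sit Down” example could boil down to something like this (again, illustrative field names rather than the real mod format):

```js
// Hypothetical data shape for the "Sit Down" example above.
const sitDown = {
  id: 'example:sit_down',
  // When it's available: the character is standing and near a chair.
  available: (state) => state.actor.standing && state.actor.nearChair,
  // What it does: removes the "standing" state, adds the "sitting" state.
  effects: (state) => ({
    ...state,
    actor: { ...state.actor, standing: false, sitting: true },
  }),
};
```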

### 2. **Mix and Match Content from Different Mods**

The system supports **cross-mod goals and actions**. This means:

– You create a “rest when tired” goal in your mod

– Someone else creates “lie down on bed” and “close door” actions in their mods

– Characters automatically combine these: close door → lie down → rest

**No coordination required**. The AI figures out how different mods’ actions work together to achieve goals.

### 3. **Create Believable Motivations**

You can define character goals with priorities:

**Critical (100+)**: Flee from danger, seek medical help

**High (80-99)**: Combat, finding food when starving

**Medium (60-79)**: Rest when tired, seek shelter

**Low (40-59)**: Social interaction, grooming

**Optional (20-39)**: Exploration, collecting items

Characters automatically pursue their highest-priority relevant goal. If a character is tired (60 priority) but suddenly becomes hungry (80 priority), they’ll switch to finding food first. **This creates emergent, believable behavior.**
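The selection rule behind this is simple. A sketch of the idea, not the engine’s actual code:

```js
// Pick the highest-priority goal that is currently relevant and not yet satisfied.
// Sketch only; the engine's real selection logic has more moving parts.
function pickActiveGoal(goals, worldState) {
  const candidates = goals
    .filter((g) => g.relevant(worldState) && !g.satisfied(worldState))
    .sort((a, b) => b.priority - a.priority);
  return candidates[0] ?? null;
}

// A tired actor (rest, priority 60) who suddenly becomes starving
// (find food, priority 80) switches goals on the next evaluation.
```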

### 4. **Test Multiple Actors Simultaneously**

The system includes **multi-actor support** with smart caching. Tests show 5 actors can make independent decisions in under 5 seconds, with each actor’s plans cached separately to improve performance.

## What This Means for Players

### Emergent Storytelling

Characters don’t follow scripts—they respond to situations. This creates:

**Unexpected Moments:**

– A guard who’s supposed to patrol might sit down because they’re tired

– An NPC who notices you’re injured might abandon their task to help

– Characters might form plans you didn’t anticipate

**Reactive Behavior:**

– NPCs adapt to world changes

– If you take the food they were going to get, they find another way

– Characters respond to your actions in contextually appropriate ways

### Consistent Character Behavior

The system includes **plan caching and multi-turn goal achievement**. This means:

– Characters remember their plans across turns

– They persist in pursuing goals until achieved

– Behavior remains consistent unless the world changes

If a character decides to rest, they’ll follow through: find a bed, lie down, and rest. They won’t randomly change their mind unless something more important happens.
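Conceptually, the caching amounts to keying a finished plan by actor and goal and discarding it only when a world change touches something the plan depends on. A toy version for illustration, not the real implementation:

```js
// Toy plan cache. The real system's invalidation is more selective, but the idea is the same.
const planCache = new Map(); // key: `${actorId}:${goalId}` -> remaining action steps

function getPlan(actorId, goal, worldState, planFn) {
  const key = `${actorId}:${goal.id}`;
  const cached = planCache.get(key);
  if (cached && cached.length > 0) return cached; // keep following the existing plan
  const fresh = planFn(goal, worldState);         // re-plan only when necessary
  planCache.set(key, fresh);
  return fresh;
}

function invalidatePlansTouching(changedEntityId) {
  for (const [key, steps] of planCache) {
    if (steps.some((step) => step.targets?.includes(changedEntityId))) {
      planCache.delete(key); // only affected plans get recalculated
    }
  }
}
```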

## The Narrative Potential

This is where GOAP becomes truly exciting for storytelling:

### 1. **Character-Driven Stories**

Instead of railroading players through pre-scripted sequences, stories can emerge from character motivations:

– A villain isn’t just “evil”—they have goals (power, revenge, safety) and will take sensible actions to achieve them

– Allies don’t just follow you—they have their own needs and will act on them

– Every character becomes a potential plot thread

### 2. **Meaningful Choices**

Player decisions have weight because NPCs respond intelligently:

– Steal someone’s food → they seek alternative food sources → maybe they steal from someone else → chain reactions

– Help someone achieve their goal → they remember and might reciprocate

– Block someone’s plans → they adapt and try alternative approaches

### 3. **Living Worlds**

The world feels alive because characters are actively pursuing goals even when you’re not watching:

– Merchants restock inventory when supplies run low

– Guards patrol but take breaks when tired

– NPCs form relationships based on shared goals and repeated interactions

### 4. **Complex Scenarios Without Complex Code**

Want to create a scenario where:

– NPCs negotiate for resources?

– Characters form alliances based on complementary goals?

– A character pursues revenge but struggles with moral constraints?

With GOAP, you define the goals and constraints. The AI figures out the behavior. **You focus on storytelling, not programming edge cases.**

## Real Examples from the System

The GOAP implementation includes several behavioral tests that demonstrate the potential:

### The Cat and the Food

**Scenario**: Cat is hungry, food is on the floor

**Goal**: Acquire food (priority: 80)

**Result**: Cat identifies “pick up food” as the best action and executes it

**What makes this special**: If the food were in a container, the cat would automatically plan: open container → take food. No special programming needed.

### The Goblin Warrior

**Scenario**: Goblin encounters combat situation

**Goal**: Be prepared for combat

**Available Actions**: Pick up weapon, attack, defend, flee

**Result**: Goblin evaluates current state (unarmed) and picks up weapon before engaging

**What makes this special**: The goblin reasons about prerequisites. It doesn’t blindly attack—it first ensures it has the tools to succeed.

## Technical Achievements (Simplified)

For those curious about how this works under the hood:

### Three-Tier Architecture

1. **Tier 1: Effects Auto-Generation**

   – Analyzes game rules to understand what actions actually do

   – Automatically generates planning metadata

   – No manual annotation needed

2. **Tier 2: Goal-Based Action Selection**

   – Evaluates which actions move characters closer to goals

   – Simulates action outcomes to predict results

   – Selects optimal actions based on goal progress

3. **Tier 3: Multi-Step Planning & Optimization**

   – Plans sequences of actions across multiple turns

   – Caches plans for performance

   – Handles multiple actors making concurrent decisions

   – Recovers gracefully from failures

### Smart Performance

**Plan caching**: Once a character figures out a plan, it’s saved and reused

**Selective invalidation**: Only affected plans are recalculated when the world changes

**Multi-actor isolation**: Multiple characters can plan simultaneously without interfering

**Proven performance**: 5 actors complete decision-making in under 5 seconds

### Comprehensive Testing

The system includes **15 end-to-end tests** covering:

– Complete decision workflows with real game mods

– Goal relevance and satisfaction checking

– Multi-turn goal achievement

– Cross-mod action and goal compatibility

– Error recovery and graceful degradation

– Performance under load

**Test coverage**: 90%+ branches, 95%+ lines for critical components. This isn’t experimental—it’s production-ready.

## What’s Next?

The GOAP system is fully implemented and tested. Here’s what this enables:

### Immediate Opportunities

1. **Richer Mods**: Content creators can define sophisticated AI behaviors without complex scripting

2. **Emergent Gameplay**: Players experience stories that unfold based on character decisions, not scripts

3. **Easier Development**: Creating believable NPCs becomes dramatically simpler

### Future Possibilities

1. **Social Goals**: Characters pursuing relationships, status, or influence

2. **Long-Term Planning**: Goals that span hours or days of game time

3. **Learning and Adaptation**: Characters whose priorities shift based on experiences

4. **Collaborative AI**: Multiple characters coordinating on shared goals

### Integration with Other Systems

GOAP integrates with the engine’s existing systems:

**Event System**: Planning decisions trigger events that other systems can respond to

**Memory System**: Characters remember past successes and failures

**Action System**: Works seamlessly with the existing 200+ actions across mods

**Rule System**: Analyzes existing rules without requiring rewrites

## Why This Matters

### For Storytellers

GOAP gives you characters that feel alive. Instead of puppets following scripts, you get actors with agency who make decisions based on their needs and circumstances. **Your stories become dynamic and emergent rather than fixed and predictable.**

### For Players

You get to experience stories that respond to you. Characters aren’t following invisible rails—they’re making choices based on their situation. Every playthrough can unfold differently because characters adapt and respond to changing circumstances.

### For Developers

Building believable AI becomes dramatically simpler. Instead of writing thousands of lines of conditional logic, you define goals and actions. The system handles the complexity of figuring out how to achieve those goals.

## The Bigger Picture

AI in games has traditionally been about smoke and mirrors—making NPCs seem smart through carefully scripted sequences. GOAP represents a different approach: **give characters the tools to reason about their world and let them figure out how to achieve their goals**.

This aligns perfectly with the Living Narrative Engine’s philosophy: **create systems that enable emergent stories rather than prescribing specific narratives**. With GOAP, characters become collaborators in storytelling, not just props.

## Try It Yourself

The Living Narrative Engine is open source and available now. The GOAP system is fully integrated and ready to use. If you’re interested in:

– Creating narrative games with intelligent NPCs

– Experimenting with emergent storytelling

– Building mods with sophisticated AI behavior

– Contributing to an AI-driven narrative platform

The code is on GitHub, documented and tested. The GOAP docs at `docs/goap/` provide complete guides for:

– Understanding the system architecture

– Creating goals and actions

– Testing AI behavior

– Troubleshooting common issues

## Final Thoughts

GOAP represents months of development work (note by me: we actually finished it in a day, if you don’t count the infrastructure): designing the architecture, implementing three complete tiers, writing comprehensive tests, and documenting everything. But the real achievement isn’t the code—it’s what it enables.

**It enables stories where characters have agency.**

**It enables worlds that feel alive.**

**It enables gameplay that adapts and responds.**

**It enables narratives that emerge from character decisions rather than following predetermined scripts.**

This is the future of narrative games: not scripted sequences, but simulated worlds where characters pursue their goals and stories emerge from their choices. The technology is here, implemented, tested, and ready.

Now comes the fun part: seeing what stories people tell with it.

## Technical Resources

For those who want to dive deeper:

**Full Documentation**: `/docs/goap/README.md`

**Test Examples**: `/tests/e2e/goap/`

**Operation Reference**: `/docs/goap/operation-mapping.md`

**Planning System Details**: `/docs/goap/planning-system.md`

**Effects System Guide**: `/docs/goap/effects-system.md`

**Troubleshooting**: `/docs/goap/troubleshooting.md`

The system is fully documented with examples, test cases, and integration guides. Everything you need to understand and use GOAP is included.

VR game review: Ghost Town

I’ve been playing a lot of VR recently, so I may as well review the only long-form game I’ve finished in the last couple of weeks. Ghost Town is a puzzle-based adventure game set in Great Britain back in the eighties. You’re a spirit medium (a witch) named Edith, whose shitty younger brother disappeared under shady circumstances, and your goal is to find him. Trailer is below:

There are many more pros than cons as far as I’m concerned. The setting, mainly London in the 80s, is quite unique, and provides a gritty touch that I appreciated. The character animations and models are generally exceptional for the Meta Quest 3, maybe the best I’ve seen so far. I don’t like puzzle games, yet this one made me appreciate the puzzles. I was never entirely stuck, as the progressive hint system helped me eventually realize at least where I should focus. I loved the tactile feel of exorcising ghosts, although it’s a minor part of the experience. Plenty of great moments come to mind: interacting with ghosts behind glass (great-looking in VR), using eighties ghost-bustery technology to investigate artifacts, a very creative museum of haunted artifacts, sleepwalking through your eerie apartment tower in 80s London, a great sequence in which you wander through a maze-like version of your apartment while malevolent presences whisper from the shadows (very P.T.-like), clever use of light in puzzles, etc.

Horror stories are never more terrifying than in VR. Play Phasmophobia if you dare, for example. I try to avoid horror games because of my damaged heart. However, the ghosts in this one are more spooky than scary.

Now, the cons: personally, I wish the game were more like a regular adventure game instead of a puzzle game with a narrative thread woven throughout it. That’s just a personal preference, though; I wish we got the equivalents of the Monkey Island series in VR. Anyway, the least interesting sequence of puzzles for me was the lighthouse, which comes right after the introductory flashback. I actually dropped the game for like a couple of months after I first played it, because I didn’t feel like returning, but I’m glad I picked it back up and continued.

However, my biggest gripe with the story is that you’re supposed to search for your brother, whom you meet in the first scene while investigating a haunting in an abandoned theatre, but in every damn scene he’s in, the brother comes off as envious, narcissistic, entitled, and an overall complete dickhead. I didn’t want to interact with him. Did the creators believe we would be invested in finding this guy just because he was related to the protagonist? It’s a given that they should have made the brother sympathetic, yet he annoyed me every time he appeared.

All in all, fantastic experience. Perhaps a bit short, but I felt like I got my money’s worth. If you have a Quest 3 and you enjoy these sorts of games, check it out.

Living Narrative Engine #9


Behold the anatomy visualizer: a visual representation of a graph of body parts for any given entity, with the parts represented as nodes connected by Bézier curves. Ain’t it beautiful?

This visualizer has been invaluable for detecting subtle bugs and design issues when creating the recipes and blueprints of anatomy graphs. Now everything is ready to adapt existing action definitions to take the entity’s anatomy into account. For example: a “go {direction}” action could have the prerequisite that the acting character has “core:movement” unlocked anywhere in his body parts. Normally, the legs would each carry their own “core:movement” component. If all the legs are disabled or removed, the “go {direction}” action simply stops showing up as available. No code changes.
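If it helps to picture the mechanism, the prerequisite boils down to a query over the anatomy graph. Something in this spirit, although in practice the check lives in data-driven prerequisites rather than hand-written code, and the method names here are invented:

```js
// Sketch: "go {direction}" stays available only while some attached body part
// still carries an unlocked core:movement component. Method names are invented.
function canMove(actor, bodyGraph) {
  return bodyGraph
    .getAllParts(actor.id) // e.g. torso, legs, arms
    .some((part) => {
      const movement = part.components['core:movement'];
      return movement && !movement.locked;
    });
}
```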

The current schemas for blueprints, recipes, and sockets make it trivial to add things like internal organs for a future combat system. Imagine using a “strike {target} with {weapon}” action, and some rule determining the probabilities of damaging particular parts within any given body part, with the possibility of destroying internal organs.

From now on I’ll always provide the Discord invite for the community I created for this project (Living Narrative Engine):

https://discord.gg/6N2qHacK75

Living Narrative Engine #8

Perhaps some of you fine folks would like to follow the development of this app of mine, see the regular blog posts about the matter in an orderly manner, or just chat about its development or whatever, so I’ve created a Discord community. Perhaps in the future I’ll have randos cloning the repository and offering feedback.

Discord invite below:

https://discord.gg/6N2qHacK75

Living Narrative Engine #7

I’m developing a browser-based, chat-like platform for playing adventure games, RPGs, immersive sims and the like. It’s “modding-first,” meaning that all game content comes in mods. That includes actions, events, components, rules, entities, etc. My goal is that eventually you could define in mod files the characters, locations, actions, rules, etc. of any existing RPG campaign and be able to play through it, with the other characters being large language models or GOAP-based artificial intelligences.

From early on, it became clear that the platform was going to be able to support thousands of actions for its actors (which may be human or controlled by a large language model). The code shouldn’t be aware of the specifics of any action, which wasn’t easy to do; I had to extract all the logic for going from place to place from the engine, to the extent that I had to create a domain-specific language for determining target scopes.

When the turn of any actor starts, the system looks at all the registered actions and determines which are available. I registered actions ranging from waiting, moving from place to place, following other people, and dismissing followers, to some more niche ones (in an “intimacy” mod) like getting close to others and fondling them. Yes, I’m gearing toward erotica in the future. But as I was implementing the actions, it became clear that the availability of some actions couldn’t be easily discerned for the impossible-to-predict breadth of possible entities. For example, if you wanted to implement a “slap {target}” action, you can write a scope that includes actors in the location, but what determines that the target actually has a head that could be slapped? And what ensures that the acting actor has a hand to slap with?

So I had to create an anatomy system. Fully moddable. The following is a report that I had Claude Code prepare on the first version of the anatomy system.


The Anatomy System: A Deep Dive into Dynamic Entity Body Generation

Executive Summary

The anatomy system is a sophisticated framework for dynamically generating and managing complex anatomical structures for entities in the Living Narrative Engine. It transforms simple blueprint definitions and recipes into fully-realized, interconnected body part graphs with rich descriptions, validation, and runtime management capabilities.

At its core, the system addresses a fundamental challenge in narrative gaming: how to create diverse, detailed, and consistent physical descriptions for entities without manual authoring of every possible combination. The solution is an elegant blend of data-driven design, graph theory, and natural language generation.

System Architecture

The anatomy system follows a modular, service-oriented architecture with clear separation of concerns. The design emphasizes:

  • Orchestration Pattern: A central orchestrator coordinates multiple specialized workflows
  • Unit of Work Pattern: Ensures transactional consistency during anatomy generation
  • Chain of Responsibility: Validation rules are processed in a configurable chain
  • Strategy Pattern: Description formatting uses pluggable strategies for different part configurations
  • Factory Pattern: Blueprint factory creates anatomy graphs from data definitions

Core Service Layers

  1. Orchestration Layer (AnatomyOrchestrator)
  • Coordinates the entire generation process
  • Manages transactional boundaries
  • Handles error recovery and rollback
  2. Workflow Layer
  • AnatomyGenerationWorkflow: Creates the entity graph structure
  • DescriptionGenerationWorkflow: Generates natural language descriptions
  • GraphBuildingWorkflow: Builds efficient traversal caches
  3. Service Layer
  • BodyBlueprintFactory: Transforms blueprints + recipes into entity graphs
  • AnatomyDescriptionService: Manages description generation
  • BodyGraphService: Provides graph operations and traversal
  4. Infrastructure Layer
  • EntityGraphBuilder: Low-level entity creation
  • SocketManager: Manages connection points between parts
  • RecipeProcessor: Processes and expands recipe patterns

Information Flow

The anatomy generation process follows a carefully orchestrated flow:

1. Initialization Phase

When an entity with an anatomy:body component is created, the AnatomyInitializationService detects it and triggers generation if the entity has a recipeId.

2. Blueprint Selection

The system loads two key data structures:

  • Blueprint: Defines the structural skeleton (slots, sockets, parent-child relationships)
  • Recipe: Provides specific customizations, constraints, and part selections
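To give a feel for the two inputs, here is a heavily simplified, invented example of a blueprint and a recipe. The actual schemas in data/mods/anatomy/ are considerably richer:

```js
// Heavily simplified, invented example; not the actual blueprint/recipe schema.
const blueprint = {
  id: 'example:humanoid',
  root: 'torso',
  slots: {
    head:      { parent: 'torso', socket: 'neck',      requirements: { partType: 'head' } },
    left_leg:  { parent: 'torso', socket: 'left_hip',  requirements: { partType: 'leg' } },
    right_leg: { parent: 'torso', socket: 'right_hip', requirements: { partType: 'leg' } },
  },
};

const recipe = {
  recipeId: 'example:tall_human',
  blueprintId: 'example:humanoid',
  slots: {
    left_leg:  { preferId: 'example:long_human_leg' },
    right_leg: { preferId: 'example:long_human_leg' },
  },
  constraints: { excludes: [['wing', 'tail']] }, // invented constraint for illustration
};
```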

3. Graph Construction

The BodyBlueprintFactory orchestrates the complex process of building the anatomy:

Blueprint + Recipe → Graph Construction → Entity Creation → Validation → Description Generation

Each step involves:

  • Slot Resolution: Blueprint slots are processed in dependency order
  • Part Selection: The system selects appropriate parts based on requirements
  • Socket Management: Parts are connected via sockets with occupancy tracking
  • Constraint Validation: Recipe constraints are continuously checked

4. Description Generation

Once the physical structure exists, the description system creates human-readable text:

  • Individual part descriptions are generated using context-aware builders
  • Descriptions are composed into a complete body description
  • Formatting strategies handle single parts, paired parts, and multiple parts differently

5. Runtime Management

The generated anatomy becomes a living system:

  • Parts can be detached (with cascade options)
  • The graph can be traversed efficiently via cached adjacency lists
  • Events are dispatched for anatomy changes

Core Capabilities

1. Dynamic Entity Generation

  • Creates complete anatomical structures from data definitions
  • Supports unlimited variety through recipe combinations
  • Generates unique entities while maintaining consistency

2. Hierarchical Part Management

  • Parts are organized in a parent-child graph structure
  • Each part can have multiple sockets for child attachments
  • Supports complex anatomies (e.g., creatures with multiple limbs, wings, tails)

3. Intelligent Part Selection

  • Matches parts based on multiple criteria (type, tags, properties)
  • Supports preferences and fallbacks
  • Handles optional vs. required parts gracefully

4. Natural Language Descriptions

  • Generates contextual descriptions for individual parts
  • Composes full-body descriptions with proper formatting
  • Handles pluralization, grouping, and special cases

5. Constraint System

  • Enforces recipe-defined constraints (requires/excludes)
  • Validates socket compatibility
  • Ensures graph integrity (no cycles, orphans, or invalid connections)

6. Runtime Operations

  • Part detachment with cascade support
  • Efficient graph traversal via cached adjacency lists
  • Path finding between parts
  • Event-driven notifications for changes

Key Components Deep Dive

AnatomyOrchestrator

The maestro of the system, ensuring all workflows execute in the correct order with proper error handling and rollback capabilities. It implements a Unit of Work pattern to maintain consistency.

BodyBlueprintFactory

The factory transforms static data (blueprints and recipes) into living entity graphs. It handles:

  • Dependency resolution for slots
  • Socket availability validation
  • Part selection and creation
  • Name generation from templates

Validation System

A sophisticated chain of validation rules ensures anatomical correctness:

  • CycleDetectionRule: Prevents circular parent-child relationships
  • OrphanDetectionRule: Ensures all parts are connected
  • SocketLimitRule: Validates socket occupancy
  • RecipeConstraintRule: Enforces recipe-specific rules
  • JointConsistencyRule: Ensures joint data integrity
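The chain itself is the textbook pattern: each rule inspects the assembled graph and reports any issues, and the chain aggregates them. A bare-bones illustration, not the engine’s actual classes:

```js
// Bare-bones chain-of-responsibility for graph validation; illustrative only.
class ValidationRule {
  check(graph) { return []; } // each rule returns a list of issue strings
}

class CycleDetectionRule extends ValidationRule {
  check(graph) {
    return graph.hasCycle() ? ['anatomy graph contains a cycle'] : [];
  }
}

class OrphanDetectionRule extends ValidationRule {
  check(graph) {
    return graph.orphanParts().map((p) => `part ${p.id} is not connected to the root`);
  }
}

function validateGraph(graph, rules) {
  const issues = rules.flatMap((rule) => rule.check(graph));
  if (issues.length > 0) throw new Error(issues.join('; '));
}
```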

Description Generation Pipeline

The description system is remarkably sophisticated:

  1. BodyPartDescriptionBuilder: Creates individual part descriptions
  2. DescriptionTemplate: Applies formatting strategies
  3. PartGroupingStrategies: Handles different grouping scenarios
  4. TextFormatter: Provides consistent text formatting
  5. BodyDescriptionComposer: Orchestrates the complete description

Strengths of the System

1. Modularity and Extensibility

Each component has a single, well-defined responsibility. New features can be added without modifying existing code.

2. Data-Driven Design

Anatomies are defined entirely in data, making it easy to add new creature types without code changes.

3. Robustness

Comprehensive validation, error handling, and rollback mechanisms ensure system reliability.

4. Performance Optimization

  • Cached adjacency lists for efficient traversal
  • Lazy description generation
  • Batched entity operations

5. Developer Experience

  • Clear service boundaries
  • Extensive logging and debugging support
  • Consistent error handling patterns

Expansion Opportunities

1. Dynamic Modification System

  • Runtime part growth/shrinkage: Allow parts to change size dynamically
  • Transformation support: Enable parts to transform into different types
  • Damage modeling: Track part health and visual damage states

2. Advanced Constraints

  • Symmetry requirements: Ensure paired parts match when needed
  • Resource-based constraints: Limit total mass, magical capacity, etc.
  • Environmental adaptations: Parts that change based on environment

3. Procedural Enhancement

  • Mutation system: Random variations within constraints
  • Evolutionary algorithms: Breed new anatomies from existing ones
  • Machine learning integration: Learn optimal configurations

4. Visual Integration

  • 3D model mapping: Connect anatomy graph to visual representations
  • Animation constraints: Define movement limitations based on anatomy
  • Procedural texturing: Generate textures based on part properties

5. Gameplay Systems

  • Ability derivation: Generate abilities from anatomy (wings = flight)
  • Weakness detection: Identify vulnerable points in anatomy
  • Part-specific interactions: Different interactions per body part

6. Description Enhancement

  • Contextual descriptions: Change based on observer perspective
  • Emotional coloring: Descriptions that reflect entity state
  • Cultural variations: Different description styles for different cultures

7. Performance Scaling

  • Anatomy LOD (Level of Detail): Simplified anatomies for distant entities
  • Streaming support: Load/unload anatomy data dynamically
  • Parallel generation: Generate multiple anatomies concurrently

8. Tool Support

  • Visual anatomy editor: GUI for creating blueprints and recipes
  • Validation sandbox: Test recipes before deployment
  • Analytics dashboard: Track anatomy generation patterns

Technical Implementation Details

Design Patterns in Action

The codebase demonstrates excellent use of software design patterns:

  • Service Locator: Services are injected via constructor dependencies
  • Facade: AnatomyGenerationService provides a simple interface to complex subsystems
  • Template Method: Validation rules follow a consistent pattern
  • Composite: The anatomy graph itself is a composite structure
  • Observer: Event system notifies interested parties of anatomy changes

Error Handling Philosophy

The system follows a “fail-fast” approach with comprehensive error information:

  • Validation errors prevent invalid states
  • Detailed error messages aid debugging
  • Rollback mechanisms prevent partial states
  • Event dispatching for error tracking

Extensibility Points

Key extension points for customization:

  • Custom validation rules via the ValidationRule base class
  • New part grouping strategies via PartGroupingStrategy
  • Custom formatters via the formatting service interface
  • Additional graph algorithms via the graph service

Conclusion

The anatomy system represents a significant achievement in dynamic content generation. It successfully balances flexibility with consistency, performance with functionality, and simplicity with power. The modular architecture ensures the system can grow and adapt to new requirements while maintaining its core elegance.

The combination of graph-based structural representation, constraint-driven generation, and sophisticated natural language processing creates a system capable of generating virtually unlimited variety while maintaining narrative coherence and technical correctness.

As the Living Narrative Engine evolves, the anatomy system provides a solid foundation for increasingly complex and engaging entity interactions, opening doors to emergent gameplay and storytelling possibilities limited only by imagination.

Living Narrative Engine #6

How the process of developing a modular, data-driven anatomy system in my app is going:

Me: We’ve recently created a complex anatomy system that builds a graph of body parts from a recipe and a corresponding blueprint, as well as a myriad of entity definitions for body parts. The code is in @src/anatomy/ , while most of the data is in @data/mods/anatomy/ . As I was looking over the sockets for the human_female_torso.entity.json and the human_male_torso.entity.json , I noticed that while most sockets require concrete “allowedTypes” (e.g. “breast” for “left_chest” in a woman), the sockets for “penis”, “left_testicle”, “right_testicle”, “asshole”, and “vagina” require a “genital” “allowedTypes”. That is too broad; the system that matches fitting entities could slot a penis entity in the socket of a vagina, or a testicle entity in the socket for a penis. That’s obviously undesired behavior. I want you to change the allowed types to more specific subTypes (likely, a subType “penis” for the “penis” socket, and so on).
You will need to modify the entity definitions of human_penis.entity.json, human_testicle.entity.json, human_vagina.entity.json to change their subtype in “anatomy:part” appropriately.

Claude Code: I can see the issue clearly now. Looking at line 433-437, the validation checks if a part’s subType is included in the socket’s allowedTypes. Currently all genital parts have subType: “genital”, which matches the allowedTypes: [“genital”] on all genital sockets, allowing incorrect matches.

Plan to Fix Genital Socket Type Specificity

I’ve identified the issue: all genital parts (penis, testicle, vagina) have the same subType: “genital”, and all genital sockets accept allowedTypes: [“genital”]. This allows incorrect matches like a penis in a vagina socket.

Here’s what I’ll do:

1. Update torso entity files to use specific allowed types:
   – human_female_torso.entity.json:
     – vagina socket: allowedTypes: [“vagina”]
     – asshole socket: allowedTypes: [“asshole”]
   – human_male_torso.entity.json:
     – penis socket: allowedTypes: [“penis”]
     – left_testicle socket: allowedTypes: [“testicle”]
     – right_testicle socket: allowedTypes: [“testicle”]
     – asshole socket: allowedTypes: [“asshole”]
2. Update genital entity files to use specific subTypes:
   – human_penis.entity.json: subType: “penis”
   – human_testicle.entity.json: subType: “testicle”
   – human_vagina.entity.json: subType: “vagina”
3. Note: There’s no separate entity for “asshole” – it’s integrated into the torso. If needed in the future, an asshole entity could be created with subType: “asshole”.

Me: I agree with your plan, except that I’m not sure what you mean that “there’s no separate entity for ‘asshole’ – it’s integrated into the torso”. Does the code or schemas allow for “integrated” body parts? I think the issue is that simply we need to create an entity definition for the subtype “asshole”.

Claude Code: I was mistaken. The anatomy system requires separate entities for all attachable parts. There’s no asshole entity definition. Both torsos have an “asshole” socket but no corresponding entity to attach to it.