Living Narrative Engine #17

I’ve recently implemented an emotions and expressions system in my app, which is a browser-based platform to play immersive sims, RPGs, adventure games, and the likes. If you didn’t know about the emotions and expressions system, you should check out the linked previous post for reference.

I was setting up a private scenario to further test the processing of emotions for characters during changing situations. This was the initial state of one Marla Kern:

Those current emotions look perfectly fine for the average person. There’s just one problem: Marla Kern is a sociopath. So that “compassion: moderate” would have wrecked everything, and triggered expressions (which are narrative beats) that would have her acting with compassion.

This is clearly a structural issue, and I needed to solve it in the most robust, psychologically realistic way possible, that at the same time strengthened the current system. I engaged in some deep research with my pal ChatGPT, and we came up with (well, mostly he did):

We lacked a trait dimension that captured stable empathic capacity. The current 7 mood axes are all fast-moving state/appraisal variables that swing with events. Callousness vs empathic concern is a stable personality trait that should modulate specific emotions. That meant creating a new component, named affect_traits, that the actors would need to have (defaults would be applied otherwise), and would include the following properties:

  • Affective empathy: capacity to feel what others feel. Allows emotional resonance with others’ joy, pain, distress. (0=absent, 50=average, 100=hyper-empathic)
  • Cognitive empathy: ability to understand others’ perspectives intellectually. Can be high even when affective empathy is low. (0=none, 50=average, 100=exceptional)
  • Harm aversion: aversion to causing harm to others. Modulates guilt and inhibits cruelty. (0=enjoys harm, 50=normal aversion, 100=extreme aversion)

In addition, this issue revealed a basic problem with our represented mood axes, which are fast-moving moods: we lacked one for affiliation, whose definition is now “Social warmth and connectedness. Captures momentary interpersonal orientation. (-100=cold/detached/hostile, 0=neutral, +100=warm/connected/affiliative)”. We already had “engagement” as a mood axis, but that doesn’t necessarily encompass affiliation, so we had a genuine gap in our representation of realistic mood axes.

Emotions are cooked from prototypes. Given these changes, we now needed to update affected prototypes:

"compassion": {
  "weights": {
    "valence": 0.15,
    "engagement": 0.70,
    "threat": -0.35,
    "agency_control": 0.10,
    "affiliation": 0.40,
    "affective_empathy": 0.80
  },
  "gates": [
    "engagement >= 0.30",
    "valence >= -0.20",
    "valence <= 0.35",
    "threat <= 0.50",
    "affective_empathy >= 0.25"
  ]
}

"empathic_distress": {
  "weights": {
    "valence": -0.75,
    "arousal": 0.60,
    "engagement": 0.75,
    "agency_control": -0.60,
    "self_evaluation": -0.20,
    "future_expectancy": -0.20,
    "threat": 0.15,
    "affective_empathy": 0.90
  },
  "gates": [
    "engagement >= 0.35",
    "valence <= -0.20",
    "arousal >= 0.10",
    "agency_control <= 0.10",
    "affective_empathy >= 0.30"
  ]
}

"guilt": {
  "weights": {
    "self_evaluation": -0.6,
    "valence": -0.4,
    "agency_control": 0.2,
    "engagement": 0.2,
    "affective_empathy": 0.45,
    "harm_aversion": 0.55
  },
  "gates": [
    "self_evaluation <= -0.10",
    "valence <= -0.10",
    "affective_empathy >= 0.15"
  ]
}

That fixes everything emotions-wise. A character with low affective empathy won’t feel much in terms of compassion despite the engagement, will feel even less empathic distress, and won’t suffer as much guilt.

This will cause me to review the prerequisites of the currently 76 implemented expressions, which are as complex as the following summary for a “flat reminiscence” narrative beat:

“Flat reminiscence triggers when the character is low-energy and mildly disengaged, with a notably bleak sense of the future, and a “flat negative” tone like apathy/numbness/disappointment—but without the emotional bite of lonely yearning. It also refuses to trigger if stronger neighboring states would better explain the moment (nostalgia pulling warmly, grief hitting hard, despair bottoming out, panic/terror/alarm spiking, or anger/rage activating). Finally, it only fires when there’s a noticeable recent drop in engagement or future expectancy (or a clean crossing into disengagement), which prevents the beat from repeating every turn once the mood is already flat.”

That is all modeled mathematically, not by a large language model. In addition, I’ve created an extremely-robust analysis system using static analysis, Monte Carlo simulation, and witness state generation to determine how feasible any given set of prerequisites is. I’ll make a video about that in the future.

Living Narrative Engine #16

A couple of nights ago, at two in the morning, I was rolling around in bed thinking about my current obsessions: the browser-based app Living Narrative Engine as well as Alicia Western, the tragic character from Cormac McCarthy’s last two novels. Recently I mixed them both by playing through a scenario in LNE that featured Alicia. I “novelized” that little bit of theater in the short story You Will Spend the Rest of Your Life.

Well, I wasn’t entirely happy with Alicia’s acting. Yes, she’s an analytical gal, but she’s in a deep hole there. I wanted to feel the despair from her. The relief. I wanted to see her cry. I wanted to cause a beautiful, blonde woman at the end of her rope to cry. And she didn’t.

As I thought about whether this was a solvable issue, my dear subconscious had a spark of genius: LLM-based characters in LNE already create thoughts, speech, notes, and choose actions. Why not task them with tracking mood changes?

Some deep research and several iterations later, ChatGPT and I came up with the following notions, which are displayed below in a lovely manner, as they appear on the game page of LNE.

The simulation relies on seven base mood axes: valence, arousal, agency control, threat, engagement, future expectancy, and self-evaluation. Apparently that basic breakdown is psychologically sound, but I’m trusting ChatGPT on that. The sexual variables apparently are also well-known: an excitation component is the accelerator, and the inhibition is the brake. Added to a baseline libido dependent on the individual, that calculates the arousal. As seen in the picture, Alicia right now is dry as sandpaper.

The most interesting part for me is that the mood axes and basic sexual variables are ingredients to form emotions and sexual “moods”. I have dozens of them defined, as I’ve been working with ChatGPT in order to depict the whole breadth of emotions that are distinct and meaningful. Here are the current listings of emotions and sexual “moods” that my app calculates:

  • Emotions: calm, contentment, relief, confidence, joy, euphoria, enthusiasm, amusement, awe, inspiration, aesthetic appreciation, interest, curiosity, fascination, flow, entrancement, hope, optimism, determination, anticipation, sadness, grief, disappointment, despair, numbness, fatigue, loneliness, nostalgia, boredom, apathy, unease, stress, anxiety, craving, thrill, fear, terror, dread, hypervigilance, courage, alarm, suspicion, irritation, frustration, anger, rage, resentment, contempt, disgust, cynicism, pride, triumph, shame, embarrassment, awkwardness, guilt, regret, humiliation, submission, envy, trusting surrender, jealousy, trust, admiration, adoration, gratitude, affection, love attachment, compassion, empathic distress, hatred, surprise startle, confusion
  • Sexual moods: sexual lust, passionate love, sexual sensual pleasure, sexual submissive pleasure, sexual playfulness, romantic yearning, sexual confidence, aroused but ashamed, aroused but threatened, sexual craving, erotic thrill, sexual performance anxiety, sexual frustration, afterglow, sexual disgust conflict, sexual indifference, sexual repulsion

Emotions are calculated based on detailed prototypes. Here’s one:

"anxiety": {
      "weights": {
        "threat": 0.8,
        "future_expectancy": -0.6,
        "agency_control": -0.6,
        "arousal": 0.4,
        "valence": -0.4
      },
      "gates": [
        "threat >= 0.20",
        "agency_control <= 0.20"
      ]
    }

Those emotions and sexual moods are fed to LLM-based actors. They figure out “hmm, I’m intensely disappointed, strongly cynical, strongly sad, etc., so that needs to color my thoughts, speech, notes, and the actions I take.” I haven’t tested the system much in practice, but the little I tested, the results were like night and day regarding the LLM’s roleplaying.

In real life, we not only do things, but our bodies do things to us. We are aware of how our emotional states change us, and those turn into “tells” to the other people present. In addition, when one thinks in terms of stories, you add “reaction beats” when the emotional state of an actor changes, so I did exactly that: if the LLM has returned changes to the previous mood axes and sexual variables, the library of expressions have a change to trigger (one at a time), based on whether some prerequisite triggers. The following example makes it self-explanatory:

{
    "$schema": "schema://living-narrative-engine/expression.schema.json",
    "id": "emotions:lingering_guilt",
    "description": "Moderate, non-spiking guilt—an apologetic, sheepish self-consciousness after a minor mistake",
    "priority": 57,
    "prerequisites": [
        {
            "logic": {
                "and": [
                    {
                        ">=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.35
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.70
                        ]
                    },
                    {
                        "<=": [
                            {
                                "-": [
                                    {
                                        "var": "emotions.guilt"
                                    },
                                    {
                                        "var": "previousEmotions.guilt"
                                    }
                                ]
                            },
                            0.12
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.humiliation"
                            },
                            0.25
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.terror"
                            },
                            0.20
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.fear"
                            },
                            0.35
                        ]
                    }
                ]
            }
        }
    ],
    "actor_description": "It wasn't a catastrophe, but it still sits wrong. I replay the moment and feel that small, sour twist—like I owe someone a cleaner version of myself. I want to smooth it over without making a scene.",
    "description_text": "{actor} looks faintly apologetic—eyes dipping, a tight little wince crossing their face as if they're privately conceding a mistake.",
    "alternate_descriptions": {
        "auditory": "I catch a small, hesitant exhale—like a half-swallowed \"sorry.\""
    },
    "tags": [
        "guilt",
        "sheepish",
        "apology",
        "minor_mistake",
        "lingering"
    ]
}

I’ve created about 53 expressions already, but they’re surprisingly hard to trigger, as they require very specific (and psychologically logical) conditions.

Because testing this new system through playing scenarios would be a nightmare, I’ve created a dev page that makes testing the combinations trivial. In fact, I’ve recorded a video and uploaded to YouTube. So if you want to hear me struggle through my accent that never goes away, and the fact that I very, very rarely speak in real life, here’s ten minutes of it.

I think that’s all. Now I’m going to play through the same Alicia Western scenario as in the short story I posted. If the result is different enough, I will upload it as a short story.

Post-mortem for Custody of the Rot

If you’re reading these words without having read the story mentioned in the title, don’t be a fucking moronski; read it first.

I assume you’ve read some of my previous posts on my ongoing fantasy cycle, so you may remember that I’m producing these stories in tandem with improvements to my app, named Living Narrative Engine. It’s a browser-based system for playing scenarios like immersive sims, RPGs, etc. I’m compelled by the mutual pulls of adding more features to my engine and experiencing new scenarios; sometimes I come up with the scenario first, sometimes with the mechanics. That has my brain on a constant “solve this puzzle” mode, which is the ideal way to live for me.

Anyway, the following scenarios involving a brave bunch of dredgers in a fantasy world, tasked with extracting a dangerous arcane artifact from some gods-forsaken hole, will require me to develop the following new mechanics:

  1. Lighting mechanics. Currently, every location is considered constantly lit. Given that we’re going underground and that the narrative itself requires using lanterns, I have to implement mechanics for recognizing when a location is naturally dark, and whether there are light sources active. There are other mechanics providing information about the location and actors in it, so from now on, when a location is naturally dark and nobody has switched on a flashlight, we have to block offering descriptions of the location and other actors in it, and instead display text like “You can’t see shit.”
  2. Once lighting mechanics exist, we need actions for lighting up and snuffing out lanterns and lantern-like entities. By far the easiest part.
  3. Currently, when an actor speaks in a location, the speech is only received by actors in that location. At the same time, I consider an entity a location when it has defined exits. Now we find ourselves in a situation in which we have a thirty-feet-long underground corridor separated by grates. That would make each segment between grates a location (which would be correct, given the boundary), but an actor could step from a boundary into the next and suddenly not hear a character on the other side of a grate’s bars. Obviously idiotic. So I need to implement a mechanical system for declaring “if an actor speaks here, the voice will be heard in these other places too.” That will need to extent to actions too: if you have eyes, you can see someone scratching his ass on the other side of bars.
  4. No other scenario has featured water sources that could play a part. And by play a part I mean that actors could get in or fall in, exit them, struggle in the water, and drown. I really don’t want to see my characters drowning, but that’s part of the stakes, so the mechanics need to exist. Given that water sources tend to be connected to other locations and not through the regular exits, I will need some way of allowing “I’m in the water, so I want to swim upstream or downstream to a connected stretch of this water source.” This whole water system will be arduous.
  5. Line-tending mechanics. Until I started researching matters for this story, I doubt that the notion of line-tending had ever entered my mind. Now we need mechanics for: 1) making an owned rope available to others. 2) Clipping and unclipping oneself from the available rope. 3) pulling on the rope to draw back someone clipped that’s wandering away. 4) possibly other cool line-tending-related mechanics. I can see line-tending reappearing in future scenarios such as traditional dungeon delves (for example, to avoid falling in Moria-like environments). Admittedly, though, this whole thing is quite niche.
  6. Blocker-breaking mechanics. Basically: this door is bar-based, so this allows a hacksaw to hack through the bars. I don’t want to make it a single action, but a progressive one (e.g. if you succeed once, it only progresses a step toward completion).
  7. Mechanics related to mind control. To even use those actions, I will need to create a new type of actor for the scenarios: a dungeon master of sorts. Basically a human player that’s not accessible to others, as if it were invisible, but that can act on present actors. I would give that dungeon master for this run the can_mind_control component, then allow actions such as putting people into trances, making them walk off, dive into water, etc. This means that there would need to be opposite actions, with the victims fighting to snap out of the trance. It will be fun to find out what happens when the scenario plays out. In the future, this dungeon master could be controlled by a large language model without excessive difficulty: for example, feeding it what’s happened in the story so far, what are the general notions about what should happen, and giving it actions such as “spawn a hundred murder dragons.”

That’s all that comes to mind now regarding the mechanics to add.

About the story: so far, it seems I want magic to be treated in this fantasy world as if it were toxic material. That’s not a decision I’ve made about worldbuilding, but a natural consequence of the stories I’ve felt like telling. I actually don’t believe in the kind of worldbuilding in which you come up with imaginary words for the warts on an invented race’s ass. I’m all about use and tools. My mind always goes for “what can I build with this.” I’m very rarely interested in a subject if I can’t see myself creating a system out of it. It also doesn’t help that due to autism, abstractions tend to slip through my fingers, so I need to feel like I’m sensing something to understand it.

In a way, I wanted to create a story about specialists working through a problem that needs to be solved. Jorren Weir, Kestrel Brune, Saffi Two-Tides, Pitch… these people don’t have superpowers. Most of them are glad they can keep a job. There is no grand evil here, just people’s self-interest. I want them to do well so that they can return home at the end of the ordeal. But given that we’re dealing with chance-based tests, that’s not a guarantee. And that tension alone makes it exciting for me to experience these scenarios.

As usual, if you’re enjoying these stories, then great. Otherwise, fuck off.

Review: Dispatch

Back in late 2000s and early 2010s, we had this thing we affectionately called Telltale-style games: heavily narrative-driven games that relied on letting the player make more or less compelling decisions that would affect the narrative. They didn’t have the complexity of early adventure games, but they couldn’t be called simple visual novels either. They were tremendously successful, until corporate greed swallowed them, spread them thin, and eventually dissolved them into nothing. The company shut down.

A new studio made of former Telltale devs decided to try their hand a new Telltale-style game that removed the dragging parts of former Telltale games (mainly walking around and interacting with objects) to focus on a good story, a stellar presentation, and compelling minigames. Their first product was the game Dispatch, released about a month ago in an episodic format (two episodes a week, but all of them are out already). The game has become a runaway success.

The story focuses on Robert Robertson, a powerless Iron Man in a society where many, many people have superpowers. He carries the family legacy of battling villains with a mecha. As an adult, he pursued the supervillain who murdered Robert’s father, and who now led one of the most dangerous criminal groups. However, during an assault on the villain’s base, Robert’s mecha gets destroyed, which puts him out of a job.

However, he’s approached by one of the most notorious superheroes, a gorgeous, strong woman who goes by the name Blonde Blazer. She offers him a job at the company she works for, SDN (Superhero Dispatch Network). Their engineers will work on repairing Robert’s mecha, while he offers his expertise on fighting crime as the one in charge of dispatching other heroes to the appropriate calls.

Robert finds out that the team of heroes he’s supposed to handle are a bunch of villains who either have approached the company to reform themselves, or were sent by the criminal system for rehabilitation. They’re a diverse bunch of rowdy, at times nasty superpowered people who aren’t all too keen on having a non-superpowered nobody in charge of them. The narrative explores how the team grows to work together better.

The execution of this story could have gone wrong in so many ways: wrong aesthetic, time-wasting, atrocious writing, and above all, marxist infiltration; like most entertainment products released on the West these days, the whole thing could have been a vehicle for rotten politics. But to my surprise, that’s not the case here. A male protagonist, white male no less, who is an intelligent, hard-working, self-respecting role model? Attractive characters, fit as they would be in their circumstances? A woman in charge (Blonde Blazer) who is nice, understanding, competent, caring, and good? Villains with believable redemption arcs? Romance routes that flow naturally? Where the hell did this game come from in 2025?

Entertainment consumers have been deliberately deprived of all of this by ideologues who despise everything beautiful and good, who, as Tolkien put it, “cannot create anything new, they can only corrupt and ruin what good forces have invented or made.” Franchise after franchise taken over by marxists who dismantle it, shit on the remains, and then insult you if you don’t like it. Dispatch is none of it. For that reason alone, I recommend the hell out of it. I’m sure that given its sudden popularity, the forces-that-be will infiltrate it and ruin it in its second season as they do with everything else, but the first season is already done.

It’s not perfect, of course. Its pros: an astonishing visual style that makes it look like a high-quality comic book in movement. No idea how they pulled it off. Clever writing. Endearing characters. Interesting set pieces. The voice acting is extraordinary, led by Aaron Paul of Breaking Bad fame. He deserves an award for his acting as Robert Robertson. It’s a good story told well, and you’re in the middle of it making important decisions (and also plenty of flavorful ones).

The cons: some whedonesque dialogue that didn’t land for me. Too much cursing even for my tastes, to the extent that often feels edgy for edge’s sake. Some narrative decisions taken during the third act, particularly regarding the fate of one of the main characters, didn’t sit well for me, as it deflated the pathos of the whole thing. But despite the pros, this was a ride well worth the price.

Oh, I forgot: they should have let us romance the demon mommy. My goodness.

Check out this nice music video some fan created about Dispatch, using one of the songs of its soundtrack.

Living Narrative Engine #12

Ever since I started developing my app (Living Narrative Engine repo) months ago, I knew I wanted to reach a state in which any scenario could have three types of actors: human, LLM, and GOAP. LLMs are mostly understood these days; ChatGPT is one of them. You send them a prompt, they respond like a person would. They can also return JSON content, which is easily processed by programs. Ironically, implementing AI that resembles real intelligence (and that, as some recent papers have demonstrated, have achieved emergent introspective awareness) in an app these days is actually much easier than implementing an algorithmic intelligence, the kind used in complex simulations. But if you want to populate a scenario with sentient beings and beasts, having LLMs control beasts is potentially counterproductive; they could have them making too-intelligent decisions. For that, GOAP, or Goal-Oriented Action Planning) comes in.

Later, I will copy-paste the report I made Claude Sonnet 4.5 write on the system as it’s currently implemented in my app. The point I want to make is that implementing GOAP was for me the holy grail of this app, and its lack gated away properly-complex scenarios. For example, I couldn’t create scenarios involving a dungeon run, or even having a simple house cat, because I would need to handle the monsters myself. But with GOAP present, I could populate a whole multi-level dungeon with GOAP-controlled creatures and they would live their lives naturally, seeking food, pursuing targets, resting, etc. Even better, the action discoverability system that I implemented early on in my app means that actions already come filtered to those available, so the GOAP system only needs to consider the goals of the acting actor to determine what action to take.

This morning, while reviewing my goals for this app, I considered that maybe it was mature enough to handle implementing a Phase 1 of the GOAP system. I asked Sonnet to produce a brainstorming document regarding how we could implement it, and to my surprise, it wouldn’t be particularly hard at this point. Hours later, I’m already validating the entire system through end-to-end tests, and it all looks fantastic so far. However, given the complexity of this system, I won’t try using it in practice until it’s 100% covered by e2e tests. I know very well the kind of strange bugs that can pop up otherwise.

I can hardly wait to implement a medieval-fantasy scenario in which a group of adventurers go into a cave to exterminate some goblins, only for GOAP-controlled, lore-accurate goblins to consistently seek fondling-related actions towards anything that has boobs or a bubbly butt.

Anyway, without further ado, here’s the report about GOAP in my app, the proudly-named Living Narrative Engine.


# The GOAP System: Teaching NPCs to Think (and Tell Better Stories)

## A Blog Report on Living Narrative Engine’s New AI Decision-Making System

*Written for readers interested in AI, storytelling, and game development*

## What is GOAP? (In Human Terms)

Imagine you’re watching a character in a story who’s hungry. They don’t just magically teleport to the nearest restaurant. They think: “I need food. There’s a sandwich in the kitchen. But first, I need to get up from this chair, walk to the kitchen, and open the fridge.” That’s essentially what GOAP (Goal-Oriented Action Planning) does for NPCs (non-player characters) in games.

**GOAP is a system that lets AI characters figure out how to achieve their goals by planning a series of actions**, much like how you or I would solve a problem. Instead of following pre-programmed scripts, characters can reason about what they want and figure out the steps to get there.

In the Living Narrative Engine, this system is now fully implemented and working. After months of development, all three architectural tiers are complete, tested, and ready to transform how characters behave in narrative games.

## The Problem Before GOAP

Before GOAP, creating believable AI behavior was like writing a gigantic flowchart of “if this, then that” rules. Want an NPC to find food when hungry?

**Old way:**

“`

IF hungry AND food_nearby:

  → walk to food

  → pick up food

  → eat food

“`

Seems simple, right? But what if:

– The food is in a locked container?

– The character needs to pick up a key first?

– The character is sitting and needs to stand up?

– There are multiple ways to get food?

You’d need dozens of rules for every possible situation. It quickly becomes a nightmare to maintain, and characters feel robotic because they can only do exactly what you programmed, nothing more.

## What GOAP Changes: Characters That Think

With GOAP, you don’t tell characters *how* to do things—you tell them *what* they can do, and they figure out the rest.

**The GOAP Way:**

**You define:**

**Goals**: “I want to have food” or “I want to rest safely”

**Actions**: “Pick up item,” “Open container,” “Stand up,” “Move to location”

**Effects**: What each action changes in the world

**The AI figures out:**

– Which actions will help achieve the goal

– The correct order to perform them

– Alternative paths if the first plan doesn’t work

### A Real Example: The Hungry Cat

One of the end-to-end tests in the system demonstrates a cat NPC with a “find food” goal. The cat:

1. **Recognizes** it’s hungry (goal becomes relevant)

2. **Evaluates** available actions (pick up food, search container, etc.)

3. **Plans** which action brings it closer to having food

4. **Acts** by picking up a nearby food item

If the food were locked in a container, the cat would automatically:

1. Check if it can open the container

2. Open the container first

3. Then take the food

**You didn’t program this specific sequence**. The cat figured it out based on understanding what actions are possible and what effects they have.

## What This Means for Modders

If you’re creating content (or “mods”) for the Living Narrative Engine, GOAP gives you superpowers:

### 1. **Define Actions, Not Scripts**

Instead of writing complex scripts for every situation, you define simple actions:

**Example: “Sit Down” Action**

**What it does**: Removes “standing” state, adds “sitting” state

**When it’s available**: When the character is standing and near a chair

GOAP handles everything else. The character will automatically:

– Consider sitting when tired

– Stand up before walking if they’re sitting

– Chain actions together naturally

### 2. **Mix and Match Content from Different Mods**

The system supports **cross-mod goals and actions**. This means:

– You create a “rest when tired” goal in your mod

– Someone else creates “lie down on bed” and “close door” actions in their mods

– Characters automatically combine these: close door → lie down → rest

**No coordination required**. The AI figures out how different mods’ actions work together to achieve goals.

### 3. **Create Believable Motivations**

You can define character goals with priorities:

**Critical (100+)**: Flee from danger, seek medical help

**High (80-99)**: Combat, finding food when starving

**Medium (60-79)**: Rest when tired, seek shelter

**Low (40-59)**: Social interaction, grooming

**Optional (20-39)**: Exploration, collecting items

Characters automatically pursue their highest-priority relevant goal. If a character is tired (60 priority) but suddenly becomes hungry (80 priority), they’ll switch to finding food first. **This creates emergent, believable behavior.**

### 4. **Test Multiple Actors Simultaneously**

The system includes **multi-actor support** with smart caching. Tests show 5 actors can make independent decisions in under 5 seconds, with each actor’s plans cached separately to improve performance.

## What This Means for Players

### Emergent Storytelling

Characters don’t follow scripts—they respond to situations. This creates:

**Unexpected Moments:**

– A guard who’s supposed to patrol might sit down because they’re tired

– An NPC who notices you’re injured might abandon their task to help

– Characters might form plans you didn’t anticipate

**Reactive Behavior:**

– NPCs adapt to world changes

– If you take the food they were going to get, they find another way

– Characters respond to your actions in contextually appropriate ways

### Consistent Character Behavior

The system includes **plan caching and multi-turn goal achievement**. This means:

– Characters remember their plans across turns

– They persist in pursuing goals until achieved

– Behavior remains consistent unless the world changes

If a character decides to rest, they’ll follow through: find a bed, lie down, and rest. They won’t randomly change their mind unless something more important happens.

## The Narrative Potential

This is where GOAP becomes truly exciting for storytelling:

### 1. **Character-Driven Stories**

Instead of railroading players through pre-scripted sequences, stories can emerge from character motivations:

– A villain isn’t just “evil”—they have goals (power, revenge, safety) and will take sensible actions to achieve them

– Allies don’t just follow you—they have their own needs and will act on them

– Every character becomes a potential plot thread

### 2. **Meaningful Choices**

Player decisions have weight because NPCs respond intelligently:

– Steal someone’s food → they seek alternative food sources → maybe they steal from someone else → chain reactions

– Help someone achieve their goal → they remember and might reciprocate

– Block someone’s plans → they adapt and try alternative approaches

### 3. **Living Worlds**

The world feels alive because characters are actively pursuing goals even when you’re not watching:

– Merchants restock inventory when supplies run low

– Guards patrol but take breaks when tired

– NPCs form relationships based on shared goals and repeated interactions

### 4. **Complex Scenarios Without Complex Code**

Want to create a scenario where:

– NPCs negotiate for resources?

– Characters form alliances based on complementary goals?

– A character pursues revenge but struggles with moral constraints?

With GOAP, you define the goals and constraints. The AI figures out the behavior. **You focus on storytelling, not programming edge cases.**

## Real Examples from the System

The GOAP implementation includes several behavioral tests that demonstrate the potential:

### The Cat and the Food

**Scenario**: Cat is hungry, food is on the floor

**Goal**: Acquire food (priority: 80)

**Result**: Cat identifies “pick up food” as the best action and executes it

**What makes this special**: If the food were in a container, the cat would automatically plan: open container → take food. No special programming needed.

### The Goblin Warrior

**Scenario**: Goblin encounters combat situation

**Goal**: Be prepared for combat

**Available Actions**: Pick up weapon, attack, defend, flee

**Result**: Goblin evaluates current state (unarmed) and picks up weapon before engaging

**What makes this special**: The goblin reasons about prerequisites. It doesn’t blindly attack—it first ensures it has the tools to succeed.

## Technical Achievements (Simplified)

For those curious about how this works under the hood:

### Three-Tier Architecture

1. **Tier 1: Effects Auto-Generation**

   – Analyzes game rules to understand what actions actually do

   – Automatically generates planning metadata

   – No manual annotation needed

2. **Tier 2: Goal-Based Action Selection**

   – Evaluates which actions move characters closer to goals

   – Simulates action outcomes to predict results

   – Selects optimal actions based on goal progress

3. **Tier 3: Multi-Step Planning & Optimization**

   – Plans sequences of actions across multiple turns

   – Caches plans for performance

   – Handles multiple actors making concurrent decisions

   – Recovers gracefully from failures

### Smart Performance

**Plan caching**: Once a character figures out a plan, it’s saved and reused

**Selective invalidation**: Only affected plans are recalculated when the world changes

**Multi-actor isolation**: Multiple characters can plan simultaneously without interfering

**Proven performance**: 5 actors complete decision-making in under 5 seconds

### Comprehensive Testing

The system includes **15 end-to-end tests** covering:

– Complete decision workflows with real game mods

– Goal relevance and satisfaction checking

– Multi-turn goal achievement

– Cross-mod action and goal compatibility

– Error recovery and graceful degradation

– Performance under load

**Test coverage**: 90%+ branches, 95%+ lines for critical components. This isn’t experimental—it’s production-ready.

## What’s Next?

The GOAP system is fully implemented and tested. Here’s what this enables:

### Immediate Opportunities

1. **Richer Mods**: Content creators can define sophisticated AI behaviors without complex scripting

2. **Emergent Gameplay**: Players experience stories that unfold based on character decisions, not scripts

3. **Easier Development**: Creating believable NPCs becomes dramatically simpler

### Future Possibilities

1. **Social Goals**: Characters pursuing relationships, status, or influence

2. **Long-Term Planning**: Goals that span hours or days of game time

3. **Learning and Adaptation**: Characters whose priorities shift based on experiences

4. **Collaborative AI**: Multiple characters coordinating on shared goals

### Integration with Other Systems

GOAP integrates with the engine’s existing systems:

**Event System**: Planning decisions trigger events that other systems can respond to

**Memory System**: Characters remember past successes and failures

**Action System**: Works seamlessly with the existing 200+ actions across mods

**Rule System**: Analyzes existing rules without requiring rewrites

## Why This Matters

### For Storytellers

GOAP gives you characters that feel alive. Instead of puppets following scripts, you get actors with agency who make decisions based on their needs and circumstances. **Your stories become dynamic and emergent rather than fixed and predictable.**

### For Players

You get to experience stories that respond to you. Characters aren’t following invisible rails—they’re making choices based on their situation. Every playthrough can unfold differently because characters adapt and respond to changing circumstances.

### For Developers

Building believable AI becomes dramatically simpler. Instead of writing thousands of lines of conditional logic, you define goals and actions. The system handles the complexity of figuring out how to achieve those goals.

## The Bigger Picture

AI in games has traditionally been about smoke and mirrors—making NPCs seem smart through carefully scripted sequences. GOAP represents a different approach: **give characters the tools to reason about their world and let them figure out how to achieve their goals**.

This aligns perfectly with the Living Narrative Engine’s philosophy: **create systems that enable emergent stories rather than prescribing specific narratives**. With GOAP, characters become collaborators in storytelling, not just props.

## Try It Yourself

The Living Narrative Engine is open source and available now. The GOAP system is fully integrated and ready to use. If you’re interested in:

– Creating narrative games with intelligent NPCs

– Experimenting with emergent storytelling

– Building mods with sophisticated AI behavior

– Contributing to an AI-driven narrative platform

The code is on GitHub, documented and tested. The GOAP docs at `docs/goap/` provide complete guides for:

– Understanding the system architecture

– Creating goals and actions

– Testing AI behavior

– Troubleshooting common issues

## Final Thoughts

GOAP represents months of development work (note by me: we actually finished it in a day, if you don’t count the infrastructure): designing the architecture, implementing three complete tiers, writing comprehensive tests, and documenting everything. But the real achievement isn’t the code—it’s what it enables.

**It enables stories where characters have agency.**

**It enables worlds that feel alive.**

**It enables gameplay that adapts and responds.**

**It enables narratives that emerge from character decisions rather than following predetermined scripts.**

This is the future of narrative games: not scripted sequences, but simulated worlds where characters pursue their goals and stories emerge from their choices. The technology is here, implemented, tested, and ready.

Now comes the fun part: seeing what stories people tell with it.

## Technical Resources

For those who want to dive deeper:

**Full Documentation**: `/docs/goap/README.md`

**Test Examples**: `/tests/e2e/goap/`

**Operation Reference**: `/docs/goap/operation-mapping.md`

**Planning System Details**: `/docs/goap/planning-system.md`

**Effects System Guide**: `/docs/goap/effects-system.md`

**Troubleshooting**: `/docs/goap/troubleshooting.md`

The system is fully documented with examples, test cases, and integration guides. Everything you need to understand and use GOAP is included.

VR game review: Ghost Town

I’ve been playing a lot of VR recently, so I may as well review the only long-form game that I’ve finished in this couple of weeks. Ghost Town is a puzzle-based adventure game set in Great Britain back in the eighties. You’re a spirit medium (a witch) named Edith, whose shitty younger brother disappeared under shady circumstances, and your goal is to find him. Trailer is below:

There are many more pros than cons as far as I’m concerned. The setting, mainly London in the 80s, is quite unique, and provides a gritty touch that I appreciated. The character animations and models are generally exceptional for the Meta Quest 3, maybe the best I’ve seen so far. I don’t like puzzle games, yet this one made me appreciate the puzzles. I was never entirely stuck, as the progressive hint system helped me eventually realize at least where I should focus on. I loved the tactile feel of exorcising ghosts, although it’s a minor part of the experience. Plenty of great moments come to mind: interacting with ghosts behind glass (great-looking in VR), using eighties ghost-bustery technology to investigate artifacts, a very creative museum of haunted artifacts, sleepwalking through your eerie apartment tower in 80s London, a great sequence in which you wander through a maze-like version of your apartment while malevolent presences whisper from the shadows (very P.T. like), clever use of light in puzzles, etc.

Horror stories are never more terrifying than in VR. Play Phasmophobia if you dare, for example. I try to avoid horror games because of my damaged heart. However, the ghosts in this one are more spooky than scary.

Now, the cons: personally, I wish the game were more like a regular adventure game instead of a puzzle game with a narrative thread woven throughout it. That’s just a personal preference, though; I wish we got the equivalents of the Monkey Island series in VR. Anyway, the least interesting sequence of puzzles for me was the lighthouse, which comes right after the introductory flashback. I actually dropped the game for like a couple of months after I first played it, because I didn’t feel like returning, but I’m glad I picked it back up and continued.

However, my biggest gripe with the story is that you’re supposed to search for your brother, whom you meet in the first scene, when you’re investigating a haunting in an abandoned theatre, but in every damn scene he’s in, the brother comes off as envious, narcissistic, entitled, and overall a complete dickhead. I didn’t want to interact with him. Did the creators believe we would be invested in finding this guy just because he was related to the protagonist? I think it’s a given that they should have made the brother sympathetic, but he annoyed me in every scene he appeared.

All in all, fantastic experience. Perhaps a bit short, but I felt like I got my money’s worth. If you have a Quest 3 and you enjoy these sorts of games, check it out.

Living Narrative Engine #9

#8

Behold the anatomy visualizer: a visual representation of a graph of body parts for any given entity, with the parts represented as nodes connected by Bézier curves. Ain’t it beautiful?

This visualizer has been invaluable to detect subtle bugs and design issues when creating the recipes and blueprints of anatomy graphs. Now, everything is ready to adapt existing action definitions to take into account the entity’s anatomy. For example: a “go {direction}” target could have the prerequisite that the acting character has “core:movement” unlocked anywhere in his body parts. Normally, the legs would have individual “core:movement” components. If any of the legs are disabled or removed, suddenly the “go {direction}” action wouldn’t become an available action. No code changes.

The current schemas for blueprints, recipes, and sockets, make it trivial to add things like internal organs, for a future combat system. Imagine using a “strike {target} with {weapon}” action, and some rule determining the probabilities of damaging what parts of any given body part, with the possibility of destroying internal organs.

From now on I’ll always provide the Discord invite for the community I created for this project (Living Narrative Engine):

https://discord.gg/6N2qHacK75

Living Narrative Engine #8

Perhaps some of you fine folks would like to follow the development of this app of mine, see the regular blog posts about the matter in an orderly manner, or just chat about its development or whatever, so I’ve created a discord community. Perhaps in the future I’ll have randos cloning the repository and offering feedback.

Discord invite below:

https://discord.gg/6N2qHacK75