LudoForge #4

Now that the evolutionary process that grows game definitions in my app, LudoForge, is progressing at a steady pace, I fed the architectural docs to ChatGPT so that it would write a good explanation of how it works. It may interest those curious about how complex systems can grow organically through an evolutionary algorithm that mimics biological evolution.


Teaching a computer to invent tabletop games (and occasionally rediscover the classics)

If you squint, evolving game designs is a lot like evolving creatures: you start with a messy ecosystem of “mostly viable” little organisms, you test which ones can survive in their environment, you keep the best survivors—but you also keep a diverse set of survivors so the whole population doesn’t collapse into one boring species.

In our system, the “organisms” are game definitions written in a small, strict game DSL (a structured way to describe rules: players, state variables, actions, effects, win/lose conditions, turn order, and so on). Each candidate game definition is wrapped in a genome: basically an ID plus the full definition.

From there, the evolutionary loop repeats: seed → simulate → score → place into niches → mutate elites → repeat.
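The loop above can be sketched in a few lines of TypeScript. Everything here (`Genome`, `Evaluated`, `evolve`, the evaluate/mutate signatures) is hypothetical scaffolding to show the shape of the loop, not LudoForge's actual API:

```typescript
// A hypothetical skeleton of the loop; names are illustrative.
type Genome<D> = { id: string; definition: D };
type Evaluated<D> = { genome: Genome<D>; fitness: number; niche: string };

function evolve<D>(
  seeds: Genome<D>[],
  evaluate: (g: Genome<D>) => Evaluated<D> | null, // simulate + score + bin; null = rejected
  mutate: (g: Genome<D>) => Genome<D>,
  generations: number,
): Map<string, Evaluated<D>> {
  const archive = new Map<string, Evaluated<D>>(); // niche id -> current elite
  let population = seeds;
  for (let gen = 0; gen < generations; gen++) {
    for (const g of population) {
      const scored = evaluate(g);
      if (!scored) continue; // invalid or degenerate: never enters the archive
      const incumbent = archive.get(scored.niche);
      if (!incumbent || scored.fitness > incumbent.fitness) {
        archive.set(scored.niche, scored); // competes only within its niche
      }
    }
    // Next generation: mutated children of the current elites.
    population = Array.from(archive.values()).map((e) => mutate(e.genome));
  }
  return archive;
}
```

The important property is that the archive never collapses to one winner: each niche keeps its own elite, and the next generation is bred from all of them.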

1) Seeding: where the first games come from

Evolution needs a starting population. We can generate seeds automatically, or import them from disk, or mix both approaches. The important bit isn’t “are the seeds good?”—it’s “are they valid and diverse enough to start exploring?”

So seeds must pass two kinds of checks before they’re even allowed into the ecosystem:

  • Schema validation: does the JSON structure match the DSL’s required shape?
  • Semantic validation: does it make sense as a playable ruleset (no broken references, impossible requirements, etc.)?

And there’s a third, subtle filter: when we place games into “niches” (more on that below), seeds that land only in junk bins like unknown/under/over are rejected during seed generation, because they don’t help us cover the design space.

Think of this as: we don’t just want “a bunch of seeds,” we want seeds scattered across different climates so evolution has many directions to run in.

2) Playtesting at machine speed: simulation as the “environment”

A human can’t playtest 10,000 games a day. A simulation engine can.

For every candidate game, we run automated playthroughs using AI agents (simple ones like random and greedy are enough to expose lots of structural problems). The engine repeatedly:

  1. Lists the legal moves available right now
  2. Checks termination (win/lose/draw conditions, cutoffs, loop detection)
  3. Lets an agent pick an action (with concrete targets if needed)
  4. Applies costs and effects, recording what happened
  5. Advances the turn/phase according to the game’s turn scheduler
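The five steps above can be sketched as a single playthrough function. All of these names (`runPlaythrough`, `Action`, `Agent`) and the tiny counter-based state are illustrative, not the real engine:

```typescript
// Hypothetical sketch of one automated playthrough.
type State = { turn: number; done: boolean; counters: Record<string, number> };
type Action = {
  name: string;
  legal: (s: State) => boolean;
  apply: (s: State) => State;
};
type Agent = (legal: Action[], rng: () => number) => Action;

function runPlaythrough(
  init: State,
  actions: Action[],
  agent: Agent,
  rng: () => number,
  maxSteps = 1000,
): { state: State; reason: string; log: string[] } {
  let s = init;
  const log: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const legal = actions.filter((a) => a.legal(s)); // 1. list legal moves
    if (s.done) return { state: s, reason: "terminal", log }; // 2. check termination
    if (legal.length === 0) return { state: s, reason: "stalemate", log };
    const chosen = agent(legal, rng); // 3. agent picks an action
    s = chosen.apply(s); // 4. apply costs/effects, recording what happened
    log.push(chosen.name);
    s = { ...s, turn: s.turn + 1 }; // 5. advance the turn
  }
  return { state: s, reason: "max-steps", log }; // safety cutoff
}
```

Note that "no legal moves" is a recorded outcome, not a crash: a broken design produces data about how it is broken.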

Crucially: when a complex game definition tries to do something illegal (like decrementing below a minimum, or targeting something that doesn’t exist), the engine records skipped effects/triggers instead of crashing, so the system can observe “this design is broken in these ways” rather than just failing outright.

This is the equivalent of an organism interacting with the world and leaving tracks: “it tried to fly, but its wings didn’t work.”

3) Turning playthroughs into numbers: metrics, degeneracy, and fitness

After simulation, we compute a set of analytics that act like proxies for design quality—things like:

  • Agency: did players have meaningful choices, or were they railroaded?
  • Strategic depth (proxy): how large is the typical decision space?
  • Variety: do many different actions get used, or does one dominate?
  • Interaction rate: are players affecting each other or only themselves?
  • Structural complexity: is this a tiny toy, or something richer?

These are not “fun detectors.” They’re sensors.

Then we run degeneracy detection: filters that catch the classic failure modes of randomly-generated rulesets:

  • infinite loops / repeated states
  • non-terminating games (hits max turns/steps)
  • games with no real choices
  • trivial wins, dominant actions, excessive skipped effects, etc.

Some degeneracy flags cause an outright reject (hard gate), others apply a penalty (soft pressure), and too many penalties at once can also trigger rejection.

Finally, all of this becomes a feature vector, and we compute an overall fitness score—the number evolution tries to increase.
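One plausible way to fold the metric vector and the soft degeneracy penalties into a single number, sketched with made-up weights, flag names, and thresholds:

```typescript
// Sketch: weighted metric sum plus soft degeneracy penalties.
type Metrics = Record<string, number>; // each metric normalized to [0, 1]

function fitness(
  metrics: Metrics,
  weights: Metrics,
  softFlags: string[], // degeneracy flags that apply soft pressure
  penaltyPerFlag = 0.1,
  maxFlags = 3, // too many soft penalties -> outright rejection
): number | null {
  if (softFlags.length > maxFlags) return null; // the "hard gate" case
  let score = 0;
  for (const [id, w] of Object.entries(weights)) {
    score += w * (metrics[id] ?? 0); // keyed by metric id, not position
  }
  return Math.max(0, score - penaltyPerFlag * softFlags.length);
}
```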

4) “Growing in niches”: why we don’t keep only the top 1%

If you only keep the single highest-scoring game each generation, you get premature convergence: the population collapses into one design family and stops surprising you.

Instead, we use MAP-Elites, which you can picture as a big grid of “design neighborhoods.” Each neighborhood is defined by a few chosen descriptors (think: agency bucket, variety bucket, etc.). Each candidate game gets “binned” into a niche based on its descriptor values, and then it competes only with others in that same niche.

  • Each niche keeps its best resident (the “elite”).
  • Over time, the map fills with many different elites: fast games, slow games, chaotic games, skillful games, high-interaction games, and so on.

This is how you get a museum of interesting survivors, not one monoculture.

5) Reproduction: mutation (and why mutation is structured, not random noise)

Once we have elites across niches, we generate the next generation by mutating them.

Mutation operators aren’t “flip random bits.” They are rule-aware edits that make plausible changes to game structure, such as:

  • tweak a number (thresholds, magnitudes)
  • add/remove/duplicate actions (with safety guards so you can’t delete the last action)
  • add/remove variables (while rewriting dangling references)
  • change turn schedulers (round-robin ↔ simultaneous ↔ priority-based, etc.)
  • add triggers and conditional effects
  • modify termination rules (win/lose/draw conditions)

The key is: the operator library is rich enough to explore mechanics, not just parameters.

Mutation retries (because many mutations are duds)

Some mutations do nothing (“no-op”), or produce something that can’t be repaired. The runner will retry with a different operator a few times; if it still can’t produce a productive mutation, it falls back to keeping the parent for that offspring slot.
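The retry-then-fallback behavior is easy to sketch; the operator shape here is hypothetical (returning `null` for a no-op or unrepairable result):

```typescript
// Hypothetical mutation operator: null means no-op / unrepairable.
type Op<G> = (g: G, rng: () => number) => G | null;

function mutateWithRetries<G>(
  parent: G,
  ops: Op<G>[],
  rng: () => number,
  maxTries = 5,
): { child: G; usedFallback: boolean } {
  for (let i = 0; i < maxTries; i++) {
    const op = ops[Math.floor(rng() * ops.length)]; // pick a random operator
    const child = op(parent, rng);
    if (child !== null) return { child, usedFallback: false };
  }
  // Nothing productive after maxTries: keep the parent for this slot.
  return { child: parent, usedFallback: true };
}
```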

This keeps evolution moving without pretending every random change is meaningful.

6) Repair, rejection reasons, and staying honest about failure

After mutation, we may run repair (optional), then validation and safety gates. If a candidate fails, it’s not just dropped silently—the system classifies why it failed:

  • repair failure
  • validation failure
  • safety failure
  • evaluation error
  • evaluation returned null fitness/descriptors

And it persists these outcomes for observability and debugging.

This matters because “evolution” is only useful if you can tell whether the ecosystem is healthy—or if you’ve started breeding nonsense.

7) Adaptive evolution: learning which mutations are actually useful

Not all mutation operators are created equal. Some will be reliably productive; others will mostly create broken genomes.

So the runner tracks per-operator telemetry every generation: attempts, no-ops, repair failures, rejection counts, and how often an operator actually helped fill a new niche or improve an elite.

Those stats feed into adaptive operator weighting, so the system gradually shifts its mutation choices toward what’s working in the current region of the design space—without hardcoding that “operator X is always good.”
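A minimal version of adaptive operator weighting, assuming a simple Laplace-smoothed success rate (the real telemetry is richer, and the weighting scheme here is my own illustration):

```typescript
// Hypothetical adaptive weighting: sample operators in proportion to a
// smoothed success rate, so unlucky operators are never fully starved.
type OpStats = { attempts: number; successes: number }; // success = new niche or improved elite

function pickOperator(stats: Record<string, OpStats>, rng: () => number): string {
  const names = Object.keys(stats);
  const weights = names.map(
    (n) => (stats[n].successes + 1) / (stats[n].attempts + 2), // Laplace smoothing
  );
  const total = weights.reduce((a, b) => a + b, 0);
  let r = rng() * total; // roulette-wheel selection
  for (let i = 0; i < names.length; i++) {
    r -= weights[i];
    if (r <= 0) return names[i];
  }
  return names[names.length - 1];
}
```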

8) Optional superpower: motif mining (stealing good patterns from winners)

Sometimes an evolved game contains a little “mechanical phrase” that’s doing real work—like a neat resource exchange loop, or a repeating pattern of effects that creates tension.

When motif mining is enabled, we:

  1. select elites (top per niche + global best)
  2. re-simulate them to extract trajectories
  3. mine repeated effect sequences (“motifs”)
  4. convert those motifs back into DSL effects
  5. feed them into a special mutation operator that can inject those motifs into new games

That’s evolution discovering a useful mechanic, then turning it into reusable genetic material.

9) Human taste enters the loop (without turning it into manual curation)

Metrics are helpful, but “fun” is subjective. So we can add human feedback:

  • ratings (“how good is this?”)
  • pairwise comparisons (“A or B?”)

Rather than asking the human to judge random games, the system uses active learning to choose the most informative comparisons—especially cases where its preference model is uncertain, and it tries to include underrepresented niches so taste is learned across the map.

Under the hood, the preference model is an ensemble trained online (so it can update continuously) and its uncertainty controls how much feedback to request per generation (adaptive budget).

Fitness can then blend:

  • “objective-ish” signals (metrics/degeneracy)
  • “human preference” signals

So the system doesn’t just breed games that are non-broken—it breeds games that align with what you actually enjoy.

10) Why this can rediscover known games and invent new ones

If your DSL is expressive enough to describe the rules of an existing game, then in principle there exists a genome that encodes it. Evolution doesn’t need to “know” the game—it only needs:

  1. Search operators that can reach that region of the rulespace (structural mutations, not just numeric tweaks)
  2. Selection pressure that rewards the behaviors that make that game work (choice, balance, interaction, clean termination, etc.)
  3. Diversity preservation so the system keeps exploring many styles instead of collapsing early

Once those are in place, rediscovery becomes a side-effect of searching a huge space under the right constraints. And invention is what happens when evolution stumbles into combinations nobody tried on purpose—then keeps them because the ecosystem rewards them.

The simplest mental model

If you want the non-technical version in one breath:

We generate lots of small rulesets, machine-play them thousands of times, score them for “does this behave like a real game?”, sort the survivors into many different “design neighborhoods,” keep the best in each neighborhood, then make mutated children from those survivors—occasionally learning from human taste and reusing discovered mechanics—until the map fills with strong and varied games.

That’s the evolutionary process: not magic, not random, but a relentless loop of variation + playtesting + selection + diversity.

LudoForge #3

If you don’t know about LudoForge, you should probably read the previous posts. In summary: I’m developing an app to evolve tabletop game prototypes according to a “fun” factor made up of plenty of fun proxies like player agency, strategic depth, etc. The code is now mature enough to run X evolutionary generations on demand, produce shortlists of the best games, and finish. That’s great because it means the code works well, although there’s always room for improvement. But the shortlist of four winners is absolutely terrible, in a way that has had me giggling for a while. I’ll let ChatGPT explain these “games.”


Game 1: “The Shared Counter Chicken”

Two players take turns. There’s a shared number on the table that starts at 0 and can be increased up to 15. The “win condition” is simple: whoever makes it hit 15 wins instantly.

The game also includes a couple of other buttons that change other numbers… but those numbers can never actually produce a win. They’re basically decorative dials. One of them always gets set to 12, but the victory threshold for it is 20, so it’s like having a “Launch Rocket” button that always fuels you to 60% and then stops forever.

So what do players really do? They increment the shared counter… and then start playing a tiny psychological game of chicken: “I could push the counter closer to 15… but then you might get the final move.” So the game hands them some useless actions that function as “stalling.” If both players are stubborn, they can just keep stalling until the match times out.

It’s not strategy so much as who blinks first.

Game 2: “The Game Where You Can’t Move”

This one is my favorite kind of failure because it’s so clean.

The game defines two counters. The only action in the entire game says: “Decrease both counters.”

But the action is only allowed if both counters are greater than zero.

And the game starts with both counters at zero.

So on turn one, a player looks at the rules and… there is literally nothing they’re allowed to do. No move exists. The game doesn’t even fail dramatically. It just sits there like a vending machine with the power unplugged.

In other words: it’s a tabletop game prototype that begins in a stalemate.

Game 3: “First Player Loses: The Speedrun”

This one actually runs, which makes it even funnier.

There are two counters. One counter is the only one that can possibly lead to a win — the other one only goes downward forever, so its “win condition” is a permanent lie.

The “real” counter starts at 0, and there’s an action that increases it by 2. The victory threshold is 4.

Here’s what happens:

  • Player 1 goes first. Their only sensible move is: 0 → 2.
  • Player 2 goes next and does the same move: 2 → 4.
  • Player 2 instantly wins.

So the entire “game” is basically:

“Player 1 sets up the win for Player 2.”

It’s like a sport where the first player is forced to place the ball on the penalty spot and then politely step aside.

Game 4: “The Unwinnable Ritual”

Three players this time. There’s one shared counter. Winning requires the counter to reach 4.

But the rules only let you do two things:

  • Set it to 2.
  • Decrease it by 2 (if it’s above zero).

Notice what’s missing: any way to make it bigger than 2.

So the win condition is a castle in the sky. The game is a ritual where players take turns setting the number to 2, or knocking it back down toward 0. It can never, ever reach 4. No amount of cleverness changes that.

It’s essentially a machine that cycles between “2” and “less than 2” until you get bored or the turn limit ends it.


ChatGPT finished its report with this note: “The first evolutionary run didn’t produce brilliant board games. It produced life, in the same way early evolution produced algae and sea foam. These were rule-sets that technically existed… but didn’t yet deserve to.”

These idiotic games already provided fun just by existing, so as far as I care, this app has already been a success. The good thing is that I already have code to handle degeneracy, fitness, etc., so I simply have to tighten it so that nonsense like this either kills a genome or penalizes its fitness.

By the way, earlier tonight I was playing tennis in VR with people across Europe while I waited for Claude Code to work through the tickets of one of the app’s new features. And yet the world is terrible. We live in the lamest cyberpunk dystopia imaginable.

LudoForge #1

Two nights ago I was rolling around in bed trying to sleep when a notion came into my head, one that has returned from time to time: some of the most flow-like fun I’ve ever had was playing tabletop games. I’m a systems builder by nature, and I love to solve problems with a variety of tools. Tabletop games are complex problems to solve with specific series of tools. My favorite tabletop game is Arkham Horror LCG, although I’ve loved many more like Terraforming Mars, Ark Nova, Baseball Highlights: 2045, Core Worlds, Imperium, Labyrinth, Renegade… But none of them fully captured me. It’s as if some potential game with exactly every feature my brain yearns for is out there, except nobody has made it. I’ve cyclically thought that I should create that game, but I never know where to start. I don’t even know what exactly I want, other than knowing that what I’ve experienced isn’t enough.

These past few weeks I’ve been implementing extremely complex analytics report generators for my repository Living Narrative Engine. I was surprised to find out that it’s feasible to mathematically find gaps in extremely complex spaces (dozens of dimensions) as long as they’re mathematically defined. I guess Alicia was justified in being obsessed with math. So I started wondering: what makes a tabletop game good? Surely, the fun you have with it. Can “fun” be mathematically defined? Is it the agency you have? The strategic depth? The variety? If such metrics can be mathematically defined, then “fun” is a fitness score that combines them.

And what if you didn’t need to design the game yourself? If you can map a simulated game’s activity to metrics such as per-player agency, strategic depth, and variety, then you can evolve a population of game definitions so that, generation after generation, the “fun” score improves. If you can turn all game mechanics into primitives, the primitives will mutate and prove their worth throughout the generations, composing coherent mechanics or even inventing new ones. Initially, a human may need to score game definition variants according to how “fun” their playthroughs were, but in the end that could be automated as well.

Because this is the era of Claude Code and Codex, I’ve already implemented the first version of the app. I’ve fed ChatGPT the architectural docs and told it to write a report. You can read it down below.


LudoForge: evolving tabletop games with a deterministic “taste loop”

I’m building LudoForge, a system that tries to answer a pretty blunt question:

What if we treated tabletop game design like search—simulate thousands of candidates, kill the broken ones fast, and let a human “taste model” steer evolution toward what’s actually fun?

Under the hood, it’s a seeded-population evolution loop: you start with a set of game definitions (genomes), run simulations, extract metrics, filter degeneracy, blend in learned human preferences, and then evolve the population using MAP-Elites and genetic operators. Then you repeat.

The big picture: the loop

LudoForge is structured as a pipeline with clean seams so each layer can be tested and swapped without turning the whole thing into spaghetti. The stages look like this: seed → evaluate → simulate → analytics → (optional) human feedback → fitness → MAP-Elites → (optional) mutate/crossover/repair → next generation.

A key design choice: the core expects a seeded population. There’s no “magic generator” hidden inside that invents games from scratch. If you want a generator, you build it outside and feed it in. That keeps the engine honest and debuggable. Note by me after rereading this part of the report: this will change soon enough.

Games as genomes: a DSL that can be validated and repaired

Each candidate game is a genome: { id, definition }, where definition is a DSL game definition. Before any evaluation happens, the definition goes through schema + semantic validation—and optionally a repair pass if you enable repair operators. Invalid DSL gets rejected before it can contaminate simulation or preference learning.

Repair is deliberately conservative: it’s mostly “DSL safety” (e.g., clamp invalid variable initial values to bounds). Anything that’s “this game is technically valid but dumb/unplayable” is handled by simulation + degeneracy detection, not by sweeping edits that hide the real problem.

The simulation engine: deterministic playthroughs with real termination reasons

The simulation layer runs a single playthrough via runSimulation(config) (or wrapped via createSimulationEngine). It builds initial state from the definition, picks the active agent, lists legal actions, applies costs/effects/triggers, advances turns/phases, and records a trajectory of step snapshots and events.

It’s also built to fail safely:

  • No legal actions → terminates as a draw with terminationReason = "stalemate".
  • Max turns exceeded → terminationReason = "max-turns", with an outcome computed in that cutoff mode.
  • Loop detection (optional hashing + repetition threshold) → terminationReason = "loop-detected".

Most importantly: runs are reproducible. The RNG is a seeded 32-bit LCG, so identical seeds give identical behavior.
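A seeded 32-bit LCG is tiny. Here's a sketch using the well-known Numerical Recipes constants; the actual constants in the engine may differ:

```typescript
// Minimal seeded 32-bit linear congruential generator.
// Constants are the classic Numerical Recipes pair (an assumption;
// the real engine's constants may differ).
function makeLcg(seed: number): () => number {
  let state = seed >>> 0; // force unsigned 32-bit
  return () => {
    state = (Math.imul(state, 1664525) + 1013904223) >>> 0;
    return state / 0x100000000; // uniform in [0, 1)
  };
}
```

Because the whole state is one 32-bit integer, "identical seed → identical playthrough" falls out for free.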

Metrics: cheap proxies first, expensive rollouts only when you ask

After simulation, LudoForge summarizes trajectories into analytics: step/turn counts, action frequencies, unique state counts, termination reasons, and sampled “key steps” that include legalActionCount.

From there it computes core metrics like:

  • Agency (fraction of steps with >1 legal action)
  • Strategic depth (average legal actions per step)
  • Variety (action entropy proxy)
  • Pacing tension (steps per turn)
  • Interaction rate (turn-taking proxy)
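These proxies are cheap to compute from a trajectory. Here's a sketch of the first three, based on my reading of the definitions above (not necessarily the project's exact formulas):

```typescript
// Cheap proxy metrics over a recorded trajectory (illustrative formulas).
type Step = { legalActionCount: number; chosenAction: string };

function coreMetrics(steps: Step[]): { agency: number; depth: number; variety: number } {
  const n = steps.length || 1;
  // Agency: fraction of steps where the player had a real choice.
  const agency = steps.filter((s) => s.legalActionCount > 1).length / n;
  // Strategic depth proxy: average size of the decision space.
  const depth = steps.reduce((sum, s) => sum + s.legalActionCount, 0) / n;
  // Variety: normalized entropy of the chosen-action distribution.
  const counts = new Map<string, number>();
  for (const s of steps) counts.set(s.chosenAction, (counts.get(s.chosenAction) ?? 0) + 1);
  let entropy = 0;
  for (const c of Array.from(counts.values())) {
    const p = c / n;
    entropy -= p * Math.log2(p);
  }
  const variety = counts.size > 1 ? entropy / Math.log2(counts.size) : 0;
  return { agency, depth, variety };
}
```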

Extended metrics exist too, and some are intentionally opt-in because they’re expensive:

  • Meaningful choice spread via per-action rollouts at sampled decision points
  • Comeback potential via correlation between early advantage and final outcome

Here’s the honest stance: these metrics are not “fun”. They’re proxies. They become powerful when you combine them with learned human preference.

Degeneracy detection: kill the boring and the broken early

This is one of the parts I’m most stubborn about. Evolution will happily optimize garbage if you let it.

So LudoForge explicitly detects degeneracy patterns like:

  • loops / non-termination
  • stalemates
  • forced-move and no-choice games
  • dominant-action spam
  • trivial wins

By default, those flags can reject candidates outright, and degeneracy flags also become part of the feature vector so the system can learn to avoid them even when they slip through.

Human feedback: turning taste into a model

Metrics get you a feature vector. Humans supply the missing ingredient: taste.

LudoForge supports two feedback modes:

  1. Ratings (1–5) with optional tags and rationale
  2. Pairwise comparisons (A/B/Tie) with optional tags and rationale

Pairwise comparisons are the main signal: they’re cleaner than ratings and train a preference model using a logistic/Bradley–Terry style update. Ratings still matter, but they’re weighted lower by default.

There’s also active learning: it selects comparison pairs where the model is most uncertain (predicted preference closest to 0.5), while reserving slots to ensure underrepresented MAP-Elites niches get surfaced. That keeps your feedback from collapsing into “I only ever see one genre of game.”
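A bare-bones version of that logistic/Bradley–Terry-style online update over metric-keyed feature vectors, plus the "closest to 0.5" scoring used to pick informative pairs. All names here are illustrative, not LudoForge's implementation:

```typescript
// Logistic / Bradley-Terry-style online preference model (a sketch).
type Features = Record<string, number>;

const keys = (a: Features, b: Features) => Object.keys({ ...a, ...b });

// P(human prefers A over B) under the current weights.
function predictPrefA(w: Features, a: Features, b: Features): number {
  let z = 0;
  for (const k of keys(a, b)) z += (w[k] ?? 0) * ((a[k] ?? 0) - (b[k] ?? 0));
  return 1 / (1 + Math.exp(-z));
}

// One online gradient step after observing a comparison outcome.
function update(w: Features, a: Features, b: Features, aWon: boolean, lr = 0.1): Features {
  const err = (aWon ? 1 : 0) - predictPrefA(w, a, b);
  const out: Features = { ...w };
  for (const k of keys(a, b)) out[k] = (out[k] ?? 0) + lr * err * ((a[k] ?? 0) - (b[k] ?? 0));
  return out;
}

// Active learning score: the most informative pair is the one whose
// predicted preference sits closest to 0.5.
function informativeness(w: Features, a: Features, b: Features): number {
  return -Math.abs(predictPrefA(w, a, b) - 0.5);
}
```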

Fitness: blending objective proxies, diversity pressure, and learned preference

Fitness isn’t a single magic number pulled from the void. It’s a blend:

  • Base composite score from metrics (weighted sum/objectives)
  • Diversity contribution (pressure toward exploring niches)
  • Preference contribution from the learned preference model (centered/capped, with bootstrap limits early on)

Feature vectors are keyed by metric id (not positional arrays), which matters a lot: adding a new metric doesn’t silently scramble your model weights. Renaming metrics, though, becomes a migration event (and that’s correct—you should feel that pain explicitly).

Evolution: MAP-Elites + mutation/crossover that respect DSL validity

Instead of selecting “top N” and converging into a monoculture, LudoForge uses MAP-Elites: it bins candidates into descriptor niches and keeps the best elite per niche.

Descriptor binning is explicit and deterministic (normalize → floor into bin count; clamp to range), and niche ids serialize coordinates like descriptorId:bin|....
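That binning scheme can be sketched directly from the description (normalize into [0, 1], floor into the bin count, clamp, serialize as descriptorId:bin|...); the descriptor ranges below are illustrative:

```typescript
// Deterministic descriptor binning and niche-id serialization (a sketch).
type Descriptor = { id: string; min: number; max: number; bins: number };

function nicheId(desc: Descriptor[], values: Record<string, number>): string {
  return desc
    .map((d) => {
      const t = (values[d.id] - d.min) / (d.max - d.min); // normalize to [0, 1]
      const bin = Math.min(d.bins - 1, Math.max(0, Math.floor(t * d.bins))); // floor + clamp
      return `${d.id}:${bin}`;
    })
    .join("|");
}
```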

Then you can evolve elites with genetic operators:

  • Mutations like numeric tweaks, boolean toggles, enum cycling, duplicating/removing actions, nudging effect magnitudes, adding/removing phases, rewriting token/zone references safely, etc.
  • Crossover via subtree swaps of state.variables or actions, followed by DSL re-validation.

Optional “shortlisting” exists too: it picks a diversified subset of elites for human review using a max-min distance heuristic over descriptor coordinates.
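Greedy max-min (farthest-point) selection is the standard way to implement that heuristic; here's a sketch over descriptor coordinates, not the actual code:

```typescript
// Greedy max-min (farthest-point) shortlisting over descriptor coordinates.
function shortlist(coords: number[][], k: number): number[] {
  if (coords.length === 0 || k <= 0) return [];
  const dist = (a: number[], b: number[]) => Math.hypot(...a.map((v, i) => v - b[i]));
  const picked = [0]; // seed with the first elite (e.g. the global best)
  while (picked.length < Math.min(k, coords.length)) {
    let best = -1;
    let bestD = -Infinity;
    for (let i = 0; i < coords.length; i++) {
      if (picked.includes(i)) continue;
      // Distance to the closest already-picked elite...
      const d = Math.min(...picked.map((p) => dist(coords[i], coords[p])));
      // ...and we pick the candidate that maximizes it.
      if (d > bestD) { bestD = d; best = i; }
    }
    picked.push(best);
  }
  return picked;
}
```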

What’s already proven (and what isn’t yet)

This isn’t vaporware; the end-to-end tests already prove key behaviors like:

  • the ordered phases of the pipeline
  • invalid DSL rejection before evaluation
  • safety cutoffs (max-turns) and deterministic seeded outputs
  • human prompt loops and legality enforcement
  • deterministic state transitions
  • MAP-Elites producing stable ids
  • active learning selection behavior
  • mutation + repair at scale, including crossover

And there are explicitly documented gaps—like extended metrics aggregation and worker-thread batch simulations.

The point of LudoForge

I’m not trying to build a “game designer replacement.” I’m building a design pressure cooker:

  • Simulate hard
  • Reject degeneracy ruthlessly
  • Measure what you can
  • Ask humans the right questions
  • Let evolution explore breadth, not just a single hill

If you’re into procedural design, evolutionary search, or just enjoy the idea of treating “fun” as something you can iteratively approximate with a human-in-the-loop model, that’s what this project is for.

Living Narrative Engine #19

I have quite the treat for you fuckers. I’ve recorded myself playing through my test scenario involving Alicia Western. More than an hour of me speaking in my accented English even though I rarely speak in real life, and showing off a fun, frustrating playthrough that made me hungry.

This is, of course, related to my beloved Living Narrative Engine. Repo here.

Living Narrative Engine #18

I’m building a browser-based app to play immersive sims, RPGs, and the like. In practice, I use it to set up short story scenarios or elaborate gooning sessions. I dared myself to build the most comprehensive psychological system imaginable, so that Sibylle Brunne, a 34-year-old orphan living in her parents’ rustic home somewhere in the Swiss mountains, while controlled by a large language model, would realistically bring her blue-eyed, blonde-braided, full-breasted self to seduce my teenage avatar, who is backpacking through the country, eventually convincing me to stay in her house so she can asphyxiate me with her mommy milkers.

Here’s a visual glimpse of the current complexity:

Alicia has become my test subject, as if she didn’t have enough with freezing to death. The system works like this: at the base you have mood axes (like pleasant <-> unpleasant), which change throughout a scene. Actors also have permanent biological or personality-based traits like aversion to harm. Together, mood axes and affect traits serve as weights and gates to specific emotion prototypes like disappointment, suspicion, grief. Delta changes to those polar mood axes naturally intensify or lessen the emotions. I also have sexual state prototypes, which work the same as the emotional states.

These emotional and sexual states serve as the prerequisites for certain expressions to trigger during play. An expression is a definition that tells you “when disappointment is very high and suspicion is high, but despair is relatively low, trigger this narrative beat.” Then, the program would output some text like “{actor} seems suspicious but at the same time as if they had been let down.” The descriptions are far better than that, though. The actors themselves receive in their internal log a first-person version of the narrative beat, which serves as an internal emotional reaction they need to process.

It all works amazingly well. However, to determine whether I was truly missing mood axes, affect traits, or prototypes, I had to create extremely complex analytics tools. I’ve learned far too much about statistical analysis recently, and I don’t really care about it beyond telling a system, “hey, here are my prototype sets. Please figure out if we have genuine gaps to cover.” Turns out that answering such a request requires complex calculations that map 20-dimensional spaces and find diagonal vectors running through them.

Anyway, I guess at some point I’ll run my good ol’ test scenario involving Alicia, with her now showing far more emotion than she used to before I implemented this system. That’s a win in my book.

Living Narrative Engine #17

I’ve recently implemented an emotions and expressions system in my app, which is a browser-based platform to play immersive sims, RPGs, adventure games, and the like. If you didn’t know about the emotions and expressions system, you should check out the linked previous post for reference.

I was setting up a private scenario to further test the processing of emotions for characters during changing situations. This was the initial state of one Marla Kern:

Those current emotions look perfectly fine for the average person. There’s just one problem: Marla Kern is a sociopath. So that “compassion: moderate” would have wrecked everything, and triggered expressions (which are narrative beats) that would have her acting with compassion.

This is clearly a structural issue, and I needed to solve it in the most robust, psychologically realistic way possible, that at the same time strengthened the current system. I engaged in some deep research with my pal ChatGPT, and we came up with (well, mostly he did):

We lacked a trait dimension that captured stable empathic capacity. The current 7 mood axes are all fast-moving state/appraisal variables that swing with events. Callousness vs. empathic concern is a stable personality trait that should modulate specific emotions. That meant creating a new component, named affect_traits, that actors would need to have (defaults apply otherwise) and that includes the following properties:

  • Affective empathy: capacity to feel what others feel. Allows emotional resonance with others’ joy, pain, distress. (0=absent, 50=average, 100=hyper-empathic)
  • Cognitive empathy: ability to understand others’ perspectives intellectually. Can be high even when affective empathy is low. (0=none, 50=average, 100=exceptional)
  • Harm aversion: aversion to causing harm to others. Modulates guilt and inhibits cruelty. (0=enjoys harm, 50=normal aversion, 100=extreme aversion)

In addition, this issue revealed a basic problem with our mood axes (the fast-moving moods): we lacked one for affiliation, whose definition is now “Social warmth and connectedness. Captures momentary interpersonal orientation. (-100=cold/detached/hostile, 0=neutral, +100=warm/connected/affiliative)”. We already had “engagement” as a mood axis, but that doesn’t necessarily encompass affiliation, so we had a genuine gap in our representation of realistic mood axes.

Emotions are cooked from prototypes. Given these changes, we now needed to update affected prototypes:

"compassion": {
  "weights": {
    "valence": 0.15,
    "engagement": 0.70,
    "threat": -0.35,
    "agency_control": 0.10,
    "affiliation": 0.40,
    "affective_empathy": 0.80
  },
  "gates": [
    "engagement >= 0.30",
    "valence >= -0.20",
    "valence <= 0.35",
    "threat <= 0.50",
    "affective_empathy >= 0.25"
  ]
}

"empathic_distress": {
  "weights": {
    "valence": -0.75,
    "arousal": 0.60,
    "engagement": 0.75,
    "agency_control": -0.60,
    "self_evaluation": -0.20,
    "future_expectancy": -0.20,
    "threat": 0.15,
    "affective_empathy": 0.90
  },
  "gates": [
    "engagement >= 0.35",
    "valence <= -0.20",
    "arousal >= 0.10",
    "agency_control <= 0.10",
    "affective_empathy >= 0.30"
  ]
}

"guilt": {
  "weights": {
    "self_evaluation": -0.6,
    "valence": -0.4,
    "agency_control": 0.2,
    "engagement": 0.2,
    "affective_empathy": 0.45,
    "harm_aversion": 0.55
  },
  "gates": [
    "self_evaluation <= -0.10",
    "valence <= -0.10",
    "affective_empathy >= 0.15"
  ]
}

That fixes everything emotions-wise. A character with low affective empathy won’t feel much in terms of compassion despite the engagement, will feel even less empathic distress, and won’t suffer as much guilt.
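To make the weights-and-gates mechanics concrete, here's a toy evaluator (my own sketch, with deliberately simplified gate parsing; the real system is more elaborate): intensity is the weighted sum of axis/trait values, and every gate must pass or the emotion is suppressed. A low-affective-empathy character is gated out of compassion entirely, regardless of engagement:

```typescript
// Toy prototype evaluator: weighted sum gated by threshold conditions.
// Gate strings are assumed to have the simple form "axis >= value" /
// "axis <= value", as in the prototypes above.
type Proto = { weights: Record<string, number>; gates: string[] };

function emotionIntensity(proto: Proto, state: Record<string, number>): number {
  for (const gate of proto.gates) {
    const [axis, op, num] = gate.split(/\s+/);
    const v = state[axis] ?? 0;
    const t = parseFloat(num);
    if ((op === ">=" && v < t) || (op === "<=" && v > t)) return 0; // gated out
  }
  let sum = 0;
  for (const [axis, w] of Object.entries(proto.weights)) sum += w * (state[axis] ?? 0);
  return Math.max(0, sum);
}
```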

This also means reviewing the prerequisites of the 76 currently implemented expressions, which can be as complex as this summary of a “flat reminiscence” narrative beat suggests:

“Flat reminiscence triggers when the character is low-energy and mildly disengaged, with a notably bleak sense of the future, and a “flat negative” tone like apathy/numbness/disappointment—but without the emotional bite of lonely yearning. It also refuses to trigger if stronger neighboring states would better explain the moment (nostalgia pulling warmly, grief hitting hard, despair bottoming out, panic/terror/alarm spiking, or anger/rage activating). Finally, it only fires when there’s a noticeable recent drop in engagement or future expectancy (or a clean crossing into disengagement), which prevents the beat from repeating every turn once the mood is already flat.”

That is all modeled mathematically, not by a large language model. In addition, I’ve created an extremely robust analysis system that uses static analysis, Monte Carlo simulation, and witness-state generation to determine how feasible any given set of prerequisites is. I’ll make a video about that in the future.
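To give a flavor of the Monte Carlo part, here is a toy feasibility probe. The function name and structure are mine, not the engine’s: sample many random states, count how often the prerequisite predicate passes, and keep the first passing sample as a “witness” that the prerequisite is satisfiable at all.

```javascript
// Hypothetical Monte Carlo feasibility probe for a prerequisite.
// `predicate` maps a sampled state to true/false; `axes` names the
// variables to randomize. Returns the estimated pass rate and a
// witness state (null if no sample ever passed).
function probeFeasibility(predicate, axes, samples = 10000, rng = Math.random) {
  let hits = 0;
  let witness = null;
  for (let i = 0; i < samples; i++) {
    const state = {};
    for (const axis of axes) state[axis] = rng(); // uniform in [0, 1)
    if (predicate(state)) {
      hits++;
      if (!witness) witness = state;
    }
  }
  return { passRate: hits / samples, witness };
}
```

A pass rate near zero flags a prerequisite that will essentially never fire in play; a non-null witness proves it isn’t outright contradictory.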

Living Narrative Engine #16

A couple of nights ago, at two in the morning, I was rolling around in bed thinking about my current obsessions: the browser-based app Living Narrative Engine as well as Alicia Western, the tragic character from Cormac McCarthy’s last two novels. Recently I mixed them both by playing through a scenario in LNE that featured Alicia. I “novelized” that little bit of theater in the short story You Will Spend the Rest of Your Life.

Well, I wasn’t entirely happy with Alicia’s acting. Yes, she’s an analytical gal, but she’s in a deep hole there. I wanted to feel the despair from her. The relief. I wanted to see her cry. I wanted to cause a beautiful, blonde woman at the end of her rope to cry. And she didn’t.

As I thought about whether this was a solvable issue, my dear subconscious had a spark of genius: LLM-based characters in LNE already create thoughts, speech, notes, and choose actions. Why not task them with tracking mood changes?

Some deep research and several iterations later, ChatGPT and I came up with the following notions, which are displayed below in a lovely manner, as they appear on the game page of LNE.

The simulation relies on seven base mood axes: valence, arousal, agency control, threat, engagement, future expectancy, and self-evaluation. Apparently that basic breakdown is psychologically sound, but I’m trusting ChatGPT on that. The sexual variables are apparently also well known: an excitation component acts as the accelerator and an inhibition component as the brake. Combined with a baseline libido that depends on the individual, they determine sexual arousal. As seen in the picture, Alicia right now is dry as sandpaper.
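If I had to sketch the shape of that accelerator/brake calculation (the exact formula here is my illustration, not necessarily what the engine computes): arousal rises with excitation, falls with inhibition, starts from the character’s baseline, and stays clamped to a fixed range.

```javascript
// Hypothetical dual-control arousal model: excitation is the
// accelerator, inhibition the brake, on top of a per-character
// baseline libido. All values assumed to live in [0, 1]; the
// engine's actual formula may differ.
function sexualArousal(baselineLibido, excitation, inhibition) {
  const raw = baselineLibido + excitation - inhibition;
  return Math.max(0, Math.min(1, raw));
}
```

In this sketch, a character with a low baseline and a heavy brake bottoms out at zero no matter what the accelerator does within range.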

The most interesting part for me is that the mood axes and basic sexual variables are ingredients that form emotions and sexual “moods”. I have dozens of them defined; I’ve been working with ChatGPT to cover the full breadth of distinct, meaningful emotions. Here are the current listings of emotions and sexual “moods” that my app calculates:

  • Emotions: calm, contentment, relief, confidence, joy, euphoria, enthusiasm, amusement, awe, inspiration, aesthetic appreciation, interest, curiosity, fascination, flow, entrancement, hope, optimism, determination, anticipation, sadness, grief, disappointment, despair, numbness, fatigue, loneliness, nostalgia, boredom, apathy, unease, stress, anxiety, craving, thrill, fear, terror, dread, hypervigilance, courage, alarm, suspicion, irritation, frustration, anger, rage, resentment, contempt, disgust, cynicism, pride, triumph, shame, embarrassment, awkwardness, guilt, regret, humiliation, submission, envy, trusting surrender, jealousy, trust, admiration, adoration, gratitude, affection, love attachment, compassion, empathic distress, hatred, surprise startle, confusion
  • Sexual moods: sexual lust, passionate love, sexual sensual pleasure, sexual submissive pleasure, sexual playfulness, romantic yearning, sexual confidence, aroused but ashamed, aroused but threatened, sexual craving, erotic thrill, sexual performance anxiety, sexual frustration, afterglow, sexual disgust conflict, sexual indifference, sexual repulsion

Emotions are calculated based on detailed prototypes. Here’s one:

"anxiety": {
      "weights": {
        "threat": 0.8,
        "future_expectancy": -0.6,
        "agency_control": -0.6,
        "arousal": 0.4,
        "valence": -0.4
      },
      "gates": [
        "threat >= 0.20",
        "agency_control <= 0.20"
      ]
    }

Those emotions and sexual moods are fed to the LLM-based actors. They figure out, “hmm, I’m intensely disappointed, strongly cynical, strongly sad, and so on, so that needs to color my thoughts, speech, notes, and the actions I take.” I haven’t tested the system much in practice, but in the little testing I did, the results were night and day for the LLM’s roleplaying.

In real life, we not only do things; our bodies also do things to us. We are aware of how our emotional states change us, and those changes become “tells” to the other people present. Likewise, when you think in terms of stories, you add “reaction beats” when an actor’s emotional state changes. So I did exactly that: if the LLM has returned changes to the previous mood axes and sexual variables, the expressions in the library get a chance to trigger (one at a time), based on whether their prerequisites hold. The following example should make it self-explanatory:

{
    "$schema": "schema://living-narrative-engine/expression.schema.json",
    "id": "emotions:lingering_guilt",
    "description": "Moderate, non-spiking guilt—an apologetic, sheepish self-consciousness after a minor mistake",
    "priority": 57,
    "prerequisites": [
        {
            "logic": {
                "and": [
                    {
                        ">=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.35
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.70
                        ]
                    },
                    {
                        "<=": [
                            {
                                "-": [
                                    {
                                        "var": "emotions.guilt"
                                    },
                                    {
                                        "var": "previousEmotions.guilt"
                                    }
                                ]
                            },
                            0.12
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.humiliation"
                            },
                            0.25
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.terror"
                            },
                            0.20
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.fear"
                            },
                            0.35
                        ]
                    }
                ]
            }
        }
    ],
    "actor_description": "It wasn't a catastrophe, but it still sits wrong. I replay the moment and feel that small, sour twist—like I owe someone a cleaner version of myself. I want to smooth it over without making a scene.",
    "description_text": "{actor} looks faintly apologetic—eyes dipping, a tight little wince crossing their face as if they're privately conceding a mistake.",
    "alternate_descriptions": {
        "auditory": "I catch a small, hesitant exhale—like a half-swallowed \"sorry.\""
    },
    "tags": [
        "guilt",
        "sheepish",
        "apology",
        "minor_mistake",
        "lingering"
    ]
}
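The prerequisite above is JSON Logic. As a sketch (not the engine’s actual resolver), a minimal evaluator covering just the operators this expression uses (`and`, `>=`, `<=`, `-`, and dotted `var` lookups) fits in a few lines:

```javascript
// Tiny evaluator for the JSON Logic subset used by the expression
// prerequisites above: "and", ">=", "<=", "-", and "var" lookups
// with dotted paths like "emotions.guilt".
function evalLogic(rule, data) {
  if (typeof rule !== "object" || rule === null) return rule; // literal
  const [op] = Object.keys(rule);
  const args = rule[op];
  switch (op) {
    case "var": // resolve a dotted path against the state object
      return args.split(".").reduce((obj, key) => obj?.[key], data);
    case "and":
      return args.every((r) => evalLogic(r, data));
    case ">=":
      return evalLogic(args[0], data) >= evalLogic(args[1], data);
    case "<=":
      return evalLogic(args[0], data) <= evalLogic(args[1], data);
    case "-":
      return evalLogic(args[0], data) - evalLogic(args[1], data);
    default:
      throw new Error(`unsupported operator: ${op}`);
  }
}
```

Note the `previousEmotions.guilt` subtraction: the delta cap (at most a 0.12 rise) is what keeps this beat reserved for lingering, non-spiking guilt rather than a fresh spike.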

I’ve created about 53 expressions already, but they’re surprisingly hard to trigger, as they require very specific (and psychologically logical) conditions.

Because testing this new system by playing scenarios would be a nightmare, I’ve created a dev page that makes testing the combinations trivial. In fact, I’ve recorded a video and uploaded it to YouTube. So if you want to hear me struggle through my accent that never goes away (and the fact that I very, very rarely speak in real life), here’s ten minutes of it.

I think that’s all. Now I’m going to play through the same Alicia Western scenario as in the short story I posted. If the result is different enough, I will upload it as a short story.