Living Narrative Engine #16

A couple of nights ago, at two in the morning, I was rolling around in bed thinking about my current obsessions: the browser-based app Living Narrative Engine as well as Alicia Western, the tragic character from Cormac McCarthy’s last two novels. Recently I mixed them both by playing through a scenario in LNE that featured Alicia. I “novelized” that little bit of theater in the short story You Will Spend the Rest of Your Life.

Well, I wasn’t entirely happy with Alicia’s acting. Yes, she’s an analytical gal, but she’s in a deep hole there. I wanted to feel the despair from her. The relief. I wanted to see her cry. I wanted to cause a beautiful, blonde woman at the end of her rope to cry. And she didn’t.

As I thought about whether this was a solvable issue, my dear subconscious had a spark of genius: LLM-based characters in LNE already create thoughts, speech, notes, and choose actions. Why not task them with tracking mood changes?

Some deep research and several iterations later, ChatGPT and I came up with the following notions, which are displayed below in a lovely manner, as they appear on the game page of LNE.

The simulation relies on seven base mood axes: valence, arousal, agency control, threat, engagement, future expectancy, and self-evaluation. Apparently that basic breakdown is psychologically sound, but I’m trusting ChatGPT on that. The sexual variables are apparently also well established: an excitation component acts as the accelerator, and an inhibition component as the brake. Combined with a baseline libido that varies per individual, they determine sexual arousal. As seen in the picture, Alicia right now is dry as sandpaper.
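As a minimal sketch of that dual-control idea (excitation as accelerator, inhibition as brake, on top of a per-individual baseline), something like the following would do. The additive formula, the 0–1 clamp, and the variable names are my assumptions, not LNE's actual code:

```python
def clamp(x, lo=0.0, hi=1.0):
    """Keep a value inside the [0, 1] range used by the mood axes."""
    return max(lo, min(hi, x))

def sexual_arousal(baseline_libido, excitation, inhibition):
    """Excitation pushes arousal up from the baseline; inhibition pulls it down."""
    return clamp(baseline_libido + excitation - inhibition)

# A low baseline plus a strong brake keeps arousal pinned at zero:
print(sexual_arousal(0.2, 0.1, 0.6))  # 0.0
```

With numbers like these, a character under heavy inhibition stays at zero no matter how the scene nudges excitation, which matches the behavior described above.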

The most interesting part for me is that the mood axes and basic sexual variables are the ingredients from which emotions and sexual “moods” are formed. I have dozens of them defined, as I’ve been working with ChatGPT to cover the whole breadth of emotions that are distinct and meaningful. Here are the current lists of emotions and sexual “moods” that my app calculates:

  • Emotions: calm, contentment, relief, confidence, joy, euphoria, enthusiasm, amusement, awe, inspiration, aesthetic appreciation, interest, curiosity, fascination, flow, entrancement, hope, optimism, determination, anticipation, sadness, grief, disappointment, despair, numbness, fatigue, loneliness, nostalgia, boredom, apathy, unease, stress, anxiety, craving, thrill, fear, terror, dread, hypervigilance, courage, alarm, suspicion, irritation, frustration, anger, rage, resentment, contempt, disgust, cynicism, pride, triumph, shame, embarrassment, awkwardness, guilt, regret, humiliation, submission, envy, trusting surrender, jealousy, trust, admiration, adoration, gratitude, affection, love attachment, compassion, empathic distress, hatred, surprise startle, confusion
  • Sexual moods: sexual lust, passionate love, sexual sensual pleasure, sexual submissive pleasure, sexual playfulness, romantic yearning, sexual confidence, aroused but ashamed, aroused but threatened, sexual craving, erotic thrill, sexual performance anxiety, sexual frustration, afterglow, sexual disgust conflict, sexual indifference, sexual repulsion

Emotions are calculated based on detailed prototypes. Here’s one:

"anxiety": {
  "weights": {
    "threat": 0.8,
    "future_expectancy": -0.6,
    "agency_control": -0.6,
    "arousal": 0.4,
    "valence": -0.4
  },
  "gates": [
    "threat >= 0.20",
    "agency_control <= 0.20"
  ]
}
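One plausible reading of a prototype like this: the gates are hard prerequisites, and the weights form a weighted sum over the current axis values. The parsing and scoring below are my guesses at the mechanics, not LNE's implementation:

```python
# The "anxiety" prototype from above, as a Python dict.
ANXIETY = {
    "weights": {"threat": 0.8, "future_expectancy": -0.6,
                "agency_control": -0.6, "arousal": 0.4, "valence": -0.4},
    "gates": ["threat >= 0.20", "agency_control <= 0.20"],
}

def gates_pass(axes, gates):
    """Each gate is 'axis op threshold'; all must hold."""
    for gate in gates:
        name, op, value = gate.split()
        lhs, rhs = axes[name], float(value)
        if op == ">=" and not lhs >= rhs:
            return False
        if op == "<=" and not lhs <= rhs:
            return False
    return True

def emotion_intensity(axes, prototype):
    """Zero unless all gates pass; otherwise a clamped weighted sum of axes."""
    if not gates_pass(axes, prototype["gates"]):
        return 0.0
    score = sum(w * axes[a] for a, w in prototype["weights"].items())
    return max(0.0, min(1.0, score))

# A moderately threatened, pessimistic, low-control state:
axes = {"threat": 0.5, "future_expectancy": -0.2, "agency_control": 0.1,
        "arousal": 0.3, "valence": -0.2}
print(round(emotion_intensity(axes, ANXIETY), 2))  # 0.66
```

The gating explains why high threat alone isn't enough for anxiety: if the character still feels in control (agency_control above 0.20), the prototype never fires.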

Those emotions and sexual moods are fed to the LLM-based actors. They figure out “hmm, I’m intensely disappointed, strongly cynical, strongly sad, etc., so that needs to color my thoughts, speech, notes, and the actions I take.” I haven’t tested the system much in practice, but in the little testing I did, the difference in the LLM’s roleplaying was night and day.

In real life, we not only do things; our bodies also do things to us. We are aware of how our emotional states change us, and those changes turn into “tells” for the other people present. In addition, when you think in terms of stories, you add “reaction beats” when an actor’s emotional state changes. So I did exactly that: if the LLM has returned changes to the previous mood axes and sexual variables, the library of expressions has a chance to trigger (one at a time), based on whether an expression’s prerequisites are met. The following example is largely self-explanatory:

{
    "$schema": "schema://living-narrative-engine/expression.schema.json",
    "id": "emotions:lingering_guilt",
    "description": "Moderate, non-spiking guilt—an apologetic, sheepish self-consciousness after a minor mistake",
    "priority": 57,
    "prerequisites": [
        {
            "logic": {
                "and": [
                    {
                        ">=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.35
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.guilt"
                            },
                            0.70
                        ]
                    },
                    {
                        "<=": [
                            {
                                "-": [
                                    {
                                        "var": "emotions.guilt"
                                    },
                                    {
                                        "var": "previousEmotions.guilt"
                                    }
                                ]
                            },
                            0.12
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.humiliation"
                            },
                            0.25
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.terror"
                            },
                            0.20
                        ]
                    },
                    {
                        "<=": [
                            {
                                "var": "emotions.fear"
                            },
                            0.35
                        ]
                    }
                ]
            }
        }
    ],
    "actor_description": "It wasn't a catastrophe, but it still sits wrong. I replay the moment and feel that small, sour twist—like I owe someone a cleaner version of myself. I want to smooth it over without making a scene.",
    "description_text": "{actor} looks faintly apologetic—eyes dipping, a tight little wince crossing their face as if they're privately conceding a mistake.",
    "alternate_descriptions": {
        "auditory": "I catch a small, hesitant exhale—like a half-swallowed \"sorry.\""
    },
    "tags": [
        "guilt",
        "sheepish",
        "apology",
        "minor_mistake",
        "lingering"
    ]
}
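The prerequisites above look like JsonLogic rules. For illustration, here is a tiny evaluator covering just the operators this one expression uses ("and", ">=", "<=", "-", "var"); a real implementation would lean on a full JsonLogic library, so treat this as a sketch of the semantics:

```python
def evaluate(rule, data):
    """Recursively evaluate a JsonLogic-style rule against a state dict."""
    if not isinstance(rule, dict):
        return rule  # a literal, e.g. 0.35
    (op, args), = rule.items()
    if op == "var":
        # Dotted path lookup, e.g. "emotions.guilt"
        value = data
        for key in args.split("."):
            value = value[key]
        return value
    vals = [evaluate(a, data) for a in args]
    if op == "and":
        return all(vals)
    if op == ">=":
        return vals[0] >= vals[1]
    if op == "<=":
        return vals[0] <= vals[1]
    if op == "-":
        return vals[0] - vals[1]
    raise ValueError(f"unsupported operator: {op}")

# A state where guilt is moderate and not spiking, the exact situation
# lingering_guilt is written for (values here are made up for the demo):
state = {
    "emotions": {"guilt": 0.5, "humiliation": 0.1, "terror": 0.0, "fear": 0.2},
    "previousEmotions": {"guilt": 0.45},
}
rule = {"and": [
    {">=": [{"var": "emotions.guilt"}, 0.35]},
    {"<=": [{"var": "emotions.guilt"}, 0.70]},
    {"<=": [{"-": [{"var": "emotions.guilt"},
                   {"var": "previousEmotions.guilt"}]}, 0.12]},
    {"<=": [{"var": "emotions.humiliation"}, 0.25]},
    {"<=": [{"var": "emotions.terror"}, 0.20]},
    {"<=": [{"var": "emotions.fear"}, 0.35]},
]}
print(evaluate(rule, state))  # True: lingering_guilt would fire
```

Note how the rule compares current guilt against `previousEmotions.guilt`: a jump of more than 0.12 means the guilt is spiking, which disqualifies the "lingering" expression.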

I’ve created about 53 expressions already, but they’re surprisingly hard to trigger, as they require very specific (and psychologically logical) conditions.
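Since each expression carries a "priority" (like the 57 in the example above) and fires one at a time, I'm assuming the selection picks the highest-priority expression whose prerequisites pass. A hedged sketch of that, with a stub prerequisite check and one made-up expression id standing in for the real JsonLogic rules:

```python
def pick_expression(expressions, state, passes):
    """Return the highest-priority expression whose prerequisites pass,
    or None if nothing triggers (with tight gates, often the case)."""
    triggered = [e for e in expressions if passes(e, state)]
    return max(triggered, key=lambda e: e["priority"], default=None)

# Toy stand-ins: "crushing_guilt" is hypothetical, and the real
# prerequisites would be the JsonLogic-style rules, not a threshold.
exprs = [
    {"id": "emotions:lingering_guilt", "priority": 57, "min_guilt": 0.35},
    {"id": "emotions:crushing_guilt", "priority": 80, "min_guilt": 0.75},
]
passes = lambda e, s: s["guilt"] >= e["min_guilt"]
print(pick_expression(exprs, {"guilt": 0.5}, passes)["id"])
# emotions:lingering_guilt
```

With most states triggering nothing at all, returning None by default matches how rarely the 53 expressions actually fire.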

Because testing this new system by playing through scenarios would be a nightmare, I’ve created a dev page that makes testing the combinations trivial. In fact, I’ve recorded a video and uploaded it to YouTube. So if you want to hear me struggle through my accent that never goes away, plus the rustiness of someone who very, very rarely speaks in real life, here are ten minutes of it.

I think that’s all. Now I’m going to play through the same Alicia Western scenario as in the short story I posted. If the result is different enough, I will upload it as a short story.