I recommend checking out the previous parts if you don’t know what this “neural narratives” thing is about. In short, I wrote a Python system for holding multi-character conversations with large language models (like Llama 3.1), in which the characters are isolated in terms of memories and bios, so nothing leaks to other participants. Here’s the GitHub repo, which now even features a proper readme.
In the last part, our protagonist, having been sent by a ditzy goddess into a scorching desert world, or at least a deserty part of a fantasy world, deals with an imposing half-person, half-scorpion guardian, who offers him sanctuary in their safe house as long as he passes an initiation rite.

That was one of the funnest interactions I’ve had through this app. I’ve got a soft spot for that incompetent goddess. And the scene ends with the driving lesson of isekai: sometimes we must lose one world entirely to find our true place in another.
Although a week ago I programmed the ability for the user to add participants to an ongoing dialogue, I hadn’t programmed removing them. Given the circumstances, it became necessary; otherwise, the AI might have chosen to speak as Seraphina even though she was supposed to be gone. In addition, when a dialogue ends, a summary is generated and added as a memory to the participants. For participants who leave mid-conversation, it wouldn’t make sense for them to remember what happened after they left, so now each character who leaves mid-convo gets a summary of the dialogue up to that point added to their memories.
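The mid-conversation exit can be sketched roughly like this; the class and method names here are hypothetical stand-ins, not the actual repo’s API, and the summarizer is a placeholder for an LLM call:

```python
# Minimal sketch of removing a participant mid-dialogue.
# All names are hypothetical; summarize() stands in for an LLM call.
from dataclasses import dataclass, field


@dataclass
class Character:
    name: str
    memories: list[str] = field(default_factory=list)


@dataclass
class Dialogue:
    participants: list[Character]
    transcript: list[str] = field(default_factory=list)

    def summarize(self) -> str:
        # Stand-in for an LLM call that condenses the transcript so far.
        return f"Summary of {len(self.transcript)} messages."

    def remove_participant(self, character: Character) -> None:
        # Summarize only the dialogue up to this point, so the leaving
        # character never "remembers" anything said after their exit.
        character.memories.append(self.summarize())
        self.participants.remove(character)
```

The key design point is the ordering: the summary is taken before the removal, freezing the leaving character’s knowledge at the moment of departure.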
My app has a section called Story Hub that allows the user to generate story concepts, to help them figure out where the story may be going. They could already generate plot blueprints, scenarios, goals, dilemmas, and plot twists. Thanks to the massive refactoring I did across the whole breadth of story concepts in the app, adding new ones was easy.
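One common way that kind of refactoring makes new concept types cheap to add is a registry keyed by concept kind; this is a hypothetical illustration of the pattern, not the repo’s actual code:

```python
# Hypothetical registry pattern for story-concept generators.
# Each generator just builds a prompt; the names are illustrative.
from typing import Callable

CONCEPT_GENERATORS: dict[str, Callable[[str], str]] = {}


def register(kind: str):
    """Decorator that files a generator under its concept kind."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        CONCEPT_GENERATORS[kind] = fn
        return fn
    return wrap


@register("dilemma")
def dilemma(premise: str) -> str:
    # Stand-in for the prompt sent to the LLM.
    return f"Generate a moral dilemma arising from: {premise}"


@register("plot_twist")
def plot_twist(premise: str) -> str:
    return f"Generate an unexpected plot twist for: {premise}"
```

With this shape, adding a new concept type is one decorated function; the UI and dispatch code never need to change.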

I’ve also started feeding the facts added by the player into many prompts to the AI, including dialogue. Facts are supposed to represent well-known information about the world, such as legends, properties of animals or sentient races, etc. For example, one of the generated pieces of lore named the twin moons of this world, so I added that information to the facts. My biggest worry is the context window of some large language models: my favorite right now, Magnum 72B, has a tiny context of 16,000 tokens, and the more you add to memories and facts, the more of the context they consume, until you’re forced to switch to a subpar model.
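A rough way to keep facts and memories inside that 16,000-token budget is to estimate token counts and trim to fit. This is a sketch under my own assumptions: the 4-characters-per-token heuristic, the reserved headroom, and the priority order (facts first, then newest memories) are all illustrative, not the app’s actual logic:

```python
# Rough sketch of budgeting memories and facts against a context window.
# The constants and heuristic here are assumptions, not the app's logic.
CONTEXT_LIMIT = 16_000  # Magnum 72B's context window, in tokens
RESERVED = 4_000        # headroom for the system prompt and the reply


def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)


def fit_context(facts: list[str], memories: list[str]) -> list[str]:
    """Keep facts, then the newest memories, until the budget runs out."""
    budget = CONTEXT_LIMIT - RESERVED
    kept: list[str] = []
    # Facts first (world lore), then memories newest-first.
    for item in facts + list(reversed(memories)):
        cost = estimate_tokens(item)
        if cost > budget:
            break
        kept.append(item)
        budget -= cost
    return kept
```

A real implementation would use the model’s own tokenizer instead of the character heuristic, but the principle is the same: something has to be dropped, so decide explicitly what gets dropped first.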
That’s all for now. Stay whimsical.