Neural narratives in Python #26

I recommend checking out the previous parts if you don’t know what this “neural narratives” thing is about. In short, I wrote a system in Python for having multi-character conversations with large language models (like Llama 3.1), in which the characters are isolated in terms of memories and bios, so nothing leaks to the other participants. Here’s the GitHub repo, which now even features a proper readme.
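As a rough illustration of that isolation, here is a minimal sketch; the `Character` class and `build_prompt` helper are hypothetical, not the repo’s actual API. The idea is simply that each participant carries a private bio and memory list, and only that character’s own context ever enters their prompt.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """One conversation participant with a private bio and memory store."""
    name: str
    bio: str
    memories: list[str] = field(default_factory=list)

    def build_prompt(self, latest_message: str) -> str:
        # Only THIS character's bio and memories enter the prompt,
        # so the other participants' private context never leaks in.
        memory_block = "\n".join(f"- {m}" for m in self.memories)
        return (
            f"You are {self.name}. Bio: {self.bio}\n"
            f"Your memories:\n{memory_block}\n"
            f"Latest message: {latest_message}\n"
            f"Reply in character as {self.name}."
        )

zha = Character("Zha'thik", "An alien ritualist.", ["Met Elizabeth Harrow."])
liz = Character("Elizabeth Harrow", "An earthly teenager.", ["Endured a strange ritual."])

prompt = zha.build_prompt("What do you remember of the ritual?")
assert "Met Elizabeth Harrow." in prompt            # Zha'thik's own memory is present
assert "Endured a strange ritual." not in prompt    # Elizabeth's memory is not
```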

In the previous part, the protagonist realized that the alien Zha’thik, who had subjected young Elizabeth Harrow to a ritual intended to turn her into some sort of cosmic entity, had fallen in love with the earthly teenager. The team convinced Zha’thik to let Elizabeth endure her changes back at home. The alien was even kind enough to open a dimensional portal back to Earth.

Here’s the somber resolution of this story.

Three days later, the pair of detectives expect to meet up with the brilliant scholar Elara Thorn at the hole-in-the-wall where they first met.

The end. That’s all the cosmic horror I had to give for now.

This first serious playthrough of my app proved that the system can handle a full story. Highlights for me: how distinct most characters sounded thanks to the combination of dedicated bios, speech patterns, and voice models. The brilliance of their speeches regularly surprised me (and highlighted the chasm between their intelligence and my limited human capabilities), and there were times when I forgot I wasn’t writing to an actual human being.

Quite a few times, I wasn’t sure how to continue the story (I didn’t want to create a plan beforehand, given that I intended to play this out as if I were partaking in roleplaying sessions), but thankfully the “concepts system,” in which the user can generate scenarios, goals, plot twists, dilemmas, etc., helped push me along. When more complicated feedback was required, the Writers’ Room feature, in which a team of AI agents representing the various roles in a writers’ room handles your requests, solved the remaining issues. When I wanted to brainstorm the specifics of a location or a character, I proposed the topic to the swarm of agents, and they always provided just the stuff I needed.
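As a rough sketch of the Writers’ Room idea, the request could be fanned out to one agent per role and the notes collected; `call_llm` is a stand-in stub and the role list is made up for illustration, not the app’s actual implementation.

```python
# A minimal "writers' room" fan-out: one agent per role reviews the request.
# `call_llm` stubs out whatever completion API the app actually uses.
def call_llm(system: str, user: str) -> str:
    return f"[{system}] feedback on: {user}"  # stubbed for illustration

# Hypothetical role list; the real app may use different roles.
ROLES = ["showrunner", "story editor", "dialogue specialist", "continuity checker"]

def writers_room(request: str) -> dict[str, str]:
    """Fan the user's request out to every role and collect their notes."""
    return {
        role: call_llm(f"You are the {role} in a writers' room.", request)
        for role in ROLES
    }

notes = writers_room("Brainstorm the specifics of the hole-in-the-wall bar.")
```

In a real run, each role would get its own system prompt and the user would read the combined notes back.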

Issues: first, a mechanical one in my app: whenever the characters were going to interact with a place that isn’t connected to the world > region hierarchy they started in, I had to edit the new hierarchy of places into the JSON data files by hand. I solved that issue this morning: there’s now an Attach Places page that displays the available templates and lets the user attach them with a simple click. That should solve most such issues.

The bigger issue, though, was the large language models (the AIs) themselves. Right now there are various contenders for the heavy-hitter spot depending on how much you’re willing to spend. Hermes 405B was great for regular writing and dialogue, the best uncensored model I had come across, until I found Magnum 72B. Unfortunately, Magnum is considerably slower and, much worse, its sole provider offers only a 16k context window, meaning I had to switch back to Hermes 405B whenever the text sent to the LLM grew too long. By far the best large language model for dialogue I’ve come across, though, is Claude Sonnet, at 15 dollars per million tokens of output. That’s steep, although not remotely as steep as OpenAI’s Orion preview. Sonnet is likely censored, but I haven’t had issues with moderation in the tests I’ve run it through (and those involved steamy stuff).
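The fallback described above (switching back to Hermes 405B once the prompt outgrows Magnum’s 16k window) can be sketched roughly like this; the four-characters-per-token estimate and the 128k figure for Hermes 405B are assumptions for illustration, and a real app would use the provider’s tokenizer.

```python
# Prefer the first (slower, preferred) model whose context window still fits
# the prompt plus some room reserved for the reply; otherwise fall through.
MODELS = [
    {"name": "Magnum 72B", "context": 16_000},    # sole provider caps at 16k
    {"name": "Hermes 405B", "context": 128_000},  # assumed Llama 3.1-sized window
]

def pick_model(prompt: str, reserve_for_output: int = 1_000) -> str:
    approx_tokens = len(prompt) // 4  # crude estimate; use a real tokenizer
    for model in MODELS:
        if approx_tokens + reserve_for_output <= model["context"]:
            return model["name"]
    raise ValueError("Prompt too long for every configured model")

assert pick_model("short prompt") == "Magnum 72B"
assert pick_model("x" * 200_000) == "Hermes 405B"
```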

Next up, something to which I’m drawn instinctively: deranged silliness with perverted undertones. The protagonist will be a somewhat over-the-top teenager who gets reincarnated into a fantasy world. Expect loads of bizarre characters and zany situations. Possibly some monster sex. I’ve already produced the first “episode” of it, and it has been delightful.

Anyway, I don’t know if anyone has followed this first story, but if you have enjoyed it, then great.