I recommend checking out the previous parts if you don’t know what this “neural narratives” thing is about. In short, I wrote a system in Python for holding multi-character conversations with large language models (such as Llama 3.1), in which the characters are isolated in terms of memories and bios, so nothing leaks to other participants. Here’s the GitHub repo, which now even features a proper readme.
In the previous part of this little tale, the team of heroes found the missing girl, Elizabeth Harrow, but the ritual she had been subjected to had turned her into something not quite human. The culprit, an alien named Zha’thik, showed herself to be impervious to bullets.
Here’s the disconcerting climax of this little story.
Notes on this part: I genuinely had no clue how to resolve this situation, hence the protagonist requesting valid plans from others. When Zha’thik referred to her and Elizabeth being together, I saw an opening, and ran with it. Turned out better than I expected.
Anyway, the story will end in the following part. I have already produced it, so I’ll probably post it tomorrow.
We abandoned our team of heroes as they frantically tried to collapse the pocket dimension that contained them, before the multiplying tears in reality overwhelmed them.
As I was preparing the setting for this scene, it became obvious that I needed a new abstraction in my hierarchy of places. You see, many prompts to the large language models get fed the combination of places where the scene happens: story universe > world > region > area, and possibly location as well. But in this case, the characters were in a distinct inner chamber inside a sanctuary. The sanctuary would be the location, while the inner chamber was a sub-location, or a room. So clearly, given that the story demanded it, instead of “half-assing” such places into the hierarchy, I should modify all the code that touches the hierarchy to account for a new type of place: a Room. Perhaps rooms won’t be used that often, but if a serious playthrough has required one, then I need to program it in. This may take a while; the last time I touched the hierarchy of places, which is almost the spine of the whole app, it took me days to return the app to normal. We’ll see how it goes.
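The hierarchy described above could be sketched roughly as follows. This is a minimal illustration, not the app’s actual code: the class and place names are made up, and the real implementation surely carries far more data per place.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the place hierarchy:
# story universe > world > region > area > location > room.
# Each place knows its parent, so a prompt can walk up the chain
# to describe every containing place.

@dataclass
class Place:
    name: str
    kind: str  # "universe", "world", "region", "area", "location", or "room"
    parent: Optional["Place"] = None

    def breadcrumb(self) -> str:
        """Render the full chain of containing places, outermost first."""
        chain = []
        node: Optional[Place] = self
        while node is not None:
            chain.append(node.name)
            node = node.parent
        return " > ".join(reversed(chain))

universe = Place("Eldritch Universe", "universe")
world = Place("Earth", "world", universe)
region = Place("New England", "region", world)
area = Place("Providence", "area", region)
location = Place("Police Station", "location", area)
room = Place("Interrogation Room", "room", location)

print(room.breadcrumb())
```

With parent links like these, adding the Room type is mostly a matter of teaching every prompt-building and navigation code path about one more `kind`, which is presumably why it ripples through so much of the app.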
In the previous part, our team of heroes found themselves trapped in a pocket dimension created by the malicious alien Zha’thik, a dimension made from the memories of Elizabeth Harrow so she would stay inside in a sort of dream state. The team will attempt to collapse the pocket dimension to return to the containing dimension, the so-called Mirage Wastes.
They finally prepare themselves to do something as troublesome as collapsing a pocket dimension with themselves inside.
At the end of the previous part, we left our jaded protagonist boxing an eldritch horror while his long-dead teenage lover ran for her life.
I had no clue where Cassidy’s portal would lead, or even if it would work. I organized a session with my Writers’ Room to put together the following scene, which involved generating a couple of new characters.
Those noises are artifacts that rarely happen when generating voices. It’s odd that it happened three times in this segment, but they seemed to fit the eeriness, so I left them in. I apologize for minor errors such as using “his” when referring to Elizabeth. My brain is mostly mush at this point.
When this scene started, I had only included the main team in the dialogue, but I wanted Gideon’s deceased wife and the young version of their daughter to show up in the middle of the conversation. I had no such system implemented, so I wrote it in: a new collapsible section on the chat page where, if characters are present in the same place but aren’t involved in the dialogue, you can simply add them, and they’ll be included in the prompts to the large language model from then on. It was surprisingly easy to do, which I suppose is a testament to how mature the app is at this point.
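The mechanics of that feature are simple enough to sketch. The function names below are placeholders, assuming the app tracks which characters occupy a place and which ones participate in the dialogue:

```python
# Hypothetical sketch of the "add present characters" section:
# characters who share the scene's place but aren't yet participants
# can be pulled into the conversation, after which they are included
# in every prompt sent to the large language model.

def addable_characters(place_characters: list[str],
                       participants: list[str]) -> list[str]:
    """Characters present in the place who aren't in the dialogue yet."""
    return [c for c in place_characters if c not in participants]

def add_to_dialogue(participants: list[str], character: str) -> list[str]:
    """Include a character in the ongoing conversation."""
    if character not in participants:
        participants.append(character)
    return participants

present = ["Gideon", "Elara", "Miriam", "Young Abigail"]
participants = ["Gideon", "Elara"]

print(addable_characters(present, participants))
add_to_dialogue(participants, "Miriam")
print(participants)
```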
Here’s the next episode of the eldritch show. Last time, the sadistic alien Zha’thik prepared an unfathomable feast in a non-euclidean hall. Our jaded detective decided to take the first bite, only to realize that the morphing mass had turned into the corpse of Emilia, his long-lost teenage love. Now he’s bound to face his personal horror.
Well, that was poignant. I shed tears while “playing” through it, I shed tears while editing it. That’s good.
At the final part of that scene, I needed a fight “beat,” but I didn’t have anything programmed that would provide such a thing specifically. In addition, this fight would take place in the middle of a conversation. What to do, then? Create a new Confrontation action like the Investigate, Research, and Gather Supplies ones, which are meant to be isolated, and then stitch two dialogues together? It didn’t feel right. Besides, in that case, I would need the summary of the full conversation to include the physical confrontation with the eldritch horror. So instead, I came up with the notion of a Confrontation Round: an account, covering about ten seconds, of a confrontation of whatever kind, given the provided context. The prompt is focused on that very subject with no distractions, as seen below:
You are to generate a detailed narrative describing a brief segment (approximately ten seconds) of a confrontation within a rich, immersive world. Use the provided information about the player, the other characters, and the place involved. Your narrative should be engaging, realistic, and reflect the abilities and circumstances of the participants suggested by the user-provided context.
Instructions:

1. Viability and Realism Assessment: Carefully consider the feasibility and dynamics of the confrontation based on:
- The information about the characters, including their skills, abilities, health, equipment, and emotional state.
- The characteristics of the opponents, including their skills, abilities, equipment, motivations, and weaknesses.
- The characteristics of the location, including terrain, obstacles, and any environmental factors.
- The time of day and how it might affect the confrontation (e.g., visibility, shadows, crowd presence).
- The context for the confrontation provided by the user, including any prior events leading up to this moment.
- Potential advantages or disadvantages, such as surprise, preparation, or environmental hazards.

2. Narrative of the Confrontation:
- Write a vivid, dynamic narrative that depicts approximately ten seconds of the confrontation, focusing on the immediate actions and reactions of the characters involved.
- Include specific actions taken by the characters on either side of the confrontation.
- Describe the use of skills, abilities, equipment, tactics, and any maneuvers.
- Illustrate the interactions between the characters, including attacks, defenses, dodges, or spells.
- Convey the emotions, thoughts, and physical sensations experienced by the characters.
- Ensure the depiction is consistent with the world's lore, the characters' personal histories, and their abilities.
- Determine the immediate results of this segment of the confrontation based on your assessment.
- Be specific about any successes or failures of actions taken, including hits landed, misses, blocks, or counterattacks.
- Describe any injuries sustained, advantages gained, positions changed, or other consequences.
- Reflect any costs incurred, such as stamina lost, equipment damage, or exposure to danger.
- The confrontation doesn't need to end in this segment, unless you can clearly determine that the confrontation should conclude now.

3. Tone and Style:
- Write in the third person, present tense, maintaining an engaging and immersive narrative style.
- Use rich, descriptive language to bring the scene to life, focusing on action and immediacy.
- Ensure consistency with the established world setting and the characters' personalities.
Example Format: "With a fierce battle cry, [Player Name] charges forward, swinging their sword toward [Opponent Name]..."
Your Task: Using the above instructions and information, craft a compelling narrative that captures this intense moment of confrontation, reflecting both the actions and immediate consequences, while advancing the story in a meaningful way. Important: do not repeat verbatim previous confrontation rounds; there should always be progress in the confrontation.
If the segment at the end of the previous audio was anything to go by, it worked perfectly. To be honest, I was dreading creating a Confrontation action, because I thought it would be the Next Big Part of the app, but it turns out that at this point in the AI revolution, LLMs are intelligent enough to “get it.”
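Assembling a Confrontation Round request could look something like the sketch below. The function and the truncated instruction constant are my own placeholders; the point is that keeping previous rounds in the prompt is what lets the model honor the “do not repeat verbatim” rule:

```python
# Hypothetical sketch of building a Confrontation Round prompt:
# the focused instructions, the scene context, and the accumulated
# previous rounds so the model can advance the fight instead of
# repeating itself.

CONFRONTATION_INSTRUCTIONS = (
    "You are to generate a detailed narrative describing a brief segment "
    "(approximately ten seconds) of a confrontation..."  # truncated
)

def build_confrontation_prompt(context: str,
                               previous_rounds: list[str]) -> str:
    parts = [CONFRONTATION_INSTRUCTIONS, f"Context: {context}"]
    if previous_rounds:
        parts.append("Previous rounds (do not repeat verbatim):")
        parts.extend(f"- {round_text}" for round_text in previous_rounds)
    return "\n".join(parts)

prompt = build_confrontation_prompt(
    "The detective trades blows with an eldritch horror in a banquet hall.",
    ["The detective lands a desperate right hook on the writhing mass."],
)
print(prompt)
```

Each generated round would then be appended to `previous_rounds` before requesting the next one, until the narrative concludes the fight.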
The last part ended with our team of heroes begrudgingly agreeing to partake in the trials of a shadowy, shady alien from the dimension they had ventured into.
For this part, I decided to implement a Grow Event button in the chat interface. Basically, the user writes the guideline for what they would like to happen at that point of the dialogue. Then, a dedicated prompt sent to a large language model will cause the AI to expand the suggested event into a literary form. There are two or three examples of that in the audio you may have listened to. Great for when you don’t want to bother behaving as both the narrator and the player character.
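The Grow Event flow could be sketched as a small chat request: the user’s guideline becomes the user message, and a system prompt asks the model to expand it into narration. The message shapes below follow the usual OpenAI-style chat format, but the wording is mine, not the app’s actual prompt:

```python
# Hypothetical sketch of the Grow Event button: the user's one-line
# guideline is expanded by the model into literary narration that
# fits the ongoing scene.

def build_grow_event_messages(guideline: str,
                              scene_summary: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                "Expand the user's suggested event into a short literary "
                "narration consistent with the ongoing scene.\n"
                f"Scene so far: {scene_summary}"
            ),
        },
        {"role": "user", "content": guideline},
    ]

messages = build_grow_event_messages(
    "A tear in reality opens above the banquet table.",
    "The team sits at an unfathomable feast hosted by Zha'thik.",
)
print(messages[1]["content"])
```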
I’ve already played through most of the scene that will be featured in the next part. Man, it was emotionally taxing.
In the previous episode of this thing, our team of Earthlings, having ventured through a rift into another dimension, encountered a half-buried building in an obsidian desert. Now, they enter it. The following audio is an episode-long account of what happened.
That’s it for today narrative-wise. By the way, this is the portrait that the app generated for Zha’thik.
As I was thinking about what else I could do with character memories, which the large language models use to roleplay the characters better, I ended up implementing a system to generate any character’s worldview, his or her personal philosophies, influenced by their bio and their experiences. That means that when the user wants to deepen the roleplaying of any of the characters at an arbitrary point of the journey, their worldview can be generated, and it will reflect the journey up to that point. Some characters could start naive, only for their worldview to change after nasty experiences. And of course, the generation of self-reflections receives all the memories as input, including any worldview texts.
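In rough terms, the loop might look like this. A stub stands in for the real LLM call, and the class and field names are assumptions; the key idea is that the generated worldview is stored back among the memories, so later self-reflections see it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of worldview generation: the character's bio and
# every memory (including earlier worldview texts) are fed to the
# model, and the result is stored back as a memory.

@dataclass
class Character:
    name: str
    bio: str
    memories: list[str] = field(default_factory=list)

def generate_worldview(character: Character, llm) -> str:
    prompt = (
        f"Bio: {character.bio}\n"
        "Memories:\n"
        + "\n".join(f"- {m}" for m in character.memories)
        + "\nWrite this character's current worldview."
    )
    worldview = llm(prompt)
    character.memories.append(f"Worldview: {worldview}")
    return worldview

def stub_llm(prompt: str) -> str:
    # Placeholder for the real large language model call.
    return "Knowledge is worth any sacrifice."

elara = Character("Elara Thorn", "A brilliant scholar.",
                  ["Ventured through a rift into the Mirage Wastes."])
generate_worldview(elara, stub_llm)
print(elara.memories[-1])
```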
This is the worldview generated for the brilliant scholar Elara Thorn, based on her bio and experiences up to this point:
Elara Thorn views the world as a place full of untapped knowledge waiting to be discovered, even if it means venturing into dangerous and morally ambiguous territory. Her philosophical beliefs are rooted in the idea that the pursuit of truth and understanding justifies the risks and sacrifices made along the way. She believes that knowledge is power and that those who possess it have an obligation to use it for the greater good, even if it means pushing the boundaries of conventional morality. However, her worldview is not without its challenges and doubts. As she delves deeper into the mysteries of the multiverse, Elara is faced with the harsh realities of the consequences of her actions. She begins to question whether her obsession with knowledge has blinded her to the cost of her pursuit, and whether the sacrifices she’s made have been worth it. Elara’s moral standing is complex and evolving. While she believes in the importance of doing what is right and protecting those who cannot protect themselves, she also recognizes the necessity of making difficult choices in the face of unimaginable danger. She struggles with the weight of her own humanity, constantly grappling with the question of whether her actions are truly justified or if she has become a monster in pursuit of her own desires. Despite the doubts and challenges that confront her, Elara remains steadfast in her belief that the pursuit of knowledge is worth the risks and sacrifices, and that she has a duty to use her brilliance to make a difference in the world, even if it means navigating the treacherous waters of the unknown.
My Writers’ Room has come up with an intriguing concept for the next episode. I can’t wait to play it out.
We abandoned our heroes as they were about to approach an interdimensional rift, hoping to rescue a certain missing teenager.
Here comes a problem: although I had reworked the entire app to include an overarching Story Universe concept so that other worlds could be added, that doesn’t change the fact that the world on the other side of the rift isn’t connected to Earth, but to the Story Universe. I’m not sure how often this will happen in a story, so instead of programming the action of adding a new world, with its regions and areas, into the story universe, I just added that hierarchy manually in the JSON file that houses the map. As I use the app more, I’ll see whether adding new worlds happens often enough to warrant programming it into the interface. What I did program was the simple notion of teleporting to another area from any area or location, as that could be useful from a storytelling perspective: a simple select to choose the area, then a button to teleport the player and his or her followers there.
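Stripped of the interface, the teleport itself is a small operation over the map data. This sketch assumes a simple mapping of area identifiers to the characters currently there; the identifiers and shape are made up for illustration:

```python
# Hypothetical sketch of the teleport feature: move the player and
# his or her followers from one area to another in the map data.

def teleport(map_data: dict, party: list[str],
             origin: str, destination: str) -> dict:
    """Relocate every party member from origin to destination."""
    for character in party:
        map_data[origin].remove(character)
        map_data[destination].append(character)
    return map_data

world_map = {
    "providence": ["Detective", "Elara", "Thaddeus"],
    "mirage_wastes": [],
}
teleport(world_map, ["Detective", "Elara", "Thaddeus"],
         "providence", "mirage_wastes")
print(world_map)
```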
The team dares to venture through an interdimensional rift, into somewhere else.
That’s all narrative-wise for today. Still twenty minutes of audio.
Being honest here, I wasn’t sure what the team might find inside the ruins. I wished I had implemented that Writers’ Room idea of mine: relying on a little swarm of AI agents, each focused on an aspect of storytelling, so I could ask them questions about the story and have them elaborate on it. So I did: I programmed the Writers’ Room.
I have created a small team of AI agents. The user’s request first lands on the showrunner agent, which considers whether it can handle the request itself. If not, the showrunner delegates the task to more specialized agents, namely the story editor, the character development agent, the world-building agent, the continuity manager, the researcher, the theme agent, the plot development agent, and the pacing agent. The code provides each agent with data from other corners of the app; for example, the continuity manager receives the list of facts that one can introduce into the Story Hub to keep track of what’s going on, while the plot development agent receives all the goals, plot twists, and scenarios that have been generated in the Story Hub. It all works surprisingly well. Fun times!
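The delegation step could be sketched as a routing function. In reality the showrunner is itself an LLM deciding where a request belongs; the keyword lookup below is only a stand-in for that decision, with a partial, made-up mapping of specialists:

```python
# Hypothetical sketch of the Writers' Room delegation: the showrunner
# either handles a request or routes it to a specialized agent.
# Keyword matching stands in for the real LLM-driven decision.

SPECIALISTS = {
    "character": "character development agent",
    "world": "world-building agent",
    "continuity": "continuity manager",
    "theme": "theme agent",
    "plot": "plot development agent",
    "pacing": "pacing agent",
}

def route_request(request: str) -> str:
    """Return the agent that should handle the user's request."""
    lowered = request.lower()
    for keyword, agent in SPECIALISTS.items():
        if keyword in lowered:
            return agent
    return "showrunner"  # the showrunner handles it directly

print(route_request("Does this twist contradict earlier continuity?"))
```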
Three or four days ago, the RunPod templates that render voices stopped working. At least one of them is operative again, so that’s great. In the meantime, using the system with silent characters was off-putting enough that I took advantage of the break to improve fundamental parts of the app. I’ve forgotten most of what I worked on, but the point is that I’m back, so here’s more voiced narrative.
I implemented a Travel action that is only accessible when one moves from an area (such as a city) to another. The ongoing adventure in this eldritch setting had led us away from Providence toward some interdimensional rifts in the wilderness, to where a missing girl could have traveled or have been brought.
After rendering those voice lines, I realized that the travel audio should be produced in the player’s voice, not the narrator’s. Oops. Anyway, fixed for the next time.
Such long voice lines take about forty seconds in total to render. I thought they were perfect candidates for parallelism, which Python isn’t particularly fond of, but after implementing it, it turned out that the RunPod server that handles voice generation isn’t fond of multiprocessing either, so I had to revert all that programming.
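A hedged sketch of what such an attempt could look like: since the rendering calls are I/O-bound HTTP requests, a thread pool sidesteps the GIL concern. `render_voice_line` is a stand-in for the real call to the voice server, which in practice rejected concurrent requests, forcing a return to sequential rendering:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel voice rendering with a thread pool. The real
# request to the RunPod endpoint is replaced by a placeholder.

def render_voice_line(line: str) -> str:
    # Placeholder for the HTTP request that renders one voice line.
    return f"audio for: {line}"

lines = ["The rift crackles.", "Elara gasps.", "Thunder rolls."]
with ThreadPoolExecutor(max_workers=3) as pool:
    # pool.map preserves the input order of the lines.
    audio = list(pool.map(render_voice_line, lines))
print(audio)
```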
My player character had some observations to make about his new surroundings.
The team gathers in the wilderness, an unusual setting for scholars and police detectives.
I usually discover new features to implement as I’m playing, whenever I realize I’d love for some functionality to be present. At times, as I was chatting, I wanted the AI to give me a “push”: to move the narrative just a few sentences forward based on the ongoing dialogue and what it knows about the characters. So I implemented a button that injects a narrative beat. The part at 2:55 in the previous audio is an example of it.
I also had to redesign the interface of the chat page. Those buttons are showing spinners because the server is processing a speech part for that form. The first button sends the speech written above. The second button produces ambient narration, and the third a narrative beat. The part below allows the user to introduce an event; sometimes I just want to state that the characters are doing something without having to trigger a dialogue response from other characters. The event gets voiced, and injected into the transcription so the rest of the characters can react to it.
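The event-injection part of that form reduces to appending a narrator entry to the transcription, so every character’s next prompt includes it. The entry shape below is an assumption for illustration:

```python
# Hypothetical sketch of the "introduce an event" control: the event
# text joins the transcription as a narrator entry, letting the other
# characters react to it in their next responses.

def inject_event(transcription: list[dict], event_text: str) -> list[dict]:
    transcription.append({"speaker": "narrator", "text": event_text})
    return transcription

transcript = [
    {"speaker": "Detective", "text": "Stay close to me."},
    {"speaker": "Elara", "text": "The walls are breathing."},
]
inject_event(transcript, "A second tear in reality splits the ceiling.")
print(transcript[-1])
```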
Before we continue with the narrative, which will involve the results of an Investigate action, I wanted to showcase some character portraits that the app has generated these past few days for other test stories.
It’s so strange to have gotten used to this level of quality coming from artificial intelligence. My reaction to seeing those pictures generated for a newly created character was, “Oh, that looks good. Moving on.”
Anyway, here’s the result of the Investigate action:
I considered for a while whether I should create a Track action, but that isn’t different enough from an investigation, as far as I’m concerned. One of the great things about this LLM-led system is that I had no idea what was going to happen, and now it’s “canon” that Elara Thorn has found a rift, and also discovered an artifact that could guide them to rifts on either side of them.
Forgive my player character for his inability to say “vicinity” correctly. He’s in way over his head.
As I mentioned recently, I realized that I hadn’t taken into account that not all stories played through this app would take place in the same “world,” which at some point had come to mean, mechanically, a single planet. Rather obvious in retrospect. So I had to program in the overarching notion of a Story Universe. Along the way, I realized I would need to tinker with plenty of other parts of the app that I dreaded touching, mainly the chat system and the classes that interact with the large language models (the AI). Those were the first ones I had programmed, so they were a mess. Thanks to the vulture library, which finds unused code in a project, I’ve removed every single remnant of the initial implementation, and now the chat system is very sturdy.
I intended to write an app focused on offering the user (mainly me) the choice to experience stories through it, but I was spending plenty of time programming code to interact with the AI and parse out its responses. I figured that there should be some industry-standard libraries out there to save me the trouble, and indeed there are two: pydantic and instructor. Pydantic lets you create “response models” without having to write JSON files; it handles all the parsing by itself, programmed by far more intelligent people. Instructor wraps the OpenAI API and offers compelling functionalities on top of it, such as retries, validators, streaming, etc. Check out its website.
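A minimal sketch of a pydantic response model, with made-up fields: the class declares the shape of the data, and pydantic parses and validates raw JSON (such as an LLM’s reply) into a typed object. In the instructor workflow, a model like this is passed as `response_model` to the wrapped OpenAI client, which retries when the reply doesn’t validate:

```python
from pydantic import BaseModel

# A pydantic "response model": declares the expected structure of a
# reply, and parsing/validation comes for free (pydantic v2 API).

class CharacterSheet(BaseModel):
    name: str
    occupation: str
    goals: list[str]

raw_reply = (
    '{"name": "Elara Thorn", "occupation": "scholar",'
    ' "goals": ["map the rifts", "survive"]}'
)
sheet = CharacterSheet.model_validate_json(raw_reply)
print(sheet.name, sheet.goals)
```

If the JSON were missing a field or had the wrong type, `model_validate_json` would raise a `ValidationError` instead of silently producing a malformed object, which is exactly the parsing drudgery the libraries take off your hands.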
Anyway, it has taken me days to return the app to a usable state, but now it’s far more robust, bolstered by about 400 tests, and the changes will allow me to introduce new actions, pages, etc. far faster.
Anyway, let’s get back to my first serious playthrough, focused on exploring the edges of the system to figure out what stuff I should program in.
My protagonist meets with his two allies, his partner and Elizabeth’s distraught father, in the rainy streets of Providence.
A few days ago I programmed in a new type of action, named Gather Supplies. As I mentioned in the past, I intend to program in every kind of action that could come up in a story and that wouldn’t get resolved through a dialogue. Here is the narrative and the outcome of the team’s efforts to gather supplies for this endeavor.
As with any other action, the AI is prompted to factor the player’s and their followers’ personalities and general resourcefulness into the equation when coming up with the resolution.
Both teams gather at night in the outskirts of Providence, ready for a trip into the unknown. Elara has brought an acquaintance of hers, a middle-aged professor named Thaddeus Armitage.
Are those fused fingers I spy? Quite eldritch, I’d say.
Next up, adding a new area to the map and traveling to it, which will involve an audible narrative.