Intelligent agents for simulations (using LLMs) #2

The previous entry ended on a sour note because I had become frustrated by a problem with the navigation system: when updating the nodes of the environment tree after the changes some agent had caused, other agents were losing the parent and/or children of the nodes they were currently in. Although Python’s casual way of handling references contributed, it was my fault: I was replacing nodes in the main tree and disconnecting the old ones, but other agents ended up holding the disconnected copies because they had already stored references to them. Oops. These are the kinds of errors that Rust was made to prevent.

Anyway, I rewrote that whole part of the code; now, instead of replacing the whole node, I just modify the values of the Location or SandboxObject that the node contains. Not sure why I thought that replacing the whole node was a good idea.
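The gist of the fix, as a minimal sketch (the `Node`/`Location` names and attributes here are illustrative, not the project's actual classes): mutate the payload inside the node, so every agent holding a reference to that node keeps seeing a connected, up-to-date node.

```python
class Location:
    """Hypothetical payload stored in an environment tree node."""

    def __init__(self, name, description):
        self.name = name
        self.description = description


class Node:
    """Hypothetical tree node; parent/children links must survive updates."""

    def __init__(self, payload, parent=None):
        self.payload = payload
        self.parent = parent
        self.children = []


def update_location(node, new_description):
    # Mutate the payload in place. Agents that stored this node keep
    # a connected node with fresh values, instead of being stranded
    # with a stale, disconnected copy (the original bug).
    node.payload.description = new_description


root = Node(Location("house", "quiet"))
kitchen = Node(Location("kitchen", "messy"), parent=root)
root.children.append(kitchen)

agent_view = kitchen          # some other agent already stored this node
update_location(kitchen, "clean")
# agent_view still sees the update and still has its parent link
```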

Here’s the GitHub repository with all the code.

In addition, I was getting annoyed because running the unit tests took several seconds, as some of the tests made requests to the AI models. There was no reason to keep testing those calls, because I know they work, so I refactored the code to receive those calls as functions passed by parameter, which lets the tests supply stubs instead. A cool thing about (outrageously slow) garbage-collected languages like Python is that you can do crazy shit like casually passing functions as parameters, or duck-typing (passing any kind of object to a function, and the code works as long as the object responds to the same calls). Regardless, now the unit tests run in less than a second.

I also quickly figured out how to implement human-handled agents. If you set "is_player": "true" in the corresponding agent’s section of the agents.json file, then whenever the code attempts to call the AI model, it prompts you to answer with your fingers and keyboard instead. That means that in the future, when the simulation involves observations, planning, reflections, and dialogue, AI-handled agents could stop the player to have an in-character conversation. Very nice.
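Routing the prompt is a single branch at the point where the model would be called. A sketch under assumed names (`get_decision` and the config keys are illustrative; only "is_player" comes from the post):

```python
import json


def get_decision(agent_config, prompt, ask_model, ask_user=input):
    """Route a prompt either to the AI model or to the human player.

    'agent_config' stands for one agent's entry from agents.json;
    'ask_model' and 'ask_user' are injected so this is easy to test.
    """
    if agent_config.get("is_player") == "true":
        print(prompt)
        return ask_user("> ")  # the player answers at the keyboard
    return ask_model(prompt)


config = json.loads('{"name": "bob", "is_player": "true"}')
answer = get_decision(config, "Someone greets you. What do you say?",
                      ask_model=lambda p: "(model output)",
                      ask_user=lambda p: "Hello there!")
```

With "is_player" absent or not "true", the same call falls through to the model, so the rest of the simulation never needs to know who is behind an agent.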

The biggest change to the architecture involved separating environment trees: a main one for each simulation, plus agent-specific environment trees that come with their own JSON files. A bit more annoying to set up, but it allows agent-specific knowledge of the environment. For example, an agent knows their house and some nearby stores, but not the interiors of other houses. I could implement, without much trouble, the notion of an agent walking to the edge of their known world and "getting to see" an adjacent node from the simulation’s tree, which would get added to that agent’s environment. That new location would then become part of what the agent’s utility functions consider when choosing the most suitable locations and sandbox objects. Dynamic environment discovery almost for free.
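The discovery step amounts to diffing the agent's tree against the simulation's main tree at the node the agent has reached. A sketch using plain dicts for the trees (the project uses proper node classes; the function name is hypothetical):

```python
def discover_adjacent(agent_node, sim_node):
    """Copy into the agent's tree any child of the matching simulation
    node that the agent hasn't seen yet. Trees are plain dicts here
    purely for illustration."""
    known = {child["name"] for child in agent_node["children"]}
    for sim_child in sim_node["children"]:
        if sim_child["name"] not in known:
            # The agent "gets to see" this location; from now on the
            # utility functions that rank locations can consider it.
            agent_node["children"].append(
                {"name": sim_child["name"], "children": []})


# The simulation's main tree knows about the store; the agent doesn't yet.
sim = {"name": "town", "children": [{"name": "house", "children": []},
                                    {"name": "store", "children": []}]}
agent = {"name": "town", "children": [{"name": "house", "children": []}]}
discover_adjacent(agent, sim)  # the store joins the agent's known world
```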
