Intelligent agents for simulations (using LLMs) #2


The previous entry ended on a sour note because I had become frustrated by a problem with the navigation system: when updating the nodes of the environment tree after the changes that some agent had caused, other agents were losing the parent and/or children of the nodes they were currently in. Although Python’s casual way of handling references contributed, it was my fault: I was replacing nodes in the main tree and disconnecting them, but other agents ended up holding the disconnected copies because they had already stored references to the old nodes. Oops. These are the kind of errors that Rust was made to prevent.

Anyway, I rewrote that whole part of the code; now, instead of replacing whole nodes, I just modify the values of the Location or SandboxObject that each node contains. Not sure why I thought that replacing the whole node was a good idea.

Here’s the GitHub repository with all the code.

In addition, I was getting annoyed because running the unit tests took several seconds, as some of the tests made requests to the AI models. There was no reason to keep testing those calls, because I know they work, so I refactored the code so that the model calls are plain functions passed in as parameters. A cool thing about (outrageously slow) garbage-collected languages like Python is that you can do crazy shit like casually passing functions as parameters, or duck-typing (passing any kind of object to a function, and the code working as long as the object responds to the same calls). Regardless, now the unit tests run in less than a second.
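The idea, as a minimal sketch (the names are illustrative, not the actual ones from my repository):

class Agent:
    def __init__(self, name):
        self.name = name

def produce_action(agent, request_response_function):
    # The actual call to the AI model is injected as a parameter.
    prompt = f"What should {agent.name} do next?"
    return request_response_function(prompt)

# Production code passes the real API call:
# action = produce_action(joel, request_response_from_ai_model)

# Unit tests pass a canned response instead, so they never touch the network:
joel = Agent("Joel")
action = produce_action(joel, lambda prompt: "Joel is going to tend to the crops.")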

I also figured out quickly how to implement human-handled agents. If you set "is_player": "true" in the corresponding agent’s section of the agents.json file, whenever the code attempts to call the AI model, it prompts you to answer with your fingers and keyboard. That means that in the future, when the simulation involves observations, planning, reflections, and dialogue, AI-handled agents could stop the player to have an in-character conversation. Very nice.
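The dispatch boils down to something like this (a rough sketch with made-up names):

def request_agent_response(agent, prompt, ai_model_function):
    # Human-handled agents answer through the console instead of the AI model.
    if agent.is_player:
        print(prompt)
        return input("> ")
    return ai_model_function(prompt)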

The biggest change to the architecture involved separating environment trees into a main one for each simulation, plus agent-specific environment trees that come with their own json files. A bit more annoying to set up, but that allows agent-specific knowledge of the environment. For example, an agent knows of their house and some nearby stores, but not of the interior of other houses. I could implement without much trouble the notion of an agent walking to the edge of his or her world and “getting to see” an adjacent node from the simulation’s tree, which would get added to that agent’s environment. That new location would then be considered by the agent’s utility functions when they choose the most suitable locations and sandbox objects. Dynamic environment discovery almost for free.

Intelligent agents for simulations (using LLMs) #1

Next part (2)


A week ago I came across this damned paper (Generative Agents: Interactive Simulacra of Human Behavior). Also check out this explanation of the paper and its implementation. I called the paper damned because the moment I understood its possibilities (creating intelligent virtual agents has always been a dream of mine), the spark of obsession flared up in me, and for someone this obsessive, that could only mean one thing:

I figured that I was capable enough of implementing that paper using the Python programming language. In my first try, I thrust ahead, with very few unit tests, to implement the memories database, the planning and reflective functionalities, as well as the navigation system. When I hit the paper’s claim that GPT-3.5-turbo would simply output the name of the chosen destination, I realized that the writers of the paper must have simplified the process of navigation (if not finagled it quite a bit), because large language models don’t work like that. And without the agents being able to move around in the environment tree, this system wouldn’t work remotely as intended.

I gave it some thought and I ended up opting for utility functions. Used in basic AI for decades, a utility function scores each of a set of options, and the code then chooses the best-rated option (with some room for randomness). Thankfully, if you ask GPT-3.5-turbo to rate how well a “farmhouse” would fit the purpose of “eating lunch”, that location may be chosen, and from there you can ask the AI to rate the individual locations inside the farmhouse to continue the path.
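In code, the loop looks roughly like this (a sketch; the prompt wording and names are illustrative):

import random

def rate_option(option, purpose, ai_model_function):
    # Ask the model for a single number rating how well an option fits a purpose.
    prompt = (f"On a scale from 0 to 10, rate how well the location '{option}' "
              f"would fit the purpose '{purpose}'. Answer with a single number.")
    return int(ai_model_function(prompt).strip())

def choose_best_option(options, purpose, ai_model_function, randomness=0.1):
    ratings = {option: rate_option(option, purpose, ai_model_function) for option in options}
    # A small chance of picking at random keeps agents from being fully deterministic.
    if random.random() < randomness:
        return random.choice(options)
    return max(ratings, key=ratings.get)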

Implementing the navigation of the agents in such a way gave me the opportunity to start the code from scratch. This time I relied mostly on test-driven development (meaning that one or more unit tests support the part of the code you are working on), very important for a system that can suffer from plenty of regressions. I have 51 unit tests supporting the code so far.

Before I delve into further details, here’s the link to my GitHub repository with all the code I have written.

Here’s as far as I’ve gotten:

  • The user writes the environment of the simulation in a json file. I’m using the Python library ‘anytree’ to represent the tree, which also comes with this fancy command-line method of displaying any given tree:

Location: plot of land (plot_of_land)
├── Location: barn (barn)
│   └── Sandbox object: tools (tools) | description: lots of tools for farming
├── Location: field (field)
│   ├── Sandbox object: crop of corn (corn_crop) | description: a crop of growing corn
│   └── Sandbox object: crop of potatoes (potatoes_crop) | description: a crop of growing potatoes
└── Location: farmhouse (farmhouse)
    ├── Location: bedroom (bedroom)
    │   ├── Sandbox object: bed (bed) | description: a piece of furniture where people sleep
    │   └── Sandbox object: desk (desk) | description: a piece of furniture where a person can read or write comfortably
    ├── Location: kitchen (kitchen)
    │   └── Sandbox object: fridge (fridge) | description: an appliance that keeps food cold
    └── Location: bathroom (bathroom)
        ├── Sandbox object: toilet (toilet) | description: a simple toilet
        └── Sandbox object: bathtub (bathtub) | description: a tub in which to take baths
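Loading that json into ‘anytree’ and rendering it like the listing above takes very little code. A rough sketch (the json format here is hypothetical, not the exact one from my repository):

import json
from anytree import Node, RenderTree

def build_environment_tree(entry, parent=None):
    # Hypothetical format: every entry has a name and a list of nested children.
    node = Node(entry["name"], parent=parent)
    for child in entry.get("children", []):
        build_environment_tree(child, parent=node)
    return node

with open("environment.json") as file:
    root = build_environment_tree(json.load(file))

for prefix, _, node in RenderTree(root):
    print(f"{prefix}{node.name}")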

  • For this test, I pictured a farmhouse, a nearby barn, and a couple of nearby crops.
  • I added the needed information for the agents involved in the corresponding json file:

{
    "Joel": {
        "age": 38,
        "planned_action": null,
        "action_status": null,
        "current_location_node": "farmhouse",
        "destination_node": null,
        "using_object": null
    },
    "Betty": {
        "age": 24,
        "planned_action": null,
        "action_status": null,
        "current_location_node": "farmhouse",
        "destination_node": null,
        "using_object": null
    }
}

  • As depicted, agents can have a planned action, an action status, their current location, their destination, and what object they are using, apart from their age and name.
  • As the paper says, each agent’s memories are seeded from a few memories that are written in a simple text file.

Betty started living with Joel six years ago;Betty enjoys planting crops and tending to them;Betty wishes that she and her boyfriend Joel could afford to buy a few horses;Betty wishes she didn’t have to travel to the nearby town often;Betty loves peace and quiet;When Betty feels overwhelmed, she lies down on her bed and listens to ASMR through her noise-cancelling headphones;Betty wants to have a child, hopefully a daughter, but Joel told her years ago that he didn’t want children;Joel and Betty saw strange lights in the skies in the 11th of May of 2023;On the 11th of May of 2023, the local news broadcast was cut off in the middle of a transmission as they were reporting on the strange lights in the sky;Joel and Betty have heard ominous explosions in the distance throughout the morning of the 12th of May of 2023

Joel is a farmer who has lived in his plot of land for ten years;Joel is living with his girlfriend, Betty, whom he loves;Joel enjoys listening to music from the sixties;Joel wishes he and Betty could buy a few horses, but he doesn’t think they can afford them;Joel and Betty saw strange lights in the skies in the 11th of May of 2023;On the 11th of May of 2023, the local news broadcast was cut off in the middle of a transmission as they were reporting on the strange lights in the sky;Joel and Betty have heard ominous explosions in the distance throughout the morning of the 12th of May of 2023

  • These seed memories are inserted into a memories database. Its implementation is one of the most important parts of this system. The database is composed of a vector database (which stores values according to some eldritch, multidimensional distance between them), and a json file that records readable values of those memories. You can query the vector database with simple text, and it will return the most relevant text it contains regarding the query you sent it. Sort of like a recommendation system.
  • In addition, when the agents’ memories are queried, the results are rated according to each memory’s importance and recency (a rough sketch of this scoring follows the next bullet). The recency is a normalized value in the range of [0.0, 1.0] depending on when that memory was last accessed, and the importance is a rating that the AI itself gave to how poignant the memory was.
  • Once the agents are loaded in the simulation, a character summary is created for each agent. The AI is prompted to write all the segments of these summaries only from the memories present in their vector databases.
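The retrieval scoring amounts to something like this (a rough sketch: the decay window and weights are illustrative, and the relevance value would come from the vector database query):

def score_memory(memory, relevance, now):
    # Recency: 1.0 if just accessed, decaying toward 0.0 over a day (arbitrary window).
    hours_since_access = (now - memory.last_accessed).total_seconds() / 3600.0
    recency = max(0.0, 1.0 - hours_since_access / 24.0)
    # Importance: the poignancy rating (out of 10) the AI assigned when the memory was stored.
    importance = memory.importance / 10.0
    return recency + importance + relevance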

Joel’s AI-written character summary

Joel (age: 38)

Innate traits: Hardworking, resilient, curious, observant, caring, nostalgic, pragmatic.

Joel is a farmer who has lived in his plot of land for ten years and wishes he and Betty could buy a few horses, but he doesn’t think they can afford them. He enjoys listening to music from the sixties and is living with his girlfriend, Betty, whom he loves. They have experienced ominous explosions in the distance and saw strange lights in the skies on the 11th of May of 2023, and the local news broadcast was cut off in the middle of a transmission as they were reporting on the strange lights.

Joel is a farmer who has lived in his plot of land for ten years.

It is difficult to accurately describe Joel’s overall feeling about his recent progress in life based on these statements alone. However, we can infer that he enjoys listening to old music, has heard ominous explosions and seen strange lights with Betty, is a farmer, loves his girlfriend, and wishes they could afford to buy horses. These statements do not provide enough information to determine his overall feelings about his recent progress in life.

Betty’s AI-written character summary

Betty (age: 24)

Innate traits: resilient, domestic, nurturing, longing, introverted, anxious, curious, intuitive, calm

Betty is a woman who enjoys living a peaceful and simple life with her partner Joel. She is interested in farming and wishes to own horses. Betty also has a desire to have a child, but this conflicts with Joel’s wishes. She dislikes having to travel to nearby towns and finds solace in listening to ASMR when she feels overwhelmed. Betty is aware of strange occurrences happening around her, such as explosions and lights in the sky, but she seems to maintain a calm and composed demeanor.

Betty is a homesteader or a farmer.

Betty is feeling somewhat unsettled and uncertain about her progress in life, as indicated by her mixed feelings about her living situation with Joel, her desire for a more stable and fulfilling life with horses and a child, and her occasional need for escapist relaxation techniques like ASMR. She enjoys the sense of purpose that comes from tending to her crops, but is also wary of the potential dangers and disruptions of the outside world, as shown by the ominous sounds and sightings of explosions and strange lights in the sky. Overall, she seems to value the simple pleasures of a quiet country life, but is also aware of the limitations and difficulties of this lifestyle.

  • If the simulation detects that the agents haven’t decided upon an action, it generates one through a complicated process.

2023-05-13T09:30:00 Joel produced action: Saturday May 13 of 2023, 9 AM. Joel is going to check with his neighbors to see if they have any information about the strange lights and ominous explosions they have been experiencing. The length of time it takes for Joel to check with his neighbors is not specified, so it is impossible to provide an answer in minutes.

2023-05-13T09:30:00 Betty produced action: Saturday May 13 of 2023, 9 AM. Betty is going to talk to Joel about their conflicting desires regarding having a child and try to come to a mutual understanding or compromise. The duration of this action is not specified in the given information. It can vary based on the discussion and resolution reached by Betty and Joel.

  • The AI model (GPT-3.5-turbo in this case) went over each agent’s memories and decided what action the agents would take, according to their circumstances. A problem with the paper’s approach already comes up: the agents decide a plan, then they search for the most fitting location, and then the most fitting object, to fulfill that plan. But here the AI decided that Joel should check with his neighbors to see if they have any information about the weird happenings, and there are no neighbors in this environment. In addition, Betty decided that she wants to talk to Joel, but the paper doesn’t offer a system for locating other agents that may not be in the immediate surroundings. There’s a system for dialogue, but I haven’t gotten that far.
  • Through the rating system for locations and objects (going from the root of the environment tree deeper and deeper), Joel came to this conclusion about what his destination should be, given the action he came up with:

2023-05-13T09:30:00 Joel changed destination node to: Node('/Location: plot of land (plot_of_land)/Location: farmhouse (farmhouse)/Location: kitchen (kitchen)/Sandbox object: fridge (fridge) | description: an appliance that keeps food cold')

  • Apparently Joel (therefore, the AI model) thought that the neighbors could be found in the fridge. From the root of the environment tree, between the choices of “farmhouse”, “barn” and “field”, obviously the farmhouse was the most adequate option to find people. Inside that house, it opted for the kitchen among all the possible rooms. Inside there was only a fridge, so the navigation system had no choice but to pick that. If I had added a telephone to the kitchen, the AI model likely would have chosen that instead to communicate with the neighbors.
  • Regarding Betty, to fulfill her action of talking to Joel, she chose the following destination:

2023-05-13T09:30:00 Betty changed destination node to: Node('/Location: plot of land (plot_of_land)/Location: barn (barn)/Sandbox object: tools (tools) | description: lots of tools for farming')

  • Interestingly enough, the AI model believed that her farmer boyfriend was more likely to be in the barn than in the farmhouse. Once the barn was chosen, the tools were the only option as to which object would help the agent fulfill her action.
  • The navigation system detected that the agents weren’t at their destination, so it started moving them:

2023-05-13T09:30:00 Joel needs to move to: Node('/Location: plot of land (plot_of_land)/Location: farmhouse (farmhouse)/Location: kitchen (kitchen)/Sandbox object: fridge (fridge) | description: an appliance that keeps food cold')
2023-05-13T09:30:00 Joel changed the action status to: Joel is heading to use fridge (located in Location: kitchen (kitchen)), due to the following action: Saturday May 13 of 2023, 9 AM. Joel is going to check with his neighbors to see if they have any information about the strange lights and ominous explosions they have been experiencing. The length of time it takes for Joel to check with his neighbors is not specified, so it is impossible to provide an answer in minutes.

2023-05-13T09:30:00 Betty needs to move to: Node('/Location: plot of land (plot_of_land)/Location: barn (barn)/Sandbox object: tools (tools) | description: lots of tools for farming')
2023-05-13T09:30:00 Betty changed the action status to: Betty is heading to use tools (located in Location: barn (barn)), due to the following action: Saturday May 13 of 2023, 9 AM. Betty is going to talk to Joel about their conflicting desires regarding having a child and try to come to a mutual understanding or compromise. The duration of this action is not specified in the given information. It can vary based on the discussion and resolution reached by Betty and Joel.

  • The simulation advances in steps of a specified amount of time, in this case thirty minutes, and the navigation system will move the agents a single location each step. This means that an agent would take thirty minutes to move from a bedroom to the entrance of the same house, but whatever.

2023-05-13T10:00:00 Joel changed the current location node to: Node('/Location: plot of land (plot_of_land)/Location: farmhouse (farmhouse)/Location: kitchen (kitchen)')
2023-05-13T10:00:00 Joel changed the current location node to: Node('/Location: plot of land (plot_of_land)/Location: farmhouse (farmhouse)/Location: kitchen (kitchen)/Sandbox object: fridge (fridge) | description: a piece of furniture that keeps food cold')

  • Joel moved from the entrance of the farmhouse to the kitchen, and once the agent is in the same location as his or her destination sandbox object, he or she starts using it. In this case, the fridge.

2023-05-13T10:00:00 Betty changed the current location node to: Node('/Location: plot of land (plot_of_land)')

  • Betty has moved from the farmhouse to the root of the environment tree.
  • In the next step of the simulation, Joel continues using the fridge, while Betty moves further to the barn, where she starts using the tools:

2023-05-13T10:30:00 Joel continues using object: Node('/Sandbox object: fridge (fridge) | description: a piece of furniture that keeps food cold')
2023-05-13T10:30:00 Betty changed the current location node to: Node('/Location: plot of land (plot_of_land)/Location: barn (barn)')
2023-05-13T10:30:00 Betty changed the current location node to: Node('/Location: plot of land (plot_of_land)/Location: barn (barn)/Sandbox object: tools (tools) | description: lots of tools for farming')

  • From then on, the agents will continue using the tools they have chosen to perform their actions:

2023-05-13T11:00:00 Joel continues using object: Node('/Sandbox object: fridge (fridge) | description: a piece of furniture that keeps food cold')
2023-05-13T11:00:00 Betty continues using object: Node('/Sandbox object: tools (tools) | description: lots of tools for farming')
2023-05-13T11:30:00 Joel continues using object: Node('/Sandbox object: fridge (fridge) | description: a piece of furniture that keeps food cold')
2023-05-13T11:30:00 Betty continues using object: Node('/Sandbox object: tools (tools) | description: lots of tools for farming')
2023-05-13T12:00:00 Joel continues using object: Node('/Sandbox object: fridge (fridge) | description: a piece of furniture that keeps food cold')
2023-05-13T12:00:00 Betty continues using object: Node('/Sandbox object: tools (tools) | description: lots of tools for farming')

That’s unfortunately as far as I’ve gotten with my implementation, even though it took me a week. The paper explains more systems:

  • Observation: as the agents use the objects, they are prompted with observations about what other objects in the same location are doing, and about the other agents that may have entered the location. That gives the busy agent an opportunity to stop what they’re doing and engage with other objects or agents.
  • Dialogue: the paper came up with an intriguing dialogue system that became one of my main reasons for programming this, but it requires the Observation system to be implemented fully.
  • Planning: once a day, the agents should come up with day-long plans that will be stored in memory, and that will influence the actions taken.
  • Reflection: when some condition is triggered, the agents retrieve the hundred most recent memories from their databases and prompt the AI model to create high-level reflections from those memories. The reflections get stored in the memory stream as well, so that they influence planning, decision-making, and the creation of further reflections (sketched below).
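From the paper’s description, that reflection step could look roughly like this (a sketch with made-up names, since I haven’t implemented it yet):

def create_reflections(agent, ai_model_function):
    # Gather the hundred most recent memories and ask the model to distill them.
    recent_memories = agent.memories.most_recent(100)
    prompt = ("Given the following memories, write a few high-level reflections "
              "about them:\n" + "\n".join(memory.text for memory in recent_memories))
    # Store each reflection back into the memory stream.
    for reflection in ai_model_function(prompt).splitlines():
        if reflection.strip():
            agent.memories.insert(reflection.strip())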

That’s as far as I remember of the paper at this moment. I wanted to get the navigation system done as early as possible, because I considered it the hardest part, and it annoyed the hell out of me partly due to how Python handles references (at one point I was losing the list of ancestors and descendants of the agents’ destination nodes between simple function calls to the same instance of Simulation, for no fucking reason that I could determine). I may move directly onto the Observation system, or take a rest. I already feel my sanity slipping quite a bit, likely because I took a sabbatical from my ongoing novel to do this.

AI news #3


If you are interested in what GPT-4 can do in games, check out part two of this series, because it showed how people are using ChatGPT for Skyrim and Fallout. Things have developed a bit further. The following video shows someone’s modded companion with whom the player can interact naturally, and who is aware of the world around her.

Truly amazing. As long as the world doesn’t end, imagine how amazing games will become in a few years. At least how amazing modders will make those games, if their companies can’t be arsed.

The following unassuming entry represented a turning point in how I use GPT-4:

Turns out that some clever people out there have proved that some prompts can squeeze significantly more intelligence out of GPT-4. The author of the video had the idea of pushing it a bit further by implementing the following architecture:

  • Given a prompt, ask GPT-4 through the API to give you three separate answers.
  • Send those three answers to GPT-4, asking it to act like a researcher and point out the flaws in those answers.
  • Send the answers along with the researcher’s analysis to GPT-4, and ask it to act like a resolver to determine which answer is the most valid (according to the researcher’s analysis), improve it a bit, and present it as the final answer.

I have implemented that system in Python; it was very easy to do, too. Although it takes significantly longer than just asking regular ChatGPT a question, the results are much better.
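The whole chain fits in a few lines (a sketch using the openai package as it exists at the time of writing; the prompt wording is my own paraphrase, not the video’s exact prompts):

import openai  # assumes openai.api_key has been set

def ask(prompt, n=1):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,
    )
    return [choice.message.content for choice in response.choices]

def smart_answer(question):
    # Step 1: three independent answers to the same prompt.
    answers = ask(question, n=3)
    numbered = "\n\n".join(f"Answer {i + 1}: {answer}" for i, answer in enumerate(answers))
    # Step 2: a researcher points out the flaws of each answer.
    analysis = ask(f"Question: {question}\n\n{numbered}\n\n"
                   "Act as a researcher: point out the flaws of each answer.")[0]
    # Step 3: a resolver picks the most valid answer, improves it, and presents it.
    return ask(f"Question: {question}\n\n{numbered}\n\nResearcher's analysis: {analysis}\n\n"
               "Act as a resolver: determine which answer is the most valid, improve it, "
               "and present it as the final answer.")[0]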

I thought about how mini-AGIs (Auto-GPT and BabyAGI) use GPT-4 to simulate artificial general intelligence, and I realized that if you can get GPT-4 to develop a list of tasks in a similar way the previous algorithm is structured, you can then link those tasks to the previous algorithm so that it executes each task in a loop.

I’m not that interested in programming an Auto-GPT at the moment; I’m happy enough getting a list of tasks that I can then send to GPT-4 at my leisure. However, I did code an addition to the previous program: one that sends an objective to GPT-4 and tells it to act like a manager developing a prioritized task list. As in the previous algorithm, it sends that request thrice, then passes those answers to a researcher, whose analysis gets passed to a resolver.

Here’s the list of tasks that resulted from sending to my program the objective of programming a particular card game, one I’m developing, in the Rust programming language:

  1. Initialize the game engine, including a GUI that supports displaying cards and decks on screen.
  2. Create data structures to represent the different card types: Exploration Zone, Biomes, Encounters, Events, Resources, Afflictions, Player cards, and Portal cards.
  3. Load card data from toml files into the appropriate data structures.
  4. Develop a function to generate a random Exploration Zone card from the available Exploration Zone cards.
  5. Display the randomly selected Exploration Zone card on the top right corner of the screen.
  6. Develop a function to create the Biomes deck, selecting the matching Biome types as specified in the chosen Exploration Zone card.
  7. Ensure the first Biome card of the Biomes deck is displayed face up next to the Biomes deck.
  8. Develop a function to create the Encounter, Events, Resources, and Affliction decks based on the Feature icons from the chosen Exploration Zone card.
  9. Implement player classes (soldier, medic, scientist, scout) with their respective attributes and abilities.
  10. Design the player decks for 4 players, each with a unique role, and include a Portal card to allow escaping to the Hub.
  11. Design the Hub and implement the card drafting system for the Hub, allowing players to choose cards to add to their decks while trading resources.
  12. Develop a system for players to rebuild their decks while in the Hub between explorations.
  13. Design the goal cards that specify objectives required for each Exploration.
  14. Implement a system that tracks combos and interactions between cards, such as activating abilities based on certain conditions.
  15. Create a data gathering and analysis system for role combinations, deck compositions, and card interactions to identify balance issues and adjust card values and abilities accordingly.
  16. Test the game extensively, iterating on card mechanics, balance, and player engagement.
  17. Polish graphics, UI, and overall user experience to ensure an enjoyable gaming experience.
  18. Incorporate playtesting, gather feedback, and make improvements to game mechanics, card designs, and balance based on feedback and data-driven insights.
  19. Create game documentation, tutorial, or in-game help system to provide instructions and guidance for players on how to play the game and understand its mechanics.
  20. Market and promote the game, develop marketing materials and promotional events to raise awareness about the game, attract players, and grow the community.

As you can see, you could pass most of those tasks to the first algorithm, which would come up with a good answer.

I found myself at times copy-pasting ideas from a working document to the prompt I was sending to GPT-4. I thought that there must be some automated system that would pick the most appropriate domain knowledge for whatever prompt I wanted to send to GPT-4. I asked ChatGPT if it could come up with an answer, and it suggested vector databases. I had no fucking clue what vector databases were, but now I know.

Vector databases translate information into “webs” of data according to the distance between the elements, or something to that effect. What’s important is that if you query the database with some information, it will return the most similar information it contains related to what you asked. Sort of like a recommendation system.

The ‘annoy’ library for Python seems to be one of the most commonly used tools for implementing vector databases, so that’s what I used. Now I just have to paste all the domain knowledge into a txt file, then run a simple program that registers the knowledge into the appropriate vector database. When I’m ordering the other couple of programs to either develop a task list or perform a task, they query the vector database to retrieve the most relevant domain knowledge. Pretty cool.
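The registering and querying look roughly like this (a sketch that assumes OpenAI’s embeddings; the file name and the example query are placeholders):

from annoy import AnnoyIndex
import openai  # assumes openai.api_key has been set

EMBEDDING_DIMENSIONS = 1536  # dimensionality of OpenAI's text-embedding-ada-002

def embed(text):
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return response["data"][0]["embedding"]

# Register the domain knowledge: one vector per paragraph of the txt file.
paragraphs = open("domain_knowledge.txt", encoding="utf-8").read().split("\n\n")
index = AnnoyIndex(EMBEDDING_DIMENSIONS, "angular")
for i, paragraph in enumerate(paragraphs):
    index.add_item(i, embed(paragraph))
index.build(10)  # number of trees; more trees, better precision at query time

# Query: retrieve the three most relevant pieces of knowledge for a prompt.
nearest = index.get_nns_by_vector(embed("how should the combat system work?"), 3)
relevant_knowledge = [paragraphs[i] for i in nearest]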

Interdimensional Prophets – Deckbuilder (Game Dev) #2


If someone had told me a few years ago, when I was obsessed with board and card games, that in a few days I would develop a Python program that generates game cards effortlessly, I would have jumped for joy. Working with Python, coming from Rust in particular, is like going from aerospace engineering to building toy rockets; thankfully, a card-generating program doesn’t require the speed and multiprocessing that Rust provides.

Anyway, check out the current cards I’m working with as I keep developing the program:

This is an Encounter card, which represents the weird shit that the team of explorers comes across as they explore alternate Earths. The name of the card and the image are self-evident. The tree icon indicates that this encounter can only happen in a biome with a matching icon. The row of icons below contains the Struggle icons; in the game, the players must match those icons with their player cards to consider the encounter beaten. The Struggle icons depicted are Emotional, Cognitive, and Environmental, respectively.

Images courtesy of Midjourney, of course.

Here are some Biomes:

I know that the icon designs don’t match each other, but whatever.

I must thank again the most powerful large language model we have access to: GPT-4. It’s like having an extremely knowledgeable veteran programmer ready to help you at all times. For example, an hour ago I thought that the icons could use some subtle drop shadows. I had no clue how to even begin programming that, so I just asked GPT-4. After a short back and forth (in the first attempt it drew shadows for the invisible parts of the alpha channel), the icons now drop perfect shadows. How about that?
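The trick is to build the shadow from the icon’s alpha channel, so that fully transparent pixels don’t cast anything. A sketch with Pillow (the offsets and colors are illustrative, not my exact values):

from PIL import Image, ImageFilter

def add_drop_shadow(icon, offset=(4, 4), blur_radius=6, shadow_color=(0, 0, 0, 160)):
    # Build a silhouette of the icon (assumed RGBA) from its alpha channel.
    silhouette = Image.new("RGBA", icon.size, shadow_color)
    shadow = Image.new("RGBA", icon.size, (0, 0, 0, 0))
    shadow.paste(silhouette, mask=icon.split()[-1])
    shadow = shadow.filter(ImageFilter.GaussianBlur(blur_radius))
    # Compose the shadow first and the icon on top, on a slightly larger canvas.
    width = icon.width + offset[0] + blur_radius
    height = icon.height + offset[1] + blur_radius
    canvas = Image.new("RGBA", (width, height), (0, 0, 0, 0))
    canvas.alpha_composite(shadow, dest=offset)
    canvas.alpha_composite(icon, dest=(0, 0))
    return canvas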

I have already started working on the Rust version of the game, using the Bevy crate, which seems to be the most advanced game development engine in that language. I have displayed a few encounter cards that move smoothly to the center of the screen for no particular reason other than that I wanted to figure out how to move stuff smoothly on screen.

Next up, I’ll focus on developing the necessary cards and Rust systems to make the following happen:

  • Design and generate an Exploration Zone card, which are the cards that determine which types of Biomes and Encounters can show up during an exploration (only those with matching icons can).
  • Display the Exploration Zone card in Rust.
  • Write the code to build the Biomes deck with only matching Biomes.
  • Display the Biomes deck on screen in Rust.
  • Write the code to build the Encounters deck with only matching Encounters.
  • Display the Encounters deck on screen in Rust.

Interdimensional Prophets – Deckbuilder (Game Dev) #1


A couple of weeks ago I kept myself busy programming an exploration game based on an old free verse poem of mine. I had developed the core of the game, the encounter system, when it became obvious that for the game to feel remotely compelling (even for myself), I’d have to manually develop dozens or hundreds of encounters. The game as it was conceived couldn’t continue past that point, so I thought about what I liked the most about that concept:

  • A team of players cooperating to solve some issue.
  • Each player having special abilities.
  • Gaining resources, abilities, etc, for one of the players at a time.
  • Exploring strange places.
  • Encountering weird shit.
  • Events that could alter how some encounters play out.
  • Gaining injuries, diseases, etc.
  • Gaining mental afflictions.
  • Being able to regroup at the hub and determine the resources that would be used for the next exploration.

Damn it if that isn’t a deckbuilding game. Not surprising, given that one of my favorite games ever is Arkham Horror LCG, a card game in which a team of at most four players, each with his or her own deck, uses the resources and abilities contained in that deck to solve perilous situations and beat weird monsters. It also features a location system that forces the team to move around, although that’s probably my least favorite part of the game.

So I thought, why can’t I program a deckbuilding game?

First of all, I need a fast system to produce cards. I had looked up card-creation programs in the past, and I was extremely disappointed by how obscure they were to use. So I would need to develop one such program myself, tailored to the needs of my game.

So that’s what I’ve begun to do thanks to the insane Python skillz of ChatGPT. Behold the repository with the current version of my card generator:

Link to the GitHub repository for the card generator program

The first notion I had of such a program is that it should be able to take a background image, a card image, a frame image, and the necessary text, and generate a standard-sized card immediately. And so it does:

Yes, the cards even have rounded corners. Isn’t that fucking cool?
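The compositing itself is short with Pillow. Here’s a rough sketch of the approach (the size, positions, and corner radius are placeholders, not the values from my repository):

from PIL import Image, ImageDraw

def compose_card(background_path, art_path, frame_path, size=(750, 1050), corner_radius=40):
    # Layer the background, the card art, and the frame, in that order.
    card = Image.open(background_path).convert("RGBA").resize(size)
    art = Image.open(art_path).convert("RGBA")
    card.alpha_composite(art, dest=(75, 120))  # arbitrary position for the art
    card.alpha_composite(Image.open(frame_path).convert("RGBA").resize(size))
    # Round the corners by drawing a rounded rectangle into the alpha mask.
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).rounded_rectangle(
        (0, 0, size[0] - 1, size[1] - 1), radius=corner_radius, fill=255
    )
    card.putalpha(mask)
    return card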

I have become emboldened by the fact that I could get this far in a few hours. So to come up with ideas I have relied on the current king of mini-AGIs (artificial general intelligences), which for me is godmode.space (it requires a plus subscription to OpenAI and an API key; I had to wait for mine). Such AGIs are able to make plans, determine what tasks to perform, and criticize their own performance, in pursuit of fulfilling the goals you’ve told them to focus on. As I’m writing this, GPT-4 is running in the background, coming up with game ideas and mechanics for the notions I fed it. For example, these are some of the texts that GPT-4 has written:

  • An Afflictions deck will be created, which will add an element of chance and difficulty to the game. The deck will consist of afflictions such as injuries, diseases, and mental statuses that will be detrimental to the player when drawn. The severity of each affliction will be determined, and a variety of afflictions will be included to ensure that the deck does not become predictable.
  • Illusion: A card type that represents deceptive, disorienting elements a player might encounter on specific biomes. These cards could have effects that remove enemy cards from the encounter deck or switch the order of Biome cards in the Biome deck.
  • A card drafting system will be developed to allow players to choose which cards to add to their deck when trading resources. The system will allow players to have more agency and control over their deck, and add another layer of strategic thinking to the gameplay. The rules for the drafting system will be determined, such as how many cards are presented to the player, and how many of those cards can be added to their deck. The card drafting system will be tested to ensure that it provides an engaging level of strategy without compromising the overall gameplay.
  • Encounter cards will be matched with one or more features of the Exploration Zone card in play. This will reflect the biome, geography, or climate the team is exploring, making the game more interesting and exciting for players. Additionally, we will develop a mechanism for how Encounter cards are ‘beaten’ through spent Player cards that feature certain icons. This will give players more agency in the game and create more engaging gameplay.

Born too late to explore the Earth, born too early to explore space, born just in time for the AI revolution.

Repository for word-information-generator

Yesterday I made a blog post about developing a program in Python that provides plenty of useful information about any word passed as a command-line argument. Well, the program seems feature-complete now, so I have created a GitHub repository for it (to be honest, I should have created one from the beginning, as I had to backtrack some sweeping changes). Anybody who knows how to run Python programs should be able to use it. Otherwise, the instructions given on the repository might be enough.

Link to the GitHub repository for word-information-generator

I have been running the program already while writing my current novel. Along the way, if I realize that I want the program to offer me further information, I’ll likely add that in, so there may be further updates in the future.

Information about words thanks to ChatGPT

Yesterday, the shady company behind ChatGPT sent me an API key so I could do extra stuff with their GPT-4 AI model. I was mainly interested in using it for Auto-GPT.

Don’t you know what Auto-GPT is? Some clever people figured out that if you give ChatGPT access to the internet and various other tools (such as your operating system’s commands), and trap it in a loop of reasoning, planning and criticizing itself, you can drop into that loop some task, such as growing your business or gathering particular information from the web, and ChatGPT will work itself to the bone for you. They called this implementation Auto-GPT, and it’s the closest thing we’ve got, that I’m aware of, to AGI (artificial general intelligence), which is the holy grail of AI, and possibly the thing that will kill us all.

Anyway, here’s a video that shows you what this Auto-GPT can do (the video includes plenty of cool new stuff about AI):

I was aware that Auto-GPT can write, test and run Python code for you (apparently just Python; although I dislike the language, it seems to be the favorite of scripters who want stuff done quick, so you must be familiar with it). I started thinking about what I could tell Auto-GPT to do that would help me for real, and I came down to the fact that I look up words very, very often while writing, mostly to check particular definitions or to get synonyms. I search for the definitions of words so often, in fact, that Google has at times demanded that I prove I’m human. So what if ChatGPT could write me a Python program that would provide all the information I need about a word, with a single command from PowerShell?

The instructions were clear enough. Auto-GPT did write code that gave me synonyms, antonyms and some other shit for any word I would input, but when I ordered it to change the code so that the word got passed as an argument, Auto-GPT got mired in trying to figure out how to pass command-line arguments from within the Docker container with some dedicated functions.

When I gave it a break so it could write tests for the function in another file, it had trouble correcting the original code so that the tests would pass, but I think that was mostly my fault, as ChatGPT would need to have previous knowledge of, for example, what synonyms a word would have, and in that case, what’s the point of writing a test?

Anyway, I got bored with Auto-GPT itself, but not with the notion that ChatGPT could write that Python program, so that’s what I forced it to do in a couple of hours. Behold the results of passing the word “horse” as an argument:

Information about horse

Meaning of horse

  • solid-hoofed herbivorous quadruped domesticated since prehistoric times
  • a padded gymnastic apparatus on legs
  • troops trained to fight on horseback
  • a framework for holding wood that is being sawed
  • a chessman shaped to resemble the head of a horse; can move two squares horizontally and one vertically (or vice
    versa)
  • provide with a horse or horses

Part of speech for horse

  • verb (transitive)
  • noun

Etymology of horse

  • “solidungulate perissodactyl mammal of the family Equidæ and genus Equus” [Century Dictionary], Old
    Englishhors”horse,” from Proto-Germanicharss-(source also of Old Norsehross, Old Frisian, Old Saxonhors, Middle Dutchors, Dutchros, Old High Germanhros, GermanRoß”horse”), of unknown origin. By some, connected to PIE rootkers-“to run,” source of Latincurrere”to run.” Boutkan prefers the theory that it is a loan-word from an Iranian language
    (Sarmatian) also borrowed into Uralic (compare Finnishvarsa”foal”),The usual Indo-European word is represented by Old
    Englisheoh, Greekhippos, Latinequus, from PIE rootekwo-. Another Germanic “horse” word is Old Englishvicg, from Proto- Germanicwegja-(source also of Old Frisianwegk-, Old Saxonwigg, Old Norsevigg), which is of uncertain origin. In many
    other languages, as in English, this root has been lost in favor of synonyms, probably via superstitious taboo on
    uttering the name of an animal so important in Indo-European religion. For the Romanic words (Frenchcheval,
    Spanishcaballo) seecavalier(n.); for Dutchpaard, GermanPferd, seepalfrey; for Swedishhäst, Danishhestseehenchman. As
    plural Old English had collective singularhorseas well ashorses, in Middle English also sometimeshorsen, buthorseshas
    been the usual plural since 17c.Used at least since late 14c. of various devices or appliances which suggest a horse (as
    insawhorse), typically in reference to being “that upon which something is mounted.” For sense of “large, coarse,”
    seehorseradish. Slang use for “heroin” is attested by 1950. Toride a horse that was foaled of an acorn(1670s) was
    through early 19c. a way to say “be hanged from the gallows.”Horse latitudesfirst attested 1777, the name of unknown
    origin, despite much speculation.Horse-pistol, “large one-handed pistol used by horseback riders,” is by 1704. Adead
    horseas a figure for something that has ceased to be useful is from 1630s; toflog a dead horse”attempt to revive
    interest in a worn-out topic” is from 1864.HORSEGODMOTHER, a large masculine wench; one whom it is difficult to rank
    among the purest and gentlest portion of the community. [John Trotter Brockett, “A Glossary of North Country Words,”
    1829]The term itself is attested from 1560s.The horse’s mouthas a source of reliable information is from 1921, perhaps
    originally of racetrack tips, from the fact that a horse’s age can be determined accurately by looking at its teeth.
    Toswap horses while crossing the river(a bad idea) is from the American Civil War and appears to have been originally
    one of Abe Lincoln’s stories.Horse-and-buggymeaning “old-fashioned” is recorded from 1926 slang, originally in reference
    to a “young lady out of date, with long hair.” Tohold (one’s) horses”restrain one’s enthusiasm, be patient” is from
    1842, American English; the notion is of keeping a tight grip on the reins.

Synonyms of horse

  • horse_cavalry
  • Equus_caballus
  • sawbuck
  • buck
  • sawhorse
  • cavalry
  • horse
  • knight
  • gymnastic_horse

Related phrases and expressions with horse

  • to the horse in English
  • your high horse and don
  • is a horse dick !
  • : The horse ?
  • , that horse is peeing
  • a Spanish horse .
  • on a horse ?
  • [ a horse and carriage
  • [ the horse and carriage
  • get a horse too .
  • take a horse !
  • taking the horse and I
  • smells like horse shit .
  • I mean horse .
  • got that horse and his
  • — Horse carriages ,
  • only a horse could love
  • is a horse !
  • and the horse didn ‘
  • take this horse and I
  • your new horse , honey
  • light that horse on fire
  • get a horse thief financing
  • the smelly horse carriages on
  • with this horse .

Semantic field(s) of horse

  • chessman
  • provide
  • gymnastic_apparatus
  • framework
  • equine
  • military_personnel

Hyponyms of horse

  • pommel_horse
  • eohippus
  • pony
  • stalking-horse
  • pinto
  • sorrel
  • steeplechaser
  • liver_chestnut
  • mesohippus
  • roan
  • remount
  • hack
  • wild_horse
  • workhorse
  • palomino
  • pony
  • gee-gee
  • pacer
  • stablemate
  • male_horse
  • bay
  • racehorse
  • harness_horse
  • protohippus
  • chestnut
  • trestle
  • vaulting_horse
  • hack
  • mare
  • saddle_horse
  • stepper
  • post_horse
  • polo_pony

Hypernyms of horse

  • military_personnel
  • gymnastic_apparatus
  • chessman
  • equine
  • framework
  • provide

Meronyms of horse

  • horseback
  • cavalryman
  • encolure
  • foal
  • gaskin
  • horse’s_foot
  • poll
  • horsemeat
  • withers

Domain-specific words related to horse

  • armed_forces
  • military_machine
  • chess
  • war_machine
  • chess_game
  • armed_services
  • military

Associated nouns with horse

  • horse

Associated verbs with horse

  • horse

Stylistic variations of horse

  • bathorse
  • horsepox
  • horseflesh
  • dishorse
  • horselike
  • horselaughter
  • ahorseback
  • horsewhipper
  • sawhorse
  • demihorse
  • horsekeeper
  • Horsetown
  • horsefettler
  • horseshoe
  • horsehead
  • ahorse
  • horseman
  • horsetree
  • underhorsed
  • woodhorse
  • horsehide
  • drawhorse
  • horsewomanship
  • overhorse
  • horsetongue
  • horsemint
  • horseleech
  • horsecloth
  • clotheshorse
  • horseboy
  • horseherd
  • horseload
  • horseplay
  • horsepower
  • horsedom
  • horsefish
  • horsetail
  • horsepond
  • horsefair
  • horsehood
  • rearhorse
  • horselaugh
  • horsemastership
  • horsefly
  • cockhorse
  • waterhorse
  • horsehair
  • horseway
  • horsebreaker
  • horsemonger
  • horsewoman
  • unhorse
  • horsegate
  • horsehoof
  • horseweed
  • horser
  • horseshoer
  • horsefight
  • horsewood
  • horselaugher
  • horsemanship
  • studhorse
  • horsecraft
  • horsefoot
  • horsejockey
  • horseback
  • horseplayful
  • horsewhip
  • underhorse
  • horsehaired
  • horsebacker
  • hobbyhorse
  • horsecar
  • horseless

A couple of weird points about this implementation, although they don’t bother me:

  • I gather the etymology from a website, but some words end up stuck together for whatever reason. Also, no paragraphs. Not sure if it can be fixed, because the HTML tags don’t come through in the request.
  • The section “Related phrases and expressions” only looks for a few words around the passed word, from a dataset that ChatGPT recommended. The results are often strange.

Because I’m obsessive (and compulsive), I kept bothering ChatGPT by telling it to come up with more useful information that the program could provide about any given word. I didn’t know what a hyponym was.

Anyway, this little program ended up being a great tool for writing, which is what I should have done with my afternoon instead of getting involved with ChatGPT. Auto-GPT itself has huge potential; I probably just need to come up with better use cases for it.

Interdimensional Prophets (Game Dev) #6


I had finished programming the non-visual part of Team Struggles (a part of the encounter system that pits character traits and psychological dimensions against some performance thresholds) when I faced the fact that the game was loading too damn slow. I admit, I have been a bit overeager demanding more anime photo IDs from Midjourney, and they are completely unoptimized, but still, I figured that this project could load much faster. So I came up with the following solutions:

  • Lazy loading. Instead of loading encounter, biome, and photo ID images at once, just the image path is registered. Right before I need to draw a certain image, I check if it has been loaded, and if it hasn’t, I load it. That makes it so that the many images that won’t be seen in a particular testing session won’t need to be loaded at all. This change alone has sped up game loading significantly.
  • Multithreading. This project didn’t feature any multithreading up to this point, as it is a static, 2D strategy game, but the process of loading the various parts (most of them from TOML files) could use some parallelism. My previous experience with this subject involved trying to develop a Dwarf Fortress-like simulation in Python, only to realize that Python isn’t suited for parallelism, nor for remotely big simulations at all, due to its garbage-collected nature and an interpreter core that is locked to a single thread. However, Rust has mature crates that make multithreading relatively simple.

I asked GPT-4 to give me an overview of multithreading in the Rust programming language. It suggested a combination of the “rayon” and “crossbeam-channel” crates. The process works like this:


let (sender, ecs_receiver) = crossbeam_channel::unbounded();


You declare a sender and a receiver. The sender puts the work done on a different thread onto a queue, and the receiver remains on the main thread, extracting whatever shows up on that queue. Those endpoints don’t need to disconnect: they are open channels. I assume that you could have a dedicated thread pumping out pathfinding-related calculations back to the main thread.

Spawning a thread is as easy as the following:

std::thread::spawn(move || {
    load_ecs_threaded(sender);
});

The “move” keyword is tricky: the closure takes ownership of anything it captures from the main thread, even values you’d normally just clone, so to keep using something on both threads you need to wrap it in an “Arc” (an atomically reference-counted pointer) and clone that instead. Not sure how expensive it is.

Anyway, “load_ecs_threaded(sender)” is in this case the function that will run in the spawned thread. The definition and contents are the following:

use crate::{
    gui::image_impl::ImageImpl,
    world::{create_world, ecs::ECS},
};

pub fn load_ecs_threaded(sender: crossbeam_channel::Sender<ECS<ImageImpl>>) {
    sender
        .send(create_world::<ImageImpl, ECS<ImageImpl>>())
        .unwrap();
}

That function merely sends through the sender the result of the “create_world” function, which registers all the necessary components with the “specs” Entity-Component System.

You won’t be able to check if the spawned threads have done anything unless you are running some sort of loop on the main thread. In this case I’m running the game with the 2D game dev “ggez” crate, which operates a simple but well-working game loop. From there, you rely on the receiver end of the channel to try to receive data:

if let Ok(ecs) = self.ecs_receiver.try_recv() {
    match self.shared_resources.try_lock() {
        Ok(mut bound_shared_resources) => {
            bound_shared_resources.set_ecs(ecs);
            self.progress_text = Text::new("Loaded Entity-Component System".to_string());
        }
        Err(error) => {
            return Err(GameError::CustomError(format!(
                "Couldn't lock shared resources to set the world instance. Error: {}",
                error
            )))
        }
    }
}

Through the call “ecs_receiver.try_recv()” I will get either an Ok or an Error. An error may just mean that the channel is empty because the remote function hasn’t finished working, so we only handle the Ok case. In that case, the thread has finished doing its job: we gather the result (the “ecs” in this case) and store it into our shared_resources as I did previously.

That’s all. You need to be careful, though, because there are some structs that you can’t send through channels. For example, you can’t send the graphical context of “ggez”, meaning that you always need to load images in the main thread. You also can’t send the random number generator through, as it explicitly works on a single thread. But I haven’t found any issue sending my game structs.

Now that the game doesn’t seem to freeze on launch, I can focus on implementing the visual aspect of Team Struggles.

Interdimensional Prophets (Game Dev) #5


A couple of entries ago I presented my first version of the encounter screen. As the team of explorers wanders around the map, the stored encounters get shuffled, and the first one whose trigger condition is met presents itself. Here’s the somewhat updated screen:

I was checking out the moddability of this game by changing most pictures to manga/anime aesthetics, and I realized that I liked it more this way. With a simple change of directory names, all names and pictures could get swapped to American ones. In any case, this screen presents the encounter that has been triggered. The description gives a brief overview of the situation. The rest of the text informs you that this encounter, at least the psychological part of it, will test each team member’s self-regulation (one of the psychological dimensions, which are groupings of a few psychological criteria).

Yes, I know that there’s a lot of black space. Don’t know what to do about that.

Once you click the round button on the lower right, you are shown the results of the psychological test:

A brief text indicates the reason for this psychological test; in this introductory event/encounter to a narrative line called “The Verdant Assembly,” the characters test their self-regulation against the overwhelmingly lush and alien surroundings. For each character, the average value for that psychological dimension gets tested against a series of performance thresholds in the TOML files. The highest threshold they pass determines the reward (or punishment) they get. In game terms, an Encounter is associated with a series of Outcomes. Here’s how the outcomes for this encounter look in the raw TOML file:

# Possible placeholders:
#
# {CHARACTER_NAME}
# {CHARACTER_FIRST_NAME}

[[outcomes]]
identifier = 1
outcome_type = "PsychologicalTest"
description = "Overwhelmed by the alien nature of this plant-based world."
consequences_identifier = 1

[[outcomes]]
identifier = 2
outcome_type = "PsychologicalTest"
description = "{CHARACTER_FIRST_NAME} becomes fascinated by the plant-based entities, leading to increased motivation and a desire to learn from them."
consequences_identifier = 2

They are self-explanatory. The most important part is that they link to another store of game entities, the Consequences. I intended to unify the concept of game consequences into a single block of game logic that could be applied to psychological tests and, in the future, to team struggles. The TOML file of related consequences is the following:

[[consequences]]
identifier = 1
illness_identifiers = []
injury_identifiers = []
mental_status_effect_identifiers = [1, 2]
character_trait_identifiers = [1]
add_features_identifiers = []
remove_features_identifiers = []

[[consequences]]
identifier = 2
illness_identifiers = []
injury_identifiers = []
mental_status_effect_identifiers = [3]
character_trait_identifiers = []
add_features_identifiers = []
remove_features_identifiers = []

The file is quite inexpressive in its contents because it only links to other entities through their identifiers. However, the outcome of each psychological test and team struggle could have any of the following consequences (or all of them):

  • The team member(s) involved receives one or many illnesses. Illnesses reduce the team member’s health every turn until they run out or are cured.
  • Receives one or many injuries. Instant reduction of health, and the permanent ones even reduce max health.
  • Receives one or many mental statuses (like Confused or Discouraged). They increase or reduce the value for associated psychological criteria.
  • Receives one or many character traits (like Terrified of Octopi, or Botanist). These help or hinder during team struggles.
  • The exploration zone the team is exploring either gains or loses features. For example, if some outcome enrages the natives, the exploration zone could gain the feature Enraged Natives, which would present more combat-oriented encounters in the future.

The consequences are already being applied in the code (which took a hefty amount of code, well tested thanks to test-driven development), and once I get around to implementing team struggles, their consequences will work seamlessly with the code already written.

In the near future I’m going to focus on making sure that encounters can be blocked by other encounters or even their outcomes, if necessary. For example, if during a team struggle the team screws up bad enough to unleash something dangerous, that should cut off access to more positive branches of that same narrative.

A detail about Rust’s fastidious nature: this is a programming language built around memory safety and protection against the nastiest bugs from the C++ era. As far as I can tell, in safe Rust it’s impossible for code to corrupt a memory allocation it wasn’t supposed to touch. That forces you to change your approach to programming in quite a few ways, not because Rust is annoying for no reason, but because in other languages you were doing dangerous things. In my code, I was passing around a SharedGameResources entity that had access to the “specs” Entity-Component System (that’s a whole thing; if you are interested, google it) as well as to the stores of data loaded from TOML files. At one point I had to borrow that SharedGameResources entity both as immutable (just to read from the stores) and as mutable (to write the results into the components). That’s impossible. Although it forced me to rewrite some basic architectural code, it illuminated the point that the stores of game systems (like the “databases” of mental status effects or of character traits) are separate from the Entity-Component System, which handles a lot of mutation. In the end, Rust’s compiler steers you toward proper architecture, because you simply can’t run your program otherwise.

Interdimensional Prophets (Game Dev) #4


As I was writing unit tests for a perilous, convoluted part of the game logic, which I wanted to lock in place as I moved forward, I realized that to test one relatively small part of the code, I would need to create both World, the main entity of the Entity-Component System “specs”, and Image, which is tied to the Context of the 2D game dev “ggez” crate. World is heavy to fire up for a simple unit test, but Images may not even be feasible, as they are glued to the graphical context (no graphics should run during unit tests), and they are tied to a single thread, while the unit tests run on all CPUs.

Therefore, I came to the dreaded conclusion: I needed to refactor their usage throughout the code into traits (interfaces). Traits are contracts that encapsulate a behavior without implementing anything. If you program to traits (interfaces) instead of to concrete classes, you are working with future, potentially unimplemented structures whose details are irrelevant to you and the compiler, because they are only forced to fulfill a contract (the trait/interface). It’s like telling an animal to say something: a dog would bark, a cat would meow, but both have fulfilled the contract of “talking,” which is apparently all you cared about.

It just happens that traits in Rust involve generics, and generics in Rust are unholy. Due to Rust’s welcome but tough borrowing rules and lifetime whatevers, refactoring any behavior into traits involves dealing with bizarre generic declarations, and worse yet, nearly incomprehensible lifetime signatures.

Behold the horror:

pub struct MainState<'a, T: ImageBehavior + 'static + std::marker::Send + std::marker::Sync + Clone, U: WorldBehavior<T>> {
    active_stage: Option<GameStage>,
    base_state: BaseState<'a, T, U>,
    expedition_state: ExpeditionState<'a, T, U>,
    encounter_state: EncounterState<'a, T, U>,
    shared_game_logic: Arc<Mutex<SharedGameLogic>>,
}

That’s the definition of MainState, the head honcho of the state machine that figures out which other stage needs to update, or draw on the screen. It declares that it’s somehow involved, to start, with an ImageBehavior contract. Every image throughout the program, except in the launcher, is now unaware of what type of structure it’s actually dealing with, except that it fulfills the contract of being 'static (meaning, as I understand it, that it can’t hold short-lived references), that it’s Send and Sync for multithreading, and that it can be Cloned. That’s the hardest one. Then we have WorldBehavior, which abstracts away the entirety of the “specs” Entity-Component System into a wrapped class. The resulting contract for WorldBehavior, if I may say so myself, is a thing of beauty:

pub trait WorldBehavior<T: ImageBehavior + std::marker::Sync + std::marker::Send + Clone> {
    fn new(world: World) -> Self;
    fn join_coordinates_and_tile_biomes(&self) -> Vec<((i32, i32), TileBiome)>;
    fn retrieve_player_coordinates(&self) -> Option<(i32, i32)>;
    fn retrieve_biome_at_player_position(&self) -> Option<Biome>;
    fn retrieve_unique_character_traits_of_team_members(&self)
        -> HashSet<CharacterTraitIdentifier>;
    fn count_team_members(&self) -> u32;
    fn calculate_average_mental_strain_of_team(&self) -> f32;
    fn calculate_average_health_of_team(&self) -> f32;
    fn set_player_position(&mut self, x: i32, y: i32);
    fn retrieve_player_entity(&self) -> Entity;
    fn retrieve_name_of_entity(&self, entity: Entity) -> Name;
    fn retrieve_psychological_profile_of_entity(&self, entity: Entity) -> PsychologicalProfile;
    fn retrieve_health_of_entity(&self, entity: Entity) -> CharacterHealth;
    fn retrieve_team_members(&self) -> Vec<Entity>;
    fn retrieve_team_members_except_player(&self) -> Vec<Entity>;
    fn retrieve_photo_id_of_entity(&self, entity: Entity) -> PhotoId<T>;
    fn move_direction(&mut self, direction: Direction);
    fn create_entity(&mut self) -> EntityBuilder<'_>;
}

With that contract in place, there’s no more need to deal with the “specs” way of working. You want to retrieve the player entity? Call “retrieve_player_entity”. Do you want a calculation of the average mental strain of the entire team? Call “calculate_average_mental_strain_of_team”. Of course, the contract can be extended as more information or calculations need to be gathered.

I’m at ease now that I have managed to refactor those two unit test blockers into behaviors, but it took hours, and if it weren’t for the indefatigable help of GPT-4, I wouldn’t have been able to do it. But thankfully there shouldn’t be any major obstacles to unit test every part of the code now, which should speed up development significantly.

A boring entry compared with the three previous ones, perhaps, but programming is a fight against entropy: the further you build your system, the harder it becomes to change. You have to stop every few days (if not once a day) to make sure that some part of the code isn’t rotting already.