AI news #3

If you are interested in what GPT-4 can do in games, check out part two of this series, which showed how people were using ChatGPT for Skyrim and Fallout. Things have developed a bit further. The following video shows someone’s modded companion with whom the player can interact naturally, and who is aware of the world around her.

Truly amazing. As long as the world doesn’t end, imagine how amazing games will become in a few years. At least how amazing modders will make those games, if their companies can’t be arsed.

The following unassuming entry represented a turning point in how I use GPT-4:

Turns out that some clever people out there have shown that certain prompts can squeeze significantly more intelligence out of GPT-4. The author of the video had the idea of pushing it a bit further by implementing the following architecture:

  • Given a prompt, ask GPT-4 through the API to give you three separate answers.
  • Send those three answers to GPT-4, asking it to act like a researcher and point out the flaws in those answers.
  • Send the answers along with the researcher’s analysis to GPT-4, and ask it to act like a resolver to determine which answer is more valid (according to the researcher’s analysis), improve it a bit, and present it as the final answer.
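The three stages above can be sketched as follows. This is my own minimal paraphrase, not the video author’s exact prompts or code; I’ve made the model call an injectable `ask(prompt)` callable (which would typically wrap the GPT-4 chat API) so the pipeline itself is easy to test without network access.

```python
def generate_answers(ask, question, n=3):
    """Stage 1: get n independent answers to the same question."""
    return [ask(f"Question: {question}\nAnswer:") for _ in range(n)]

def critique(ask, question, answers):
    """Stage 2: act as a researcher and point out the flaws in each answer."""
    numbered = "\n\n".join(f"Answer {i + 1}: {a}" for i, a in enumerate(answers))
    return ask(
        f"You are a researcher. For the question below, point out the flaws "
        f"in each candidate answer.\n\nQuestion: {question}\n\n{numbered}"
    )

def resolve(ask, question, answers, analysis):
    """Stage 3: act as a resolver, pick the most valid answer, improve it."""
    numbered = "\n\n".join(f"Answer {i + 1}: {a}" for i, a in enumerate(answers))
    return ask(
        f"You are a resolver. Using the researcher's analysis, determine "
        f"which answer is most valid, improve it a bit, and present it as "
        f"the final answer.\n\nQuestion: {question}\n\n{numbered}\n\n"
        f"Researcher's analysis: {analysis}"
    )

def smart_answer(ask, question):
    """Run the full generate -> critique -> resolve pipeline."""
    answers = generate_answers(ask, question)
    analysis = critique(ask, question, answers)
    return resolve(ask, question, answers, analysis)
```

Note that one question costs five model calls (three answers, one critique, one resolution), which is why it takes so much longer than a single ChatGPT query.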

I implemented that system in Python; it was surprisingly easy to do. Although it takes significantly longer than just asking regular ChatGPT a question, the results are much better.

I thought about how mini-AGIs (Auto-GPT and BabyAGI) use GPT-4 to simulate artificial general intelligence, and I realized that if you can get GPT-4 to develop a list of tasks using the same structure as the previous algorithm, you can then feed each of those tasks back into that algorithm and execute them in a loop.

I’m not that interested in programming an Auto-GPT at the moment; I’m happy enough getting a list of tasks that I can then send to GPT-4 at my leisure. However, I did code an addition to the previous program: it sends an objective to GPT-4 and tells it to act like a manager developing a prioritized task list. As in the previous algorithm, it sends that request thrice; those answers are passed to a researcher, whose analysis then goes to a resolver, exactly as described above.
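A self-contained sketch of that manager addition, under the same caveats as before: the prompt wording is my own guess, and `ask(prompt)` stands in for whatever wraps your GPT-4 API call.

```python
def manager_task_list(ask, objective):
    """Manager -> researcher -> resolver pass over three candidate
    task lists for a given objective. Prompt phrasing is illustrative."""
    manager_prompt = (
        f"You are a manager. Develop a prioritized task list for this "
        f"objective:\n{objective}"
    )
    # Three independent candidate lists, as in the answer pipeline.
    lists = [ask(manager_prompt) for _ in range(3)]
    joined = "\n\n".join(f"List {i + 1}:\n{t}" for i, t in enumerate(lists))
    analysis = ask(
        f"You are a researcher. Point out the flaws in each of these task "
        f"lists for the objective '{objective}':\n\n{joined}"
    )
    return ask(
        f"You are a resolver. Using the researcher's analysis, determine "
        f"which task list is most valid, improve it, and present it as the "
        f"final prioritized list.\n\nObjective: {objective}\n\n{joined}\n\n"
        f"Analysis: {analysis}"
    )
```

Each line of the final list can then be sent, one at a time, through the answer pipeline.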

Here’s the list of tasks that resulted from sending to my program the objective of programming a particular card game, one I’m developing, in the Rust programming language:

  1. Initialize the game engine, including a GUI that supports displaying cards and decks on screen.
  2. Create data structures to represent the different card types: Exploration Zone, Biomes, Encounters, Events, Resources, Afflictions, Player cards, and Portal cards.
  3. Load card data from toml files into the appropriate data structures.
  4. Develop a function to generate a random Exploration Zone card from the available Exploration Zone cards.
  5. Display the randomly selected Exploration Zone card on the top right corner of the screen.
  6. Develop a function to create the Biomes deck, selecting the matching Biome types as specified in the chosen Exploration Zone card.
  7. Ensure the first Biome card of the Biomes deck is displayed face up next to the Biomes deck.
  8. Develop a function to create the Encounter, Events, Resources, and Affliction decks based on the Feature icons from the chosen Exploration Zone card.
  9. Implement player classes (soldier, medic, scientist, scout) with their respective attributes and abilities.
  10. Design the player decks for 4 players, each with a unique role, and include a Portal card to allow escaping to the Hub.
  11. Design the Hub and implement the card drafting system for the Hub, allowing players to choose cards to add to their decks while trading resources.
  12. Develop a system for players to rebuild their decks while in the Hub between explorations.
  13. Design the goal cards that specify objectives required for each Exploration.
  14. Implement a system that tracks combos and interactions between cards, such as activating abilities based on certain conditions.
  15. Create a data gathering and analysis system for role combinations, deck compositions, and card interactions to identify balance issues and adjust card values and abilities accordingly.
  16. Test the game extensively, iterating on card mechanics, balance, and player engagement.
  17. Polish graphics, UI, and overall user experience to ensure an enjoyable gaming experience.
  18. Incorporate playtesting, gather feedback, and make improvements to game mechanics, card designs, and balance based on feedback and data-driven insights.
  19. Create game documentation, tutorial, or in-game help system to provide instructions and guidance for players on how to play the game and understand its mechanics.
  20. Market and promote the game, develop marketing materials and promotional events to raise awareness about the game, attract players, and grow the community.

As you can see, you could pass most of those tasks to the first algorithm, which would come up with a good answer.

I found myself at times copy-pasting ideas from a working document to the prompt I was sending to GPT-4. I thought that there must be some automated system that would pick the most appropriate domain knowledge for whatever prompt I wanted to send to GPT-4. I asked ChatGPT if it could come up with an answer, and it suggested vector databases. I had no fucking clue what vector databases were, but now I know.

Vector databases store pieces of information as numeric vectors (embeddings), positioned so that similar items end up close together. What’s important is that if you query the database with some text, it will return the most similar information it contains related to what you asked. Sort of like a recommendation system.
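A toy version of that retrieval step, to make the idea concrete. The three-dimensional vectors here are hand-made stand-ins; in practice each text chunk would be embedded into a high-dimensional vector by an embedding model, and a proper index would handle the nearest-neighbor search.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vec, store, k=2):
    """Return the k stored texts whose vectors are closest to the query."""
    ranked = sorted(
        store,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# Toy "embeddings": real ones would come from an embedding model.
store = [
    ("card game rules", [0.9, 0.1, 0.0]),
    ("rust syntax notes", [0.1, 0.9, 0.0]),
    ("deck balance data", [0.8, 0.2, 0.1]),
]
print(nearest([1.0, 0.0, 0.0], store, k=2))
# → ['card game rules', 'deck balance data']
```

The two chunks closest to the query vector come back first, which is exactly the behavior you want when picking domain knowledge to paste into a prompt.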

The ‘annoy’ library for Python seems to be one of the most commonly used for this kind of nearest-neighbor search, so that’s what I used. Now I just have to paste all the domain knowledge into a txt file, then run a simple program that registers the knowledge into the appropriate vector database. When I’m ordering the other couple of programs to either develop a task list or perform a task, they query the vector database to retrieve the most relevant domain knowledge. Pretty cool.
