Cold Clear Books Showcase - Puyo Puyo Tetris 2 AI

I've pretty much only made esoteric books so far.

A book is basically a database describing what move should be made for a set of boards, queues, and bag states. There's a memory usage problem though; having a hash map mapping a board+queue+bag triple to a move would use a ton of memory - for example, BT Loop has 2,548,316 board+queue+bag combinations. The PC book probably has about 10 billion - I don't have a way of calculating that precisely without recalculating the book, and that takes a CPU month. Anyway, that naive map strategy would use about 32 bytes per entry, making the BT Loop book ~78 MB.
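As a sketch of that naive approach (with made-up compact types for boards, queues, bags, and moves - the real Cold Clear types differ):

```rust
use std::collections::HashMap;

// Hypothetical compact encodings; illustration only.
type Board = u64; // e.g. a bitboard of the relevant rows
type Bag = u8;    // bitmask of pieces left in the 7-bag
type Queue = u32; // upcoming pieces, packed
type Mv = u16;    // an encoded placement

// Naive book: one entry per board+queue+bag triple.
// At ~32 bytes per entry, 2,548,316 entries is roughly 78 MB.
type NaiveBook = HashMap<(Board, Queue, Bag), Mv>;

fn lookup(book: &NaiveBook, board: Board, queue: Queue, bag: Bag) -> Option<Mv> {
    book.get(&(board, queue, bag)).copied()
}
```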

A good first improvement: since the board makes up a large part of the memory usage, you can split the map into two levels - an outer map from board+bag pairs to inner maps from queues to moves. The outer map uses 72 bytes per entry, but the inner maps use only 8 bytes per entry, and the outer map has about 500x fewer entries than all the inner maps combined. Using the BT Loop book as an example again, it has 4,932 board+bag pairs, so it would use ~20 MB - an improvement by a factor of ~4. I don't think this is good enough, and for the PC book it certainly isn't.
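The two-level layout can be sketched like this (again with hypothetical compact types):

```rust
use std::collections::HashMap;

// Hypothetical compact encodings; illustration only.
type Board = u64;
type Bag = u8;
type Queue = u32;
type Mv = u16;

// Two-level book: the board+bag key is stored once per inner map,
// so each queue costs only a small inner-map entry instead of a full triple.
type Book = HashMap<(Board, Bag), HashMap<Queue, Mv>>;

fn lookup(book: &Book, board: Board, bag: Bag, queue: Queue) -> Option<Mv> {
    book.get(&(board, bag))?.get(&queue).copied()
}
```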

There are two key observations. First, in most openings, there are very few unique moves in any given board+bag pair; for example, DT cannon only has 4 unique moves from the starting board. There are hundreds of unique queues possible from a given starting bag; with so few moves to choose from, there would be about a hundred queues that all map to the same move. The second observation is how to exploit this redundancy. Frequently, only the first few pieces in the queue determine which move should be made. This is clearest with the current and hold pieces; they are the only pieces that can be placed, so they greatly reduce the number of moves that could be suggested. If the queues are organized so that queues sharing the same first pieces are close together, we can use run-length encoding to compress the data. A lexicographic ordering of the queues achieves this.
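A minimal sketch of that run-length encoding, with queues reduced to plain integers standing in for their lexicographic rank and moves to integers as well:

```rust
// Compress a lexicographically sorted list of (queue, move) pairs into runs:
// keep a pair only when its move differs from the previous pair's move.
// The kept queue marks where each run starts.
fn compress(sorted: &[(u32, u16)]) -> Vec<(u32, u16)> {
    let mut runs = Vec::new();
    for &(queue, mv) in sorted {
        if runs.last().map(|&(_, last)| last) != Some(mv) {
            runs.push((queue, mv));
        }
    }
    runs
}
```

If a hundred consecutive queues all map to the same move, this collapses them to a single pair.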

So Cold Clear's book system is a two-layer map. The outer layer is a hash map from board+bag pairs to inner maps. The inner maps are arrays of queue+move pairs sorted by a lexicographic ordering of the queue. Each pair marks a run: the queue is where the run starts and the move is the run's value. The length of the run is implicit; it is the number of queues between that pair's queue and the next pair's queue. This format allows us to use binary search to find which run a queue is part of, and thus, which move should be made.
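A minimal sketch of that lookup (not the actual Cold Clear code; integers again stand in for queues in lexicographic order and for encoded moves):

```rust
// One inner map: runs of (first queue of run, move), sorted by queue.
// The run containing `queue` is the last entry whose start is <= `queue`.
fn lookup(runs: &[(u32, u16)], queue: u32) -> Option<u16> {
    // partition_point counts the runs starting at or before `queue`;
    // subtracting one indexes the run the queue falls into. If the queue
    // precedes the first run, checked_sub yields None.
    let idx = runs.partition_point(|&(start, _)| start <= queue).checked_sub(1)?;
    Some(runs[idx].1)
}
```

`partition_point` is the standard-library binary search over a sorted slice, so each lookup is O(log n) in the number of runs rather than the number of queues.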

Using the BT Loop book as an example again, it has 4,932 board+bag pairs and 112,681 runs in total (that's an average of 22.8 runs per board+bag pair). Each board+bag pair mapping uses 40 bytes, and each queue+move pair uses 8 bytes, so the BT Loop book is now just ~1 MB - an improvement by a factor of ~20. That's 78 times smaller than the naive approach!

0:00 Hachispin + DPC
0:47 Okey Cannon + DPC
1:28 New DT Cannon
2:19 BT Loop
3:13 Perfect Clear

Discord:   / discord  
Source code: https://github.com/MinusKelvin/cold-c...
