macondo

A crossword board game AI, written in Go


AI Explainability

What is it?

Macondo has an experimental feature that lets you explain a move with generative AI. If you are curious, the prompts used are in the /explainer directory.

See this position:

   A B C D E F G H I J K L M N O     ->              player1  ACDEPQU  400
   ------------------------------                    player2           440
 1|C O X A     B =   R O P I N G |
 2|  -   T   D I N G Y       -   |   Bag + unseen: (16)
 3|    - t     Z O ' A I N E E   |
 4|'     A       V     F O W T H |   E E G I I I L L N O O R R S S U
 5|      S -     A     F O E   U |
 6|  L E K E "   T   M Y     " T |
 7|    R E L A T I V E     '   I |
 8|A R E D       N       '   B A |
 9|    '       ' g '       ' R   |
10|  "       "       "     W O   |   Turn 0:
11|        -           -   H A   |
12|'     -       '       - I D ' |
13|    -       '   '       M E   |
14|  -       "       "       N   |
15|=     '       =       J U S T |
   ------------------------------

You can load it as follows:

macondo> load cgp COXA2B2ROPING/3T1DINGY5/3t2ZO1AINEE1/3A3V2FOWTH/3S3A2FOE1U/1LEKE2T1MY3T/2RELATIVE4I/ARED3N5BA/7g5R1/12WO1/12HA1/12ID1/12ME1/13N1/11JUST ACDEPQU/ 400/440 0 lex CSW24;

Running AI Explainability

To test it out, please create a Gemini API key here:

https://ai.google.dev/gemini-api/docs

You can click "Get a Gemini API Key" on that page, and place the key in the GEMINI_API_KEY environment variable.
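For example, on Linux or macOS you could export the key in your shell before starting Macondo (the key value below is a placeholder; use the key you created):

```shell
# Make the Gemini API key available to Macondo via the environment.
# Replace the placeholder with your actual key.
export GEMINI_API_KEY="your-api-key-here"
```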

Then, once you have a position loaded, type the following command into Macondo:

macondo> script scripts/lua/genai_explain.lua

Note that behind the scenes, Macondo runs a full simulation and passes a lot of the raw data to an LLM, which explains it in plain text.

This should print something like the following after a few seconds:

Gemini 2.5 Pro Experimental response:

Model response: Okay, let’s break down this position. The simulation identifies 12K QU(ID) as the strongest play. Here’s why:

Models

At the time of writing (April 14, 2025), the model we are using, Gemini 2.5 Pro, is perhaps the strongest AI model available. An experimental version of it is free to use; you should be able to run around 25 explanations per day. Normally, an explanation with this model would cost around $0.04.

You can edit the Lua script above (scripts/lua/genai_explain.lua) to update the model. The default value is gemini-2.5-pro-exp-03-25; when the model becomes generally available, this default is likely to change. You can also change the model by setting the GEMINI_MODEL environment variable. For example, gemini-2.0-flash is 1-2 orders of magnitude cheaper, and the quality of the response is almost as good:

Gemini 2.0 Flash response:

Model response: In this position, 12K QU(ID) performed best. This is why:
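If you prefer selecting the model through the environment rather than editing the Lua script, setting GEMINI_MODEL before launching Macondo should work, for example:

```shell
# Switch the explainer to the cheaper Flash model for this session.
export GEMINI_MODEL="gemini-2.0-flash"
```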

Using other providers?

You can also use OpenAI. Set the environment variable GENAI_PROVIDER to "openai" before opening Macondo. The default gpt-4.1 model uses far fewer output tokens than Gemini 2.5 Pro, since it doesn't "think", and it also gives great results. The rough cost of the gpt-4.1 model is around $0.01 per explanation.

gpt-4.1 response:

Model response: In this position, 12K QU(ID) is the best performing play. This is why:

In short, QU(ID) at 12K wins because it keeps your comeback options open: it gives you direct access to a strong bingo setup, keeps good tiles for more bingos, and doesn’t sacrifice average score.

Other models

As before, you can set the OPENAI_MODEL environment variable to other values to use other models.
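Putting the OpenAI settings together, a shell setup before launching Macondo might look like the sketch below. The OPENAI_API_KEY variable name is an assumption based on common convention and is not confirmed by this page, and the key value is a placeholder:

```shell
# Select OpenAI as the generative-AI provider.
export GENAI_PROVIDER="openai"
# Assumed key variable name -- replace the placeholder with your real key.
export OPENAI_API_KEY="your-openai-key-here"
# Optionally pick a different OpenAI model than the gpt-4.1 default.
export OPENAI_MODEL="gpt-4.1-mini"
```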