Modern large language models form the basis for image synthesis implementations that are far more capable than society, mainstream or fringe, has yet grasped. Knowing how one model compares to another, or how one of the many open source notebooks referencing those models will behave, is still full of surprises. Whether hosted on powerful compute platforms or running locally at home, there is far more to discover than is actually known, from differences in training time to performance to parameter count.
To help make the spellcrafting aspect of synthesis more intentional and repeatable by your peers, you can contribute to a community curated collection of the latest image synthesis models, notebooks, and APIs here.