I would be surprised if two rounds were totally identical but that's good to know you can get something consistent if you use the same seed
Here is a tutorial on how to operate VQGAN+CLIP by Katherine Crowson! No coding knowledge necessary. (sourceful.us)
This mentions how computationally intensive these neural networks are, which I was worried about.
One could be thrifty in how these aesthetic experiments are handled, in the effort of forming a kind of pidgin interfacing technique with these things.
The "brown top | cyan bottom" input would be a simple probe, to see how this GAN interprets spatial/directional inputs.
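As a side note on how a probe like that is typically fed in: the common VQGAN+CLIP notebooks split the prompt string on "|", so each pipe-separated phrase is optimized toward as a separate CLIP target (optionally weighted as "phrase:weight"). This is a minimal sketch of that parsing convention, not the actual notebook code; `parse_prompts` is a hypothetical helper name.

```python
def parse_prompts(text):
    """Split a pipe-separated prompt string into (phrase, weight) pairs,
    following the "phrase:weight" convention used in VQGAN+CLIP notebooks."""
    prompts = []
    for part in text.split("|"):
        part = part.strip()
        if ":" in part:
            phrase, weight = part.rsplit(":", 1)
            try:
                prompts.append((phrase.strip(), float(weight)))
                continue
            except ValueError:
                pass  # the ":" was part of the phrase, not a weight
        prompts.append((part, 1.0))  # default weight when none is given
    return prompts

print(parse_prompts("brown top | cyan bottom"))
print(parse_prompts("a castle:2 | fog:0.5"))
```

The upshot for the probe: "brown top" and "cyan bottom" become two independent text targets, so any spatial layout in the result comes from CLIP's understanding of the words "top" and "bottom", not from any explicit positional control.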
And beyond just changing the parameters, one can manipulate and reconstruct the results.
@catalog @william_kent did one of us mention the possibility of using image files as input? That may open up a whole other panel of levers to pull.
But at this point maybe one should just learn to code them directly, and I am only starting to appreciate how energy-intensive this whole practice is.