Do you keep it simple? Just long enough? Go wild with it? How about embeddings, do you also use them?
The more I learn about this, the more I don't understand it. Outside of some basic enhancers (masterpiece, best quality, worst quality, and bad anatomy/hands etc. if I'm generating a human), I don't see any big improvements. Every combination gives a different result; some look better, some look worse depending on the seed, sampler, etc. It's basically a matter of taste. Note that I only do illustrations/paintings, so the differences might not be as pronounced. Do you keep tweaking your prompts, or do you just settle on the prompts you've been using?
When I started, I just copied prompts from online galleries like Civitai or Leonardo.ai, which gave me noticeably better images than what I had come up with myself before.
However, it seemed to me that many of those images were themselves made with copied prompts, without much understanding of what's really going on in them, so I started to experiment for myself.
What I do now is build my images from the ground up, starting with a super basic prompt like "a house on a lake" and working from there: first adding descriptions to get the composition right, then working in the style I'm looking for (photography, digital artwork, cartoon, 3D render, ...), and then working in enhancers to see what they change. I've found that you have to be patient, change only one thing at a time, and always generate a couple of images (at least a batch of 8) to see if and what the changes are.
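The layering approach above can be sketched as a simple progression; the prompt stages below are hypothetical examples of the idea, not prompts from this thread:

```python
# Hypothetical sketch: build a prompt in layers, changing one thing at a time.
# Start from the bare subject, then add composition, then style, then enhancers.
stages = [
    "a house on a lake",                                       # subject only
    "a house on a lake, mountains behind, sunset",             # + composition
    "a house on a lake, mountains behind, sunset, digital artwork",  # + style
    "a house on a lake, mountains behind, sunset, digital artwork, "
    "masterpiece, best quality",                               # + enhancers
]

# Judge each change on a small batch (at least 8 images), not a single seed.
BATCH_SIZE = 8
for i, prompt in enumerate(stages, 1):
    print(f"stage {i}: generate {BATCH_SIZE} images for {prompt!r}")
```

The point is that each stage only appends to the previous one, so any difference in the batch can be attributed to the one thing you added.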
So I still comb through image galleries for inspiration in prompting, but now I'll usually just pick one keyword or enhancer and see what it does to my own images.
It is a long process that requires many iterations, but I find it really enjoyable.
I don’t bother with prompt enhancers any more. Stable Diffusion isn’t MidJourney; quantity is far more important than quality. I just prompt for what I want and add negative prompts for things that show up that I don’t want. I’ll use textual inversions like badhandv4 if the details look really bad. If the model isn’t understanding at all then I’ll use ControlNet.
Agreed, although too much quantity seemed to water down results quite a bit. Too many and I have to up the weights of nearly everything to 1.2-1.4; otherwise, aspects I want to show start to drop off.
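For anyone unfamiliar with the syntax: in AUTOMATIC1111-style UIs, weights like 1.2-1.4 are written as `(term:1.3)`. A tiny helper (the `weight` function name is my own, just for illustration) makes the bumping less error-prone:

```python
def weight(term: str, w: float) -> str:
    """Wrap a prompt term in AUTOMATIC1111 attention syntax, e.g. (term:1.3).
    A weight of 1.0 is the default, so the term is returned unwrapped."""
    return term if w == 1.0 else f"({term}:{w})"

# Bump only the aspects that were starting to drop off.
prompt = ", ".join([
    weight("a portrait of a woman", 1.0),
    weight("detailed eyes", 1.3),
    weight("intricate background", 1.2),
])
print(prompt)
# a portrait of a woman, (detailed eyes:1.3), (intricate background:1.2)
```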
Anecdotally, I've found the best length to be about 75 positive tokens, though I'd recommend never going over the 150-token limit if you can help it.
I have a canned negative prompt list that I use that is super long, though, easily 200 tokens. Just a hodgepodge of some of the things you listed: bad_anatomy, missing_limbs, and missing_hands, for example, are crucial to have. Adding ugly with a weight over 0.8 has strange results too, I've found. Hope that helps!
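The 75-token figure comes from CLIP's 77-token context window (75 usable after the start/end markers); UIs like AUTOMATIC1111 split longer prompts into 75-token chunks. An exact count needs the CLIP tokenizer, but a rough word-and-punctuation estimate (a deliberate approximation, and usually an undercount since rare words split into several subword tokens) is enough to flag oversized prompts:

```python
import re

CHUNK = 75  # usable CLIP tokens per chunk (77 minus start/end markers)

def rough_token_count(prompt: str) -> int:
    """Very rough proxy for the CLIP token count: one token per word
    run or punctuation mark. The real tokenizer usually yields more."""
    return len(re.findall(r"\w+|[^\w\s]", prompt))

def chunks_needed(prompt: str) -> int:
    """How many 75-token chunks the prompt would occupy."""
    return max(1, -(-rough_token_count(prompt) // CHUNK))  # ceiling division

p = "masterpiece, best quality, a house on a lake, digital artwork"
print(rough_token_count(p), chunks_needed(p))
# → 13 1
```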
I meant the quantity of generated images, not the number of tokens. I rarely go over 50 tokens now. As you said, too many tokens and things start to interact in really odd ways. That's why I'm not a fan of massive lists of negative tokens either; they're much more efficient packed into a textual inversion like badhandv4 or Easynegative.
However, I only use txt2img to get the rough composition of an image; most of my work is done in inpainting afterwards. If you’re looking to have good images just from txt2img then sometimes lots of tokens are necessary.
Just like traditional art though, this is all based on individual style. It’s important to use what works best for you.