It finds the “most similar tokens” in the list. Similarity is measured as the angle θ between the token vectors.
The angle is expressed as cosine similarity, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors).
Negative similarity is also possible (vectors pointing in opposite directions).
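For reference, a minimal sketch of that math in plain torch (the two vectors here are random stand-ins for real token vectors):

```python
import torch

# two stand-in 1x768 token vectors (in the notebook these come from CLIP's token embedding table)
a = torch.randn(768)
b = torch.randn(768)

# cosine similarity = cos(theta) = (a . b) / (|a| * |b|)
cos = torch.dot(a, b) / (a.norm() * b.norm())
# equivalently: torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"{cos.item() * 100:.2f}% similarity")  # 100% = parallel, 0% = perpendicular, negative = opposing
```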
How can I use it?
If you are bored of prompting “girl” and want something similar, you can run this notebook and use the “chick” token at 21.88% similarity, for example.
You can also run a mixed search, like “cute+girl”/2, where for example “kpop” has a 16.71% similarity.
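A rough sketch of how such a mixed search can be done (not necessarily the notebook's exact code; it assumes SD1.5's text encoder matches openai/clip-vit-large-patch14, and the token strings are just illustrative):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Assumption: SD1.5's text encoder weights match openai/clip-vit-large-patch14
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
vocab_vectors = enc.get_input_embeddings().weight.detach()  # (49408, 768): one 1x768 row per token

def vec(token_string):
    return vocab_vectors[tok.convert_tokens_to_ids(token_string)]

# mixed search: ("cute" + "girl") / 2
query = (vec("cute</w>") + vec("girl</w>")) / 2
scores = torch.nn.functional.cosine_similarity(query.unsqueeze(0), vocab_vectors, dim=-1)

id_to_token = {i: t for t, i in tok.get_vocab().items()}
for score, token_id in zip(*scores.topk(10)):
    print(f"{id_to_token[token_id.item()]:<20} {score.item() * 100:.2f}%")
```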
The further down the list you go, the stranger the tokens get. Example: tokens similar to the token "pewdiepie</w>" (yes, this is an actual token that exists in CLIP).
Each of these corresponds to a unique 1x768 token vector.
The higher the ID value, the less often the token appeared in the CLIP training data.
To reiterate: this is the CLIP model training data, not the SD-model training data.
So for certain models, tokens with a high ID can give very consistent results, if the SD model is trained to handle them.
An example of this is anime models, where Japanese artist names can affect the output greatly.
Tokens with high ID will often give the "fun" output when used in very short prompts.
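If you want to check where a given token sits in the vocab, a quick lookup sketch (same tokenizer assumption as above; the token choices are just examples):

```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")  # assumed SD1.5 tokenizer
for t in ["girl</w>", "pewdiepie</w>"]:
    print(f"{t:<16} id = {tok.convert_tokens_to_ids(t)}")  # higher ID ~ rarer in the CLIP training data
```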
What about token vector length?
If you are wondering about token magnitude:
Prompt weights like (banana:1.2) will scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. That's how prompt token magnitude works.
So TLDR: vector direction = “what to generate”, vector magnitude = “prompt weights”.
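A small sketch of that idea, assuming the frontend implements (banana:1.2) as a plain multiplication of the token's 1x768 embedding (implementations differ between UIs):

```python
import torch

token_vec = torch.randn(768)   # stand-in for the "banana" 1x768 token embedding
weighted = token_vec * 1.2     # (banana:1.2) -> scale the magnitude by 1.2

print(weighted.norm() / token_vec.norm())  # ~1.2: magnitude ("prompt weight") changed
print(torch.nn.functional.cosine_similarity(token_vec, weighted, dim=0))  # ~1.0: direction ("what to generate") unchanged
```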
How prompting works (technical summary)
There is no correct way to prompt.
Stable Diffusion reads your prompt left to right, one token at a time, finding associations from the previous token to the current token and to the image generated thus far (the Cross Attention Rule).
Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negatives (the Optimization Rule).
For every step (20 in total by default for SD1.5):
Prompt text => (tokenizer)
=> Nx768 token vectors => (CLIP model)
=> 1x768 encoding => (the SD model / UNet)
=> desired image per the Optimization Rule => (sampler)
=> paint a section of the image => (image)
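A sketch of the text half of that pipeline (tokenizer + CLIP text model only, no UNet or sampler; it assumes SD1.5's text encoder matches openai/clip-vit-large-patch14):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "photo of a banana"
inputs = tok(prompt, padding="max_length", max_length=77, return_tensors="pt")
print(inputs.input_ids.shape)        # (1, 77): token IDs from the tokenizer

with torch.no_grad():
    out = enc(**inputs)
print(out.last_hidden_state.shape)   # (1, 77, 768): per-token encodings passed on to the UNet
print(out.pooler_output.shape)       # (1, 768): pooled text encoding
```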
Disclaimer / Trivia
This notebook should be seen as a "dictionary search tool" for the vocab.json, which is the same for SD1.5, SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.
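A quick way to do that check, assuming you have the models downloaded locally (the paths below are placeholders):

```python
import json

# Placeholder paths: point these at the 'tokenizer' folder of two different models
with open("sd15/tokenizer/vocab.json") as f:
    vocab_a = json.load(f)
with open("sdxl/tokenizer/vocab.json") as f:
    vocab_b = json.load(f)

print(len(vocab_a), len(vocab_b))  # both should report the same vocab size (49408)
print(vocab_a == vocab_b)          # True if the token -> ID mappings are identical
```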
The CLIP image_encoding is not included in this Notebook.
If you spot errors or have ideas for improvements, feel free to fix the code in your own notebook and post the results.
I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.
//---//
Regarding output
What are the </w> symbols?
The whitespace symbol indicates whether the tokenized item ends with whitespace (the suffix "banana</w>" => "banana ") or not (the prefix "post" in "post-apocalyptic").
For ease of reference, I call them prefix-tokens and suffix-tokens.
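You can see the </w> markers directly by running the tokenizer on a test string (assuming the openai/clip-vit-large-patch14 tokenizer, which SD1.5 uses):

```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tok.tokenize("post-apocalyptic banana"))
# suffix-tokens end in </w> (a word boundary follows); prefix-tokens like "post" do not
```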
Sidenote:
Prefix tokens have the unique property that they "mutate" suffix tokens.
Example: "photo of a #prefix#-banana"
where #prefix# is a randomly selected prefix-token from the vocab.json
The hyphen "-" exists to guarantee the tokenized text splits into the written #prefix# and #suffix# token respectively. The "-" hypen symbol can be replaced by any other special character of your choosing.
Capital letters work too , e.g "photo of a #prefix#Abanana" since the capital letters A-Z are only listed once in the entire vocab.json.
You can also choose to omit any separator and just rawdog it with the prompt "photo of a #prefix#banana" , however know that this may , on occasion , be tokenized as completely different tokens of lower ID:s.
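A quick way to compare the three separator options with the tokenizer (the prefix "neo" below is just an illustrative stand-in for a randomly picked prefix-token from vocab.json):

```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")  # assumed SD1.5 tokenizer
prefix = "neo"  # illustrative stand-in for a prefix-token

for prompt in [f"photo of a {prefix}-banana",   # hyphen separator
               f"photo of a {prefix}Abanana",   # capital-letter separator
               f"photo of a {prefix}banana"]:   # no separator: may merge into different tokens
    print(prompt, "->", tok.tokenize(prompt))
```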
Curiously, common NSFW terms found online have been purposefully fragmented into separate #prefix# and #suffix# counterparts in the CLIP model's vocab.json, likely for PR reasons.