Clip Skip
Clip Skip is a setting in some text-to-image diffusion pipelines that changes how the text prompt is encoded. The prompt is normally converted into a numerical representation by the CLIP text encoder, a multi-layer neural network; Clip Skip tells the pipeline to stop one or more layers before the end of that encoder and use the earlier hidden state as the conditioning, instead of the final layer's output.
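The mechanism can be illustrated with a toy encoder. This is a minimal sketch, not real CLIP: random matrices stand in for transformer layers, and the `clip_skip` convention used here (1 = final layer, 2 = penultimate) follows the counting popularized by common Stable Diffusion UIs; note that conventions differ between tools.

```python
import numpy as np

def encode_with_clip_skip(token_vec, layers, clip_skip=1):
    """Run a stack of encoder layers, keep every hidden state,
    and return the one selected by clip_skip.
    clip_skip=1 -> final layer (no skipping), clip_skip=2 -> penultimate.
    Illustrative only; real CLIP layers are transformer blocks."""
    hidden_states = [token_vec]
    h = token_vec
    for w in layers:
        h = np.tanh(w @ h)          # stand-in for one transformer layer
        hidden_states.append(h)
    return hidden_states[-clip_skip]

rng = np.random.default_rng(0)
# 12 layers, mirroring the depth of the CLIP ViT-L/14 text encoder
layers = [rng.normal(size=(4, 4)) for _ in range(12)]
x = rng.normal(size=4)

final = encode_with_clip_skip(x, layers, clip_skip=1)
penultimate = encode_with_clip_skip(x, layers, clip_skip=2)
```

With `clip_skip=2` the last layer is never applied to the returned embedding, so the diffusion model is conditioned on a less-processed representation of the prompt.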
Stopping early gives the diffusion model an embedding from an earlier stage of the encoder's processing. This can produce stylistically interesting results, but it can also make it harder to generate images that are faithful to the text prompt. The setting matters most for models that were trained against an intermediate layer in the first place: anime-focused models descended from the NovelAI checkpoints, for example, were trained on the penultimate layer of the text encoder, so skipping the final layer matches their training and typically improves their output.
Clip Skip is typically adjusted alongside other generation parameters, such as the guidance scale (prompt strength) and the number of diffusion steps, to control the style and composition of the generated image.
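In practice these knobs are usually set together on a single pipeline call. A minimal sketch, assuming the Hugging Face `diffusers` `StableDiffusionPipeline` (which accepts a `clip_skip` argument); the model name and values are illustrative, and the actual generation call is commented out so the sketch stays self-contained:

```python
# Generation settings combining Clip Skip with related parameters.
settings = {
    "prompt": "a watercolor painting of a lighthouse at dusk",
    "clip_skip": 2,             # skip late text-encoder layers; note that
                                # A1111 and diffusers count skipped layers differently
    "guidance_scale": 7.5,      # prompt strength (classifier-free guidance)
    "num_inference_steps": 30,  # number of diffusion steps
}

# With diffusers installed and model weights downloaded, this would run as:
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = pipe(**settings).images[0]
```

Raising `clip_skip` while holding the other settings fixed is a quick way to see how much of a given model's behavior depends on the final encoder layers.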