Description: This AI-generated woman does not exist.png | Algorithmically generated AI photograph of a woman shopping at Costco, created using the Stable Diffusion V1-5 diffusion model. This entirely fictitious woman was generated by the AI diffusion model, and no such person exists in real life; her face is an approximate amalgamation of thousands of different human faces. Of particular note are the woman's anatomically incorrect, deformed hands, the nondescript and unrecognisable products for sale in the store, and the illegible signage; these are all common traits of photorealistic images generated by Stable Diffusion. - Procedure/Methodology
This image was generated using an NVIDIA RTX 4090; since the Ada Lovelace architecture (compute capability 8.9, which requires CUDA 11.8) was not yet fully supported by the PyTorch dependency libraries currently used by Stable Diffusion, I used a custom build of xformers, along with PyTorch cu116 and cuDNN v8.6, as a temporary workaround. The front end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111. A single 768x1024 image was generated with txt2img using the following prompt: Prompt: Lovely cousin Matilda, fair daughter of England. In Costco. Eighteen years old, a woman full-grown. My timid darling, my shining angel, O good heavens! Her eyes are like the morning sun. Her strange clothes are so amusing. I would trace the contour of her delicate hair. She is my heart's delight. 2018
Negative prompt: toy, B&W, nudity, (painting), outside, greenscreen, studio, ugly child
Settings: Steps: 100, Sampler: DPM2, CFG scale: 7, Size: 768x1024, Highres. fix, Denoising strength: 0.7, First pass size: 448x640
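The generation itself was done through the AUTOMATIC1111 web UI, but the listed settings can be restated programmatically. The sketch below is an approximation, not the author's actual workflow: it maps the parameters above onto keyword arguments for the Hugging Face diffusers `StableDiffusionPipeline` (the web UI's DPM2 sampler corresponds roughly to diffusers' `KDPM2DiscreteScheduler`, and "Highres. fix" has no direct one-call equivalent here, so it is omitted).

```python
# Approximate restatement of the txt2img settings for Hugging Face diffusers.
# Assumption: this is a stand-in for the AUTOMATIC1111 web UI actually used,
# whose samplers and "Highres. fix" are not identical to diffusers' schedulers.

prompt = (
    "Lovely cousin Matilda, fair daughter of England. In Costco. "
    "Eighteen years old, a woman full-grown. My timid darling, my shining "
    "angel, O good heavens! Her eyes are like the morning sun. Her strange "
    "clothes are so amusing. I would trace the contour of her delicate hair. "
    "She is my heart's delight. 2018"
)
negative_prompt = (
    "toy, B&W, nudity, (painting), outside, greenscreen, studio, ugly child"
)

settings = {
    "prompt": prompt,
    "negative_prompt": negative_prompt,
    "num_inference_steps": 100,  # Steps: 100
    "guidance_scale": 7.0,       # CFG scale: 7
    "width": 768,                # Size: 768x1024
    "height": 1024,
}

def build_pipeline_call(settings):
    """Return the kwargs dict to pass to StableDiffusionPipeline.__call__."""
    return settings

if __name__ == "__main__":
    # The heavy step requires a CUDA GPU and a model download, so it is
    # shown commented out for illustration only.
    # import torch
    # from diffusers import StableDiffusionPipeline, KDPM2DiscreteScheduler
    # pipe = StableDiffusionPipeline.from_pretrained(
    #     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    # ).to("cuda")
    # pipe.scheduler = KDPM2DiscreteScheduler.from_config(pipe.scheduler.config)
    # image = pipe(**build_pipeline_call(settings)).images[0]
    pass
```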
The above prompt was written to imitate the style of a typical cringey Facebook post written by an old man, in order to exploit the quirks of how the model was originally trained: the training data was collated from public web content scraped by web crawlers and then organised into image–caption pairs. Imitating the posting style of a particular demographic within the text prompt therefore yields better samples of the kind of photography such a demographic would typically post online. Afterwards, the image was extended by 128 pixels on the top and bottom using two passes of the "Outpainting mk2" script within img2img, with 100 sampling steps of Euler a, a denoising strength of 0.8, a CFG scale of 7, a mask blur of 8, a fall-off exponent of 1.8, and colour variation set to 0.03. This increased the image's dimensions to 768x1280, while also revealing the top of the person's head and the bottom of her shoe, both of which were absent from the original txt2img output. Finally, one pass of the SD upscale script using "LDSR" was run within img2img, with a tile overlap of 256, a denoising strength of 0.1, 100 sampling steps with DPM2, and a CFG scale of 7. This produced the final 1536x2560 image. |
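The canvas arithmetic in the steps above can be checked with a short sketch: txt2img produces 768x1024, the two outpainting passes add 128 pixels at the top and at the bottom, and the LDSR-based SD upscale doubles each side. The `outpaint` helper is purely illustrative, not part of any actual tool.

```python
# Dimension bookkeeping for the described pipeline (illustrative only).

def outpaint(size, pixels, sides):
    """Grow a (width, height) canvas by `pixels` on each listed side."""
    w, h = size
    for side in sides:
        if side in ("top", "bottom"):
            h += pixels
        else:  # "left" or "right"
            w += pixels
    return (w, h)

base = (768, 1024)                                # txt2img output
extended = outpaint(base, 128, ["top", "bottom"])  # -> (768, 1280)
final = (extended[0] * 2, extended[1] * 2)         # 2x LDSR upscale -> (1536, 2560)
print(extended, final)
```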
Permission (Reusing this file) | - Output images
As the creator of the output image, I release this image under the licence displayed within the template below. - Stable Diffusion AI model
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the licence, the authors of the AI model claim no rights over any image outputs it generates. - Addendum on datasets used to teach AI neural networks
Images generated by Stable Diffusion are created algorithmically by the diffusion model's neural network, which has learned from various datasets; the algorithm does not reuse preexisting images from those datasets to compose a new image. Generated images therefore cannot be considered derivative works of components of the original dataset. |