Stable Diffusion (SD) and other AI diffusion models can generate images in seconds. These models, however, are trained on millions of images, not all of which are collected with the artists’ consent. This is problematic because models sometimes produce outputs similar to their training data [10]. To investigate how closely SD can mimic an art style, we fine-tune Stable Diffusion Version 1.5 (SDv1.5) with DreamBooth to generate images in the style of the webcomic Goddess of Chaos. Goddess of Chaos, which has never before been used to train such models, has two main styles: realistic (Style 1, S1) and abstract (Style 2, S2). We also design two prompts: one asking for a goddess (Prompt I, PI) and another adding “riding a dragon” (Prompt II, PII). We train three models on 8, 16, and 24 S1 images, respectively, for PI and PII. We also train two models: Model S on 20 S1 images, and Model M on 10 S1 and 10 S2 images. We explore the following questions: 1) How does giving the SD model more training images affect its ability to mimic Style 1? 2) How does the model incorporate elements not shown in the fine-tuning training set, such as “riding a dragon,” in its output? 3) Would the model perform better or worse with training data in an abstract style (like Style 2)? 4) How effectively can SDv1.5 merge Styles 1 and 2? 5) Can viewers distinguish between the artist’s art and that of SDv1.5?
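As a minimal sketch of how a DreamBooth-tuned SDv1.5 model can be sampled with the two prompts, the snippet below uses the Hugging Face diffusers library; the checkpoint path, the “sks” identifier token, and the exact prompt wording are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch: sample a fine-tuned SDv1.5 DreamBooth checkpoint with PI and PII.
# Assumptions: a checkpoint saved at "./goddess-of-chaos-s1" and a CUDA device.
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint (path is a placeholder).
pipe = StableDiffusionPipeline.from_pretrained(
    "./goddess-of-chaos-s1", torch_dtype=torch.float16
).to("cuda")

# "sks" is the conventional DreamBooth rare-token identifier; the phrasing is hypothetical.
prompts = {
    "PI": "a goddess in sks style",
    "PII": "a goddess riding a dragon in sks style",
}

for name, prompt in prompts.items():
    # Standard SDv1.5 sampling settings; one image per prompt.
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"output_{name}.png")
```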