With the rapid development of large-scale and complex generative AI models, there is a demand for new tools to explain the inner workings and behavior of such systems. While various visual explanation techniques for image-based AI systems exist in the current literature, the intersection of these explanations and text-to-image generative AI remains relatively unexplored. In this paper, we explore the extension of optimization-based feature visualization to text-to-image generative AI models. We propose a general methodology for producing visualizations by learning the text embeddings that maximally activate intermediate components of an image classifier network, and we implement this procedure with the Stable Diffusion image generator and the AlexNet and ResNet classifier networks. We evaluate how our visualizations affect human understanding of neuron activations and compare them to feature visualizations produced by the Lucent software library. We find that our method produces text embeddings whose generated images significantly increase a chosen neuron's activation compared to images generated from random text embeddings. We also find that while our feature visualizations are, on average, less preferred than Lucent's for explaining neuron activation behavior, they provide simpler, easier-to-comprehend explanations in special cases.
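As an illustrative sketch only (not the paper's implementation), the loop below shows the general idea of optimizing a text embedding by gradient ascent so that the generated image maximizes one intermediate classifier activation. A toy differentiable decoder stands in for the Stable Diffusion generator, and the AlexNet layer index, channel index, and embedding shape are assumptions chosen for illustration.

```python
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen image classifier whose intermediate activation we want to maximize.
classifier = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).to(device).eval()
for p in classifier.parameters():
    p.requires_grad_(False)

# Record the activation of one intermediate feature map via a forward hook.
activation = {}
def hook(_module, _inputs, output):
    activation["value"] = output
classifier.features[8].register_forward_hook(hook)  # illustrative layer choice

# Stand-in for the text-to-image generator: a toy linear decoder mapping the
# embedding to pixel space. In the paper's setting this role is played by a
# Stable Diffusion pass conditioned on the learned text embedding.
decoder = torch.nn.Linear(768, 3 * 224 * 224).to(device)
def generate_image(text_embedding):
    flat = decoder(text_embedding.mean(dim=1))        # (1, 3*224*224)
    return torch.sigmoid(flat).view(1, 3, 224, 224)   # image in [0, 1]

# Learnable text embedding, initialized randomly (shape is generator-specific;
# input preprocessing/normalization is omitted for brevity).
text_embedding = torch.randn(1, 77, 768, device=device, requires_grad=True)
optimizer = torch.optim.Adam([text_embedding], lr=0.05)

target_channel = 42  # illustrative neuron / channel index
for step in range(200):
    optimizer.zero_grad()
    image = generate_image(text_embedding)
    classifier(image)
    # Maximize the mean activation of the chosen channel (gradient ascent,
    # expressed as minimizing the negative activation).
    loss = -activation["value"][0, target_channel].mean()
    loss.backward()
    optimizer.step()
```

In practice, the decoder stand-in would be replaced by the actual text-conditioned image generator, with gradients flowing from the classifier activation back through image generation to the text embedding.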