Advisor: Wiggins, Paul A.
Author: Koch, Daniela
Date: 2025-08-01 (2025)
File: Koch_washington_0250E_28389.pdf
URI: https://hdl.handle.net/1773/53738
Degree: Thesis (Ph.D.)--University of Washington, 2025

Abstract: Quantitative analysis of biological images typically involves image segmentation: the identification of the pixels belonging to individual cells or structures present in each image. Given a sufficiently large and diverse set of training images, deep-learning-based models can reliably automate segmentation with adequate precision for further analysis. However, the significant time and expertise required to annotate training images often make such models impractical when studying new systems or phenomena. To lower the burden of training-data curation, we present an approach for generating and training with synthetic images. We introduce OmniStyle, a custom conditional generative adversarial network (cGAN) for generating annotated synthetic images that artificially increase the size and/or diversity of training datasets. We demonstrate that training with images generated by OmniStyle improves segmentation performance on diverse datasets without additional hand annotations. To evaluate the utility of our method, we train identical segmentation models on datasets of equal size, one consisting exclusively of real images and the other of a mix of real and synthetic images, and compare their performance.

Format: application/pdf
Language: en-US
Rights: CC BY
Keywords: Computer vision; Deep learning; Generative adversarial networks; Image synthesis; Machine learning; Segmentation
Subjects: Physics; Computer science; Microbiology
Title: Bacterial Deepfakes: Generating Synthetic Microscopy Data to Improve Adaptability of Deep Learning-Based Segmentation Models
Type: Thesis