Abstract
AgTech is a burgeoning domain that holds significant potential for Artificial Intelligence and Machine Learning. Currently available data in AgTech is scarce, incomplete, and inadequate, and training machine learning models on such data can lead to subpar performance in real-world environments, which involve changing weather conditions, different soil types, growing crops, shadows, varying sun angles, light intensities, and more. In this paper, we showcase augmentation and image synthesis techniques that align the available data with actual deployment environments and consequently help the model generalize better to real-world scenarios. We achieve this with paired image-to-image translation GANs (Generative Adversarial Networks) such as pix2pix, synthesizing new images by either reusing existing masks or drawing new masks from scratch. We then simulate diverse projections, including perspective transformations, to account for varying camera angles and viewing directions. To further augment our data with varied weather and lighting conditions, we use Contrastive Unpaired Translation (CUT) and state-of-the-art low-light enhancement models such as Zero-DCE++. We also augment our data with different soil types and apply SRGAN (Super-Resolution GAN) to enhance the realism of the synthesized images. Additionally, we leverage the Stable Diffusion model to augment crop and weed classes found in the AgTech domain. We benchmark the efficacy of these efforts using both qualitative and quantitative evaluation metrics.