Generative AI, especially for text-to-image tasks, is increasingly important in digital advertising. Models such as DALL-E 3 and Midjourney are already used for ad creation and ideation. These models rely on image inpainting, a technique for editing specific regions of an image using an input mask to define the region of interest. While they inpaint and generate backgrounds effectively, they often struggle with the transition between the product and the background, hurting photorealism. Furthermore, there is no unified framework for adapting these models to industry-specific needs and product customization. We propose a specialized solution for the advertising industry: a three-step process for creative ad automation that preserves photorealism: (a) Control Image Generation: an enhanced image-preprocessing pipeline using Canny edge detection to ensure that only product edges are included in the mask, reducing unwanted extensions and distortions around the product; (b) Parameter Setting: based on the control image from step (a), we recommend settings for the Stable Diffusion ControlNet pipeline that produce high-quality inpainted images tailored to advertising requirements; (c) Customization Module: users can upload banner templates, select fonts, and add text to the inpainted image to create a complete advertisement. This approach simplifies the creative process, improves visual quality, and reduces costs, making the workflow more efficient for designers.