Inpainting With Stable Diffusion: What it Is and How to Get Great Results
The use of artificial intelligence (AI) tools to generate images and other artwork has proved remarkably effective. AI-generated images are fast becoming the norm as professional artists and amateurs alike use emerging AI art generators to advance their work and creativity.
One of the most popular AI art generation tools today is Stable Diffusion. This diffusion model helps artists generate photorealistic images that can rival paintings done by hand. But although Stable Diffusion can generate high-quality images, it doesn't always give you the exact results you want.
For instance, it may not render hair or fingers in exactly the style you desire. This means you'll often need to modify and fine-tune your images, just as you would with photos taken with a camera. Thankfully, there are numerous image modification approaches you can take to improve the quality of your images.
For instance, you can use DreamBooth with Stable Diffusion to fine-tune the model so that it generates better images. Stable Diffusion also offers an inpainting function that lets you modify your images as part of the generation process. So, if you want to use Stable Diffusion effectively, you'll want to understand what inpainting is and how to use it to get the best results.
What Is Inpainting in Stable Diffusion?
In Stable Diffusion, inpainting is an indispensable way of fixing minor defects in your outputs (AI images). No matter how advanced your AI image generator is, it won't always give you perfect images. Therefore, you'll need the inpainting function to fix imperfections and fill in the missing details of your images.
The main aim of the inpainting function, when generating images with Stable Diffusion, is to hide any traces of image restoration. Many artists use this method to remove undesired elements from their images and to restore damaged parts of historical photos. Although Stable Diffusion's inpainting function is fairly new, it is already yielding promising results.
This technique can also be used on images generated with the DeepFloyd AI art generator and other deep-learning models. With Stable Diffusion, the inpainting process is quite straightforward. Click the Stable Diffusion 'Inpainting' button and choose the 'Upload Image' option. Once the image is uploaded, erase the parts of it you want to replace or modify.
Then, key in the text prompts you want to use to fix the image in the prompt bar provided. Click the 'Run' option and wait for the modifications to be applied. Repeat this process with modified text prompts until you get the image you need.
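Under the hood, the 'erase' step in this workflow is typically represented as a binary mask image that accompanies your prompt and original picture. The snippet below is only an illustrative sketch, not part of the Stable Diffusion interface described above; it assumes the common convention that white (255) pixels mark the region to repaint and black (0) pixels are kept:

```python
def make_inpaint_mask(width, height, box):
    """Build a binary mask as a 2D list of pixel values.
    255 marks pixels to repaint (the 'erased' area); 0 marks
    pixels to keep -- the convention most Stable Diffusion
    inpainting pipelines expect for their mask input."""
    x0, y0, x1, y1 = box  # rectangle to erase, in pixel coordinates
    return [
        [255 if (x0 <= x < x1 and y0 <= y < y1) else 0 for x in range(width)]
        for y in range(height)
    ]

# Erase a 100x100 region of a 512x512 canvas.
mask = make_inpaint_mask(512, 512, (100, 100, 200, 200))
print(mask[150][150])  # inside the erased rectangle -> 255
print(mask[10][10])    # untouched area -> 0
```

In practice, the brush tool in the editor paints exactly this kind of mask for you; a programmatic pipeline would instead receive the mask as a grayscale image alongside the original picture and the text prompt.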
What Does Stable Diffusion Inpainting Do?
Previously, artists used inpainting to restore and reconstruct old and damaged photos by removing visible cracks, dust spots, scratches, and other blemishes. But as AI image generation technology advances, Stable Diffusion inpainting has become useful in many other ways, and artists are using it to achieve far more.
For example, the inpainting function allows you not only to restore missing elements of your images but also to render new elements in different parts of an image. With this image modification capability, you're limited only by your imagination. The feature offers a wide variety of erasing brushes to help you remove unwanted parts of your images.
These brushes come in different shapes and sizes to meet the needs of every artist. Simply select your preferred brush, adjust its size, and drag it over the areas of your image you want to erase. It works much like an ordinary graphics editor. However, make sure the element you want to remove from your AI image is completely erased.
Leaving even a few pixels of an unwanted component will mislead Stable Diffusion into generating more unwanted artifacts. Once you've erased everything you want to replace and keyed in the right prompt to introduce the new elements, the next important step is to render those new elements.
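One simple safeguard against stray leftover pixels is to grow (dilate) the erased region by a few pixels in every direction before running the model, so the repaint region fully swallows the unwanted element's edges. The helper below is a hypothetical illustration, operating on a binary mask stored as a 2D list of 0/255 values:

```python
def dilate_mask(mask, margin=2):
    """Expand the white (255, repaint) region of a binary mask by
    `margin` pixels in every direction, so no stray pixels of the
    unwanted element survive just outside the erased area."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:  # copy each white pixel plus a margin around it
                for ny in range(max(0, y - margin), min(h, y + margin + 1)):
                    for nx in range(max(0, x - margin), min(w, x + margin + 1)):
                        out[ny][nx] = 255
    return out

# A 7x7 mask with a single marked pixel grows into a 5x5 white block.
mask = [[0] * 7 for _ in range(7)]
mask[3][3] = 255
grown = dilate_mask(mask)
print(sum(v == 255 for row in grown for v in row))  # -> 25
```

Many inpainting interfaces expose a similar idea as a 'mask blur' or mask padding setting; the effect is the same: the region handed to the model extends slightly beyond the object you erased.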
This involves moving the 'Generation Frame' to cover the erased areas. While doing this, make sure the 'Inpaint/Outpaint' option is selected. In the prompt bar, describe the changes you want to make and click the 'Generate' button. Stable Diffusion's AI Editor will then offer four images to choose from.
At this point, you can pick the image that contains all the elements you need and keep editing it for better quality, or cancel the whole modification process and start afresh. Lastly, make sure the rendering process captures everything you want in your image.