ComfyUI is an advanced node-based UI for Stable Diffusion, in essence a node graph editor for building generation pipelines. From inpainting, which allows you to make internal edits, to outpainting for extending the canvas, to image-to-image transformations, the platform is designed for flexibility. If you're interested in how Stable Diffusion actually works, ComfyUI will let you experiment to your heart's content (or until it overwhelms you).

Launch ComfyUI by running `python main.py`. If you have another Stable Diffusion UI, you might be able to reuse the dependencies, and DirectML is available for AMD cards on Windows. The ComfyUI Manager can be launched from the sidebar in ComfyUI, and the ComfyUI ControlNet aux plugin supplies the preprocessors for ControlNet so you can generate control images directly from ComfyUI. There are also canvas plugins that let you take advantage of ComfyUI's best features while working on a canvas, combining img2img, inpainting, and outpainting in a single convenient, digital-artist-optimized interface. Support for FreeU has been added as well and is included in v4.1 of the shared workflow; to use FreeU, load the new version.

The basic inpainting workflow is simple: load your image, take it into the mask editor, and create a mask over the region you want regenerated. In ComfyUI you can right-click the Load Image node and choose "Open in MaskEditor" to add or edit the mask. I reused my original prompt most of the time, editing it only where it described the area being redone. Node setup 1 is the classic SD inpaint mode: save the portrait and the image with the hole to your PC, then drag and drop the portrait into ComfyUI.

Note that when inpainting it is better to use checkpoints trained for inpainting; they are generally named with the base model name plus "inpainting". Inpainting a cat or a woman with the v2 inpainting model gives clean results, and it also works with non-inpainting models, just less seamlessly; I find the results interesting for comparison, and hopefully others will too. It is even possible to convert a regular model into an inpainting model with a script. ControlNet offers another route: it's just another ControlNet, this one trained to fill in masked parts of images, and its inpaint_only+lama preprocessor builds on LaMa ("Resolution-robust Large Mask Inpainting with Fourier Convolutions"), which targets the observation that modern inpainting systems, despite significant progress, often struggle with mask selection and hole filling. Some suggest that ControlNet inpainting is much better, though in my personal experience it can do worse and offer less control. An advanced method that may also work these days is a ControlNet with a pose model: it will generate a mostly new image but keep the same pose.

On the latent side, the Load VAE node can be used to load a specific VAE model; VAE models are used for encoding and decoding images to and from latent space. When the regular VAE Encode node fails due to insufficient VRAM, ComfyUI will automatically retry using the tiled implementation. It's also often a good idea to use the Set Latent Noise Mask node instead of the VAE Encode (for Inpainting) node, for reasons explained below.

Masks don't have to be drawn by hand, either. The masks I get from other people are blue PNGs (0, 0, 255); I load them as an image and then convert them into masks using an image-to-mask conversion node.
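If you'd rather prepare such a mask outside ComfyUI, a few lines of Python will do it. This is a minimal sketch, assuming pure blue marks the region to repaint; the file names are placeholders:

```python
import numpy as np
from PIL import Image

rgb = np.array(Image.open("blue_mask.png").convert("RGB"))

# Pure-blue pixels (0, 0, 255) become white (the area to repaint); all else black.
is_blue = (rgb[..., 0] == 0) & (rgb[..., 1] == 0) & (rgb[..., 2] == 255)
mask = np.where(is_blue, 255, 0).astype(np.uint8)

Image.fromarray(mask, mode="L").save("mask.png")
```

The resulting single-channel PNG can then be loaded with a Load Image node and routed into any mask input.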
ComfyUI is not the only option. I'd like to share Fooocus-MRE (MoonRide Edition), my variant of the original Fooocus (developed by lllyasviel), a newer UI for SDXL models. Still, within ComfyUI there are many possibilities. Imagine that ComfyUI is a factory that produces an image: in the case of ComfyUI and Stable Diffusion, you have a few different "machines," or nodes, and when an AI model like Stable Diffusion is paired with an automation engine like ComfyUI, it allows complex pipelines to be assembled once and re-run on demand. While the program appears to be in its early stages of development, it offers an unprecedented level of control with its modular nature. Advanced techniques are supported as well, including LoRAs (regular, LoCon, and LoHa), Hypernetworks, and ControlNet.

On setup: the extracted folder will be called ComfyUI_windows_portable; run the update-v3 script to bring it up to date. Assuming ComfyUI is already working, all you need are two more dependencies. For users with GPUs that have less than 3 GB of VRAM, ComfyUI offers a low-VRAM mode, although in my tests it looks like I need at least 6 GB of VRAM to pass the VAE Encode (for Inpainting) step on a 1920x1080 image.

Inpainting with SDXL in ComfyUI has been a disaster for me so far, and I'm not the only one asking: how does ControlNet 1.1 inpainting work in ComfyUI? I already tried several variations of putting a b/w mask into the image input of the ControlNet node, or encoding it into the latent input, but nothing worked as expected. For now the honest answer is that we need to wait for ControlNet-XL ComfyUI nodes; then a whole new world opens up. In the meantime, a few practical notes. Use global_inpaint_harmonious when you want to set the inpainting denoising strength high. I have found that the dedicated inpainting checkpoint works without problems as a single model, though a couple of others did not; normal models work, but they don't integrate as nicely into the picture. As for samplers: at 20 steps, DPM2 a Karras produced the most interesting image, while at 40 steps I preferred DPM++ 2S a Karras, with no extra noise offset needed. For some setups you will have to download the inpaint model from Hugging Face and put it in the "unet" folder inside your ComfyUI models directory; the UNETLoader node is then used to load that diffusion_pytorch_model file. By default, images will be uploaded to the input folder of ComfyUI.

Faces are the most common inpainting target. Prior to adopting ComfyUI, I generated an image in A1111, auto-detected and masked the face, and inpainted the face only (not the whole image), which improved the face rendering 99% of the time. The same iterative trick works here: take the new image with the improved face, put it back into inpainting with a new mask, and run it again at a low noise level. (If you're wondering how to use an "only masked" option to fix characters' faces the way you could in stable-diffusion-webui, that technique is explained further down.)

Stepping back to fundamentals: a mask is a pixel image that indicates which parts of the input image are missing or should be replaced, and the model (e.g., Stable Diffusion) fills the "hole" according to the text prompt. I decided to do a short tutorial about how I use it, which also became my first venture into creating an infinite zoom effect using ComfyUI. Credits: this was done by referring to nagolinc's img2img script and the diffusers inpaint pipeline. Text prompt: "a teddy bear on a bench".
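Since the credits mention the diffusers inpaint pipeline, here is a minimal sketch of that same teddy-bear example outside ComfyUI. The checkpoint ID is one commonly used SD 1.5 inpainting model, and the file names are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("bench.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a teddy bear on a bench",
    image=init_image,
    mask_image=mask,
    num_inference_steps=20,
    guidance_scale=7.5,
).images[0]
result.save("inpainted.png")
```

The moving parts are the same ones the ComfyUI nodes expose: the image, the mask, the prompt, and the sampling settings.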
Here are the step-by-step instructions for installing ComfyUI. Windows users with Nvidia GPUs: download the portable standalone build from the releases page, extract the downloaded file with 7-Zip, and run ComfyUI. For AMD (Linux only) or Mac, check the beginner's guide to ComfyUI. After installing custom nodes, restart ComfyUI. (In my opinion, InvokeAI is the best tool for a newcomer to learn, then A1111 if you need all the extensions, and then ComfyUI once you want full control.) ComfyUI is also light on memory: on my 12 GB 3060, A1111 can't generate a single SDXL 1024x1024 image without spilling from VRAM into system RAM at some point near the end of generation, even with --medvram set.

What you get is substantial. ComfyUI offers artists all of the available Stable Diffusion generation modes (text-to-image, image-to-image, inpainting, and outpainting) as a single unified workflow. It works fully offline and will never download anything on its own. Other features include SD 1.x and 2.x support, embeddings/textual inversion, area composition, inpainting with both regular and inpainting models, ControlNet and T2I-Adapter, upscale models (ESRGAN and its variants, SwinIR, Swin2SR, etc.), unCLIP models, and more. Inpainting covers classic restoration work, such as dust spots and scratches, as well as generative edits.

For hands and faces, I use nodes from ComfyUI-Impact-Pack to automatically segment the image, detect hands, create masks, and inpaint. In one render I sent the image to inpainting and masked the left hand; after a few runs I got a big improvement, with the shape of the palm at least basically correct. There are pitfalls: some users find that any time the VAE recognizes a face it gets distorted, and in some setups the FaceDetailer distorts the face every time. Lowering the denoise setting is not a cure-all either; with VAE Encode (for Inpainting), it simply shifts the output towards the neutral grey that replaces the masked area, whereas proper inpainting models behave well even at low denoise levels.

A few more practical answers. How do you set starting and ending control steps for ControlNet in ComfyUI? The KSampler (Advanced) node has start/end step inputs. You can also use IP-Adapter in inpainting, but it has not worked well for me. Photoshop works fine for mask-making: just cut the image to transparent where you want to inpaint and load it as a separate image to use as the mask. Set your seed to fixed rather than random; you then change it manually when needed and never get lost between runs. On checkpoints: Stable-Diffusion-Inpainting was initialized with the weights of Stable-Diffusion-v1-2, and otherwise it's no different from the other inpainting models already available on Civitai.

For prompt-driven masking there are dedicated custom nodes for ComfyUI: CLIPSeg and CombineSegMasks. That repository contains two custom nodes that use the CLIPSeg model to generate masks for image inpainting tasks based on text prompts.
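To see what those nodes wrap, here is a sketch of producing a mask from a text prompt with the underlying CLIPSeg model through Hugging Face transformers. The threshold and file names are illustrative choices, not values the custom nodes use:

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("input.png").convert("RGB")
inputs = processor(text=["a hand"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # low-resolution relevance heatmap

heat = torch.sigmoid(logits).squeeze()       # (352, 352), values in [0, 1]
mask = ((heat > 0.4).to(torch.uint8) * 255).numpy()  # threshold chosen by eye
Image.fromarray(mask, mode="L").resize(image.size).save("mask.png")
```

The heatmap is coarse, so scaling it back up to the image size and feathering the edges (a small Gaussian blur, for instance) usually produces cleaner inpainting seams.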
Inpainting (image interpolation) is the process by which lost or deteriorated image data is reconstructed; within the context of digital photography, it can also refer to replacing or removing unwanted areas of an image. It is typically used to selectively enhance details of an image, and to add or replace objects in the base image. In Stable Diffusion terms: upload the image to the inpainting canvas, mask the area you want regenerated, and Stable Diffusion will redraw the masked area based on your prompt.

How the image is encoded matters a great deal. To encode the image you use the VAE Encode (for Inpainting) node, found under latent > inpaint; it encodes pixel-space images into latent-space images using the provided VAE, and its samples input carries the latent images to be masked for inpainting. This node hard-erases the masked region, so it should be used with a denoise of 100%: it's for true inpainting, works best with inpainting models, and will work with all models at full denoise. With this route you cut the mask out of the original image and completely replace it with something else, so noise should be 1.0 (or slightly lower depending on the effect you want); at low denoise it just fills the mask with random unrelated stuff. The alternative is Set Latent Noise Mask, which masks the latent with noise instead of an empty latent, so low-denoise runs can still use the original background image as a clue. Use SetLatentNoiseMask instead of the encode-for-inpainting node whenever you want denoise below 1.0; with SD 1.5 it gives me consistently amazing results (better than trying to turn a regular model into an inpainting model through ControlNet, by the way).

Masks also interact with conditioning. The Conditioning (Set Mask) node can be used to limit a conditioning to a specified mask; together with the Conditioning (Combine) node, this can be used to add more control over the composition of the final image. For outpainting, the Pad Image for Outpainting node takes the amount to pad above (and on each side of) the image, and note that the origin of the coordinate system in ComfyUI is at the top left corner, which matters when positioning masks programmatically. For SDXL, the result should ideally stay in the resolution space of SDXL (1024x1024).

Finally, the "only masked" question from earlier. What Auto1111 does with "only masked" inpainting is inpaint the masked area at the resolution you set (1024x1024, for example) and then downscale the result to stitch it back into the picture.
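The crop-and-stitch idea is simple enough to sketch with Pillow. This is an illustration of the pattern, not A1111's actual code, and `inpaint_fn` is a stand-in for whatever backend repaints the crop:

```python
import numpy as np
from PIL import Image

def inpaint_only_masked(image, mask, inpaint_fn, work_res=1024, pad=32):
    """Crop around the mask, inpaint the crop at work_res, stitch it back."""
    ys, xs = np.nonzero(np.array(mask))        # mask: white where repainting
    x0, y0 = int(max(xs.min() - pad, 0)), int(max(ys.min() - pad, 0))
    x1 = int(min(xs.max() + pad, image.width))
    y1 = int(min(ys.max() + pad, image.height))

    crop = image.crop((x0, y0, x1, y1)).resize((work_res, work_res))
    mask_crop = mask.crop((x0, y0, x1, y1)).resize((work_res, work_res))

    out = inpaint_fn(crop, mask_crop)          # repaint at full working res
    out = out.resize((x1 - x0, y1 - y0))       # downscale back into place

    result = image.copy()
    result.paste(out, (x0, y0), mask.crop((x0, y0, x1, y1)))  # masked paste
    return result
```

Because only the crop passes through the model, the masked area receives the full working resolution, which is why "only masked" face fixes come out so much sharper than inpainting the whole frame at once.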
After generating an image on the txt2img page, click Send to Inpaint to send the image to the Inpaint tab on the img2img page. For inpainting, I adjusted the denoise as needed and reused the model, steps, and sampler that I used in txt2img. Modify the prompt as needed to focus on the target: for a face, I removed "standing in flower fields by the ocean, stunning sunset" and some of the negative prompt tokens that didn't matter; for clothing, change your prompt to describe the dress, and when you generate a new image it will only change the masked parts. If you need perfection, like magazine-cover perfection, you still need to do a couple of inpainting rounds with a proper inpainting model. If you're happy with your inpainting without using any of the ControlNet methods to condition your request, then you don't need to use them; otherwise, a three-sampler arrangement works: try three KSampler (Advanced) nodes in sequence, with the original conditioning going to the outer two and your ControlNet conditioning going to the middle sampler, and then you might be able to add steps (thibaud_xl_openpose also works as the control model). For manual touch-ups in an external editor, choose the Bezier curve selection tool, make a selection over the right eye, copy and paste it to a new layer, and refine from there. One published face workflow handles seams in two steps. Seam Fix Inpainting: use webui inpainting to fix the seam. Upload Seamless Face: upload the inpainting result to Seamless Face, and Queue Prompt again.

Some broader notes. Making a user-friendly pipeline with prompt-free inpainting (like Firefly) in SD can be difficult. Q: Why not use ComfyUI for inpainting? A: ComfyUI currently has issues with inpainting models; see the project's issue tracker for details (answered by ltdrdata). Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to implement image2image in a pipeline that includes multi-ControlNet and passes every generation through something like SD upscale automatically, without running the upscaling as a separate step; ComfyUI also allows you to apply different prompts to different parts of your image, or to render images in multiple passes, which makes such pipelines feasible. In the last few days I've upgraded all my LoRAs for SDXL to a better configuration with smaller files. To keep multi-sampler workflows reproducible, drag the output of the RNG node to each sampler so they all use the same seed, and if results won't reproduce, check whether the seed is set to random on the first sampler.

Workflows are easy to share and automate. Save your workflow; shared ComfyUI workflows are also updated for SDXL 1.0, and to use one you just copy the JSON file to the workflows directory and replace the tags. While running, ComfyUI highlights the node it is currently processing, so you can see which part of the workflow is executing. ComfyUI provides a browser UI for generating images from text prompts and images, but the same server can be driven programmatically; you can, for example, launch a third-party tool and pass the updating node id as a parameter on click.
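Here is a minimal sketch of queueing a workflow against a local ComfyUI server over its HTTP API, assuming the default port and a workflow exported with "Save (API Format)"; the file name is a placeholder:

```python
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))   # includes the prompt_id of the queued job
```

Since the workflow is plain JSON, a script can rewrite any node's inputs (the prompt text, the seed, the image name) before queueing, which is exactly how a third-party tool can target a specific node id.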
Why bother with all this? Most other inpainting/outpainting apps use Stable Diffusion's standard inpainting function, which has trouble filling in blank areas with things that make sense and fit visually with the rest of the image. SDXL improves on that considerably, but these improvements do come at a cost: SDXL 1.0 is larger and more demanding. Another general difference is that when you set 20 steps at 0.6 denoise in A1111, it actually runs a reduced number of steps scaled by the denoise, whereas ComfyUI runs the full count over the denoised range. The denoise controls the amount of noise added to the image, and the base image for inpainting is the currently displayed image.

One well-rounded example workflow has TXT2IMG, IMG2IMG, up to 3x IP-Adapter, 2x Revision, predefined (and editable) styles, optional upscaling, ControlNet Canny, ControlNet Depth, LoRA, a selection of recommended SDXL resolutions, and automatic adjustment of input images to the closest SDXL resolution. If for some reason you cannot install missing nodes with the ComfyUI Manager, here are the nodes used in that workflow: ComfyLiterals, Masquerade Nodes, Efficiency Nodes for ComfyUI, pfaeff-comfyui, and MTB Nodes. Other workflows list requirements similarly, for example WAS Suite (Text List, Text Concatenate), and one quality-of-life pack enhances ComfyUI with autocomplete filenames, dynamic widgets, node management, and auto-updates.

As an alternative to the automatic installation, you can install everything manually or use an existing installation: follow the ComfyUI manual installation instructions for Windows and Linux and install the ComfyUI dependencies. For one custom node pack, a suitable conda environment named hft can be created and activated with `conda env create -f environment.yaml` followed by `conda activate hft`. On Mac, copy the files as above, then activate the virtual environment with `source v/bin/activate` and run the pip3 install step. To update a node pack, open a command line window in its folder under custom_nodes, run `git pull`, and restart ComfyUI.

Know the limits, too. Some nodes expose a crop option controlling whether to center-crop the image to maintain the aspect ratio of the original latent images. And sometimes the model itself is the wall: I found that none of the checkpoints know what an "eye monocle" is, and they also struggle with "cigar", so no amount of inpainting will conjure a concept the model has never learned. (There is even an extension that allows creating ComfyUI nodes that interact directly with some parts of the webui's normal pipeline, if you want the best of both worlds.)

ComfyUI can also be driven entirely through its HTTP API. Once an image has been uploaded it can be selected inside the Load Image node, but a common stumbling block remains: "I don't know how to upload the file via the API."
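The server exposes an image-upload route for exactly this. Below is a minimal sketch using the third-party requests library; the /upload/image endpoint and "image" field match the stock server in recent versions, but verify against your install:

```python
import requests

with open("inpaint_source.png", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8188/upload/image",
        files={"image": ("inpaint_source.png", f, "image/png")},
    )
print(resp.json())  # echoes the stored filename to reference in a Load Image node
```

The returned name can then be written into a Load Image node's input in an API-format workflow before queueing it, closing the loop of upload, inpaint, download.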
Inpainting is very effective in Stable Diffusion, and the workflow in ComfyUI is really simple: I can build a basic chain (LoadVAE, VAEDecode, VAEEncode, PreviewImage) around an input image in a few minutes. It works with all models, including Realistic Vision. One reader reported that trying to use a b/w image to drive inpainting did not work at all and asked whether the "inpainting" version of a model is really so much better than the standard 1.5 model; the answer depends on the checkpoint. I have an SDXL inpainting workflow running with LoRAs (1024x1024 px, two LoRAs stacked), so the approach scales up. For finer ControlNet control, use LatentKeyframe and TimestampKeyframe from ComfyUI-Advanced-ControlNet to apply different weights for each latent index. And if you are looking for an interactive image-production experience using the ComfyUI engine, try ComfyBox; there are also Hugging Face Spaces where you can try things for free in the browser.

Masking has good tooling of its own. Fernicles SDTools V3 ships custom nodes for ComfyUI, so far including four that perform masking functions like blur, shrink, grow, and mask-from-prompt. Growing matters: in the case of features like pupils, where the mask is generated at nearly point level, that option is necessary to create a sufficient mask for inpainting.

I'm trying to create an automatic hands-fix/inpaint flow, and if you want better-quality inpainting I would recommend the Impact Pack's SEGSDetailer node. The detailer can do upscaled inpainting to give you more resolution, though this can easily give the patch more detail than the rest of the image. Its crop_factor parameter controls context: setting crop_factor to 1 considers only the masked area for inpainting, while increasing the crop_factor incorporates context relative to the mask.
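To make crop_factor concrete, here is the bounding-box arithmetic as my own illustration (not the Impact Pack's actual implementation):

```python
import numpy as np

def crop_region(mask: np.ndarray, crop_factor: float = 1.5):
    """Return (x0, y0, x1, y1) around the mask, scaled by crop_factor."""
    ys, xs = np.nonzero(mask)
    cx, cy = (xs.min() + xs.max()) / 2, (ys.min() + ys.max()) / 2
    w = (xs.max() - xs.min() + 1) * crop_factor
    h = (ys.max() - ys.min() + 1) * crop_factor
    x0, y0 = int(max(cx - w / 2, 0)), int(max(cy - h / 2, 0))
    x1 = int(min(cx + w / 2, mask.shape[1]))
    y1 = int(min(cy + h / 2, mask.shape[0]))
    return x0, y0, x1, y1

mask = np.zeros((512, 512), dtype=np.uint8)
mask[200:260, 180:300] = 255            # toy 120x60 mask
print(crop_region(mask, 1.0))           # tight box: the mask only
print(crop_region(mask, 2.0))           # doubled box: mask plus context
```

A larger box gives the sampler surrounding pixels to reason about, which usually improves how the repainted patch blends, at the cost of spending less resolution on the masked feature itself.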
Prompting for inpainting has its own rules: don't use a ton of negative embeddings, focus on a few tokens or single embeddings, and you can still use atmospheric enhancers like "cinematic, dark, moody light".

Coming from Automatic1111, I at first had no idea how to make my usual inpainting workflow function in ComfyUI, so here is the translation. From top to bottom in Auto1111: use an inpainting model with a denoise of 0.35 or so. In ComfyUI, reproduce that with the encode-and-mask setups described earlier; it feels like there's probably an easier way, but this is all I could figure out. (A recent change in ComfyUI conflicted with my implementation of inpainting; this is now fixed, and inpainting should work again.)

The Stable Diffusion model, a brainchild of Stability AI, can also be applied to inpainting in hybrid ways: it lets you edit specific parts of an image by providing a mask and a text prompt, and one effective combination is inpainting with the SD 1.5 inpainting model and then separately processing the result (with different prompts) through both the SDXL base and refiner models. There is an SDXL-Inpainting custom node as well.

The classic node setup from earlier is based on the original modular scheme found in ComfyUI_examples under Inpainting. There are images you can download and just load into ComfyUI (via the menu on the right) that set up all the nodes for you; all the images in that repo contain workflow metadata, so you can load them to get the full workflow, and you can also drag and drop images onto a Load Image node to load them quickly.

One last VRAM saver: the VAE Decode (Tiled) node can be used to decode latent-space images back into pixel-space images, using the provided VAE. It decodes the latent in tiles, allowing it to handle larger latent images than the regular VAE Decode node, and it is the decode-side counterpart of the automatic tiled-encode fallback mentioned at the start.
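Here is a simplified sketch of what tiled decoding does under the hood; real implementations also overlap and blend tile borders to hide seams, and `vae_decode` stands in for the model's actual decoder:

```python
import torch

def decode_tiled(vae_decode, latent: torch.Tensor, tile: int = 64) -> torch.Tensor:
    """latent: (1, 4, H, W); vae_decode maps a latent tile to an 8x-larger image."""
    _, _, H, W = latent.shape
    out = None
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            piece = latent[:, :, y:y + tile, x:x + tile]
            img = vae_decode(piece)                    # (1, 3, h*8, w*8)
            if out is None:
                out = torch.zeros(1, 3, H * 8, W * 8, dtype=img.dtype)
            out[:, :, y * 8:(y + piece.shape[2]) * 8,
                      x * 8:(x + piece.shape[3]) * 8] = img
    return out
```

Peak memory now scales with the tile size rather than the full image, which is why the tiled nodes survive resolutions that make the regular encode and decode nodes run out of VRAM.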
If you're an Automatic1111 user attracted to ComfyUI's node-based approach, the transition can be daunting: so many workflows are published on Civitai and other sites that it's hard to dive in without wasting time on mediocre or redundant ones. A good place to start if you have no idea how any of this works is the ComfyUI Basic Tutorial VN; all the art for it is made with ComfyUI. Some background helps too. ComfyUI was created in January 2023 by Comfyanonymous, who built the tool to learn how Stable Diffusion works; it got attention recently because the developer works for StabilityAI and was able to be the first to get SDXL running, and the interface follows closely how SD works, with code that should be much simpler to understand than other SD UIs. A sensible learning path is simple LoRA workflows first, then multiple LoRAs, with a good exercise being a workflow that compares results with and without a LoRA. For more, check out the ComfyUI Examples page. There is also a Chinese-language summary table of ComfyUI plugins and nodes (by Zho); since Google Colab banned Stable Diffusion on its free tier, the same author maintains a free Kaggle deployment of ComfyUI with 30 hours per week, plus a multilingual SDXL workflow with a detailed write-up (2023-07-25). When comparing ComfyUI and stable-diffusion-webui you can also consider other projects: stable-diffusion-ui (the easiest one-click way to install and use Stable Diffusion on your computer), UnstableFusion for inpainting, and SD-infinity or the auto-sd-krita extension for outpainting. When inpainting with SDXL 1.0 in ComfyUI, several methods are in common use, for example the base model with a Latent Noise Mask, as covered above. Whatever the route, ComfyUI promises to be an invaluable tool in your creative path, regardless of whether you're an experienced professional or an inquisitive newbie.

For fully automatic face repair, there is also adetailer (GitHub: Bing-su/adetailer), which does auto-detecting, masking, and inpainting with a detection model.
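The adetailer idea reduces to "detect, mask, inpaint". Here is a rough sketch using an OpenCV Haar cascade as the detector (adetailer itself uses stronger detection models; this is only an illustration, and file names are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("render.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Ships with opencv-python; a weak but dependency-free face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

mask = np.zeros(gray.shape, dtype=np.uint8)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(mask, (x, y), (x + w, y + h), 255, thickness=-1)

cv2.imwrite("face_mask.png", mask)  # feed this to any inpainting backend above
```

Swap the rectangle for a feathered ellipse and hand face_mask.png to the diffusers pipeline or a ComfyUI workflow from earlier, and you have the whole detect-and-fix loop without any manual masking.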