Drawing masks in ComfyUI (collected Reddit tips).

I believe it does mostly the same things as OP's node.

Use a "Mask from Color" node and set it to your first frame color.

[Load image] -> [resize to match image being generated] -> [image-to-mask] -> [gaussian blur mask] to soften edges. Then use [invert mask] to make a mask that is the exact opposite, and [solid mask] to make a pure white mask. (A rough Python sketch of this chain appears further below.)

This workflow generates an image with SD1.5, one mask after the other. At least that's what I think.

I think the latter, combined with Area Composition and ControlNet, will do what you want.

This workflow, combined with Photoshop, is very useful for drawing specific details (tattoos, a special haircut, clothes patterns, …). Load the upscaled image into the workflow, use ComfyShop to draw a mask, and inpaint.

Finally, the story text image output from module 9 was pasted on the right side of the image.

Release: AP Workflow 7.0 for ComfyUI - Now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (with inpainting/outpainting masks), a new Watermarker, support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masks.

Imagine you have a 1000px image with a circular mask that's about 300px.

The Krita plugin is great, but the nodal soup part isn't there, so I can't change some things.

Mask detailer allows you to simply draw where you want it to apply the detailing.

Outline Mask: unfortunately, it doesn't work well, because apparently you can't just inpaint a mask; by default, you also end up painting the area around it, so the subject still loses detail. IPAdapter: if you have to regenerate the subject or the background from scratch, it invariably loses too much likeness. Still experimenting with it, though.

Regional prompting makes that rather simple, all in one image, with multiple hand-drawn masks all in app (my most complicated involved 8 hand-drawn masks). Sure, I can paint a mask with an outside app, but why would I bother when it's built into an app in Automatic1111?

I suppose that does work for quick and dirty masks.

Overall, I've had great success using this node to do a simple inpainting workflow. But when the Krita plugin happened, I switched to that.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion. TLDR, workflow: link.

It needs a better quick start to get people rolling.

💡 Tip: Most of the image nodes integrate a mask editor.

A way to draw inside ComfyUI? Are there any nodes for sketching/drawing directly in ComfyUI? Of course you can always take things into an external program like Photoshop, but I want to try drawing simple shapes for ControlNet or painting simple edits before putting things into inpaint.

For the specific workflow, please download the workflow file attached to this article and run it.

Any way to paint a mask inside Comfy, or no choice but to use an external image editor? It's not released yet, but I just finished 80% of the features.

It animates 16 frames and uses the looping context options to make a video that loops.
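For readers who want to see what that mask chain amounts to in plain code, here is a minimal Python/Pillow sketch. It only illustrates the pixel operations the nodes perform, not ComfyUI's actual implementation; the input file name, target size, and blur radius are placeholder assumptions.

```python
# Rough equivalent of: [Load image] -> [resize] -> [image-to-mask]
# -> [gaussian blur mask] -> [invert mask] / [solid mask]
from PIL import Image, ImageFilter, ImageOps

target_size = (1024, 1024)                       # assumed size of the image being generated
src = Image.open("mask_source.png")              # hypothetical input file

mask = src.convert("L").resize(target_size)      # image-to-mask: single grayscale channel
mask = mask.filter(ImageFilter.GaussianBlur(8))  # gaussian blur mask: soften the edges
inverted = ImageOps.invert(mask)                 # invert mask: the exact opposite
solid_white = Image.new("L", target_size, 255)   # solid mask: pure white

mask.save("mask.png")
inverted.save("mask_inverted.png")
solid_white.save("mask_solid.png")
```

A blur radius of around 8 px is a reasonable starting point for a soft edge at this resolution; increase it for a wider falloff.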
Invoke AI has a super comfortable and easy-to-use regional prompter that's based on simply drawing; I was wondering if there's one like that in ComfyUI, even if it's an external node?

Suuuuup :D So, with Set Latent Noise Mask it is trying to turn that blue/white sky into a spaceship, and this may not be enough for it; a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, then inpainting models are not as good, as they want to use what exists to make an image more than a normal model does.

I learned about MeshGraphormer from this YouTube video by Scott Detweiler, but felt like simple inpainting does not do the trick for me, especially with SDXL.

Does anyone else notice that you cannot mask the very bottom of the image with the right-click masking option? And I'm not talking about the mouse not being able to 'mask' it there.

So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings).

Does anyone know why? I would have guessed that only the area inside of the mask would be modified.

(And if you wanted 4 masks in one image, draw over a transparent background in a .png file, and then R, G, B and Alpha can all mask different areas; a short sketch of splitting such a file back into masks appears further below.)

If you're using the built-in mask editor, just use a small brush and put dots outside the area you already masked.

In ComfyUI, the easiest way to apply a mask for inpainting is: use the "Load Checkpoint" node to load a model, …

The mask editor sucks.

I use the "Load Image" node and "Open in MaskEditor" to draw my masks. It doesn't replace the image (although that might seem to be what it's doing visually); it's saving a separate channel with that mask, so you get two outputs (image and mask) from that one node.

Would you please show how I can do this?

A transparent PNG in the original size with only the newly inpainted part will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software.

What else is out there for drawing/painting a latent to be fed into ComfyUI, other than the Photoshop one(s)?

I want to be able to use Canny and Ultimate SD Upscale while inpainting, AND I want to be able to increase batch size.

Uh, your seed is set to random on the first sampler.

Yet, there is no mask node as a common denominator node. So, has someone…

As I can't draw the second mask on the result of the first character image (the goal is to do it in one workflow), I draw it on the original picture and send this mask only into the new VAE Encode (for Inpainting).

I kinda fake it by loading any image, then drawing a mask on it, then converting the mask to an image, and then sending that image to ControlNet.

It's a more feature-rich and well-maintained alternative for dealing with …

Reproducing the behavior of the most popular SD implementation (and then surpassing it) would be a very compelling goal, I would think.

You can also select non-face bbox models, and FaceDetailer will detail hands etc.

ComfyUI is not supposed to reproduce A1111 behaviour.

I found the documentation for ComfyUI to be quite poor when I was learning it.

Below is the effect image generated by the AI after I imported a simple bedroom line drawing.

Use the "Load Image (as Mask)" node to load the grayscale mask image, specifying "channel" as "red".
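Picking up the parenthetical tip about drawing four masks into one transparent .png: splitting that file back into per-channel masks takes only a few lines of Pillow. This is a hedged sketch rather than a ComfyUI node; the file name is a placeholder, and each saved channel corresponds to what a "Load Image (as Mask)" node would extract when set to that channel.

```python
# Split one RGBA PNG (regions drawn in R, G, B and alpha) into four grayscale masks.
from PIL import Image

rgba = Image.open("four_masks.png").convert("RGBA")  # hypothetical multi-mask file
r, g, b, a = rgba.split()                            # one single-channel image per band

for name, channel in zip(("red", "green", "blue", "alpha"), (r, g, b, a)):
    channel.save(f"mask_{name}.png")
```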
I am working on a piece which requires me to have a mask which reveals a texture.

Use the mask tool to draw on specific areas, then use it as input to subsequent nodes for redrawing.

Try drawing them over a black background, though, not a white background. I make them 512x512, but the size isn't important.

Is this more or less accurate? While obviously it seems like ComfyUI has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine with me.

I want to create a mask which follows the contours of the subject (a lady in my case).

Is there a "drawing" node for ComfyUI that would be a bit more user friendly? Like the ability to zoom in on the parts you are drawing on, colors, etc.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask), use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and …

TLDR: THE LAB EVOLVED is an intuitive, ALL-IN-ONE workflow.

You can do it with Masquerade nodes.

If you spent more than a few days in ComfyUI, you will recognize that there is nothing here that cannot be done with the already available nodes.

Release: AP Workflow 8.0 for ComfyUI - Now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model.

I'm not sure exactly what it stores, but I always draw a mask, send it to MaskToSEGS where I can set the crop factor to determine the region used for context, then to SEGS Detailer. "SEGS" is the format that Impact Pack uses to bundle masks with additional information.

But one thing I've noticed is that the image outside of the mask isn't identical to the input.

There are many detailer nodes, not just FaceDetailer. If you do a search for detailer, you will find both SEGS Detailer and Mask Detailer.

This will take our sketch image and crop it down to just the drawing in the first box. Feed this over to a "Bounded Image Crop with Mask" node, using our sketch image as the source with zero padding (a rough numpy sketch of this crop appears further below).

For some reason this isn't possible.

Everyone always asks about inpainting at full resolution; ComfyUI by default inpaints at the same resolution as the base image, as it does full-frame generation using masks.

Step Two: Building the ComfyUI Partial Redrawing Workflow.

But basically, if you are doing manual inpainting, make sure that the sampler producing your inpainting image is set to fixed; that way it does inpainting on the same image you use for masking.

For these workflows we mostly use DreamShaper Inpainting.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.
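As a hedged illustration of the mask-bounded crop step described above (not the actual "Bounded Image Crop with Mask" code), here is a short numpy/Pillow sketch. The file names are placeholders, the padding of zero mirrors the "zero padding" setting mentioned, and the mask is assumed to be non-empty.

```python
# Crop the sketch image down to the bounding box of its mask.
import numpy as np
from PIL import Image

sketch = Image.open("sketch.png").convert("RGB")
mask = np.array(Image.open("sketch_mask.png").convert("L")) > 0  # boolean mask

ys, xs = np.nonzero(mask)  # rows/columns that contain mask pixels (assumes mask is not empty)
pad = 0                    # "zero padding"
top = max(int(ys.min()) - pad, 0)
bottom = min(int(ys.max()) + pad, mask.shape[0] - 1)
left = max(int(xs.min()) - pad, 0)
right = min(int(xs.max()) + pad, mask.shape[1] - 1)

cropped = sketch.crop((left, top, right + 1, bottom + 1))  # PIL box is (left, upper, right, lower)
cropped.save("sketch_cropped.png")
```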
An alternative is Impact Pack's detailer node, which can do upscaled inpainting to give you more resolution, but this can easily end up giving you more detail than the rest of …

As for the rest, if memory serves the mask segm custom_node has a couple of extra install steps which are easy to follow, and if you load the workflow and see redded-out nodes, just go to the ComfyUI node manager in the side float menu, click install missing nodes, then restart and you should be good to go.

And you can't use soft brushes.

Use the "Load Image" node to load a source image to modify.

You can choose your preferred drawing software, like Procreate on an iPad, and then import the doodled image into ComfyUI.

And I never know what ControlNet model to use.

The flow is in shambles right now, so I'll just share this screengrab.

I need to combine 4-5 masks into 1 big mask for inpainting (a short sketch of merging masks this way appears further below). So far (Bitwise mask + mask) has only 2 masks, and I use auto-detect, so the masks can run from 5 to 10.

Inpaint is pretty buggy when drawing masks in A1111.

Edit: And rembg fails on closed shapes, so it's not ideal.

The workflow that was replaced: when Canvas_Tab came out, it was awesome.

It's not that slow, but I was wondering if there was a more direct "latent with 'fog' background -> latent mask" node somewhere.

How can I draw regional prompting like InvokeAI's regional prompting (control layers), which allows drawing the regional prompt rather than typing numbers? Title says it all.

I think it's hard to tell what you think is wrong.

…75s/it with the 14 frame model.

You can see how easily and effectively the size/placement of the subject can be controlled simply by drawing a new mask. In this example, it will be 255 0 0.

Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

In fact, from inpainting to face replacement, the usage of masks is prevalent in SD.

Import the image at the Load Image node.

These custom nodes (comfyui_facetools) provide rotation-aware face extraction, paste back, and various face-related masking options.

It includes literally everything possible with AI image generation. In fact, there's a lot of inpainting stuff you can do with ComfyUI that you can't do with Automatic1111.

Hello, ComfyUI community! I'm seeking advice on improving the background removal process in images. Currently, there are many extensions (custom nodes) available for background removal in ComfyUI, such as Easy-use, mixlab, WAS-node-suite, Inspyrenet-Rembg, and others.

In addition to whole-image inpainting and mask-only inpainting, I also have workflows that …

I was wondering if there is any way to create a mask in depth in ComfyUI.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM.

Basically, though, you'd be using a mask: you'd right-click on the Load Image node and draw the mask, then there is a node to snip it and stitch it back in … pretty sure the node was something like "stitch".

Right-click on any image and select Open in Mask Editor.

To blend the image and scroll naturally, I created a Border Mask on top.

The first issue is the biggest for me, though.

Step One: Image Loading and Mask Drawing.

It depends on how you made the mask in the first place.
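On combining several drawn masks into one big inpainting mask: chaining bitwise mask-plus-mask operations is effectively a per-pixel union, which is easy to sketch outside ComfyUI. The file names and the number of masks below are placeholders, and all masks are assumed to share the same resolution.

```python
# Union of several grayscale masks via per-pixel maximum.
import numpy as np
from PIL import Image

mask_files = ["mask_1.png", "mask_2.png", "mask_3.png", "mask_4.png", "mask_5.png"]

combined = None
for path in mask_files:
    m = np.array(Image.open(path).convert("L"))
    combined = m if combined is None else np.maximum(combined, m)  # keep any masked pixel

Image.fromarray(combined).save("combined_mask.png")
```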
This node pack was created as a dependency-free library before the ComfyUI Manager made installing dependencies easy for end-users. Unless you specifically need a library without dependencies, I recommend using Impact Pack instead.

Create a black and white image that will be the mask.

Edit 2: Actually, now I understand what it's doing.

This will set our red frame as the mask.

The method is very simple; you still need to use the ControlNet model, but now you will import your hand-drawn draft.

If something is off, I can redraw the masks as needed, one by one or only one.

After completing all the integrations, I output via AnythingAnywhere.

I have this working; however, to mask the upper layers after the initial sampling I VAE-decode them and use rembg, then convert that to a latent mask.

Alternatively, you can create an alpha mask in any photo editing software.

You can paint all the way down or along the sides. They don't have to literally be single pixels, just small.

That way, if you take just the red channel from the mask, it'll give you just the red man, and not the background.

Wanted to share my approach to generate multiple hand-fix options and then choose the best.

What is the rationale behind the drawing of the mask? I don't want to break my drawing/painting workflow by editing CSV files and calculating rectangle areas.

Current Situation:

Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to have the same resolution in Photoshop as in ComfyUI.

Txt-to-img, img-to-img, Inpainting, Outpainting, Image Upscale, Latent Upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even Live painting!

I don't know if there is a node for it (yet?) in ComfyUI, but I imagine that under the hood it would take each colored region and make a mask of each color, then use attention coupling on each mask with the associated regional prompt (a rough sketch of the color-to-mask step appears further below).

Even if you set the size of the masking circle to max and go over it close enough so that it appears to be fully masked, if you actually save it to the node and …

Yeah, there are tools that do this. I can't check them right now, but I can later if you remind me.

Turns out drawing "^"-shaped masks seems to work a bit better than rectangles (especially for smaller masks) because it implies the leg positioning.

For example, the Adetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks.

Here I add one of my PNGs so you can see the whole workflow:

Here I come up against two problems:

Look into Area Composition (comes with ComfyUI by default), GLIGEN (an alternative area composition), and IPAdapter (custom node on GitHub, available for manual or ComfyUI Manager installation).

One thing about human faces is that they are all unique.
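To make the colored-regions idea above a little more concrete, here is a hedged sketch of just the color-to-mask step (the attention coupling itself is out of scope). The file name, colors, and prompts are invented examples, not part of any existing node.

```python
# Turn a flat-colored region image into one binary mask per color,
# each of which could be paired with its own regional prompt.
import numpy as np
from PIL import Image

regions = np.array(Image.open("regions.png").convert("RGB"))  # hypothetical region map

color_to_prompt = {
    (255, 0, 0): "a knight in red armor",   # example region/prompt pairs
    (0, 255, 0): "a lush green forest",
}

for color, prompt in color_to_prompt.items():
    match = np.all(regions == np.array(color, dtype=np.uint8), axis=-1)
    mask = Image.fromarray(match.astype(np.uint8) * 255)      # white where the color matches
    filename = "mask_{}_{}_{}.png".format(*color)
    mask.save(filename)
    print(f"{filename} -> {prompt!r}")
```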