ComfyUI upscale models (Reddit)


You can't use that model for generations/KSampler; it's still only useful for swapping.

This result is the same as with the newest Topaz. Here is an example: you can load this image in ComfyUI to get the workflow.

So in those other UIs I can use my favorite upscaler (like NMKD's 4x Superscale models), but I'm not forced to have them only multiply by 4x.

The restore functionality, which adds detail, doesn't work well with lightning/turbo models.

Tried the llite custom nodes with lllite models and was impressed, as well as Juggernaut XL and other XL models. You can also do latent upscales.

Images are too blurry and lack detail; it's like upscaling any regular image with traditional methods.

Search for "upscale" and click Install for the models you want. Put the models here: ComfyUI\models\upscale_models. 1x Refiner Model - you can use the 1x models here for refining the video first.

In ComfyUI, we can break their approach into components and make adjustments at each part to find workflows that get rid of artifacts. Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering the image with a neural network to get a sharper, clearer result. Upscaling: increasing the resolution and sharpness at the same time.

May 5, 2024 · Hello, this is Hakanadori. Last time I covered clarity upscaling with 'clarity-upscaler' for A1111 and Forge; this time it's the ComfyUI version. 'clarity-upscaler' isn't a single extension: it works by combining various features such as ControlNet and LoRA.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD's, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)) and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch": put in the image numbers you want to upscale and rerun the workflow.

I would like to know, or get some advice on, how to do it properly to squeeze the maximum quality out of the model. I love to go with an SDXL model for the initial image and a good 1.5 model combined with ControlNet Tile and the Foolhardy upscale model. All the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models.

Will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler. Thanks.

The downside is that it takes a very long time. This is what A1111 also does under the hood; you just have to do it explicitly in ComfyUI. Remember to add your models, VAE, LoRAs, etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation instructions.

If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials. Edit: you could try the workflow to see it for yourself.
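The "detailing/refining" idea above - re-rendering at the same resolution with a low denoise - can be illustrated outside ComfyUI as well. Below is a minimal, hypothetical sketch using the Hugging Face diffusers img2img pipeline, where strength plays the role of denoise; the checkpoint id, filenames, and strength value are placeholder assumptions, not settings from any workflow quoted here.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any SD 1.5-style checkpoint works; this repo id is only an example.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("render.png").convert("RGB")

# Low strength = low denoise: the image is re-rendered at the same
# resolution, gaining sharpness without changing the composition.
refined = pipe(
    prompt="same scene, highly detailed, sharp focus",
    image=image,
    strength=0.25,
    num_inference_steps=30,
).images[0]
refined.save("refined.png")
```

Raising strength toward 0.5 or beyond makes the re-render increasingly creative, which is the same trade-off the comments below describe for latent upscales.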
Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model). Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

After generating my images I usually do hires fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.

Solution: click the node that calls the upscale model and pick one. These values can be changed by changing the "Downsample" value, which has its own documentation in the workflow itself on values for sizes.

Mar 22, 2024 · You have two different ways you can perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website. Would you mind providing even the briefest explanation of these? I feel like there is so much that is improving and new functionality being added to SD, but when new tools become available, the explanation of what they do is nonexistent.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

Any guide on creating comic books with SD? I'm interested in developing a workflow that maintains character, scene, and style consistency…

An SD 1.5 image, upscaled to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

What can I do to fix these issues? If it helps, I'm on Python 3.10.6 and am running an RTX 3090 with the Sytan workflow.

I have played around with it, but all the low-step fast models require very low CFG as well, so it's difficult to make them follow prompts strongly, especially when you want to go against the model's natural bias. You could also try a standard checkpoint with, say, 13 and 30. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely.

I created a workflow with Comfy for upscaling images. It has more settings to deal with than Ultimate Upscale, and it's very important to follow all of the recommended settings in the wiki. This way it replicates the SD Upscale / Ultimate SD Upscale scripts from A1111.

I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

This is the 'latent chooser' node - it works but is slightly unreliable. This is why I want to add ComfyUI support for this technique.

Using the same seed is probably not necessary and can cause bad artifacting through the "burn-in" problem when you stack same-seed samplers.

The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model.
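Of the two native "hires fix" routes just mentioned, the latent-upscale one is easy to picture numerically. Here is a small sketch - my own illustration, not code from any quoted workflow - of what resampling an SD latent does, assuming the usual 4-channel latent at 1/8 of the pixel resolution:

```python
import torch
import torch.nn.functional as F

# A 512x512 image corresponds to a 1x4x64x64 latent in SD (1/8 resolution).
latent = torch.randn(1, 4, 64, 64)

# "Latent upscale" simply resamples this tensor; the stretched latent decodes
# to a soft, noisy image until a second KSampler pass re-diffuses it, which
# is why latent hires fixes need a fairly high denoise afterwards.
bigger = F.interpolate(latent, scale_factor=1.5, mode="bicubic")
print(bigger.shape)  # torch.Size([1, 4, 96, 96]) -> decodes to ~768x768
```

The upscaling-model route instead works on decoded pixels, which is why it stays more faithful to the original image at the cost of inventing less detail.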
I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension. I haven't been able to replicate this in Comfy.

Indeed SDXL is better, but it's not yet mature: models for it are only just appearing, and the same goes for LoRAs.

Instead, I use a Tiled KSampler with a denoise around 0.15. I want to upscale my image with a model and then select its final size. It uses CN Tile with Ultimate SD Upscale.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

However, I'm facing an issue with sharing the model folder.

For AI-generated video upscales, something like a chain of AnimateDiff LCM + IPAdapter + Ultimate Upscale.

So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

Same as SwinIR, which brings out a lot of detail in the image.

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a Tile ControlNet model, choose the Ultimate SD Upscale script, and scale it.

I rarely use upscale-by-model on its own because of the odd artifacts you can get. I am curious both which nodes are the best for this, and which models. And when purely upscaling, the best upscaler is called LDSR.

DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml

Usually I use two of my workflows.

finally, AUTOMATIC1111 has fixed the high-VRAM issue in pre-release version 1.6.0-RC - it's taking only 7.5GB of VRAM even while swapping in the refiner; use the --medvram-sdxl flag when starting.

This new upscale workflow also runs very efficiently, managing a 1.5x upscale on 8GB-VRAM NVIDIA GPUs without any major VRAM issues, and going as high as 2.5x on 10GB NVIDIA GPUs.

It tells me that I need to load a refiner_model, a vae_model, a main_upscale_model, a support_upscale_model, and a lora_model.

I'm not entirely sure what Ultimate SD Upscale does, so I'll answer generally as to how I do upscales.

Text2Image with SDXL 1.0: Base and Refiner models, an automatic calculation of the steps required for both the Base and the Refiner model, a quick selector for the right image width/height combinations based on the SDXL training set, and an XY Plot function (that works with the Refiner).

Thank you, community! Hi, does anyone know if there's an Upscale Model Blend node, like in A1111? Being able to get a mix of models in A1111 is great, where two models each bring something different to the party.

Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.
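The KSampler -> VAE Decode -> model upscale -> downscale -> VAE Encode -> KSampler chain has one purely mechanical step: shrinking the model's fixed 4x output to the size you actually wanted. A rough PIL equivalent of that "Upscale Image By" step is sketched below; the function name and defaults are mine, not ComfyUI's.

```python
from PIL import Image

def downscale_after_model(img: Image.Image, model_factor: int = 4,
                          wanted_factor: float = 2.0) -> Image.Image:
    # The upscale model already multiplied the size by model_factor; shrink
    # its output so the net upscale is wanted_factor (e.g. 4x down to 2x),
    # like "Upscale Image By" with a fractional value.
    s = wanted_factor / model_factor
    w, h = img.size
    return img.resize((round(w * s), round(h * s)), Image.LANCZOS)
```

The resized result then goes back through VAE Encode so the second KSampler can re-diffuse detail at the new resolution.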
Final upscaling via Ultimate SD Upscale and ControlNet - roughly 7 minutes.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved); a sketch of the stepped-size idea follows below. Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more).

I've been using Stability Matrix and have also installed ComfyUI portable. Does anyone have any suggestions - would it be better to do an iterative approach?

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). I get good results using stepped upscalers, the Ultimate SD upscaler, and the like.

4x Upscale Model - choose from a variety of 1x, 2x, 4x, or 8x models from the https://openmodeldb.info website.

'Cause I run SDXL-based models from the start and through 3 Ultimate Upscale nodes.

You can use folders too, e.g. cascade/clip_model.safetensors vs 1.5/clip_some_other_model.safetensors - it makes it easier to remember which one to choose when you're stringing together workflows.

For example, you might prompt the model differently when it's rendering the smaller patches, removing the "kangaroo" entirely. (You may also want to try an upscale model followed by a latent upscale, but that's just my personal preference.) In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.

For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp or something), convert it to latent, and then run the KSampler on it. I can only make a stab at some of these, as I'm still very much learning.

I'm using Siax models, Real-ESRGAN, or Foolhardy, depending on need, when it has to 'go fast' or as an intermediary step to finish with something like Zeroscope.

You can use it on any picture; you will need ComfyUI_UltimateSDUpscale. If you'd like to load a LoRA, you need to connect "MODEL" and "CLIP" to the node, and after that all the nodes that require these two wires should be connected to the ones from the Load LoRA node; of course, the workflow should then work without any problems.
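To make the "2x and 4x in multi-steps" idea concrete, here is a tiny planning helper - purely my own illustration, with an assumed per-step factor of 2 - that lists the intermediate resolutions a stepped upscale would pass through, each of which would get its own low-denoise sampling pass:

```python
def plan_steps(start_px: int, target_px: int, per_step: float = 2.0):
    # Several modest passes (1024 -> 2048 -> 4096) usually hold detail better
    # than one big jump, because each stage can be re-sampled at low denoise.
    sizes = [start_px]
    while sizes[-1] < target_px:
        sizes.append(min(round(sizes[-1] * per_step), target_px))
    return sizes

print(plan_steps(1024, 4096))  # [1024, 2048, 4096]
```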
Like I can understand that using Ultimate Upscale one could add more details by adding steps/noise or whatever else you'd like to tweak on the node. I tried the same main prompt as last night, but this time it all blew up in my face.

Clearing up blurry images has its practical uses, but most people are looking for something like Magnific - where it actually fixes all the smudges and messy details of SD-generated images while producing very clean and sharp results.

Aug 29, 2024 · Upscale Model Examples. Here is an example of how to use upscale models like ESRGAN. Put them in the models/upscale_models folder, then use the Load Upscale Model node to load them and the Upscale Image (using Model) node to apply them. The node's inputs are the upscale model (the model used for upscaling) and the image (the pixel images to be upscaled); its output is the upscaled IMAGE.

Though, from what someone else stated, it comes down to use case.

Download this first, put it into the folder inside ComfyUI called custom_nodes, and after that restart ComfyUI. Then you should see a new button on the left tab (the last one); click that, then click "missing custom nodes" and install the one listed. Once you have installed it, restart ComfyUI once more and it should work.

But then today I loaded the Searge SDXL Workflow, as so many people have suggested, and I am just absolutely lost.

It's especially amazing with SD1.5.

Upscale x1.5 ~ x2 - no need for a model; it can be a cheap latent upscale. Sample again at denoise 0.6, with either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9 (euler).

I don't bother going over 4K usually, though; you get diminishing returns on render times with only 8GB of VRAM. ;P

It's not necessarily an inferior model: 1.5 is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it.

Good for depth and open pose; so far so good.

An alternative method: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first-pass (low-resolution) sample.

Because the upscale model of choice can only output a 4x image and they want 2x, you just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse-engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or something like that.

The "FACE_MODEL" output from the ReActor node can be used with the Save Face Model node to create an insightface model, which can then be used as a ReActor input instead of an image.

As a general rule you want to be rendering at the native size of the model you're using, so tile sizes are probably better set to 512px for 1.5-based models and 1024px for SDXL; see the tiling sketch below. These same models are working in A1111, but I prefer the workflow of ComfyUI.

"Upscaling with model" is an operation on normal images, and we can operate with a corresponding model, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu.

Thank you for the help. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

The first option is to use a model upscaler, which will work off your image node; you can download those from a website that lists dozens of models, a popular family being ESRGAN 4x.

I have a custom image resizer that ensures the input image matches the output dimensions.

But I want you guys' opinion on the upscale. You can download both images from my Google Drive; I cannot upload them here since they are both 500MB-700MB. Thanks for all your comments.
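Tiled upscaling is what makes those native-size tiles possible on big images. The iterator below is a minimal sketch of the idea - not the actual Ultimate SD Upscale implementation - splitting an image into overlapping, model-native-sized crops that a tiled sampler could denoise one at a time:

```python
from PIL import Image

def iter_tiles(img: Image.Image, tile: int = 512, overlap: int = 64):
    # Walk the image in overlapping tiles (512px suits SD1.5, 1024px SDXL);
    # the overlaps are blended afterwards so seams are less visible.
    step = tile - overlap
    w, h = img.size
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            yield box, img.crop(box)
```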
Here is a workflow that I use currently with Ultimate SD Upscale. ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.

Here are details on the workflow I created: it's an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

There's "latent upscale by", but I don't want to upscale the latent image.

Upscale a favorite frame with a different model to increase detail while keeping the overall structure of the frame - 1.5 for the diffusion after scaling.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

There are also "face detailer" workflows for faces specifically.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

The last one takes time, I must admit, but it runs well and lets me generate good-quality images (I managed to find a seam-fix settings config that works well for the last one, hence the long processing).

FWIW, I was using it with the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and I'm consistently getting the best skin and hair details I've ever seen.

Hi! I've been experimenting and trying some workflows/tutorials, but I don't seem to be getting good results with hires fix.

For photo upscales, I'm a sucker for 1:1 matches, so I'm using Topaz.

* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model unless the model is specifically trained on such large sizes.

Or maybe others might be able to offer further advice. After borrowing many ideas, and learning ComfyUI, I gave up on latent upscale.

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). It depends what you are looking for.

I have been using 4x-UltraSharp for as long as I can remember, but I'm just wondering what everyone else is using, and for which use cases?
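Since the model node has no output-size option, the missing piece is just arithmetic: pick the fractional "upscale by" value that lands on the final size you want. A small helper (mine, purely illustrative) under the assumption of a fixed-factor model:

```python
def scale_by_for_target(base_px: int, model_factor: int, target_px: int) -> float:
    # Fractional "upscale by" value to apply after a fixed-factor model pass
    # so the net result hits target_px exactly.
    return target_px / (base_px * model_factor)

assert scale_by_for_target(512, 4, 1024) == 0.5   # 512 * 4 * 0.5 = 1024
print(scale_by_for_target(1024, 4, 1500))         # ~0.366 for a 1500px target
```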
I tried searching the subreddit, but the other posts are from earlier this year or 2022, so I am looking for updated information.

Remember that 2x, 4x, 8x means it will upscale the original resolution x2, x4, x8.

For SD 1.5 I'd go for Photon, RealisticVision, or epiCRealism. Haven't used it, but I believe this is correct.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it.

That's because of the model upscale. That's because latent upscale turns the base image into noise (blur).

For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image (512 x 4 x 0.5 = 1024).

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.
- Image upscale is less detailed, but more faithful to the image you upscale.
Also, both have a denoise value that drastically changes the result.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook - see the sketch below. If you let it get creative (i.e., higher denoise), it adds appropriate details.

Honestly, you can probably just swap out the model and put in the turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and tbh it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick and choose from.

Tried it: it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You also can't go higher than 512-768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15 - and this is with tiling, so my VRAM usage was moderate in all cases.

Now I am trying different start-up parameters for ComfyUI, like disabling smart memory, etc.
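Noise injection, mentioned above for Iterative Upscale, can be sketched in a couple of lines. This is only a conceptual stand-in for whatever the custom nodes actually do; the function and the default amount are my assumptions:

```python
import torch

def inject_noise(latent: torch.Tensor, amount: float = 0.05) -> torch.Tensor:
    # A pinch of fresh gaussian noise before the upscale sampling pass gives
    # the sampler something to develop into new detail, instead of it merely
    # sharpening the blur left behind by the resize.
    return latent + torch.randn_like(latent) * amount
```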
2 - Custom models/LoRAs: tried a lot from CivitAI - epiCRealism, CyberRealistic, AbsoluteReality, Realistic Vision 5.

If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process.

With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail.

Messing around with upscale-by-model is pointless for hires fix. The hires script is overriding the KSampler's denoise, so you're actually using 0.56 denoise, which is quite high, giving it just enough freedom to totally screw up your image.

The best method, as said below, is to upscale the image with a model (then downscale, if necessary, to the desired size, because most upscalers do x4, which is often too big to process), then send it back to VAE Encode and sample it again. For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!). Look at this workflow:

I am looking for good upscaler models to be used for SDXL in ComfyUI. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

I generate an image that I like, then mute the first KSampler, unmute Ultimate SD Upscale, and upscale from that.

Because I don't understand why Ultimate SD Upscale can manage the same resolution in the same configuration but SUPIR cannot. Maybe somewhere I will point out the issue.

It's been trained to make any model produce higher-quality images at very low step counts, like 4 or 5.

Additionally, the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models.
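On the seam complaint above: tiled upscalers hide seams by overlapping tiles and crossfading the overlap. A minimal NumPy sketch of that blend - my own illustration, not any node's actual code - for two horizontally adjacent tiles:

```python
import numpy as np

def blend_seam(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    # left/right are H x W x C tiles sharing `overlap` columns; crossfade the
    # shared strip so the seam fades out instead of showing as a hard line.
    alpha = np.linspace(0.0, 1.0, overlap)[None, :, None]
    mixed = left[:, -overlap:] * (1 - alpha) + right[:, :overlap] * alpha
    return np.concatenate(
        [left[:, :-overlap], mixed.astype(left.dtype), right[:, overlap:]],
        axis=1,
    )
```

Too little overlap, or too much per-tile denoise, and no amount of blending saves the seam - which matches the experience described above.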