ComfyUI workflow JSON examples from Reddit. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. So, I just made this ComfyUI workflow. ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface. I have an image that I want to do a simple zoom out on. The drawback of ComfyUI is that it cannot change the topology of the workflow once it has already started running. You can also turn each process on/off for each run. Learned from the following video: Stable Cascade in ComfyUI Made Simple (6m 56s, posted Feb 19, 2024 by the How Do? channel on YouTube). SDXL most definitely doesn't work with the old ControlNet. You can apply poses with it in the same workflow. It looks freaking amazing! Load the .json file, change your input images and your prompts and you are good to go! Inpainting Workflow. I was not aware that Reddit strips off the metadata of the PNG. Table of contents. ControlNet Inpaint Example. ComfyUI tip: add a node to your workflow quickly via double-clicking; for example, if you want to use "FaceDetailer", just type "Face". The experiments are more advanced examples. Drag and drop doesn't work for … Search the sub for what you need and download the .json. Upload the PNG to civitai.com or https://imgur.com and then post a link back here if you are willing to share it. EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint Honestly, the real way this needs to work is for every custom node author to ship a JSON file that describes each node's inputs/outputs and the general functionality of the node(s). Prompt: A couple in a … In ComfyUI, go into settings and enable dev mode options. I've also added a `TaraApiKeySaver` … I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. And that's the best part. Even with 4 regions and a global condition, they just combine them all 2 at a time. It is a simple workflow of Flux AI on ComfyUI. Save your workflow using this format, which is different from the normal JSON workflows. Then you finally have an idea of what's going on, and you can move on to ControlNets, IPAdapters, detailers, CLIP Vision, and 20 … A repository of well documented, easy to follow workflows for ComfyUI - cubiq/ComfyUI_Workflows. Description. You create the workflow as you do in ComfyUI and then switch to that interface. Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1.0. Expression code: adapted from ComfyUI-AdvancedLivePortrait; for the face crop model, see comfyui-ultralytics-yolo and download face_yolov8m. I use a Google Colab VM to run ComfyUI. As always, I'd like to remind you that this is a workflow designed for learning how to build a pipeline and how SDXL works. It is not much of an inconvenience when I'm at my main PC. SDXL Turbo is an SDXL model that can generate consistent images in a single step. You can find the Flux Dev diffusion model weights here.
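That dev-mode format is worth spelling out. The normal save produces the UI graph (node positions, links, groups), while the dev-mode "Save (API Format)" button produces a flat, executable graph: node id → class_type plus inputs, where an input is either a literal value or a [source_node_id, output_index] link. A minimal sketch of the shape of such a file (the node ids, checkpoint name, and prompt text are only placeholders):

```python
import json

# Hypothetical, heavily trimmed API-format graph: node id -> {"class_type", "inputs"}.
# Literal values are written directly; links are [source_node_id, output_index].
api_workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},      # placeholder model
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}},    # output 1 of node "1"
    "3": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    # ... a real file continues with a KSampler, VAEDecode and SaveImage node
}

print(json.dumps(api_workflow, indent=2))
```

Since this file carries no layout information, it is meant for sending to the backend rather than as a visual save.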
Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification. Its default workflow works out of the box, and I definitely appreciate all the examples for different work flows. ) That's a bit presumptuous considering you don't know my requirements. Either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow & Here are approx. Upload your json workflow so that others can test for you. It is much more coherent and relies heavily on the IPAdapter source image as you can see in the gallery. Ignore the prompts and setup /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. the diagram doesn't load into comfyui so I can't test it out. All the images in this repo contain metadata which means they can be loaded into ComfyUI Go on github repos for the example workflows. A repository of well documented easy to follow workflows for ComfyUI - cubiq/ComfyUI_Workflows I'm making changes to several nodes in a workflow, but only specific ones are rerunning like for example the KSampler node. I really really love how lightweight and flexible it is. It's quite straight forward, but maybe it could be simpler. json” file format, which lets anyone using the ComfyUI Launcher import your workflow w/ 100% reproducibility. The proper way to use it is with the new SDTurboScheduler node but it might also work with the regular schedulers. be/ppE1W0-LJas - the tutorial. I think ComfyUI is good for those who wish to do a reproducible workflow which then can be used to output multiple images of the same kind with the same steps. That's how I made and shared this. Also it's possible to share the setup as a project of some kind and share this workflow with others for finetuning. pt 或者 face_yolov8n. 2/Run the step 1 Workflow ONCE - all you need to change is put in where the original frames are and the dimensions of the output that you wish to have. The workflow is saved as a json file. The first one is very similar to the old workflow and just called "simple". Ability to change default paths (loaded from paths. 5 models like epicRealism or Jaugeraut, but I know once more models come out with the SDXL base, we'll see incredible results. So download the workflow picture and dragged it to comfyui but it doesn't load anything, looks like the metadata is not complete. com/models/628682/flux-1-checkpoint It would be great to have a set of nodes that can further process the metadata, for example extract the seed and prompt to re-use in the workflow. Mixing ControlNets But standard A1111 inpaint works mostly same as this ComfyUI example you provided. The only references I've been able to find makes mention of this inpainting model, using raw python or auto1111. Like many XL users out there, I’m also new to ComfyUI and very much just a beginner in this regard. You can Load these images in ComfyUI to get the full workflow. 😋 the workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks like build a prompt with an image, generate a color gradient, batchload images. 4 - The best workflow examples are through the github examples pages. 
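On the "extract the seed and prompt from the metadata" idea: outside of ComfyUI this is a few lines of Python, since the frontend writes the graph into PNG text chunks (commonly under the keys "prompt" for the executable graph and "workflow" for the UI graph). A rough sketch with Pillow, assuming the file still has its metadata (Reddit-hosted copies usually don't); the filename is just an example:

```python
import json
from PIL import Image  # pip install pillow

img = Image.open("ComfyUI_00001_.png")      # example filename
meta = img.info                              # PNG text chunks show up in this dict

prompt = json.loads(meta["prompt"]) if "prompt" in meta else None        # executable graph
workflow = json.loads(meta["workflow"]) if "workflow" in meta else None  # UI graph

if prompt is None and workflow is None:
    print("No ComfyUI metadata found - the host probably stripped it.")
else:
    for node_id, node in (prompt or {}).items():
        inputs = node.get("inputs", {})
        # samplers usually carry "seed"; some variants use "noise_seed"
        seed = inputs.get("seed", inputs.get("noise_seed"))
        if seed is not None:
            print(f"node {node_id} ({node['class_type']}): seed={seed}")
        if node.get("class_type") == "CLIPTextEncode":
            print(f"node {node_id} prompt text: {inputs.get('text')!r}")
```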
I'm trying to get dynamic prompts to work with comfyui but the random prompt string won't link with clip text encoder as is indicated on the diagram I have here from the Github page. You switched accounts on another tab or window. If you look at the ComfyUI examples for Area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on K-sampler. sft file in your: ComfyUI/models/unet/ folder. It's perfect for animating hair while keeping the rest of the face still, as you can see in the examples. Just load your image, and prompt and go. json) will be/are For some workflow examples and see what ComfyUI can do you can check out: ComfyUI Examples. Animate your still images with this AutoCinemagraph ComfyUI workflow 0:07 /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers Welcome to the unofficial ComfyUI subreddit. json as a template). safetensors -- makes it easier to remember Im trying to understand how to control the animation from the notes of the author, it seems that if you reduce the linear_key_frame_influence_value of the Batch Creative interpolation node, like to 0. You can load this image in ComfyUI to get the full workflow. These are examples demonstrating how to do img2img. There is a latent workflow and a pixel space ESRGAN workflow in the examples. Since I used ComfyUI, I downloaded tons of workflows, but only around 10% of them work. \Stable_Diffusion\stable Makeing a bit of progress this week in ComfyUI. Look for the example that uses controlnet lineart. ComfyUI is a completely different conceptual approach to generative art. Hi everyone. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format like depthmaps, canny maps and so on depending on the specific model if you want good results. And above all, BE NICE. You signed out in another tab or window. To add content, your account must be vetted/verified. Grab the ComfyUI workflow JSON here. The new versions uses two ControlNet inputs : a 9x9 openpose faces, and a single openpose face. All the images in this repo contain metadata which means they can be loaded into ComfyUI I just tried a few things, and it looks like the only way I'm able to make this work is to use the "Save (API Format)" button in Comfy and then upload the resulting Flux. json inside Resource - Update I downloaded the example IPAdapter workflow from Github and rearraged it a little bit to make it easier to look at so I can see what the heck is going on. Comfy UI is actually very good, it has many capabilities that are simply beyond other interfaces. As far as I can see from the workflow you sent the full image to clip_vision which is basically turning the full image into an embedding, which contain a Reddit removes the ComfyUI metadata when you upload your pic. Official list of SDXL resolutions (as defined in SDXL paper). It's a bit messy, but if you want to use it as a reference, it might help you. json you had used, helpful. 7 MB Stage B >> \models\unet\SD Cascade stage_b_bf16. It's not for beginners, but that's OK. You signed in with another tab or window. When rendering human creations, I still find significantly better results with 1. but mine do include workflows for the most part in the video description. json file, change your input images and your prompts and you are good to go! 
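A big reason downloaded workflows fail is missing custom nodes rather than anything wrong with the JSON itself. A quick sanity check is to compare the node types a workflow uses against what your running ComfyUI has registered; the sketch below assumes a local server on the default port and uses the /object_info listing that the frontend itself queries. The filename is an example, and frontend-only nodes such as Reroute or Note may be reported as "missing" even though they are fine:

```python
import json
import urllib.request

with open("downloaded_workflow.json", encoding="utf-8") as f:   # example filename
    wf = json.load(f)

# UI-format files keep nodes in a "nodes" list with a "type" field;
# API-format files are a flat dict of id -> {"class_type": ...}.
if "nodes" in wf:
    used = {n["type"] for n in wf["nodes"]}
else:
    used = {n["class_type"] for n in wf.values()}

with urllib.request.urlopen("http://127.0.0.1:8188/object_info") as resp:
    known = set(json.load(resp).keys())

missing = sorted(used - known)
print("Node types not registered on this server:", missing or "none")
```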
ControlNet Depth ComfyUI workflow I was confused by the fact that I saw in several Youtube videos by Sebastain Kamph and Olivio Sarikas that they simply drop png's into the empty ComfyUI. found sdxl_styles. The denoise controls the amount of noise added to the image. Load the . I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced without success -- Welcome to the unofficial ComfyUI subreddit. Upcoming tutorial - SDXL Lora + using 1. Let's break down the main parts of this workflow so that you can understand it better. ComfyUI Examples. I've been using comfyui for a few weeks now and really like the flexibility it offers. When I saw a certain Reddit thread, I was immediately inspired to test and create my own PIXART-Σ (PixArt-Sigma) ComfyUI workflow. Please let me know if you have any questions! My Discord - jojo studio /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. or through searching reddit, the comfyUI manual needs updating imo. It's pretty easy to prune a workflow in json before sending it to ComfyUI. Nobody needs all that, LOL. 1 ComfyUI install guidance, workflow and example. Some very cool stuff! For those who don't know what One Button 18K subscribers in the comfyui community. Download the following inpainting workflow. here to share my current workflow for switching between prompts. it is VERY memory efficient and has a great deal of flexibility especially where a user has need of a complex set of instructions I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. (I've also edited the post to include a link to the workflow) That's awesome! ComfyUI had been one of the two repos I keep installed, SD-UX fork of auto and this. 1 or not. Or check it out in the app stores     TOPICS Welcome to the unofficial ComfyUI subreddit. 150 workflow examples of things I created with ComfyUI and ai models from Civitai Moved my workflow host to: Workflow. For example, this is what the workflow produces: Other than that, there were a few mistakes in version 3. Reply reply aliguana23 • when i download it, it downloads as webp without the workflow. ckpt A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-end for end-users. 13 GB Stage C >> \models\unet\SD Cascade Do you have ComfyUI manager. They can create the impression of watching an animation when presented as an animated GIF or other video format. Welcome to the unofficial ComfyUI subreddit. I also combined ELLA in the workflow to make it easier to get what I want. The entire comfy workflow is there which you can use. A good place to start if you have no idea how any of this works Does anyone know why ComfyUI produces images that look like this? Important: This is the output I get using the old tutorial. 5/clip_some_other_model. For example you have [11,22,33], then by default you "pluck" starting from the first element, which the first pin with type INT will output 11. Is there a way to copy normal webUI parameters ( the usual PNG info) into ComfyUI directly with a simple ctrlC ctrlV? Dragging and dropping 1111 PNGs into ComfyUI works most of the time. 9 leaked repo, you can read the README. 
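To make the "prune the JSON before sending it to ComfyUI" remark concrete: since an API-format graph is just id → node with [source_id, output_index] links, you can walk backwards from the outputs you care about and drop everything else (disconnected branches, preview-only nodes, and so on). A sketch under those assumptions - the filename and the choice of SaveImage as the only output class to keep are examples:

```python
import json

def prune(api_workflow, keep_classes=("SaveImage",)):
    """Keep only nodes that (directly or indirectly) feed the chosen output nodes."""
    keep = {nid for nid, node in api_workflow.items() if node["class_type"] in keep_classes}
    stack = list(keep)
    while stack:
        node = api_workflow[stack.pop()]
        for value in node.get("inputs", {}).values():
            # a link looks like [source_node_id, output_index]
            if isinstance(value, list) and len(value) == 2 and str(value[0]) in api_workflow:
                src = str(value[0])
                if src not in keep:
                    keep.add(src)
                    stack.append(src)
    return {nid: node for nid, node in api_workflow.items() if nid in keep}

with open("workflow_api.json", encoding="utf-8") as f:   # example filename
    wf = json.load(f)

pruned = prune(wf)
print(f"{len(wf)} nodes -> {len(pruned)} nodes after pruning")
```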
You can write workflows in code instead of separate files, use control flows directly, call Python libraries, and cache results across different workflows. This is an example of an image that I generated with the advanced workflow. We would like to show you a description here but the site won’t allow us. With some nervous trepidation, I release my first node for ComfyUI, an implementation of the DemoFusion iterative mixing sampling process. Merge 2 images together with this ComfyUI workflow. Still have the problem. Please keep posted images SFW. Instructions and listing of necessary Resources are in Note files. Here are the models that you will need to run this workflow:- Loosecontrol Model ControlNet_Checkpoint v3_sd15_adapter. Where can one get such things? It would be nice to An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. I hope that having a comparison was useful nevertheless. Join the largest ComfyUI community. In the original post is a youtube link where everything is explained while zooming in on the workflow in Comfyui. Img2Img works by loading an image Starting workflow. So, if you are using that, I recommend you to take a look at this new one. mp4 -vf fps=10/1 frame%03d. It looks freaking amazing! Anyhow, here is a screenshot and the . If you have previously generated images you want to upscale, you'd modify the HiRes to include the IMG2IMG nodes. ComfyUI Workflow | OpenArt Hi, is there a tutorial how to do a workflow with face restoration on COMFY UI? I downloaded the impact pack, but I really don't know how to go from there. So OP, please upload the PNG to civitai. ComfyUI-Custom-Scripts. Tried another browser (both FF and Chrome. It's simple and straight to the point. For more details on using the workflow, check out the full guide Does anyone else here use this Photoshop plugin? I managed to set up the sdxl_turbo_txt2img_api JSON file that is described in the documentation. In case you ever wanted to see what happened if you went from Prompt A to Prompt B with multiple steps in between, now you can! (The workflow was intended to be attached to the screenshot at the bottom of this post, but instead, here's a link to comfy uis inpainting and masking aint perfect. Features. Thanks for the responses tho, I was unaware that the meta data of the generated files contain the entire workflow. ) to integrate it with comfyUI for a "$0 budget sprite game". Creating such workflow with default core nodes of ComfyUI is not possible at the moment. You can save the workflow as json file and load it again from that file. Ability to load prompt information from JSON and PNG files. Actually natsort are not involved in Junction at all. Input your choice of checkpoint and lora in their respective nodes in Group A. 50, the graph will show lines more “spaced out” meaning that the frames are more distributed. For each of the sequences, I generated about ten of them and then chose the one I Plus, you want to upscale in latent space if possible. What is the best workflow you know of? For example I had very good results using resolve and multiple layers that were AI generated and did the rest in standard VFX so to speak. I understand how outpainting is supposed to work in comfyui (workflow. ComfyUI Tatoo Workflow | ComfyUI Workflow | OpenArt That being said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex. png. 
In 1111 using image to image, you can batch load all frames of a video, batch load control net images, or even masks, and as long as they share the same name as the main video frames they will be associated with the image when batch processing. I have searched far and wide but could not find a node that lets me save the current workflow to a json file. It encapsulates the difficulties and idiosyncrasies of python programming by breaking the problem down in Each workflow runs in its own isolated environment Prevents your workflows from suddenly breaking when updating a workflow’s custom nodes, ComfyUI, etc. image saving and postprocess need was-node-suite-comfyui to be installed. x, 2. Like 1024, 1280, 2048, 1536. This repo contains examples of what is achievable with ComfyUI. Animation using ComfyUI Workflow by Future Thinker If you have the SDXL 0. This should update and may ask you the click restart. For your all-in-one workflow, use the Generate tab. This is the link to the workflow. 0 workflow with Mixed Diffusion, and reliable high quality High Res Fix, now officially released! Below are some example /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I know it's simple for now. However, when I change values in some other nodes like something like Canny Edge node or DW Pose Estimator, they don't rerun. If you download custom nodes, those workflows (. It now officialy supports ComfyUI and there is now a new Prompt Variant mode. [Load VAE] and [Load Lora] are not plugged in this config for DreamShaper. 1. So in this workflow each of them will run on your input image and you can select the one that produces the best results. It would require many specific Image manipulation nodes to cut image region, pass it through model and paste back. This workflow can use LoRAs, ControlNets, enabling negative prompting with Ksampler, dynamic thresholding, inpainting, and more. Do you want to save the image? choose a save image node and you'll find the outputs in the folders or you can right click and save that way too. For now I got this: A gorgeous woman with long light-blonde hair wearing a low cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by artgerm and alphonse mucha, trending on Behance, very Each workflow runs in its own isolated environment Prevents your workflows from suddenly breaking when updating a workflow’s custom nodes, ComfyUI, etc. g. All of these were generated using this simple Comfy workflow:https: /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Its just not intended as an upscale from the resolution used in the base model stage. WAS suite has some workflow stuff in its github links somewhere as well. com/models/628682/flux-1-checkpoint You can download this webp animated image and load it or drag it on ComfyUI to get the workflow. 5 by using XL in comfy. I recently discovered the existence of the Gligen nodes in Comfyui and thought I would share some of the images I made using them (more in the civitai post link). ; When the workflow opens, download the dependent nodes by pressing "Install Missing Custom Nodes" in Comfy Manager. 
Users can drag and drop nodes to design advanced AI art pipelines, and also take advantage of libraries of existing workflows. For other types of detailer, just type "Detailer". Step 2: Upload an image. More on number 3: I know people would say "just right click on the image and save it", but this isn't the same at all. Img2Img ComfyUI workflow. Resoltuons 512x512, 600x400 and 800x400 is the limit that I've have tested, I dont't know how it will work at higher resolutions. Krita's json settings First, I generated a series of images in a 9:16 aspect ratio, some in comfyui with sdxl, and others in midjourney. it is a simple way to compare these methods, it is a bit messy as I have no artistic cell in my body. The _offset field is a way to quickly skip ahead some data of same types. a search of the subreddit Didn't turn up any answers to my question. This is an interesting implementation of that idea, with a lot of potential. x, SDXL, LoRA, and upscaling makes ComfyUI flexible. If it's the best way to install control net because when I tried manually doing it . They are images of Thanks for the tips on Comfy! I'm enjoying it a lot so far. But all of the other API workflows listed in Custom ComfyUI Workflow dropdown in the plugin window within Photoshop are non-functional, giving variations of "ComfyUI Node type is not found" errors. Stage A >> \models\vae\SD Cascade stage_a. Drag and drop the JSON file to ComfyUI. Because there are an infinite number of things that can happen in front of a virtual camera there are then an infinite number of variables and scenarios that generative models will face. Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though). safetensors (5Gb - from the infamous SD3, instead of 20Gb - default from PixArt). Merging 2 Images A collection of simple but powerful ComfyUI workflows for Stable Diffusion with curated default settings. You can find the Flux Schnell diffusion model weights here this file should go in your: ComfyUI/models/unet/ folder. json file so I just roughly reproduced the workflow shown in the video on the Github site, and this works! Maybe it even works better than before--at least I'm getting good results with fewer samples. Example: An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, Hello everyone, I got some exiting updates to share for One Button Prompt. Ending Workflow. A lot of people are just discovering this technology, and want to show off what they created. The "workflow" is different, but if you're willing to put in the effort to thoroughly learn a game like that and enjoy the process, then learning ComfyUI shouldn't be that much of a challenge Reply reply More replies More replies Can someone give examples what you can do with the adapter in general? (Beyond what's in the videos) I've used it a little and it feels like a way to have an instant lora for a character. [If for some reasons you want to run somthing that is less that 16 frames long all you need is this part of the workflow] You can achieve the same thing in a1111, comfy is just awesome because you can save the workflow 100% and share it with others. 10 votes, 10 comments. 
This workflow needs a bunch of custom nodes and models that are a pain to If necessary, updates of the workflow will be made available on Github. Flux Schnell is a distilled 4 step model. Check ComfyUI here: https://github. example to sdfx. But when I'm doing it from a work PC or a tablet it is an inconvenience to obtain my previous workflow. Right now the only way I see is putting an There are a lot of upscale variants in ComfyUI. Put the flux1-dev. ; Download this workflow and drop it into ComfyUI - or you can use one of the workflows others in the community made below. Here is an example of 3 characters each with its own pose, outfit, features, and expression : /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. I did Install Missing Custom Nodes, Update All, and etc etc, but there are many issues every time I load the workflows, and it looks pretty complicated to solve it. Andy Lau is ready for inpainting. Upscaling ComfyUI workflow. I've uploaded the json files that krita and comfy used for this. It is a simple workflow of Flux AI on ComfyUI. Ability to change default values of UI settings (loaded from settings. I haven't decided if I want to go through the frustration of trying this again after spending a full day trying to get the last . This probably isn't the completely recommended workflow though as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R which is part of SDXL's trained prompting format. Installing ComfyUI. From the ComfyUI_examples, there are two different 2-pass (Hires fix) methods, one is latent scaling, one is non-latent scaling Now there's also a `PatchModelAddDownscale` node. The graphic style This is the workflow I use in ComfyUi to render 4k pictures with Dream shaper XL model. ComfyUI Fooocus Inpaint with Segmentation Workflow. Here's the big issue AI-only driven techniques face for filmmaking. Then when _offset have something like INT,1, then the first pin that have type INT will be 22. I think most of the time I only want the prompt and seed to be reused and keep the layout of my nodes unchanged. People are running Bots which generate Art all the time and post it automatically to Discord and other places, I tried to find a good Inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. That will give you a Save(API Format) option on the main menu. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper 17K subscribers in the comfyui community. There is also a UltimateSDUpscale node suite (as an extension). from a folder but mainly its a workflow designed make or change an initial image to send to our sampler Two workflows included. 0/Download workflow . would be really nice if there was a workflow folder under Comfy as a default save/load spot. It covers the following topics: Merge 2 images together with this ComfyUI workflow. I've been especially digging the detail in the clothing more than anything else. Updated IP Adapter Workflow Example - Asking . You can pull PNGs from Automatic1111 for the creation of some Comfy workflows but as far as I can tell it doesn't work with ControlNet or ADetailer images sadly. The idea is that it creates a tall canvas and renders 4 vertical sections separately, combining them as they go. Reload to refresh your session. 
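The `_offset` remarks (here and in the [11,22,33] example earlier) are easier to see with a toy model. This is not the Junction node's actual code, only an illustration of the behaviour described: each output pin of a given type "plucks" the next value of that type, and `_offset` skips ahead before plucking starts:

```python
# Toy illustration only - not the real node implementation.
def pluck(values, wanted_type, pin_index=0, offset_spec=""):
    """values: list of (type, value) pairs; offset_spec: e.g. "INT,1" skips one INT first."""
    skip = 0
    if offset_spec:
        type_name, count = offset_spec.split(",")
        if type_name.strip() == wanted_type:
            skip = int(count)
    of_type = [v for t, v in values if t == wanted_type]
    return of_type[skip + pin_index]

data = [("INT", 11), ("INT", 22), ("INT", 33)]
print(pluck(data, "INT"))                        # 11 - default: the first INT pin gets the first INT
print(pluck(data, "INT", offset_spec="INT,1"))   # 22 - with an offset of 1, plucking starts one later
```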
But as a base to start from, it'll work. What it's great for: once you've achieved the artwork you're looking for, it's time to delve deeper and use inpainting … Xpost from r/comfyui: New IPAdapter workflow. For more details on using the workflow, check out the full guide. Note that the ComfyUI workflow uses the masquerade custom nodes, but they're a bit broken, so I can't be totally sure. I downloaded the JSON but I don't have the images you set up as an example. This is why I used Rem as an example, to show you can "transplant" the kick to a different character using a character LoRA. In addition, I provide some sample images that can be imported into the program. I'm new to ComfyUI - does the sample image work as a "workflow save", as if it were a JSON with all the nodes? I couldn't decipher it either, but I think I found something that works. Or open it in Visual Studio Code and that can tell you whether it is OK or not. The video is just too fast. So if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you. In the downloadable file you will find a JSON file to import into ComfyUI, containing two ready-to-use workflows: one with Portrait Master, dedicated to portraits, and one where you enter the positive and negative prompts manually. Now I've enabled Developer mode in Comfy and I have managed to save the workflow in JSON API format, but I need help setting up the API. Is there a way to load the workflow from an image within … It's perfect for animating hair while keeping the rest of the face still, as you can see in the examples. I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. Ability to save full metadata for generated images (as JSON or embedded in PNG, disabled by default). ComfyUI Examples. That actually does create a JSON, but the JSON … Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5.
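For the "I need help setting up the API" part above: once the workflow is saved in API format, queueing it is a single HTTP call. A minimal sketch in the spirit of ComfyUI's bundled script examples, assuming the server runs on the default 127.0.0.1:8188 and that "workflow_api.json" is the file saved with the dev-mode button:

```python
import json
import urllib.request

with open("workflow_api.json", encoding="utf-8") as f:   # saved via "Save (API Format)"
    prompt_graph = json.load(f)

# Optionally tweak inputs before queueing, e.g. the sampler seed
# (the node id "3" is only an example - look it up in your own file):
# prompt_graph["3"]["inputs"]["seed"] = 1234

payload = json.dumps({"prompt": prompt_graph}).encode("utf-8")
req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result)   # includes a prompt_id you can use later to look up the finished images
```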
EDIT: For example this workflow shows the use of the other prompt windows. (also fixed the json with a better sampler layout. Discussion, samples, tips and tricks on the Sigma FP. (for 12 gb VRAM Max is about 720p resolution). Otherwise, please change the flare to "Workflow not included" edit: I didn't see a sample . This is like the exact same example workflow that exists (and many others) on Kosinkadink's AnimateDiff Evolved GitHub renderartist • This is a great idea Welcome to the unofficial ComfyUI subreddit. Sytan's SDXL Offical ComyfUI 1. json workflow file from the C:\Downloads\ComfyUI\workflows folder. *Edit* KSampler is where the image generation is taking place and it outputs a latent image. Nodes/graph/flowchart interface to experiment Img2Img Examples. 3. If you want to automate it, I'm pretty sure there are Python packages that can do it, maybe even a tool that can read information out of a file, like for example ComfyUI workflow json file. Workflow in Json format. I played with hi-diffusion in comfyui with sd1. for example, is "I want to compose a very K12sysadmin is for K12 techs. I am personally using it as a layer between telegram bot and a ComfyUI to run different workflows and get the results using user's text and image input. K12sysadmin is open to view and closed to post. There are plenty of workflows made you can find. A few examples of my ComfyUI workflow to make very You can just open another tab of comfyui and load a different workflow in there. This ComfyUI workflow lets you remove backgrounds or replace backgrounds which is a must for anyone wanting to enhance their products by either removing a background or replacing the background with something new. I used the workflow kindly provided by the user u/LumaBrik, mainly playing with parameters like CFG Guidance, Augmentation level, and motion bucket. So every time I reconnect I have to load a presaved workflow to continue where I started. Share, discover, & run thousands of ComfyUI workflows. If you want to post and aren't approved yet, click on a post, click "Request to Comment" and then you'll receive a vetting form. safetensors sd15_t2v_beta. Or what I started doing tonight was disconnect my upscale section but put a load image box at the start of upscale, generate a batch of images with a fixed seed if I like one of them then i load it at the start of the upscale and regeneration, because the seed hasn't changed it skips And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. All the images in this repo contain metadata which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. It's thought to be as faster as possible to get the best clips and later upscale them. Belittling their efforts will get you banned. its the kind of thing thats a bit fiddly to use so using someone elses workflow might be of limited use to you. Tidying up ComfyUI workflow for SDXL to fit it on 16:9 Monitor Toggle for "workflow loading" when dropping in image in ComfyUI. Still great on OP’s part for sharing the workflow. 5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc. Adding LORAs in my next iteration. 
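Picking up the "automate it by reading and editing the workflow JSON" idea: because the API-format file is plain JSON, batch variations are just a loop that copies the graph, swaps the fields you care about, and writes (or queues) each variant. A sketch with placeholder filenames and prompts; note the naive text swap hits every CLIPTextEncode node, so a real script would target specific node ids:

```python
import copy
import json
import random

with open("workflow_api.json", encoding="utf-8") as f:   # example filename
    base = json.load(f)

prompts = ["a ruined castle at dawn", "a ruined castle at night"]   # example values

for i, text in enumerate(prompts):
    wf = copy.deepcopy(base)
    for node in wf.values():
        if node["class_type"] == "CLIPTextEncode":
            node["inputs"]["text"] = text                       # naive: also hits the negative prompt
        if node["class_type"] == "KSampler":
            node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    with open(f"variant_{i:02}.json", "w", encoding="utf-8") as out:
        json.dump(wf, out, indent=2)
    # each variant_XX.json can then be queued with the /prompt call shown earlier
```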
If you want the exact input image you can find it on on Ubuntu it's downloads. [This is a JSON uploaded to PasteBin, link also in comments] This means using natural language descriptions to automatically produce the corresponding JSON configurations. There are a couple abandoned suites that say they can do that, e. 1 that are now corrected. This is just a slightly modified ComfyUI workflow from an example provided in the examples repo. In the comfy UI manager select install model and the scroll down to see the control net models download the 2nd control net tile model(it specifically says in the description that you need this for tile upscale). Mute the two Save Image nodes in Group E Click Queue Prompt to generate a batch of 4 image previews in Group B. 5 models and it easily generated 2k images without any distortion, which is better than khoya deep shrink. Much appreciated if you can post the json workflow or a picture generated from this workflow so it can be easier to setup. Hands are still bad though. the example pictures do load a workflow, but they don't have a label or text that indicates if its version 3. the good thing is no upscale needed. ComfyUI/web folder is where you want to save/load . 9 workflow, the one that olivio sarikas video works just fine) just replace the models with 1. You can then load or drag the following This repo is divided into macro categories, in the root of each directory you'll find the basic json files and an experiments directory. ComfyUI Workflow is here: If anyone sees any flaws in my workflow, please let me know. Has anyone else messed around with gligen much? Thanks. With ComfyUI Workflow Manager -Can I easily change or modify where my json workflows are stored and saved? Yes we just enabled this feature, please go to /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. But reddit will strip it away. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to But actually I got the same problem as with "euler", just very wildly different results like in the examples above. A group that allows the user to perform a multitude of blends between image sources as well as add custom effects to images Flux Dev. ComfyUI won't load my workflow JSON upvote /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes by area composition ones. It does not work with SDXL for me at the moment. /r/StableDiffusion is back open after the protest of Reddit killing open /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. No errors in the shell on drag and drop, nothing on the page updates at all Tried multiple PNG and JSON files, including multiple known-good ones Pulled latest from github I removed all custom nodes. Just download it, drag it inside ComfyUI, and you’ll have the same workflow you see above. Save this image then load it or drag it on ComfyUI to get the workflow. 
0 for ComfyUI - Now with support for Stable Diffusion Video, a better Upscaler, a new Caption Generator, a new Inpainter (w inpainting/outpainting Welcome to the unofficial ComfyUI subreddit. json but I am having problems with a couple of nodes: I have a tutorial here for those who want to learn it instead of ComfyUI based workflow. However, without the reference_only ControlNetthis works poorly. Once loaded go into the ComfyUI Manager and click Install Missing Custom Nodes. I have also experienced that ComfyUI has lost individual cable connections for no comprehensible reason or nodes have not worked until they have So, I started to get into Animatediff Vid2Vid using ComfyUI yesterday and starting to get the hang of it, where I keep running into issues is identifying key frames for prompt travel. Breakdown of workflow content. rgthree-comfy. rgthree does it, I've written CLI tools to do the same based on /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. json or drag and drop the workflow image (I think the image has to not be from reddit, reddit removes metadata, I believe) into the UI. I would also love to see some repo of actual JSON or images (since Comfy does build the workflow from the image if everything necessary is installed). This workflow needs a bunch of custom nodes and models that are a pain to track down: Its solvable, ive been working on a workflow for this for like 2 weeks trying to perfect it for comfyUI but man no matter what you do there are usually some kind of artifacting, its a challenging problem to solve, unless you You can use folders too, so eg cascade/clip_model. com/comfyanonymous/ComfyUI. ComfyUI - Ultimate Starter Workflow + Tutorial Heya, ive been working on this workflow for like a month and its finally ready, so I also made a tutorial on how to use it. json, and verify / edit the paths to your model folders Animate your still images with this AutoCinemagraph ComfyUI workflow 0:07 /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. See my own response here: Flux Schnell. json files. ComfyUI-Impact-Pack. 43 KB. ckpt model For ease, you can download these models from here. We will walk through a simple example of using ComfyUI, introduce some concepts, and gradually move on to more complicated workflows. r/ticktick. md file yourself and see that the refiner is in fact intended as img2img and basically as you see being done in the ComfyUI example workflow someone posted. Download. You can just use someone elses workflow of 0. AP Workflow 6. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560MB/s. SDXL Default ComfyUI workflow. It didn't work out. ComfyUI-Image-Selector. The examples were generated with the Not a specialist, just a knowledgeable beginner. Simple workflow for using the new Stable Video Diffusion model in ComfyUI for image to video generation. OP probably thinks that comfyUI has the workflow included with the PNG, and it does. 
I was just using Sytan’s workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-steps upscale using the refiner model via Ultimate SD upscale like you mentioned. Just wanted to share that I have updated comfy_api_simplified package, and now it can be used to send images, run workflows and receive images from the running ComfyUI server. While I have you, can I ask where best to insert the base LoRA in your workflow? I created a ComfyUI workflow for Nel file scaricabile troverai un file JSON da importare in ComfyUI, contenente due workflow pronti all’uso: uno con Portrait Master, dedicato ai ritratti, e uno per inserire manualmente i prompt positivi e negativi. It lets you change the aspect ratio, resolution, steps and everything without having to edit the nodes. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes. If you find it confusing, please post here for help or create an Issue in GitHub. The Ultimate SD upscale is one of the nicest things in Auto11, it first upscales your image using GAN or any other old school upscaler, then cuts it into tiles small enough to be digestable by SD, typically 512x512, the pieces are overlapping each other Img2Img Examples. The way ComfyUI is built up, every image or video saves the workflow in the metadata, which means that once an image has been generated with ComfyUI, you can simply drag and drop it to get that complete workflow. One year passes very quickly and progress is never linear or promised. Pick an image that you want to inpaint. Also, if this is new and exciting to For example, we take a simple prompt, Create a list, Verify with the guideline, improve and then send it to `TaraPrompter` to actually generate the final prompt that we can send. SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: Look at that beauty! Spaghetti no more. The second workflow is called "advanced" and it uses an experimental way to combine prompts for the sampler. For example, it would be very cool if one could place the node numbers on a grid (of This is the input image that will be used in this example: Here is an example using a first pass with AnythingV3 with the controlnet and a second pass without the controlnet with AOM3A3 (abyss orange mix 3) and using their VAE. I am trying to find a workflow to automate by learning the manual steps (blender+etc. 5 Lora with SDXL, Upscaling Future tutorials planned: Prompting practices, post processing images, batch trickery, networking comfyUI You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. I looked into the code and when you save your workflow you are actually "downloading" the json file so it goes to your default browser download folder. The examples were generated with the Welcome to the unofficial ComfyUI subreddit. json of the file I just used. We have four main sections: Masks, IPAdapters, Prompts, and Outputs. json file - use paths-example. ckpt model v3_sd15_mm. 9(just search in youtube sdxl 0. hopefully this will be useful to you. I would like to ask you the following two questions Can we currently use the stable diffusion turbo class model to make the speed faster Examples. Fusion Workflow - JSON From An Alert upvotes r/ticktick. You can use more steps to increase the quality. config. Click New Fixed Random in the Seed node in Group A. 0 for ComfyUI - Now with support for SD 1. 
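On "run workflows and receive images from the running ComfyUI server": if you don't want a wrapper package, the raw HTTP route is to take the prompt_id returned by /prompt, poll /history/<prompt_id> until the entry appears, and download each output through /view. A rough sketch under those assumptions (no websocket progress handling, placeholder prompt_id); uploading an input image goes the other way, through a multipart POST to /upload/image:

```python
import json
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"
prompt_id = "REPLACE-WITH-THE-ID-RETURNED-BY-/prompt"   # placeholder

with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
    history = json.load(resp)

outputs = history.get(prompt_id, {}).get("outputs", {})   # empty until the job has finished
for node_id, node_output in outputs.items():
    for image in node_output.get("images", []):
        params = urllib.parse.urlencode({"filename": image["filename"],
                                         "subfolder": image.get("subfolder", ""),
                                         "type": image.get("type", "output")})
        with urllib.request.urlopen(f"{SERVER}/view?{params}") as img_resp:
            data = img_resp.read()
        with open(image["filename"], "wb") as f:
            f.write(data)
        print("saved", image["filename"], "from node", node_id)
```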
You can then load or drag the following image in ComfyUI to get the workflow: Well, I feel dumb. I provide one example JSON to demonstrate how it works. pt 到 models/ultralytics/bbox/ Will load a workflow from JSON via the load menu, but not drag and drop. This tool also lets you export your workflows in a “launcher. Please share your tips, tricks, and workflows for using this software to create your AI art. Support for SD 1. I am thinking of the scenario, where you have generated, say, a 1000 images with a randomized prompt and low quality settings and then have selected the 100 best and want to create high quality Welcome to the unofficial ComfyUI subreddit. Simply download the . It might seem daunting at first, but you actually don't need to fully learn how these are connected. Yes. AP Workflow 7. Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. ComfyUI was generating normal images just fine. https://youtu. here is a example: "Help me create a ComfyUI workflow that takes an input image, uses SAM to identify and inpaint watermarks for removal, then applies various methods to upscale the watermark-free image. It's not meant to overwhelm anyone with complex, cutting edge tech, but rather show the power of building modules/groups as blocks, and merging into a workflow through muting (and easily done so from the Fast Welcome to the unofficial ComfyUI subreddit. Currently the extension still needs some improvement, for example you can only do resolution which can be divided by 256. This guide is about how to setup ComfyUI on your Windows computer to run Flux. This workflow requires quite a few custom nodes and models to run: PhotonLCM_v10. json file - use settings-example. Hello Fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. The ui feels professional and directed. Reply reply For example: ffmpeg -i my-cool-video. 5 . 85 or even 0. 0 and upscalers It's a complex workflow with a lot of variables, I annotated the workflow trying to explain what is going on. json file - Thank you very much for your contribution. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. For Flux schnell you can get the checkpoint here that you can put in your: ComfyUI/models/checkpoints/ directory. 5 but with 1024X1024 latent noise, Just find it weird that in the official example the nodes are not the same as if you try to add them by yourself Many of the workflow examples can be copied either visually or by downloading a shared file containing the workflow. json to work. json file from CivitAI. safetensors sd15_lora_beta. It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something. safetensors and 1. Think about mass producing stuff, like game assets. Endless Nodes, but I couldn't find anything that actually can still be installed and works. You can then load or drag the 6 min read. ComfyUI workflow ComfyUI . Achieves high FPS using frame interpolation (w/ RIFE). You may plug them to use with 1. The trick of this method is to use new SD3 ComfyUI nodes for loading t5xxl_fp8_e4m3fn. 5 base models, and modify latent image dimensions and upscale values to Welcome to the unofficial ComfyUI subreddit. More examples. 
This workflow needs a bunch of custom nodes and models that are a pain to track down: ComfyUI Path Helper, MarasIT Nodes, KJNodes, Mikey Nodes, AnimateDiff, AnimateDiff Evolved, IPAdapter plus. If you drag in a PNG made with ComfyUI, you'll see the workflow in ComfyUI with the nodes etc. Discover the ultimate workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes and refining images with advanced tools. Here you can download my ComfyUI workflow with 4 inputs. I think it is just the same as the 1. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.