Deploy ComfyUI with Flux Models Easily on Google Colab

Posted on 2025-05-05

Running a powerful AI image-generation tool like ComfyUI, especially with a cutting-edge model such as Flux, usually requires substantial local setup and capable hardware. Google Colab offers an excellent alternative, with free access to GPUs in the cloud.

This post walks you through using a ready-made Google Colab notebook to quickly set up ComfyUI and download the necessary Flux models (FP8, Schnell, and regular FP16) along with their dependencies. The complete notebook code is included below.

What the Colab Notebook Does

The provided Colab notebook code automates the entire setup process (a small sanity-check sketch follows this list):

  • Clones the latest ComfyUI repository.
  • Installs all required Python packages.
  • Downloads the different Flux model variants (single-file FP8, Schnell FP8, regular FP16) using wget.
  • Downloads the specific CLIP and VAE models each Flux variant needs, also using wget.
  • Organizes all downloaded files into the correct ComfyUI/models/ subdirectories (checkpoints, unet, clip, vae).
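To confirm that every file landed where ComfyUI expects it before starting the server, here is a minimal sanity-check sketch. It is an addition to the notebook, meant to be run from the ComfyUI root directory after Cell 2; the expected file names simply mirror the wget -O targets used below, so adjust the mapping if you change any of the download commands.

# Minimal sanity check (not part of the original notebook); run from the ComfyUI root after Cell 2.
import os

expected = {
    "models/checkpoints": ["flux1-dev-fp8.safetensors"],
    "models/unet": ["flux1-schnell-fp8.safetensors", "flux1-dev.safetensors"],
    "models/clip": ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors", "t5xxl_fp16.safetensors"],
    "models/vae": ["flux_schnell_ae.safetensors", "flux_regular_ae.safetensors"],
}

for folder, files in expected.items():
    for name in files:
        path = os.path.join(folder, name)
        # Report the approximate size so partially downloaded files stand out.
        size_gb = os.path.getsize(path) / 1e9 if os.path.exists(path) else 0
        print(f"{path}: {size_gb:.1f} GB" if size_gb else f"{path}: MISSING")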

Colab Notebook Code

You can copy and paste the following code into separate cells of a Google Colab notebook.

# -*- coding: utf-8 -*-
"""
Colab Notebook for Setting Up ComfyUI with Flux Models using wget and %cd

This notebook automates the following steps:
1. Clones the ComfyUI repository.
2. Installs necessary dependencies.
3. Navigates into the models directory.
4. Downloads the different Flux model variants (Single-file FP8, Schnell FP8, Regular FP16) into relative subdirectories.
5. Downloads the required CLIP models and VAEs into relative subdirectories.
6. Places all downloaded files into their correct relative directories within the ComfyUI installation.

Instructions:
1. Create a new Google Colab notebook.
2. Ensure the runtime type is set to GPU (Runtime > Change runtime type).
3. Copy the code sections below into separate cells in your notebook.
4. Run each cell sequentially.
5. After the setup is complete, run the final cell to start ComfyUI (it navigates back to the ComfyUI root first).
6. A link (usually ending with `trycloudflare.com` or `gradio.live`) will be generated. Click this link to access the ComfyUI interface in your browser.
7. Once in the ComfyUI interface, you can manually load the workflow JSON files provided in the original tutorial.
"""

# Cell 1: Clone ComfyUI Repository and Install Dependencies
!git clone https://github.com/comfyanonymous/ComfyUI.git
%cd ComfyUI
!pip install -r requirements.txt
# Install xformers for potential performance improvements (optional but recommended)
!pip install xformers

# Cell 2: Navigate to Models Dir, Create Subdirs, and Download Files using wget
import os

# Navigate into the models directory
%cd models

# --- Create Subdirectories ---
# Create directories relative to the current 'models' directory
os.makedirs("checkpoints", exist_ok=True)
os.makedirs("unet", exist_ok=True)
os.makedirs("clip", exist_ok=True)
os.makedirs("vae", exist_ok=True)

# --- Download Files using wget directly into relative paths ---
print("\n--- Downloading Single-file FP8 Model ---")
# Download directly into the 'checkpoints' subdirectory
!wget -c -O checkpoints/flux1-dev-fp8.safetensors https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors

print("\n--- Downloading Schnell FP8 Models & Dependencies ---")
# Download directly into respective subdirectories
!wget -c -O unet/flux1-schnell-fp8.safetensors https://huggingface.co/Comfy-Org/flux1-schnell/resolve/main/flux1-schnell-fp8.safetensors
!wget -c -O vae/flux_schnell_ae.safetensors https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
!wget -c -O clip/clip_l.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
!wget -c -O clip/t5xxl_fp8_e4m3fn.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors

print("\n--- Downloading Regular FP16 Models & Dependencies ---")
# Note: You might need to agree to terms on Hugging Face for this one first manually in a browser if wget fails.
# If you encounter issues, download manually and upload to Colab's ComfyUI/models/unet directory.
!wget -c -O unet/flux1-dev.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors
!wget -c -O vae/flux_regular_ae.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors
# clip_l.safetensors is already downloaded (or attempted) above
!wget -c -O clip/t5xxl_fp16.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors

print("\n--- All Downloads Attempted ---")
print("Please check the output for any download errors.")
print("Files should be in the respective subdirectories within the current 'models' folder.")

# Navigate back to the ComfyUI root directory before starting the server
%cd ..

# Cell 3: Run ComfyUI
# This will start the ComfyUI server from the root directory and provide a public link (usually cloudflare)
# If you get an error about port 8188 being in use, you might need to restart the Colab runtime.
!python main.py --listen --port 8188 --enable-cors-header --preview-method auto
# Note: The first time running might take a while as it sets things up.
# Once you see output like "To see the GUI go to: https://...", click the link.
# You will need to manually load the workflow JSON files into the ComfyUI interface.
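If Cell 3 does not print a public URL in your environment (ComfyUI itself only serves on the Colab machine's local port), one common workaround is to open a Cloudflare quick tunnel to port 8188 from a separate cell. This sketch is an addition rather than part of the original notebook, and it assumes the standard cloudflared Linux binary published on GitHub releases:

# Optional: expose the ComfyUI server through a Cloudflare quick tunnel (not in the original notebook).
# Run this in its own cell; after a few seconds it prints a https://....trycloudflare.com URL.
!wget -q -O cloudflared https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64
!chmod +x cloudflared
# Start the tunnel in the background so the Cell 3 server can keep running.
!nohup ./cloudflared tunnel --url http://localhost:8188 > cloudflared.log 2>&1 &
!sleep 10 && grep -o "https://[^ ]*trycloudflare.com" cloudflared.log | head -n 1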

Where to Find Flux Workflow JSON Files

After setting up ComfyUI with the Colab notebook, you will need workflow files (.json) to load into the interface. Based on recent searches, here are some places where you can find examples:

  • GitHub repositories:

    • ThinkDiffusion workflows: Flux-schnell-fp16-default.json – https://github.com/thinkdiffusion/ComfyUI-Workflows/blob/main/flux/Flux-schnell-fp16-default.json
    • ZHO-ZHO-ZHO workflows: FLUX.1 DEV 1.0【Zho】.json – https://github.com/ZHO-ZHO-ZHO/ComfyUI-Workflows-ZHO/blob/main/FLUX.1%20DEV%201.0%E3%80%90Zho%E3%80%91.json
    • RunDiffusion Flux POC: flux-with-lora-RunDiffusion-ComfyUI-Workflow.json – https://huggingface.co/RunDiffusion/Wonderman-Flux-POC/blob/main/flux-with-lora-RunDiffusion-ComfyUI-Workflow.json
  • Guides and communities:

    • ComfyUI Wiki guide: often includes links or embedded workflows – https://comfyui-wiki.com/en/tutorial/advanced/flux1-comfyui-guide-workflow-and-examples
    • OpenArt workflow example: may include downloadable files – https://openart.ai/workflows/maitruclam/comfyui-workflow-for-flux-simple/iuRdGnfzmTbOOzONIiVV
    • Reddit (r/StableDiffusion, r/comfyui): search these communities for posts sharing Flux workflows. Users often link directly to JSON files or gists.

Remember to download the .json file and load it with the "Load" button in the ComfyUI interface running in your Colab instance.
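One practical note: the GitHub links above are "blob" pages, which render the file in the browser rather than serving the raw JSON. To save a file directly on your own machine, rewrite github.com/<user>/<repo>/blob/<branch>/... as raw.githubusercontent.com/<user>/<repo>/<branch>/...; the sketch below applies this to the ThinkDiffusion path listed above and assumes that repository is still available. Hugging Face "blob" links work the same way if you replace /blob/ with /resolve/.

# Fetch the raw workflow JSON on your own machine (blob URL rewritten to raw.githubusercontent.com).
wget -O Flux-schnell-fp16-default.json https://raw.githubusercontent.com/thinkdiffusion/ComfyUI-Workflows/main/flux/Flux-schnell-fp16-default.json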

How to Use the Notebook Code

  1. Create a notebook: open Google Colab and create a new notebook.
  2. Set the runtime: make sure your Colab notebook is using a GPU runtime (Runtime > Change runtime type > Hardware accelerator > GPU); a quick check is sketched after this list.
  3. Copy and paste the cells: copy the code sections labeled # Cell 1, # Cell 2, and # Cell 3 into separate code cells in your Colab notebook.
  4. Run Cell 1 (setup): execute the first code cell. This installs ComfyUI and its dependencies.
  5. Run Cell 2 (download models): execute the second code cell. This downloads all the models with wget. Watch the output for errors.
  6. Run Cell 3 (start ComfyUI): execute the third code cell. This starts the server.
  7. Access ComfyUI: look for a URL in the output of the third cell (e.g., https://....trycloudflare.com). Click the link to open the ComfyUI web interface.
  8. Load a workflow: your ComfyUI instance is running! Use the "Load" button in the interface to load the Flux workflow .json files from the original tutorial.
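For step 2, a quick way to confirm that the GPU runtime is actually active is to query the driver from a code cell before running the setup; if the command errors out or lists no device, change the runtime type first. The torch check is optional and assumes torch is preinstalled, as it usually is on Colab.

# Confirm a GPU is attached to the Colab runtime (step 2 above).
!nvidia-smi
# Optional equivalent check from Python.
import torch
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "No GPU detected")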

Important Notes

  • GPU runtime: a T4 GPU (typically what the free tier provides) is usually sufficient for the FP8 models. For the regular FP16 model you may need a higher-tier GPU (e.g., an A100 on a paid plan) because of its VRAM requirements (24 GB+).
  • Download errors: if wget fails for the regular flux1-dev.safetensors model, open its Hugging Face page in a browser, accept the terms, and rerun the download cell; a token-based download sketch follows this list. Alternatively, download the file manually and upload it to the ComfyUI/models/unet/ directory in Colab using the file browser on the left.
  • Workflow files: this notebook sets up the environment and the models. You still need a workflow .json file to tell ComfyUI how to connect the nodes.
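If the browser-and-rerun route is inconvenient, another option is to pass a Hugging Face access token to wget and retry just that download from inside the ComfyUI/models directory. This is a sketch under stated assumptions: it presumes you have already accepted the FLUX.1-dev license on Hugging Face and created a read token, and HF_TOKEN below is a placeholder for it.

# Retry the gated FLUX.1-dev download with a Hugging Face access token (run from ComfyUI/models).
HF_TOKEN = "hf_..."  # placeholder: paste your own read token; never commit or share real tokens
!wget -c --header="Authorization: Bearer {HF_TOKEN}" -O unet/flux1-dev.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors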

Original: https://atlassc.net/2025/05/06/deploy-comfyui-with-flux-models-easily-on-google-colab
