[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-VideoVerses--VideoTuna":3,"tool-VideoVerses--VideoTuna":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":76,"languages":77,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":10,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":111,"github_topics":113,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":121,"updated_at":122,"faqs":123,"releases":157},6586,"VideoVerses\u002FVideoTuna","VideoTuna","Let's finetune video generation models!","VideoTuna 是一个专为视频生成模型打造的一站式开源框架，旨在简化从文本、图像到视频的多种生成任务（包括 T2V、I2V、T2I 及 V2V）。它解决了当前 AI 视频领域工具分散、训练流程割裂的痛点，首次将模型推理、微调、持续训练、预训练以及基于人类反馈的强化学习（RLHF）对齐整合在统一的代码库中。\n\n无论是希望快速验证效果的算法研究人员，还是需要将模型适配到特定垂直领域的开发者，VideoTuna 都能提供极大的便利。用户不仅可以利用它轻松调用 Wan2.1、Hunyuan Video、CogVideoX 等前沿模型进行推理，还能通过简洁的流程对模型进行微调和持续优化，甚至利用视频增强模型对生成结果进行后处理修复。\n\n其核心技术亮点在于“全链路”支持：不仅覆盖了从数据准备到模型对齐的完整生命周期，还引入了 VideoVAE+ 等自研的高性能组件，实现了比肩业界顶尖水平的视频重建质量。此外，项目采用 Poetry 管理依赖并支持自动代码格式化，确保了开发环境的高效与整洁。如果你正致力于探索或定制属于自己的视频生成模型，VideoTuna 将是一个值得信赖的强大助手。","\u003Cp align=\"center\" width=\"50%\">\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F38efb5bc-723e-4012-aebd-f55723c593fb\" alt=\"VideoTuna\" style=\"width: 75%; min-width: 450px; display: block; margin: auto; background-color: transparent;\">\n\u003C\u002Fp>\n\n# VideoTuna\n\n![Version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fversion-0.1.0-blue) ![visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_64b0481c38e0.png)  [![](https:\u002F\u002Fdcbadge.limes.pink\u002Fapi\u002Fserver\u002FAammaaR2?style=flat)](https:\u002F\u002Fdiscord.gg\u002FAammaaR2) \u003Ca href='https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa48d57a3-4d89-482c-8181-e0bce4f750fd'>\u003Cimg src='https:\u002F\u002Fbadges.aleen42.com\u002Fsrc\u002Fwechat.svg'>\u003C\u002Fa> [![Homepage](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHomepage-VideoTuna-orange)](https:\u002F\u002Fvideoverses.github.io\u002Fvideotuna\u002F) [![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FVideoVerses\u002FVideoTuna?style=social)](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna)\n\n\n🤗🤗🤗 Videotuna is a useful codebase for text-to-video applications.  
\n🌟 VideoTuna is the first repo that integrates multiple AI video generation models including `text-to-video (T2V)`, `image-to-video (I2V)`, `text-to-image (T2I)`, and `video-to-video (V2V)` generation for model inference and finetuning (to the best of our knowledge).  \n🌟 VideoTuna is the first repo that provides comprehensive pipelines in video generation, from fine-tuning to pre-training, continuous training, and post-training (alignment) (to the best of our knowledge).  \n\n\n\n## 🔆 Features\n![videotuna-pipeline-fig3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_365c41be98a3.png)\n🌟 **All-in-one framework:** Inference and fine-tune various up-to-date pre-trained video generation models.  \n🌟 **Continuous training:** Keep improving your model with new data.  \n🌟 **Fine-tuning:** Adapt pre-trained models to specific domains.  \n🌟 **Human preference alignment:** Leverage RLHF to align with human preferences.  \n🌟 **Post-processing:** Enhance and rectify the videos with a video-to-video enhancement model.  \n\n\n## 🔆 Updates\n\n- [2025-04-22] 🐟 Supported **inference** for `Wan2.1` and `Step Video` and **fine-tuning** for `HunyuanVideo T2V`, with a unified codebase architecture.\n- [2025-02-03] 🐟 Supported automatic code formatting via [PR#27](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fpull\u002F27). Thanks [@samidarko](https:\u002F\u002Fgithub.com\u002Fsamidarko)!\n- [2025-02-01] 🐟 Migrated to [Poetry](https:\u002F\u002Fpython-poetry.org) for streamlined dependency and script management ([PR#25](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fpull\u002F25)). Thanks [@samidarko](https:\u002F\u002Fgithub.com\u002Fsamidarko)!\n- [2025-01-20] 🐟 Supported **fine-tuning** for `Flux-T2I`.\n- [2025-01-01] 🐟 Released **training** for `VideoVAE+` in the [VideoVAEPlus repo](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoVAEPlus).\n- [2025-01-01] 🐟 Supported **inference** for `Hunyuan Video` and `Mochi`.\n- [2024-12-24] 🐟 Released `VideoVAE+`: a SOTA Video VAE model—now available in [this repo](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoVAEPlus)! Achieves better video reconstruction than NVIDIA’s [`Cosmos-Tokenizer`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos-Tokenizer).\n- [2024-12-01] 🐟 Supported **inference** for `CogVideoX-1.5-T2V&I2V` and `Video-to-Video Enhancement` from ModelScope.\n- [2024-12-01] 🐟 Supported **fine-tuning** for `CogVideoX`.\n- [2024-11-01] 🐟 🎉 Released **VideoTuna v0.1.0**!  \n  Initial support includes inference for `VideoCrafter1-T2V&I2V`, `VideoCrafter2-T2V`, `DynamiCrafter-I2V`, `OpenSora-T2V`, `CogVideoX-1-2B-T2V`, `CogVideoX-1-T2V`, `Flux-T2I`, and training\u002Ffine-tuning of `VideoCrafter`, `DynamiCrafter`, and `Open-Sora`.\n\n## 🔆 Get started\n\n### 1. Prepare environment\n\n#### (1) If you use Linux and Conda (Recommended)\n``` shell\nconda create -n videotuna python=3.10 -y\nconda activate videotuna\npip install poetry\npoetry install\n```\n- ↑ It takes around 3 minutes.
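\n\n**Optional: quick environment check**\n\nBefore downloading any checkpoints, you can sanity-check the install. This is a minimal sketch on our part rather than an official project command; it only assumes that `torch` is pulled in by `poetry install`, as the project's dependency list suggests:\n``` shell\n# Should print the installed PyTorch version and whether a CUDA GPU is visible\npoetry run python -c \"import torch; print(torch.__version__, torch.cuda.is_available())\"\n```\n- ↑ If this prints `True`, PyTorch can see the GPU and the inference commands in section 3 can use it.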
\n\n**Optional: Flash-attn installation**\n\nThe Hunyuan model uses it to reduce memory usage and speed up inference. If it is not installed, the model will run in normal mode. Install `flash-attn` via:\n``` shell\npoetry run install-flash-attn\n```\n- ↑ It takes 1 minute.\n\n**Optional: Video-to-video enhancement**\n```\npoetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n```\n- If this command ↑ gets stuck, kill it and re-run; that will solve the issue.\n\n\n#### (2) If you use Linux and Poetry (without Conda):\n\u003Cdetails>\n  \u003Csummary>Click to check instructions\u003C\u002Fsummary>\n  \u003Cbr>\n\n  Install Poetry: https:\u002F\u002Fpython-poetry.org\u002Fdocs\u002F#installation  \n  Then:\n\n  ``` shell\n  poetry config virtualenvs.in-project true # optional but recommended, will ensure the virtual env is created in the project root\n  poetry config virtualenvs.create true # enable this argument to ensure the virtual env is created in the project root\n  poetry env use python3.10 # will create the virtual env, check with `ls -l .venv`.\n  poetry env activate # optional because Poetry commands (e.g. `poetry install` or `poetry run \u003Ccommand>`) will always automatically load the virtual env.\n  poetry install\n  ```\n\n  **Optional: Flash-attn installation**\n\n  The Hunyuan model uses it to reduce memory usage and speed up inference. If it is not installed, the model will run in normal mode. Install `flash-attn` via:\n  ``` shell\n  poetry run install-flash-attn\n  ```\n  \n  **Optional: Video-to-video enhancement**\n  ```\n  poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n  ```\n  - If this command ↑ gets stuck, kill it and re-run; that will solve the issue.\n\n\u003C\u002Fdetails>\n\n\n\n#### (3) If you use macOS\n\u003Cdetails>\n  \u003Csummary>Click to check instructions\u003C\u002Fsummary>\n  \u003Cbr>\n\n  On macOS with an Apple Silicon chip, use [docker compose](https:\u002F\u002Fdocs.docker.com\u002Fcompose\u002F) because some dependencies do not support arm64 (e.g. `bitsandbytes`, `decord`, `xformers`).\n\n  First build:\n\n  ```shell\n  docker compose build videotuna\n  ```\n\n  To preserve the project's file permissions, set these env variables:\n\n  ```shell\n  export HOST_UID=$(id -u)\n  export HOST_GID=$(id -g)\n  ```\n\n  Install dependencies:\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry env use \u002Fusr\u002Flocal\u002Fbin\u002Fpython\n  docker compose run --remove-orphans videotuna poetry run python -m pip install --upgrade pip setuptools wheel\n  docker compose run --remove-orphans videotuna poetry install\n  docker compose run --remove-orphans videotuna poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n  ```\n\n  Note: installing swissarmytransformer might hang. Just try again and it should work.\n\n  Add a dependency:\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry add wheel\n  ```\n\n  Check dependencies:\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry run pip freeze\n  ```\n\n  Run Poetry commands:\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry run format\n  ```\n\n  Start a terminal:\n\n  ```shell\n  docker compose run -it --remove-orphans videotuna bash\n  ```\n\u003C\u002Fdetails>\n\n### 2. Prepare checkpoints\n\n- Please follow [docs\u002Fcheckpoints.md](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fdocs\u002Fcheckpoints.md) to download model checkpoints.  
\n- After downloading, the model checkpoints should be placed as shown in [Checkpoint Structure](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fdocs\u002Fcheckpoints.md#checkpoint-orgnization-structure).\n\n### 3. Inference with state-of-the-art T2V\u002FI2V\u002FT2I models\n\n\nRun the following commands to perform inference with the models.\nThey will automatically perform T2V\u002FT2I based on the prompts in `inputs\u002Ft2v\u002Fprompts.txt`, \nand I2V based on the images and prompts in `inputs\u002Fi2v\u002F576x1024`.  \n\n**T2V**\n\n|Task|Model|Command|Length (#Frames)|Resolution|Inference Time|GPU Memory (GB)|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|T2V|HunyuanVideo|`poetry run inference-hunyuan-t2v`|129|720x1280|32min|60G|\n|T2V|WanVideo|`poetry run inference-wanvideo-t2v-720p`|81|720x1280|32min|70G|\n|T2V|StepVideo|`poetry run inference-stepvideo-t2v-544x992`|51|544x992|8min|61G|\n|T2V|Mochi|`poetry run inference-mochi`|84|480x848|2min|26G|\n|T2V|CogVideoX-5b|`poetry run inference-cogvideo-t2v-diffusers`|49|480x720|2min|3G|\n|T2V|CogVideoX-2b|`poetry run inference-cogvideo-t2v-diffusers`|49|480x720|2min|3G|\n|T2V|Open Sora V1.0|`poetry run inference-opensora-v10-16x256x256`|16|256x256|11s|24G|\n|T2V|VideoCrafter-V2-320x512|`poetry run inference-vc2-t2v-320x512`|16|320x512|26s|11G|\n|T2V|VideoCrafter-V1-576x1024|`poetry run inference-vc1-t2v-576x1024`|16|576x1024|2min|15G|\n\n---\n\n\n**I2V**\n\n|Task|Model|Command|Length (#Frames)|Resolution|Inference Time|GPU Memory (GB)|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|I2V|WanVideo|`poetry run inference-wanvideo-i2v-720p`|81|720x1280|28min|77G|\n|I2V|HunyuanVideo|`poetry run inference-hunyuan-i2v-720p`|129|720x1280|29min|43G|\n|I2V|CogVideoX-5b-I2V|`poetry run inference-cogvideox-15-5b-i2v`|49|480x720|5min|5G|\n|I2V|DynamiCrafter|`poetry run inference-dc-i2v-576x1024`|16|576x1024|2min|53G|\n|I2V|VideoCrafter-V1|`poetry run inference-vc1-i2v-320x512`|16|320x512|26s|11G|\n\n\n---\n\n**T2I**\n\n|Task|Model|Command|Length (#Frames)|Resolution|Inference Time|GPU Memory (GB)|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|T2I|Flux-dev|`poetry run inference-flux-dev`|1|768x1360|4s|37G|\n|T2I|Flux-dev|`poetry run inference-flux-dev --enable_vae_tiling --enable_sequential_cpu_offload`|1|768x1360|4.2min|2G|\n|T2I|Flux-schnell|`poetry run inference-flux-schnell`|1|768x1360|1s|37G|\n|T2I|Flux-schnell|`poetry run inference-flux-schnell --enable_vae_tiling --enable_sequential_cpu_offload`|1|768x1360|24s|2G|\n\n### 4. Finetune T2V models\n#### (1) Prepare dataset\nPlease follow [docs\u002Fdatasets.md](docs\u002Fdatasets.md) to try the provided toy dataset or build your own datasets.\n\n#### (2) Fine-tune\nAll training commands were tested on H800 80G GPUs.  
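\n**Putting it together**\n\nAs a quick orientation, the sketch below shows how a typical LoRA fine-tuning session chains these entry points (a hedged example rather than an official project script; it only uses commands and docs listed in the tables below and assumes the toy dataset from docs\u002Fdatasets.md has been prepared):\n```shell\n# 1. Prepare the toy dataset (or your own) as described in docs\u002Fdatasets.md\n# 2. LoRA fine-tune Wan2.1 T2V on a single GPU (command from the T2V table below)\npoetry run train-wan2-1-t2v-lora\n# 3. See docs\u002Ffinetune_wan.md for how to run inference with the fine-tuned weights\n```\nThe per-model commands, docs, and GPU counts are listed in the tables that follow.\n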
\n**T2V**\n\n|Task|Model|Mode|Command|More Details|#GPUs|\n|:----|:---------|:---------------|:-----------------------------------------|:----------------------------|:------|\n|T2V|Wan Video|Lora Fine-tune|`poetry run train-wan2-1-t2v-lora`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|T2V|Wan Video|Full Fine-tune|`poetry run train-wan2-1-t2v-fullft`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|T2V|Hunyuan Video|Lora Fine-tune|`poetry run train-hunyuan-t2v-lora`|[docs\u002Ffinetune_hunyuanvideo.md](docs\u002Ffinetune_hunyuanvideo.md)|2|\n|T2V|CogvideoX|Lora Fine-tune|`poetry run train-cogvideox-t2v-lora`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|1|\n|T2V|CogvideoX|Full Fine-tune|`poetry run train-cogvideox-t2v-fullft`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|4|\n|T2V|Open-Sora v1.0|Full Fine-tune|`poetry run train-opensorav10`|-|1|\n|T2V|VideoCrafter|Lora Fine-tune|`poetry run train-videocrafter-lora`|[docs\u002Ffinetune_videocrafter.md](docs\u002Ffinetune_videocrafter.md)|1|\n|T2V|VideoCrafter|Full Fine-tune|`poetry run train-videocrafter-v2`|[docs\u002Ffinetune_videocrafter.md](docs\u002Ffinetune_videocrafter.md)|1|\n\n---\n\n**I2V**\n\n|Task|Model|Mode|Command|More Details|#GPUs|\n|:----|:---------|:---------------|:-----------------------------------------|:----------------------------|:------|\n|I2V|Wan Video|Lora Fine-tune|`poetry run train-wan2-1-i2v-lora`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|I2V|Wan Video|Full Fine-tune|`poetry run train-wan2-1-i2v-fullft`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|I2V|CogvideoX|Lora Fine-tune|`poetry run train-cogvideox-i2v-lora`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|1|\n|I2V|CogvideoX|Full Fine-tune|`poetry run train-cogvideox-i2v-fullft`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|4|\n\n---\n\n**T2I**\n\n|Task|Model|Mode|Command|More Details|#GPUs|\n|:----|:---------|:---------------|:-----------------------------------------|:----------------------------|:------|\n|T2I|Flux|Lora Fine-tune|`poetry run train-flux-lora`|[docs\u002Ffinetune_flux.md](docs\u002Ffinetune_flux.md)|1|\n\n\n### 5. Evaluation\nWe support VBench evaluation to evaluate the T2V generation performance.\nPlease check [eval\u002FREADME.md](docs\u002Fevaluation.md) for details.\n\n\u003C!-- ### 6. Alignment\nWe support video alignment post-training to align human perference for video diffusion models. Please check [configs\u002Ftrain\u002F004_rlhf_vc2\u002FREADME.md](configs\u002Ftrain\u002F004_rlhf_vc2\u002FREADME.md) for details. -->\n\n## Contribute\n\n## Git hooks\n\nGit hooks are handled with [pre-commit](https:\u002F\u002Fpre-commit.com) library.\n\n### Hooks installation\n\nRun the following command to install hooks on `commit`. 
They will check formatting, linting and types.\n\n```shell\npoetry run pre-commit install\npoetry run pre-commit install --hook-type commit-msg\n```\n\n### Running the hooks without commiting\n\n```shell\npoetry run pre-commit run --all-files\n```\n\n## Acknowledgement\nWe thank the following repos for sharing their awesome models and codes!\n\n* [Wan2.1](https:\u002F\u002Fgithub.com\u002FWan-Video\u002FWan2.1): Wan: Open and Advanced Large-Scale Video Generative Models.\n* [HunyuanVideo](https:\u002F\u002Fgithub.com\u002FTencent\u002FHunyuanVideo): A Systematic Framework For Large Video Generation Model.\n* [Step-Video](https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep-Video-T2V): A text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames.\n* [Mochi](https:\u002F\u002Fwww.genmo.ai\u002Fblog): A new SOTA in open-source video generation models\n* [VideoCrafter2](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FVideoCrafter): Overcoming Data Limitations for High-Quality Video Diffusion Models\n* [VideoCrafter1](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FVideoCrafter): Open Diffusion Models for High-Quality Video Generation\n* [DynamiCrafter](https:\u002F\u002Fgithub.com\u002FDoubiiu\u002FDynamiCrafter): Animating Open-domain Images with Video Diffusion Priors\n* [Open-Sora](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FOpen-Sora): Democratizing Efficient Video Production for All\n* [CogVideoX](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo): Text-to-Video Diffusion Models with An Expert Transformer\n* [VADER](https:\u002F\u002Fgithub.com\u002Fmihirp1998\u002FVADER): Video Diffusion Alignment via Reward Gradients\n* [VBench](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench): Comprehensive Benchmark Suite for Video Generative Models\n* [Flux](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux): Text-to-image models from Black Forest Labs.\n* [SimpleTuner](https:\u002F\u002Fgithub.com\u002Fbghira\u002FSimpleTuner): A fine-tuning kit for text-to-image generation.\n\n\n\n\n## Some Resources\n* [LLMs-Meet-MM-Generation](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FAwesome-LLMs-meet-Multimodal-Generation): A paper collection of utilizing LLMs for multimodal generation (image, video, 3D and audio).\n* [MMTrail](https:\u002F\u002Fgithub.com\u002Flitwellchi\u002FMMTrail): A multimodal trailer video dataset with language and music descriptions.\n* [Seeing-and-Hearing](https:\u002F\u002Fgithub.com\u002Fyzxing87\u002FSeeing-and-Hearing): A versatile framework for Joint VA generation, V2A, A2V, and I2A.\n* [Self-Cascade](https:\u002F\u002Fgithub.com\u002FGuoLanqing\u002FSelf-Cascade): A Self-Cascade model for higher-resolution image and video generation.\n* [ScaleCrafter](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FScaleCrafter) and [HiPrompt](https:\u002F\u002Fliuxinyv.github.io\u002FHiPrompt\u002F): Free method for higher-resolution image and video generation.\n* [FreeTraj](https:\u002F\u002Fgithub.com\u002Farthur-qiu\u002FFreeTraj) and [FreeNoise](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FFreeNoise): Free method for video trajectory control and longer-video generation.\n* [Follow-Your-Emoji](https:\u002F\u002Fgithub.com\u002Fmayuelala\u002FFollowYourEmoji), [Follow-Your-Click](https:\u002F\u002Fgithub.com\u002Fmayuelala\u002FFollowYourClick), and [Follow-Your-Pose](https:\u002F\u002Ffollow-your-pose.github.io\u002F): Follow family for controllable video generation.\n* 
[Animate-A-Story](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FAnimate-A-Story): A framework for storytelling video generation.\n* [LVDM](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FLVDM): Latent Video Diffusion Model for long video generation and text-to-video generation.\n\n\n\n## 🍻 Contributors\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_58585d3f9fb2.png\" \u002F>\n\u003C\u002Fa>\n\n## 📋 License\nPlease follow [CC-BY-NC-ND](.\u002FLICENSE). If you want a license authorization, please contact the project leads Yingqing He (yhebm@connect.ust.hk) and Yazhou Xing (yxingag@connect.ust.hk).\n\n## 😊 Citation\n\n```bibtex\n@software{videotuna,\n  author = {Yingqing He and Yazhou Xing and Zhefan Rao and Haoyu Wu and Zhaoyang Liu and Jingye Chen and Pengjun Fang and Jiajun Li and Liya Ji and Runtao Liu and Xiaowei Chi and Yang Fei and Guocheng Shao and Yue Ma and Qifeng Chen},\n  title = {VideoTuna: A Powerful Toolkit for Video Generation with Model Fine-Tuning and Post-Training},\n  month = {Nov},\n  year = {2024},\n  url = {https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna}\n}\n```\n\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_ff88494c5973.png)](https:\u002F\u002Fstar-history.com\u002F#VideoVerses\u002FVideoTuna&Date)\n","\u003Cp align=\"center\" width=\"50%\">\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F38efb5bc-723e-4012-aebd-f55723c593fb\" alt=\"VideoTuna\" style=\"width: 75%; min-width: 450px; display: block; margin: auto; background-color: transparent;\">\n\u003C\u002Fp>\n\n# VideoTuna\n\n![版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fversion-0.1.0-blue) ![访问量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_64b0481c38e0.png)  [![](https:\u002F\u002Fdcbadge.limes.pink\u002Fapi\u002Fserver\u002FAammaaR2?style=flat)](https:\u002F\u002Fdiscord.gg\u002FAammaaR2) \u003Ca href='https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa48d57a3-4d89-482c-8181-e0bce4f750fd'>\u003Cimg src='https:\u002F\u002Fbadges.aleen42.com\u002Fsrc\u002Fwechat.svg'>\u003C\u002Fa> [![主页](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHomepage-VideoTuna-orange)](https:\u002F\u002Fvideoverses.github.io\u002Fvideotuna\u002F) [![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FVideoVerses\u002FVideoTuna?style=social)](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna)\n\n\n🤗🤗🤗 VideoTuna 是一个用于文本到视频应用的实用代码库。  \n🌟 VideoTuna 是首个整合了多种 AI 视频生成模型的仓库，包括“文本到视频 (T2V)”、“图像到视频 (I2V)”、“文本到图像 (T2I)”以及“视频到视频 (V2V)”等生成任务，可用于模型推理与微调（据我们所知）。  \n🌟 VideoTuna 也是首个提供全面视频生成流水线的仓库，涵盖从微调到预训练、持续训练以及后训练对齐等全流程（据我们所知）。  \n\n\n\n## 🔆 特性\n![videotuna-pipeline-fig3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_365c41be98a3.png)\n🌟 **一体化框架：** 支持各类最新预训练视频生成模型的推理与微调。  \n🌟 **持续训练：** 可利用新数据不断优化您的模型。  \n🌟 **微调：** 将预训练模型适配到特定领域。  \n🌟 **人类偏好对齐：** 利用 RLHF 技术使模型更符合人类偏好。  \n🌟 **后处理：** 使用视频到视频增强模型对生成的视频进行优化和修正。  \n\n\n## 🔆 更新\n\n- [2025-04-22] 🐟 支持 `Wan2.1` 和 `Step Video` 的 **推理**，以及 `HunyuanVideo T2V` 的 **微调**，并采用统一的代码库架构。\n- [2025-02-03] 🐟 通过 [PR#27](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fpull\u002F27) 支持自动代码格式化。感谢 [@samidarko](https:\u002F\u002Fgithub.com\u002Fsamidarko)!\n- [2025-02-01] 🐟 迁移到 
[Poetry](https:\u002F\u002Fpython-poetry.org) 以简化依赖管理和脚本运行（[PR#25](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fpull\u002F25)）。感谢 [@samidarko](https:\u002F\u002Fgithub.com\u002Fsamidarko)!\n- [2025-01-20] 🐟 支持 `Flux-T2I` 的 **微调**。\n- [2025-01-01] 🐟 在 [VideoVAEPlus 仓库](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoVAEPlus) 中发布了 `VideoVAE+` 的 **训练**。\n- [2025-01-01] 🐟 支持 `Hunyuan Video` 和 `Mochi` 的 **推理**。\n- [2024-12-24] 🐟 发布了 `VideoVAE+`：一款 SOTA 视频 VAE 模型——现已在 [此仓库](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoVAEPlus) 中可用！其视频重建效果优于 NVIDIA 的 [`Cosmos-Tokenizer`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FCosmos-Tokenizer)。\n- [2024-12-01] 🐟 支持 ModelScope 提供的 `CogVideoX-1.5-T2V&I2V` 和 **视频到视频增强** 的 **推理**。\n- [2024-12-01] 🐟 支持 `CogVideoX` 的 **微调**。\n- [2024-11-01] 🐟 🎉 发布了 **VideoTuna v0.1.0**！  \n  初始支持包括对 `VideoCrafter1-T2V&I2V`、`VideoCrafter2-T2V`、`DynamiCrafter-I2V`、`OpenSora-T2V`、`CogVideoX-1-2B-T2V`、`CogVideoX-1-T2V`、`Flux-T2I` 的推理，以及对 `VideoCrafter`、`DynamiCrafter` 和 `Open-Sora` 的训练与微调。\n\n## 🔆 开始使用\n\n### 1. 准备环境\n\n#### (1) 如果您使用 Linux 和 Conda（推荐）\n``` shell\nconda create -n videotuna python=3.10 -y\nconda activate videotuna\npip install poetry\npoetry install\n```\n- ↑ 大约需要 3 分钟。\n\n**可选：安装 Flash-attn**\n\nHunyuan 模型使用它来减少内存占用并加速推理。如果未安装，模型将以正常模式运行。可通过以下命令安装 `flash-attn`：\n``` shell\npoetry run install-flash-attn \n```\n- ↑ 大约需要 1 分钟。\n\n**可选：视频到视频增强**\n```\npoetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n```\n- 如果此命令 ↑ 卡住，终止后重新运行即可解决问题。\n\n\n#### (2) 如果您使用 Linux 和 Poetry（不使用 Conda）：\n\u003Cdetails>\n  \u003Csummary>点击查看说明\u003C\u002Fsummary>\n  \u003Cbr>\n\n  安装 Poetry：https:\u002F\u002Fpython-poetry.org\u002Fdocs\u002F#installation  \n  然后：\n\n  ``` shell\n  poetry config virtualenvs.in-project true # 可选但建议，确保虚拟环境创建在项目根目录\n  poetry config virtualenvs.create true # 启用此参数以确保虚拟环境创建在项目根目录\n  poetry env use python3.10 # 将创建虚拟环境，可通过 `ls -l .venv` 检查。\n  poetry env activate # 可选，因为 Poetry 命令（例如 `poetry install` 或 `poetry run \u003Ccommand>`）会自动加载虚拟环境。\n  poetry install\n  ```\n\n  **可选：安装 Flash-attn**\n\n  Hunyuan 模型使用它来减少内存占用并加速推理。如果未安装，模型将以正常模式运行。可通过以下命令安装 `flash-attn`：\n  ``` shell\n  poetry run install-flash-attn\n  ```\n  \n  **可选：视频到视频增强**\n  ```\n  poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n  ```\n  - 如果此命令 ↑ 卡住，终止后重新运行即可解决问题。\n\n\u003C\u002Fdetails>\n\n\n\n#### (3) 如果您使用 MacOS\n\u003Cdetails>\n  \u003Csummary>点击查看说明\u003C\u002Fsummary>\n  \u003Cbr>\n\n  在搭载 Apple Silicon 芯片的 macOS 上，请使用 [docker compose](https:\u002F\u002Fdocs.docker.com\u002Fcompose\u002F)，因为某些依赖项不支持 arm64 架构（例如 `bitsandbytes`、`decord`、`xformers`）。\n\n  首先构建：\n\n  ```shell\n  docker compose build videotuna\n  ```\n\n  为保留项目文件权限，设置以下环境变量：\n\n  ```shell\n  export HOST_UID=$(id -u)\n  export HOST_GID=$(id -g)\n  ```\n\n  安装依赖项：\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry env use \u002Fusr\u002Flocal\u002Fbin\u002Fpython\n  docker compose run --remove-orphans videotuna poetry run python -m pip install --upgrade pip setuptools wheel\n  docker compose run --remove-orphans videotuna poetry install\n  docker compose run --remove-orphans videotuna poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n  ```\n\n  注意：安装 swissarmytransformer 可能会卡住。只需重试即可成功。\n\n  添加依赖项：\n\n  ```shell\n  docker compose run 
--remove-orphans videotuna poetry add wheel\n  ```\n\n  检查依赖项：\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry run pip freeze\n  ```\n\n  运行 Poetry 命令：\n\n  ```shell\n  docker compose run --remove-orphans videotuna poetry run format\n  ```\n\n  启动终端：\n\n  ```shell\n  docker compose run -it --remove-orphans videotuna bash\n  ```\n\u003C\u002Fdetails>\n\n### 2. 准备检查点\n\n- 请按照 [docs\u002Fcheckpoints.md](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fdocs\u002Fcheckpoints.md) 下载模型检查点。  \n- 下载完成后，应将模型检查点放置于 [检查点结构](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fdocs\u002Fcheckpoints.md#checkpoint-orgnization-structure) 中。\n\n### 3. 推理最先进的 T2V\u002FI2V\u002FT2I 模型\n\n\n运行以下命令进行模型推理：\n它将根据 `inputs\u002Ft2v\u002Fprompts.txt` 中的提示自动生成 T2V\u002FT2I，并根据 `inputs\u002Fi2v\u002F576x1024` 中的图像和提示生成 I2V。  \n\n**T2V**\n任务|模型|命令|长度（帧数）|分辨率|推理时间|GPU 内存（GB）|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|T2V|HunyuanVideo|`poetry run inference-hunyuan-t2v`|129|720x1280|32分钟|60G|\n|T2V|WanVideo|`poetry run inference-wanvideo-t2v-720p`|81|720x1280|32分钟|70G|\n|T2V|StepVideo|`poetry run inference-stepvideo-t2v-544x992`|51|544x992|8分钟|61G|\n|T2V|Mochi|`poetry run inference-mochi`|84|480x848|2分钟|26G|\n|T2V|CogVideoX-5b|`poetry run inference-cogvideo-t2v-diffusers`|49|480x720|2分钟|3G|\n|T2V|CogVideoX-2b|`poetry run inference-cogvideo-t2v-diffusers`|49|480x720|2分钟|3G|\n|T2V|Open Sora V1.0|`poetry run inference-opensora-v10-16x256x256`|16|256x256|11秒|24G|\n|T2V|VideoCrafter-V2-320x512|`poetry run inference-vc2-t2v-320x512`|16|320x512|26秒|11G|\n|T2V|VideoCrafter-V1-576x1024|`poetry run inference-vc1-t2v-576x1024`|16|576x1024|2分钟|15G|\n\n---\n\n\n**I2V**\n\n\n任务|模型|命令|长度（帧数）|分辨率|推理时间|GPU 内存（GB）|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|I2V|WanVideo|`poetry run inference-wanvideo-i2v-720p `|81|720x1280|28分钟|77G|\n|I2V|HunyuanVideo|`poetry run inference-hunyuan-i2v-720p`|129|720x1280|29分钟|43G|\n|I2V|CogVideoX-5b-I2V|`poetry run inference-cogvideox-15-5b-i2v`|49|480x720|5分钟|5G|\n|I2V|DynamiCrafter|`poetry run inference-dc-i2v-576x1024`|16|576x1024|2分钟|53G|\n|I2V|VideoCrafter-V1|`poetry run inference-vc1-i2v-320x512`|16|320x512|26秒|11G|\n\n\n---\n\n**T2I**\n\n任务|模型|命令|长度（帧数）|分辨率|推理时间|GPU 内存（GB）|\n|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|T2I|Flux-dev|`poetry run inference-flux-dev`|1|768x1360|4秒|37G|\n|T2I|Flux-dev|`poetry run inference-flux-dev --enable_vae_tiling --enable_sequential_cpu_offload`|1|768x1360|4.2分钟|2G|\n|T2I|Flux-schnell|`poetry run inference-flux-schnell`|1|768x1360|1秒|37G|\n|T2I|Flux-schnell|`poetry run inference-flux-schnell --enable_vae_tiling --enable_sequential_cpu_offload`|1|768x1360|24秒|2G|\n\n### 4. 
微调 T2V 模型\n#### (1) 准备数据集\n请按照 [docs\u002Fdatasets.md](docs\u002Fdatasets.md) 中的说明，尝试使用提供的玩具数据集，或构建您自己的数据集。\n\n#### (2) 微调\n所有训练命令均在 H800 80G 显卡上测试通过。  \n**T2V**\n\n|任务|模型|模式|命令|更多详情|#GPU|\n|:----|:---------|:---------------|:-----------------------------------------|:----------------------------|:------|\n|T2V|Wan Video|LoRA 微调|`poetry run train-wan2-1-t2v-lora`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|T2V|Wan Video|全量微调|`poetry run train-wan2-1-t2v-fullft`|[docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md)|1|\n|T2V|Hunyuan Video|LoRA 微调|`poetry run train-hunyuan-t2v-lora`|[docs\u002Ffinetune_hunyuanvideo.md](docs\u002Ffinetune_hunyuanvideo.md)|2|\n|T2V|CogvideoX|LoRA 微调|`poetry run train-cogvideox-t2v-lora`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|1|\n|T2V|CogvideoX|全量微调|`poetry run train-cogvideox-t2v-fullft`|[docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md)|4|\n|T2V|Open-Sora v1.0|全量微调|`poetry run train-opensorav10`|-|1|\n|T2V|VideoCrafter|LoRA 微调|`poetry run train-videocrafter-lora`|[docs\u002Ffinetune_videocrafter.md](docs\u002Ffinetune_videocrafter.md)|1|\n|T2V|VideoCrafter|全量微调|`poetry run train-videocrafter-v2`|[docs\u002Ffinetune_videocrafter.md](docs\u002Ffinetune_videocrafter.md)|1|\n\n---\n\n**I2V**\n\n|任务 | 模型 | 模式 | 命令 | 更多详情 | #GPU |\n| :---- | :--------- | :--------------- | :----------------------------------------- | :---------------------------- | :------ |\n| I2V | Wan Video | LoRA 微调 | `poetry run train-wan2-1-i2v-lora` | [docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md) | 1 |\n| I2V | Wan Video | 全量微调 | `poetry run train-wan2-1-i2v-fullft` | [docs\u002Ffinetune_wan.md](docs\u002Ffinetune_wan.md) | 1 |\n| I2V | CogvideoX | LoRA 微调 | `poetry run train-cogvideox-i2v-lora` | [docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md) | 1 |\n| I2V | CogvideoX | 全量微调 | `poetry run train-cogvideox-i2v-fullft` | [docs\u002Ffinetune_cogvideox.md](docs\u002Ffinetune_cogvideox.md) | 4 |\n\n---\n\n**T2I**\n\n|任务|模型|模式|命令|更多详情|#GPUs|\n|:----|:---------|:---------------|:-----------------------------------------|:----------------------------|:------|\n|T2I|Flux|Lora 微调|`poetry run train-flux-lora`|[docs\u002Ffinetune_flux.md](docs\u002Ffinetune_flux.md)|1|\n\n\n### 5. 评估\n我们支持 VBench 评估，用于评价 T2V 的生成性能。\n详细信息请参阅 [eval\u002FREADME.md](docs\u002Fevaluation.md)。\n\n\u003C!-- ### 6. 
对齐\n我们支持视频扩散模型的后训练对齐，以符合人类偏好。详细信息请参阅 [configs\u002Ftrain\u002F004_rlhf_vc2\u002FREADME.md](configs\u002Ftrain\u002F004_rlhf_vc2\u002FREADME.md)。 -->\n\n## 贡献\n\n## Git 钩子\n\nGit 钩子由 [pre-commit](https:\u002F\u002Fpre-commit.com) 库管理。\n\n### 安装钩子\n\n运行以下命令以在 `commit` 时安装钩子。它们将检查格式、代码风格和类型注解。\n\n```shell\npoetry run pre-commit install\npoetry run pre-commit install --hook-type commit-msg\n```\n\n### 在不提交的情况下运行钩子\n\n```shell\npoetry run pre-commit run --all-files\n```\n\n## 致谢\n我们感谢以下项目分享了他们优秀的模型和代码！\n\n* [Wan2.1](https:\u002F\u002Fgithub.com\u002FWan-Video\u002FWan2.1): Wan：开放且先进的大规模视频生成模型。\n* [HunyuanVideo](https:\u002F\u002Fgithub.com\u002FTencent\u002FHunyuanVideo): 大规模视频生成模型的系统化框架。\n* [Step-Video](https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep-Video-T2V): 一个拥有300亿参数的文本到视频预训练模型，能够生成长达204帧的视频。\n* [Mochi](https:\u002F\u002Fwww.genmo.ai\u002Fblog): 开源视频生成领域的最新SOTA模型。\n* [VideoCrafter2](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FVideoCrafter): 克服数据限制，打造高质量视频扩散模型。\n* [VideoCrafter1](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FVideoCrafter): 开放的高质量视频生成扩散模型。\n* [DynamiCrafter](https:\u002F\u002Fgithub.com\u002FDoubiiu\u002FDynamiCrafter): 利用视频扩散先验动画化开放领域图像。\n* [Open-Sora](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FOpen-Sora): 让高效视频制作惠及所有人。\n* [CogVideoX](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo): 具有专家级Transformer的文本到视频扩散模型。\n* [VADER](https:\u002F\u002Fgithub.com\u002Fmihirp1998\u002FVADER): 通过奖励梯度实现视频扩散对齐。\n* [VBench](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench): 视频生成模型的全面基准测试套件。\n* [Flux](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux): Black Forest Labs 的文本到图像模型。\n* [SimpleTuner](https:\u002F\u002Fgithub.com\u002Fbghira\u002FSimpleTuner): 文本到图像生成的微调工具包。\n\n\n\n\n## 一些资源\n* [LLMs-Meet-MM-Generation](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FAwesome-LLMs-meet-Multimodal-Generation): 一篇关于利用大语言模型进行多模态生成（图像、视频、3D和音频）的论文合集。\n* [MMTrail](https:\u002F\u002Fgithub.com\u002Flitwellchi\u002FMMTrail): 一个多模态预告片视频数据集，包含语言和音乐描述。\n* [Seeing-and-Hearing](https:\u002F\u002Fgithub.com\u002Fyzxing87\u002FSeeing-and-Hearing): 一个用于联合VA生成、V2A、A2V和I2A的多功能框架。\n* [Self-Cascade](https:\u002F\u002Fgithub.com\u002FGuoLanqing\u002FSelf-Cascade): 一种用于更高分辨率图像和视频生成的自级联模型。\n* [ScaleCrafter](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FScaleCrafter) 和 [HiPrompt](https:\u002F\u002Fliuxinyv.github.io\u002FHiPrompt\u002F): 用于更高分辨率图像和视频生成的免费方法。\n* [FreeTraj](https:\u002F\u002Fgithub.com\u002Farthur-qiu\u002FFreeTraj) 和 [FreeNoise](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FFreeNoise): 用于视频轨迹控制和更长视频生成的免费方法。\n* [Follow-Your-Emoji](https:\u002F\u002Fgithub.com\u002Fmayuelala\u002FFollowYourEmoji)、[Follow-Your-Click](https:\u002F\u002Fgithub.com\u002Fmayuelala\u002FFollowYourClick) 和 [Follow-Your-Pose](https:\u002F\u002Ffollow-your-pose.github.io\u002F): 一系列可控视频生成工具。\n* [Animate-A-Story](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FAnimate-A-Story): 一个用于故事性视频生成的框架。\n* [LVDM](https:\u002F\u002Fgithub.com\u002FYingqingHe\u002FLVDM): 用于长视频生成和文本到视频生成的潜在视频扩散模型。\n\n\n\n## 🍻 贡献者\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_58585d3f9fb2.png\" \u002F>\n\u003C\u002Fa>\n\n## 📋 许可证\n请遵守 [CC-BY-NC-ND](.\u002FLICENSE) 许可协议。如需许可授权，请联系项目负责人 Yingqing He (yhebm@connect.ust.hk) 和 Yazhou Xing (yxingag@connect.ust.hk)。\n\n## 😊 引用\n\n```bibtex\n@software{videotuna,\n  author = {Yingqing He and Yazhou Xing 
and Zhefan Rao and Haoyu Wu and Zhaoyang Liu and Jingye Chen and Pengjun Fang and Jiajun Li and Liya Ji and Runtao Liu and Xiaowei Chi and Yang Fei and Guocheng Shao and Yue Ma and Qifeng Chen},\n  title = {VideoTuna: 一款功能强大的视频生成工具包，支持模型微调和后训练优化},\n  month = {Nov},\n  year = {2024},\n  url = {https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna}\n}\n```\n\n## 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_readme_ff88494c5973.png)](https:\u002F\u002Fstar-history.com\u002F#VideoVerses\u002FVideoTuna&Date)","# VideoTuna 快速上手指南\n\nVideoTuna 是一个集成了多种主流 AI 视频生成模型（如文生视频 T2V、图生视频 I2V、文生图 T2I 等）的一站式框架，支持从推理、微调到持续训练的全流程。\n\n## 1. 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐) 或 macOS (需使用 Docker)\n- **Python 版本**: 3.10\n- **依赖管理工具**: Conda (推荐) 或 Poetry\n- **GPU**: 建议使用 NVIDIA GPU，显存需求视具体模型而定（详见使用示例）\n\n### 前置说明\n本项目使用 `Poetry` 管理依赖。国内用户若遇到网络问题，可配置 Poetry 使用国内源（可选）：\n```bash\npoetry config repositories.pypi https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 2. 安装步骤\n\n### 方案 A：Linux + Conda (推荐)\n\n1. **创建并激活虚拟环境**\n   ```shell\n   conda create -n videotuna python=3.10 -y\n   conda activate videotuna\n   ```\n\n2. **安装 Poetry 并部署项目**\n   ```shell\n   pip install poetry\n   poetry install\n   ```\n   *预计耗时约 3 分钟。*\n\n3. **可选优化组件**\n   - **安装 Flash-attn** (用于 Hunyuan 等模型加速及显存优化):\n     ```shell\n     poetry run install-flash-attn\n     ```\n   - **安装视频增强功能** (基于 ModelScope):\n     ```shell\n     poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n     ```\n     *注：若命令卡住，请终止后重新运行即可。*\n\n### 方案 B：macOS (Apple Silicon)\n\n由于部分依赖不支持 arm64 架构，macOS 用户需通过 Docker Compose 运行。\n\n1. **构建镜像**\n   ```shell\n   docker compose build videotuna\n   ```\n\n2. **设置权限变量**\n   ```shell\n   export HOST_UID=$(id -u)\n   export HOST_GID=$(id -g)\n   ```\n\n3. **安装依赖**\n   ```shell\n   docker compose run --remove-orphans videotuna poetry env use \u002Fusr\u002Flocal\u002Fbin\u002Fpython\n   docker compose run --remove-orphans videotuna poetry run python -m pip install --upgrade pip setuptools wheel\n   docker compose run --remove-orphans videotuna poetry install\n   docker compose run --remove-orphans videotuna poetry run pip install \"modelscope[cv]\" -f https:\u002F\u002Fmodelscope.oss-cn-beijing.aliyuncs.com\u002Freleases\u002Frepo.html\n   ```\n   *注：若 `swissarmytransformer` 安装挂起，重试一次通常可解决。*\n\n## 3. 
基本使用\n\n### 第一步：准备模型权重\n请参照官方文档 [docs\u002Fcheckpoints.md](https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fdocs\u002Fcheckpoints.md) 下载所需模型的 Checkpoint，并按照指定目录结构存放。\n\n### 第二步：运行推理 (Inference)\n\nVideoTuna 会自动读取 `inputs\u002F` 目录下的提示词或图片进行生成。以下是常用模型的快速启动命令：\n\n#### 文生视频 (T2V)\n| 模型 | 命令 | 显存参考 |\n| :--- | :--- | :--- |\n| **HunyuanVideo** | `poetry run inference-hunyuan-t2v` | ~60G |\n| **WanVideo (720p)** | `poetry run inference-wanvideo-t2v-720p` | ~70G |\n| **CogVideoX-5b** | `poetry run inference-cogvideo-t2v-diffusers` | ~3G |\n| **Mochi** | `poetry run inference-mochi` | ~26G |\n\n#### 图生视频 (I2V)\n| 模型 | 命令 | 显存参考 |\n| :--- | :--- | :--- |\n| **WanVideo** | `poetry run inference-wanvideo-i2v-720p` | ~77G |\n| **CogVideoX-5b** | `poetry run inference-cogvideox-15-5b-i2v` | ~5G |\n| **DynamiCrafter** | `poetry run inference-dc-i2v-576x1024` | ~53G |\n\n#### 文生图 (T2I) - Flux\n```shell\n# 标准模式 (高显存)\npoetry run inference-flux-dev\n\n# 低显存模式 (开启 VAE Tiling 和 CPU Offload)\npoetry run inference-flux-dev --enable_vae_tiling --enable_sequential_cpu_offload\n```\n\n### 第三步：微调模型 (Fine-tuning)\n\n在微调前，请先按照 [docs\u002Fdatasets.md](docs\u002Fdatasets.md) 准备数据集。以下以 **Wan Video** 的 LoRA 微调为例：\n\n```shell\n# 文生视频 (T2V) LoRA 微调\npoetry run train-wan2-1-t2v-lora\n\n# 图生视频 (I2V) LoRA 微调\npoetry run train-wan2-1-i2v-lora\n```\n\n*注：更多模型的微调命令及详细参数配置（如 Hunyuan, CogVideoX, Flux 等）请参考项目根目录下的 `docs\u002F` 文档。所有训练命令已在 H800 80G GPU 上验证。*","一家专注于电商营销的初创团队，需要为不同品类的商品快速生成高质量的定制化短视频广告。\n\n### 没有 VideoTuna 时\n- **模型切换繁琐**：团队需分别搭建 Text-to-Video、Image-to-Video 等多套独立环境，代码库分散，维护成本极高。\n- **领域适配困难**：通用模型生成的视频缺乏品牌特色（如特定光影或产品质感），缺乏统一的微调接口进行领域适应。\n- **迭代周期漫长**：想要引入新数据持续优化模型或对齐人类审美偏好（RLHF），需从头编写训练脚本，耗时数周。\n- **画质修复割裂**：生成视频若出现瑕疵，需额外寻找并集成独立的视频增强工具，工作流断点严重。\n\n### 使用 VideoTuna 后\n- **一站式框架整合**：通过 VideoTuna 统一代码架构，轻松调用 HunyuanVideo、CogVideoX 等主流模型，无缝切换 T2V、I2V 等多种任务。\n- **高效领域微调**：利用内置的微调流水线，快速将预训练模型适配到“美妆”或\"3C\"特定场景，显著提升了视频的品牌一致性。\n- **持续进化能力**：借助连续训练和人类偏好对齐功能，团队能利用新收集的用户反馈数据不断迭代模型，使生成效果越用越精准。\n- **闭环画质增强**：直接调用集成的 Video-to-Video 增强模块，在生成后立即自动修复瑕疵并提升分辨率，实现了从生成到优化的全流程闭环。\n\nVideoTuna 通过构建全链路视频生成与微调体系，将原本碎片化的开发流程整合为高效闭环，极大降低了定制化视频模型的生产门槛与时间成本。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVideoVerses_VideoTuna_df4d075c.png","VideoVerses","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FVideoVerses_985a544d.png","",null,"https:\u002F\u002Fgithub.com\u002FVideoVerses",[78,82,86],{"name":79,"color":80,"percentage":81},"Python","#3572A5",99.2,{"name":83,"color":84,"percentage":85},"Shell","#89e051",0.7,{"name":87,"color":88,"percentage":89},"Dockerfile","#384d54",0.1,547,29,"2026-03-25T18:37:29","NOASSERTION","Linux, macOS","需要 NVIDIA GPU（Linux 推荐），macOS 需通过 Docker 运行。显存需求依模型而定：推理最低约 3GB (CogVideoX-2B)，最高约 77GB (WanVideo I2V)；训练\u002F微调推荐 H800 80G，单卡或多卡（1-4 张）配置。未明确指定 CUDA 版本，但依赖 flash-attn 和 xformers 通常暗示需要较新版本。","未说明",{"notes":98,"python":99,"dependencies":100},"1. Linux 环境下推荐使用 Conda 配合 Poetry 管理环境。2. macOS (Apple Silicon) 用户必须使用 Docker Compose，因为部分依赖（如 bitsandbytes, decord, xformers）不支持 arm64 架构。3. Hunyuan 模型可选安装 flash-attn 以降低显存占用并加速。4. 视频增强功能需额外安装 modelscope[cv]。5. 所有训练命令已在 H800 80G GPU 上测试通过。6. 
部分依赖安装可能卡顿，建议重试。","3.10",[101,102,103,104,105,106,107,108,109,110],"poetry","flash-attn (可选)","modelscope[cv] (可选)","torch","transformers","diffusers","accelerate","bitsandbytes","decord","xformers",[112,14,13,15],"视频",[114,115,116,117,118,119,120],"ai","aigc","content-production","fine-tuning-diffusion","text-to-video","video-generation","visual-art","2026-03-27T02:49:30.150509","2026-04-11T17:38:31.110715",[124,129,134,139,144,148,152],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},29732,"何时支持 Mochi-1 preview 模型的微调？","维护者表示将尽快支持 Mochi-1 的微调，预计时间在两周左右。","https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fissues\u002F1",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},29733,"如何使用 VideoTuna 进行视频增强（Video Enhancement）？","可以通过运行以下脚本来实现视频到视频（v2v）的增强效果：https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fblob\u002Fmain\u002Fshscripts\u002Finference_v2v_ms.sh","https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fissues\u002F21",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},29734,"README 中展示的视频压缩与重建示例是如何实现的？使用了哪个 VAE 模型？","示例中的 VAE 演示是使用项目团队自研的 VAE 模型获得的，并非来自 ModelScope 或 CogVid 的 3D Causal VAE。官方表示即将发布该模型的权重文件。","https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fissues\u002F20",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},29735,"训练 VideoCrafter2 时，即使配置了 `cond_stage_trainable: true` 也无法训练文本编码器，如何解决？","这是因为 `get_learned_conditioning` 过程被包裹在 `@torch.no_grad()` 装饰器中，导致无法计算梯度。解决方法是移除该装饰器。具体代码位置参考：src\u002Fbase\u002Fddpm3d.py 中的 `get_batch_input` 函数。","https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fissues\u002F13",{"id":145,"question_zh":146,"answer_zh":147,"source_url":143},29736,"Shell 脚本中的 `--auto_resume True` 参数写法是否正确？","不需要添加 \"True\"。`auto_resume` 是一个标志位（flag），没有参数值。如果存在该参数则视为真，不存在则视为假。正确的写法应仅保留 `--auto_resume`。",{"id":149,"question_zh":150,"answer_zh":151,"source_url":143},29737,"训练日志消息重复打印两次，该如何修复？","这通常是因为主 logger 继承了根 logger 的 handler。解决方法是在 `set_logger` 函数（位于 src\u002Futils\u002Ftrain_utils.py）中设置 `logger.propagate = False`。",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},29738,"VideoCrafter2 配置文件中为什么使用 L1 Loss 而不是常见的 L2 Loss？","这是一个配置错误，官方已修复。虽然测试显示在 LoRA 微调设置下 L1 和 L2 的结果非常相似，但原始训练损失确实应该使用 L2 Loss。","https:\u002F\u002Fgithub.com\u002FVideoVerses\u002FVideoTuna\u002Fissues\u002F11",[]]