[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-BlackSamorez--tensor_parallel":3,"tool-BlackSamorez--tensor_parallel":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":32,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":100,"github_topics":101,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":142},8777,"BlackSamorez\u002Ftensor_parallel","tensor_parallel","Automatically split your PyTorch models on multiple GPUs for training & inference","tensor_parallel 是一款专为 PyTorch 设计的开源工具，旨在帮助开发者轻松地将大型模型拆分到多张 GPU 上进行训练或推理。它主要解决了显存受限导致无法运行大参数模型（如几十亿参数的 LLM）的难题，让用户仅需一行代码即可实现模型的自动并行化，无需手动修改复杂的网络结构。\n\n这款工具非常适合 AI 研究人员、算法工程师以及需要处理大规模深度学习模型的开发者使用。其核心亮点在于“无感”集成：只需在模型加载后包裹一层 `tp.tensor_parallel`，原有的训练和推理代码几乎无需变动即可在多卡环境下运行。此外，它默认集成了类似 ZeRO-3 的参数分片技术，能自动避免参数冗余，显著提升显存利用率；同时支持将并行化后的模型无缝保存为标准格式，便于后续部署。无论是微调大型语言模型还是进行高负载推理，tensor_parallel 都能以极简的方式释放多显卡集群的算力潜能。","# tensor_parallel\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftensor-parallel.svg?color=blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensor-parallel\u002F)\n[![Black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![CI status](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Factions\u002Fworkflows\u002Frun-tests.yaml\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Factions)\n\n\u003Cp align=\"center\">\n    🚀 &nbsp;\u003Cb>\u003Ca href=\"https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fblacksamorez\u002Ftensor-parallel-int4-llm\u002F\">Try new 40B LLMs demo in Kaggle\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fp>\n\nRun large PyTorch models on multiple GPUs in one line of code with potentially linear speedup.\n\n```python\nimport transformers\nimport tensor_parallel as tp\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"facebook\u002Fopt-13b\")\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\"facebook\u002Fopt-13b\")  # use opt-125m for testing\n\nmodel = tp.tensor_parallel(model, [\"cuda:0\", \"cuda:1\"])  # \u003C- each GPU has half the weights\n\ninputs = tokenizer(\"A cat sat\", 
return_tensors=\"pt\")[\"input_ids\"].to(\"cuda:0\")\noutputs = model.generate(inputs, num_beams=5)\nprint(tokenizer.decode(outputs[0])) # A cat sat on my lap for a few minutes ...\n\nmodel(input_ids=inputs, labels=inputs).loss.backward()  # training works as usual\n```\n\n## Installation\nLatest stable version (recommended):\n```\npip install tensor_parallel\n```\nBleeding edge version:\n```\npip install https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Farchive\u002Fmain.zip\n```\n\n\n## Usage\n\n\nSimply wrap your PyTorch model with `tp.tensor_parallel` and use it normally.\nFor best memory efficiency, call `tp.tensor_parallel` while the model is still on CPU.  \n\nHere are a few use cases:\n- [`examples\u002Ftraining_flan-t5-xl.ipynb`](.\u002Fexamples\u002Ftraining_flan-t5-xl.ipynb) - fine-tune full FLAN-T5 model on text summarization\n- [`tensor_parallel int8 LLM`](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fblacksamorez\u002Ftensor-parallel-int8-llm\u002F) - adapter-tuning a large language model with LLM.8bit + tensor_parallel\n- __TBA__ - defining custom parallelism strategy\n\n\nAdvanced parameters to `tensor_parallel`:\n- `device_ids: List[device]` - which devices to use; defaults to all available GPUs\n- `output_device: device` - model outputs will have this device\n- `tensor_parallel_config: tp.Config` - use custom parallelism strategy, see [`slicing_configs.py`](.\u002Fsrc\u002Ftensor_parallel\u002Fslicing_configs.py)\n- `distributed: bool` - if True, use torch.distributed backend instead of threading (requires `torchrun`)\n- `sharded: bool` - if True, find all trainable parameters that weren't split by Tensor Parallelism and split them using [ZeRO-3 algorithm](https:\u002F\u002Fdeepspeed.readthedocs.io\u002Fen\u002Flatest\u002Fzero3.html).\n   - weights will be split between GPUs and re-assembled before each forward pass\n   - TL;DR use this when training to avoid duplicate parameters (enabled by default!) \n   - `sharded_param_names: List[str]` - parameter names that should be sharded this way, default = found automatically\n\n  \n### Saving the model\n\nTo save a model such that it could be used in a non `tensor_parallel` context, you should use a `save_tensor_parallel` context wrapper.\n\n```python\nimport torch\nimport transformers\nimport tensor_parallel as tp\n\nmodel = tp.tensor_parallel(\n    transformers.AutoModelForCausalLM.from_pretrained(\"facebook\u002Fopt-13b\"), \n)\n\n# A whole lot of trainig...\n\nwith tp.save_tensor_parallel(model):\n    torch.save(model.state_dict(), \"\u002Ftmp\u002F\")\n    # or \n    model.save_pretrained(\"\u002Ftmp\u002F\")\n```\n\nSuch code saves a model as if it was never split. It works by gathering model parts during `state_dict` creation.\n  \n### Memory efficient dispatch\n\nNormally, to normally create and dispatch a `tensor_parallel` model, one needs the whole model in memory. This can be troublesome, but there is another way.\n\nIt's possible to convert a `state_dict` of a basic model into the corresponding `tensor_parallel` `state_dict` using a helper function `convert_state_dict`. 
The state dict can then be dispatched and loaded into the model:\n\n```python\nimport accelerate\nimport torch\nimport transformers\n\nimport tensor_parallel as tp\n\n# Initialize a weightless tensor_parallel model from MyModel (your own model class)\nwith accelerate.init_empty_weights():\n    model = tp.TensorParallel(\n        MyModel(),\n        device_ids=[0, 1] # and prepare it to be put on GPUs 0 and 1\n    )\n\n# Load a partial state_dict for MyModel\nstate_dict = torch.load(\"my_model_part_1_of_5.bin\")\n\n# Convert it into a tensor_parallel state_dict\ntensor_parallel_state_dict = tp.convert_state_dict(\n    state_dict,\n    tensor_parallel_config=model.tensor_parallel_config,\n    world_size=len(model.devices),\n)\n\n# Dispatch the converted state_dict (load_state_dict doesn't work with meta tensors, so accelerate is used here)\ndevice_map = tp.infer_sharded_device_map(model)\nfor param_name, param in tensor_parallel_state_dict.items():\n    module_name = param_name\n    while len(module_name) > 0 and module_name not in device_map:\n        module_name = \".\".join(module_name.split(\".\")[:-1])\n    param_device = device_map[module_name]\n    accelerate.utils.set_module_tensor_to_device(model, param_name, param_device, value=param)\n```\n\nWith this, no more than one part of the model needs to be loaded into memory at once.\n  \n## FAQ\n\n- __Q:__ I don't have a multi-GPU server. Can I use tensor_parallel in Google Colab?\n- __A:__ Colab has a single GPU, so there's no point in tensor parallelism. However, [Kaggle offers two T4s for free](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fmuellerzr\u002Fmulti-gpu-and-accelerate) to all phone-verified accounts.\n\n\n- __Q:__ What is tensor parallelism?\n- __A:__ You split each layer's weights into parts, multiply each part on a separate GPU, then gather the results. Read more [here](https:\u002F\u002Fcolossalai.org\u002Fdocs\u002Fconcepts\u002Fparadigms_of_parallelism\u002F)\n \n\n- __Q:__ Should I use `TensorParallel` or `DataParallel`?\n- __A:__ TensorParallel for large models, DataParallel for smaller ones\n\n\n- __Q:__ How does it compare against FullyShardedDataParallel and ZeRO?\n- __A:__ ZeRO is better if you can fit a large batch, TensorParallel is better for small batches\n\n\nWhy use `tensor_parallel` ...\n- v.s. [DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed) and [FairScale](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffairscale\u002F)\n  - DeepSpeed has many parallelization strategies, but requires careful configuration\n  - tensor_parallel has one strategy that works with 1 line of code\n  - tensor_parallel works in a Jupyter notebook\n- v.s. [MegatronLM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMegatron-LM)\n  - MegatronLM has _great_ tensor parallelism for one model architecture\n  - tensor_parallel has _good_ parallelism for any architecture\n  - tensor_parallel is way easier to install\n- v.s. [parallelformers](https:\u002F\u002Fgithub.com\u002Ftunib-ai\u002Fparallelformers) \n  - parallelformers is inference-only, tensor_parallel supports training\n- v.s. [`alpa`](https:\u002F\u002Fgithub.com\u002Falpa-projects\u002Falpa)\n  - alpa is a powerful tool for automatic distributed training \u002F inference in JAX\n  - tensor_parallel works with PyTorch\n- v.s. 
[`Model.parallelize()`](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fgpt2#transformers.GPT2Model.parallelize)\n  - both are easy to use, both fit large models\n  - in parallelize, one GPU works at a time\n  - in tensor_parallel, GPUs work in parallel\n\nIn short, use `tensor_parallel` for quick prototyping on a single machine.\nUse DeepSpeed+Megatron or alpa for million-dollar training runs.\n\n\n## Troubleshooting\n\nIf you experience NCCL errors, or random hanging, you may have some code errors that are not displayed properly. \nTo debug these errors, we recommend restarting with `export TENSOR_PARALLEL_USE_NATIVE=1` or on a single device. \n\nIf you found a bug or encountered a problem, please report it to [our issue tracker](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues).\nWe will do our best to help, but it may take some time before we get to it.\nPlease create issues only if your problem is specifically with `tensor_parallel`.\nFor example, if you need help installing `transformers` or optimizing your code, please seek it elsewhere.\n\n### Code style\n\nWe use [black](https:\u002F\u002Fblack.readthedocs.io\u002Fen\u002Fstable\u002Fthe_black_code_style\u002Fcurrent_style.html) and [isort](https:\u002F\u002Fpycqa.github.io\u002Fisort\u002F) for all pull requests.\nBefore committing your code, simply run `black . && isort .` and you will be fine.\n\n--------------------------------------------------------------------------------\n","# tensor_parallel\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftensor-parallel.svg?color=blue)](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensor-parallel\u002F)\n[![Black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack)\n[![CI status](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Factions\u002Fworkflows\u002Frun-tests.yaml\u002Fbadge.svg?branch=main)](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Factions)\n\n\u003Cp align=\"center\">\n    🚀 &nbsp;\u003Cb>\u003Ca href=\"https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fblacksamorez\u002Ftensor-parallel-int4-llm\u002F\">在 Kaggle 上试用全新的 40B 参数 LLM 演示\u003C\u002Fa>\u003C\u002Fb>\n\u003C\u002Fp>\n\n只需一行代码，即可在多块 GPU 上运行大型 PyTorch 模型，并有望实现线性加速。\n\n```python\nimport transformers\nimport tensor_parallel as tp\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"facebook\u002Fopt-13b\")\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\"facebook\u002Fopt-13b\")  # 使用 opt-125m 进行测试\n\nmodel = tp.tensor_parallel(model, [\"cuda:0\", \"cuda:1\"])  # \u003C- 每块 GPU 只持有模型参数的一半\n\ninputs = tokenizer(\"A cat sat\", return_tensors=\"pt\")[\"input_ids\"].to(\"cuda:0\")\noutputs = model.generate(inputs, num_beams=5)\nprint(tokenizer.decode(outputs[0])) # A cat sat on my lap for a few minutes ...\n\nmodel(input_ids=inputs, labels=inputs).loss.backward()  # 训练过程与平常无异\n```\n\n## 安装\n最新稳定版（推荐）：\n```\npip install tensor_parallel\n```\n开发版：\n```\npip install https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Farchive\u002Fmain.zip\n```\n\n\n## 使用方法\n\n\n只需将你的 PyTorch 模型包裹在 `tp.tensor_parallel` 中，即可像往常一样使用。为获得最佳内存效率，请在模型仍位于 CPU 上时调用 `tp.tensor_parallel`。\n\n以下是一些使用场景：\n- [`examples\u002Ftraining_flan-t5-xl.ipynb`](.\u002Fexamples\u002Ftraining_flan-t5-xl.ipynb) - 在文本摘要任务上微调完整的 FLAN-T5 模型\n- [`tensor_parallel int8 
LLM`](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fblacksamorez\u002Ftensor-parallel-int8-llm\u002F) - 使用 LLM.8bit + tensor_parallel 对大型语言模型进行适配器微调\n- __待定__ - 自定义并行策略的定义\n\n\n`tensor_parallel` 的高级参数：\n- `device_ids: List[device]` - 指定使用的设备；默认为所有可用的 GPU\n- `output_device: device` - 模型输出将位于该设备上\n- `tensor_parallel_config: tp.Config` - 使用自定义并行策略，详见 [`slicing_configs.py`](.\u002Fsrc\u002Ftensor_parallel\u002Fslicing_configs.py)\n- `distributed: bool` - 如果为 True，则使用 torch.distributed 后端而非多线程（需要 `torchrun`）\n- `sharded: bool` - 如果为 True，将查找所有未被 Tensor Parallelism 划分的可训练参数，并使用 [ZeRO-3 算法](https:\u002F\u002Fdeepspeed.readthedocs.io\u002Fen\u002Flatest\u002Fzero3.html) 进行划分。\n   - 权重将在各 GPU 之间分割，并在每次前向传播前重新组合\n   - 简而言之：训练时使用此选项可避免参数重复（默认启用！）\n   - `sharded_param_names: List[str]` - 需要以这种方式划分的参数名称，默认为自动检测\n\n  \n### 保存模型\n\n若要保存一个可在非 `tensor_parallel` 环境中使用的模型，应使用 `save_tensor_parallel` 上下文包装器。\n\n```python\nimport torch\nimport transformers\nimport tensor_parallel as tp\n\nmodel = tp.tensor_parallel(\n    transformers.AutoModelForCausalLM.from_pretrained(\"facebook\u002Fopt-13b\"), \n)\n\n# 大量训练...\n\nwith tp.save_tensor_parallel(model):\n    torch.save(model.state_dict(), \"\u002Ftmp\u002Fmodel.bin\")  # torch.save 需要文件路径而非目录\n    # 或者\n    model.save_pretrained(\"\u002Ftmp\u002F\")\n```\n\n这段代码会将模型保存为未经过分割的状态。其原理是在创建 `state_dict` 时收集模型各部分的数据。\n  \n### 内存高效的调度\n\n通常情况下，要创建并调度一个 `tensor_parallel` 模型，需要将整个模型加载到内存中。这可能会带来一些麻烦，但其实还有另一种方法。\n\n可以使用辅助函数 `convert_state_dict` 将基础模型的 `state_dict` 转换为对应的 `tensor_parallel` `state_dict`，然后将其调度并加载到模型中：\n\n```python\nimport accelerate\nimport torch\nimport transformers\n\nimport tensor_parallel as tp\n\n# 从 MyModel（你自己的模型类）初始化一个无权重的 tensor_parallel 模型\nwith accelerate.init_empty_weights():\n    model = tp.TensorParallel(\n        MyModel(),\n        device_ids=[0, 1] # 并准备将其放置在 GPU 0 和 1 上\n    )\n\n# 加载 MyModel 的部分 state_dict\nstate_dict = torch.load(\"my_model_part_1_of_5.bin\")\n\n# 将其转换为 tensor_parallel 的 state_dict\ntensor_parallel_state_dict = tp.convert_state_dict(\n    state_dict,\n    tensor_parallel_config=model.tensor_parallel_config,\n    world_size=len(model.devices),\n)\n\n# 调度转换后的 state_dict（由于 meta 不支持 load_state_dict，这里使用 accelerate）\ndevice_map = tp.infer_sharded_device_map(model)\nfor param_name, param in tensor_parallel_state_dict.items():\n    module_name = param_name\n    while len(module_name) > 0 and module_name not in device_map:\n        module_name = \".\".join(module_name.split(\".\")[:-1])\n    param_device = device_map[module_name]\n    accelerate.utils.set_module_tensor_to_device(model, param_name, param_device, value=param)\n```\n\n通过这种方法，每次只需将模型的一部分加载到内存中即可。\n\n## 常见问题解答\n\n- __问：__ 我没有多GPU服务器，可以在Google Colab中使用tensor_parallel吗？\n- __答：__ Colab只有一块GPU，因此使用张量并行没有意义。不过，[Kaggle为所有通过手机验证的账户免费提供两块T4显卡](https:\u002F\u002Fwww.kaggle.com\u002Fcode\u002Fmuellerzr\u002Fmulti-gpu-and-accelerate)。\n\n\n- __问：__ 什么是张量并行？\n- __答：__ 将每一层的权重分成若干部分，在不同的GPU上分别进行矩阵乘法运算，最后再将结果汇总。更多内容请参阅[这里](https:\u002F\u002Fcolossalai.org\u002Fdocs\u002Fconcepts\u002Fparadigms_of_parallelism\u002F)。\n\n\n- __问：__ 我应该使用`TensorParallel`还是`DataParallel`？\n- __答：__ 对于大型模型使用`TensorParallel`，对于较小的模型则使用`DataParallel`。\n\n\n- __问：__ 它与FullyShardedDataParallel和ZeRO相比如何？\n- __答：__ 如果能容纳较大的批次，ZeRO更好；而对于小批次，`TensorParallel`更优。\n\n\n为什么使用`tensor_parallel`……\n- 与[DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed)和[FairScale](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffairscale\u002F)相比：\n  - DeepSpeed提供了多种并行化策略，但需要仔细配置。\n  - `tensor_parallel`只需一行代码即可实现一种有效的并行化策略。\n  - `tensor_parallel`可以直接在Jupyter 
Notebook中运行。\n- 与[MegatronLM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMegatron-LM)相比：\n  - MegatronLM针对特定模型架构提供了非常优秀的张量并行支持。\n  - `tensor_parallel`则适用于任何架构，并且安装更加简单。\n- 与[parallelformers](https:\u002F\u002Fgithub.com\u002Ftunib-ai\u002Fparallelformers)相比：\n  - parallelformers仅支持推理，而`tensor_parallel`支持训练。\n- 与[`alpa`](https:\u002F\u002Fgithub.com\u002Falpa-projects\u002Falpa)相比：\n  - alpa是用于JAX框架下自动分布式训练和推理的强大工具。\n  - `tensor_parallel`则基于PyTorch。\n- 与[`Model.parallelize()`](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fgpt2#transformers.GPT2Model.parallelize)相比：\n  - 两者都易于使用，都能处理大型模型。\n  - 在`parallelize`中，每次只有一个GPU工作。\n  - 而在`tensor_parallel`中，多个GPU可以并行工作。\n\n简而言之，如果只是在单机上快速原型设计，建议使用`tensor_parallel`。如果是大规模、耗资巨大的训练任务，则应选择DeepSpeed+Megatron或alpa。\n\n\n## 故障排除\n\n如果您遇到NCCL错误或程序随机挂起的情况，可能是代码中存在未被正确显示的错误。\n为了调试这些问题，我们建议您通过设置`export TENSOR_PARALLEL_USE_NATIVE=1`来重启，或者直接在单个设备上运行。\n\n如果您发现了bug或遇到了问题，请将其报告至[我们的问题追踪器](https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues)。\n我们将尽最大努力帮助您解决问题，但可能需要一些时间才能处理。请仅在您的问题与`tensor_parallel`相关时才创建新问题。\n例如，如果您需要帮助安装`transformers`库或优化代码，请尝试在其他地方寻求支持。\n\n### 代码风格\n\n我们对所有拉取请求均采用[black](https:\u002F\u002Fblack.readthedocs.io\u002Fen\u002Fstable\u002Fthe_black_code_style\u002Fcurrent_style.html)和[isort](https:\u002F\u002Fpycqa.github.io\u002Fisort\u002F)格式化工具。\n在提交代码之前，只需运行`black . && isort .`，即可确保代码符合规范。\n\n--------------------------------------------------------------------------------","# tensor_parallel 快速上手指南\n\n`tensor_parallel` 是一个轻量级的 PyTorch 工具，旨在通过**一行代码**将大型模型拆分到多个 GPU 上进行张量并行（Tensor Parallelism）训练或推理，从而实现近乎线性的加速比。它无需复杂的配置，支持任意模型架构，且兼容标准的 PyTorch 训练流程。\n\n## 环境准备\n\n*   **操作系统**: Linux (推荐), macOS, Windows\n*   **Python**: 3.7+\n*   **核心依赖**:\n    *   PyTorch (需安装支持多 GPU 的版本)\n    *   Transformers (可选，用于加载常见大模型)\n*   **硬件要求**: 至少 2 块 NVIDIA GPU (单卡无法体现张量并行优势，但可运行测试)\n*   **通信后端**: 默认使用多线程，若需高性能分布式训练需确保 NCCL 正常运作。\n\n## 安装步骤\n\n推荐使用 pip 安装最新稳定版：\n\n```bash\npip install tensor_parallel\n```\n\n如需体验最新开发版功能：\n\n```bash\npip install https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Farchive\u002Fmain.zip\n```\n\n> **提示**：国内用户若下载缓慢，可配置清华源或阿里源加速：\n> `pip install tensor_parallel -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 基本使用\n\n### 1. 最简单的多卡推理\u002F训练\n\n只需将模型包裹在 `tp.tensor_parallel` 中，指定使用的 GPU 设备列表即可。建议在模型仍在 CPU 上时进行包裹，以获得最佳内存效率。\n\n```python\nimport transformers\nimport tensor_parallel as tp\n\n# 1. 加载模型 (此时模型在 CPU 上)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"facebook\u002Fopt-13b\")\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\"facebook\u002Fopt-13b\")  \n# 测试时可替换为小模型: \"facebook\u002Fopt-125m\"\n\n# 2. 应用张量并行 (一行代码)\n# 模型权重将被自动拆分到 cuda:0 和 cuda:1 上\nmodel = tp.tensor_parallel(model, [\"cuda:0\", \"cuda:1\"])  \n\n# 3. 准备输入 (移至任意一个参与并行的设备，如 cuda:0)\ninputs = tokenizer(\"A cat sat\", return_tensors=\"pt\")[\"input_ids\"].to(\"cuda:0\")\n\n# 4. 推理 (生成文本)\noutputs = model.generate(inputs, num_beams=5)\nprint(tokenizer.decode(outputs[0])) \n\n# 5. 训练 (反向传播与普通用法完全一致)\nloss = model(input_ids=inputs, labels=inputs).loss\nloss.backward() \n```\n\n### 2. 保存模型\n\n若要保存模型以便在非 `tensor_parallel` 环境下使用（即恢复为完整单模型格式），请使用 `save_tensor_parallel` 上下文管理器。它会自动收集分散的权重。\n\n```python\nimport torch\nimport tensor_parallel as tp\n\n# ... 
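此处通常是完整的训练过程。一个最小的训练步示意（假设 optimizer 已创建、batch 为分词后的输入，仅作示意，并非库的固定写法）：\n# loss = model(input_ids=batch, labels=batch).loss\n# loss.backward()\n# optimizer.step()\n# optimizer.zero_grad()\n# ... 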
训练完成后 ...\n\nwith tp.save_tensor_parallel(model):\n    # 保存为标准 state_dict 或 pretrained 格式\n    torch.save(model.state_dict(), \"\u002Ftmp\u002Fmodel.bin\")\n    # 或者\n    # model.save_pretrained(\"\u002Ftmp\u002F\")\n```\n\n### 3. 高级用法：显存优化 (Sharding)\n\n在训练场景下，为了避免参数重复占用显存，默认启用 `sharded=True`。这会利用类似 ZeRO-3 的算法，仅在计算前临时重组需要的参数片段。\n\n```python\n# 显式开启分片模式（默认已开启，此处仅作演示）\nmodel = tp.tensor_parallel(\n    model, \n    [\"cuda:0\", \"cuda:1\"], \n    sharded=True \n)\n```\n\n### 4. 进阶：低显存加载大模型\n\n如果显存不足以一次性加载完整模型，可以先初始化空权重模型，再分块加载权重并转换：\n\n```python\nimport accelerate\nimport torch\nimport tensor_parallel as tp\nfrom my_model import MyModel # 假设这是你的模型类\n\n# 1. 初始化无权重模型\nwith accelerate.init_empty_weights():\n    model = tp.TensorParallel(\n        MyModel(),\n        device_ids=[0, 1]\n    )\n\n# 2. 加载部分权重文件\nstate_dict = torch.load(\"my_model_part_1_of_5.bin\")\n\n# 3. 将普通 state_dict 转换为 tensor_parallel 格式\ntensor_parallel_state_dict = tp.convert_state_dict(\n    state_dict,\n    tensor_parallel_config=model.tensor_parallel_config,\n    world_size=len(model.devices),\n)\n\n# 4. 将权重分发到对应设备（注意遍历转换后的 tensor_parallel_state_dict，参数名与分片形状才能与模型对应）\ndevice_map = tp.infer_sharded_device_map(model)\nfor param_name, param in tensor_parallel_state_dict.items():\n    module_name = param_name\n    while len(module_name) > 0 and module_name not in device_map:\n        module_name = \".\".join(module_name.split(\".\")[:-1])\n    param_device = device_map[module_name]\n    accelerate.utils.set_module_tensor_to_device(model, param_name, param_device, value=param)\n```","某 AI 初创团队需要在单台配备双显卡的服务器上，对参数量达 130 亿的 OPT 大语言模型进行微调训练，以适配垂直领域的客服对话数据。\n\n### 没有 tensor_parallel 时\n- **显存直接溢出**：单个 GPU 无法容纳完整的 13B 模型权重，尝试加载即报 OOM（显存不足）错误，被迫放弃本地调试。\n- **改造代码极其繁琐**：若强行使用多卡，需手动重写模型的前向传播逻辑，将矩阵乘法拆解并处理复杂的跨卡通信，开发周期长达数周。\n- **训练效率低下**：即使通过复杂的数据并行勉强运行，由于无法利用张量并行技术，单步计算耗时极长，难以快速验证算法效果。\n- **保存模型困难**：训练完成后，分散在多卡上的权重难以合并为标准格式，导致模型无法部署或迁移到其他环境。\n\n### 使用 tensor_parallel 后\n- **一行代码自动切分**：仅需调用 `tp.tensor_parallel(model, [\"cuda:0\", \"cuda:1\"])`，即可自动将模型权重均匀拆分到两张显卡上，瞬间解决显存瓶颈。\n- **零侵入式开发**：无需修改任何模型内部结构或训练循环代码，原有的 PyTorch 训练脚本可直接复用，当天即可启动实验。\n- **线性加速推理与训练**：自动优化了跨卡通信策略，在双卡环境下实现了接近线性的速度提升，大幅缩短了迭代周期。\n- **无缝保存与部署**：利用 `save_tensor_parallel` 上下文管理器，可自动将分布式权重聚合并保存为单一标准文件，方便后续直接部署。\n\ntensor_parallel 让开发者无需成为分布式系统专家，也能在消费级多卡设备上轻松驾驭超大参数模型的训练与推理。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FBlackSamorez_tensor_parallel_992cd670.png","BlackSamorez","Andrei Panferov","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FBlackSamorez_dff00ebe.jpg","PhD student at ISTA.\r\n\r\nYSDA alumnus.","ISTA","Vienna, Austria","andrei@panferov.org",null,"https:\u002F\u002Fblog.panferov.org\u002F","https:\u002F\u002Fgithub.com\u002FBlackSamorez",[83],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,655,44,"2026-04-16T01:11:58","MIT","未说明","需要多张 NVIDIA GPU（单卡无法发挥并行优势），具体型号和显存取决于模型大小，需支持 CUDA 和 NCCL","未说明（建议足够容纳未分割前的完整模型权重或分片加载）",{"notes":95,"python":91,"dependencies":96},"该工具旨在通过一行代码将大型 PyTorch 模型拆分到多张 GPU 上进行训练或推理。默认启用 ZeRO-3 算法以避免参数重复。若遇到 NCCL 错误或挂起，可设置环境变量 TENSOR_PARALLEL_USE_NATIVE=1 进行调试。在内存受限场景下，支持结合 accelerate 进行无权重初始化及分片状态字典加载。",[97,98,99],"torch","transformers","accelerate",[14,35],[102,103,104,105,106,107,108],"deep-learning","machine-learning","natural-language-processing","nlp","python","pytorch","pytorch-transformers","2026-03-27T02:49:30.150509","2026-04-18T09:19:32.108843",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},39371,"如何在多 GPU 环境下加载 LoRA 微调后的权重？","你需要使用 `tensor_parallel.infer_sharded_device_map(model)` 从元模型（meta model）创建设备映射，然后将其传递给 
`accelerate.load_checkpoint_in_model` 函数的 `device_map` 参数。如果不这样做，Accelerate 将无法知道如何分发张量从而报错。代码示例：\n`device_map = tensor_parallel.infer_sharded_device_map(model)`\n`accelerate.load_checkpoint_in_model(..., device_map=device_map)`","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F67",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},39372,"如何解决 'All tensors must be on devices[0]' 的错误？","这是一个已知问题，解决方法是手动将输入张量放置在第一个设备上（通常是 cuda:0）。例如：\n`inputs = tokenizer(\"cat:\", return_tensors=\"pt\")[\"input_ids\"].to(\"cuda:0\")`\n即使过去任何 CUDA 设备都能工作，现在必须显式指定到 devices[0]。","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F79",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},39373,"是否支持 PEFT LoRA 和 4-bit 量化？","是的，目前该库已支持 PEFT LoRA 和 4-bit 量化。如果在运行时遇到问题，请确保你的环境依赖版本正确。有用户反馈升级到 PEFT 的主分支（main branch）后解决了相关兼容性问题。你可以参考 Kaggle 上的 demo 来查看具体的环境配置和安装方式。","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F80",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},39374,"如何使用 tensor_parallel 运行 LLaMA 模型？","LLaMA 模型现已完全支持。你可以使用 `TensorParallelPreTrainedModel` 包装加载后的模型。代码示例：\n`from tensor_parallel import TensorParallelPreTrainedModel`\n`model = LlamaForCausalLM.from_pretrained(\"...\", torch_dtype=torch.float16)`\n`model = TensorParallelPreTrainedModel(model, [\"cuda:0\", \"cuda:1\"])`\n注意：确保 transformers 版本在 4.28.0 或以上以获得最佳兼容性。","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F51",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},39375,"为什么在使用全量微调时会报 'Model parameters were moved to incorrect devices' 错误？","这通常是因为与 Hugging Face Transformers 的 Trainer 类存在兼容性问题，特别是在 transformers 4.30.0 及以上版本中。解决方案是重写（override）Transformers 的 Trainer 类以适配 tensor_parallel 的设备管理逻辑，或者暂时使用 DeepSpeed ZeRO3 等其他并行策略进行训练。","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F95",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},39376,"当使用超过 2 个 GPU 时，显存利用率没有提升或出现异常，这是为什么？","对于非常小的模型（例如仅几 MB），tensor_parallel 产生的开销可能导致显存分配看起来效率不高，这种情况下不建议使用该库。但对于大模型（如单卡无法容纳的大批次或大参数量模型），该库在多卡（>2）下能正常工作并有效分割张量。如果显存分布不均，请检查模型结构或掩码（attention_mask）的维度是否正确。","https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fissues\u002F98",[143,148,153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238],{"id":144,"version":145,"summary_zh":146,"released_at":147},315313,"v2.0.0","本次发布在处理复制参数和自动参数同步方面实现了重大改进。`Sharded` 类现已设为私有，并通过模块接口隐藏了其功能，从而简化了使用流程。\n\n为了保持与旧版本的兼容性，我们添加了警告信息，提示接口变更和已弃用的功能。\n\n## 变更内容\n* ZeRO-3 重构（分片）：由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F106 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.3.2...v2.0.0","2023-08-06T14:20:51",{"id":149,"version":150,"summary_zh":151,"released_at":152},315314,"v1.3.2","## 变更内容\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F113 中转发了 _prepare_model_inputs 方法\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.3.1...v1.3.2","2023-07-27T12:01:46",{"id":154,"version":155,"summary_zh":156,"released_at":157},315315,"v1.3.1","## 变更内容\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F109 中测试接口（即将重构）\n* 修复 get_llama_config 添加模型属性时出现的错误。由 @tonywang16 在 
https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F108 中完成\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F110 中修复 find_predefined_tensor_parallel_config 的 try-except 异常\n\n## 新贡献者\n* @tonywang16 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F108 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.3.0...v1.3.1","2023-07-26T18:48:32",{"id":159,"version":160,"summary_zh":161,"released_at":162},315316,"v1.3.0","## 变更内容\n* @tomoki0924 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F100 中明确选择了是否使用 torch.distributed\n* @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F103 中修复了 GPT-2 相关问题\n\n## 新贡献者\n* @tomoki0924 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F100 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.9...v1.3.0","2023-07-22T12:01:35",{"id":164,"version":165,"summary_zh":166,"released_at":167},315317,"v1.2.9","## 变更内容\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F93 中修复的 README 文件\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F101 中添加的 LLaMA-2 支持\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F102 中进行的版本号更新\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.8...v1.2.9","2023-07-21T16:01:32",{"id":169,"version":170,"summary_zh":171,"released_at":172},315318,"v1.2.8","## 变更内容\n* Falcon lm_head 拆分修复补丁，由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F92 中提交\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.7...v1.2.8","2023-06-23T13:22:44",{"id":174,"version":175,"summary_zh":176,"released_at":177},315319,"v1.2.7","## 变更内容\n* Falcon 预定义配置，由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F91 中添加\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.6...v1.2.7","2023-06-20T16:48:02",{"id":179,"version":180,"summary_zh":181,"released_at":182},315320,"v1.2.6","## 变更内容\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F90 中修复了 tp.Sharded 模型的分发问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.5...v1.2.6","2023-06-19T17:07:31",{"id":184,"version":185,"summary_zh":186,"released_at":187},315321,"v1.2.5","## 变更内容\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F78 中修复了 tp.convert_state_dict 的 README 示例。\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F82 中实际使用 SplitInsideChunks 处理 GPT-2 模型。\n* 由 @BlackSamorez 在 https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F87 中移除了 PEFT 依赖，并改用运行时检查。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.4...v1.2.5","2023-06-14T15:40:45",{"id":189,"version":190,"summary_zh":191,"released_at":192},315322,"v1.2.4","## 变更内容\n* 由 @BlackSamorez 在 
https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F73 中修复的权重共享状态字典问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.3...v1.2.4","2023-05-14T19:56:42",{"id":194,"version":195,"summary_zh":196,"released_at":197},315323,"v1.2.3","## What's Changed\r\n* Peft LoRA support by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F68\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.2...v1.2.3","2023-05-14T16:53:11",{"id":199,"version":200,"summary_zh":201,"released_at":202},315324,"v1.2.2","## What's Changed\r\n* Torch distributed hotfix by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F70\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.1...v1.2.2","2023-04-17T07:48:26",{"id":204,"version":205,"summary_zh":206,"released_at":207},315325,"v1.2.1","## What's Changed\r\n* Set seed for tests reproducibility  by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F64\r\n* Mention linear speedup in Readme by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F65\r\n* LLaMa models by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F53\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.2.0...v1.2.1","2023-04-10T16:52:16",{"id":209,"version":210,"summary_zh":211,"released_at":212},315326,"v1.2.0","## Config refactoring\r\n\r\n`state_actions` are now interfaces:\r\n```\r\nclass StateAction(ABC):\r\n    @abstractclassmethod\r\n    def __call__(self, tensor: Tensor, rank: int) -> Tensor:\r\n        pass\r\n\r\n    @abstractclassmethod\r\n    def undo(self, tensors: Sequence[Tensor]) -> Tensor:\r\n        pass\r\n```\r\n\r\nCallables and tuples of callables are still allowed but will produce deprecation warning.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.1.4...v1.2.0","2023-04-03T12:30:39",{"id":214,"version":215,"summary_zh":216,"released_at":217},315327,"v1.1.4","## What's Changed\r\n* CodeGen config by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F61\r\n* Added int8 LLMs demo link by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F62\r\n* Converting state dicts without model creation by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F60\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.1.3...v1.1.4","2023-03-27T19:46:47",{"id":219,"version":220,"summary_zh":221,"released_at":222},315328,"v1.1.3","## What's Changed\r\n* Unpersistent buffers meta loading fix by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F57\r\n* _reorder_cache fix for generation utils by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F56\r\n* Shard parameters initial dispatch fix by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F58\r\n* New 
version for dispatch hotfix by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F59\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.1.1...v1.1.3","2023-03-23T12:05:30",{"id":224,"version":225,"summary_zh":226,"released_at":227},315329,"v1.1.1","## What's Changed\r\n* GPT NeoX config by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F55\r\n* Removing accelerate hooks before splitting the model by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F52\r\n* Meta devices support by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F54\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.1.0...v1.1.1","2023-03-15T08:31:25",{"id":229,"version":230,"summary_zh":231,"released_at":232},315330,"v1.1.0","## What's Changed\r\n* Fixed PyPi link in readme by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F40\r\n* Adding support for more model architectures  by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F47\r\n* Replace architecture with model_type by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F44\r\n* Saving utilities by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F49\r\n* Version update by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F50\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.0.25...v1.1.0","2023-03-06T11:05:42",{"id":234,"version":235,"summary_zh":236,"released_at":237},315331,"v1.0.25","## What's Changed\r\n* hotfix canonic torch.device by @justheuristic in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F37\r\n* _TensorParallelWrapper attribute forwarding by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F38\r\n* Version update by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F39\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcompare\u002Fv1.0.24...v1.0.25","2023-02-21T19:42:30",{"id":239,"version":240,"summary_zh":241,"released_at":242},315332,"v1.0.24","## What's Changed\r\n* got rid of some unnecessary lines of code by @IaroslavLisniak in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F13\r\n* T5 example by @justheuristic in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F30\r\n* Hugging Face encoder-decoder architecture support by @BlackSamorez in https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fpull\u002F14\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FBlackSamorez\u002Ftensor_parallel\u002Fcommits\u002Fv1.0.24","2023-01-12T19:56:38"]