[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kyegomez--RT-2":3,"tool-kyegomez--RT-2":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":10,"env_os":94,"env_gpu":95,"env_ram":94,"env_deps":96,"category_tags":101,"github_topics":102,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":143},3486,"kyegomez\u002FRT-2","RT-2","Democratization of RT-2 \"RT-2: New model translates vision and language into action\"","RT-2（Robotic Transformer 2）是一款前沿的视觉 - 语言 - 动作模型，旨在让机器人像人类一样，通过观察图像和理解语言指令来直接执行物理操作。它核心解决了传统机器人难以将抽象的语义理解转化为具体行动控制的难题，打破了感知与决策之间的壁垒，使机器人能够更灵活地应对复杂多变的环境任务。\n\n这款工具特别适合人工智能研究人员、机器人开发者以及从事自动化领域探索的专业人士使用。对于希望深入探究多模态大模型在具身智能中应用的团队，RT-2 提供了宝贵的研究基线和实现参考。\n\n在技术架构上，RT-2 的创新之处在于以 PALM-E 为骨干网络，巧妙地将视觉编码器提取的图像嵌入与语言模型的文本嵌入映射到同一向量空间中进行拼接处理。这种设计使得模型能够在一个统一的框架下同时处理视觉信息和语言指令，从而生成相应的动作序列。虽然这种架构在工程实现上相对直观，但它为探索多模态统一表示学习开辟了重要路径，是推动通用机器人技术发展的重要一步。通过简单的 pip 安装即可在 PyTorch 环境中调用，方便用户快速开展实验与验证。","[![Multi-Modality](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkyegomez_RT-2_readme_641983e15637.png)](https:\u002F\u002Fdiscord.gg\u002FqUtxnK2NMf)\n\n\n# Robotic Transformer 2 (RT-2): The Vision-Language-Action Model\n![rt gif](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkyegomez_RT-2_readme_962b06ab4cdd.gif)\n\n\u003Cdiv align=\"center\">\n\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues) \n[![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fnetwork) \n[![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fstargazers) \n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fblob\u002Fmaster\u002FLICENSE)\n[![Share on 
Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fcloudposse.svg?style=social&label=Share%20%40kyegomez\u002FRT-2)](https:\u002F\u002Ftwitter.com\u002Fintent\u002Ftweet?text=Excited%20to%20introduce%20RT-2,%20the%20all-new%20robotics%20model%20with%20the%20potential%20to%20revolutionize%20automation.%20Join%20us%20on%20this%20journey%20towards%20a%20smarter%20future.%20%23RT1%20%23Robotics&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)\n[![Share on Facebook](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FShare-%20facebook-blue)](https:\u002F\u002Fwww.facebook.com\u002Fsharer\u002Fsharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)[![Share on LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FShare-%20linkedin-blue)](https:\u002F\u002Fwww.linkedin.com\u002FshareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&title=Introducing%20RT-2%2C%20the%20All-New%20Robotics%20Model&summary=RT-2%20is%20the%20next-generation%20robotics%20model%20that%20promises%20to%20transform%20industries%20with%20its%20intelligence%20and%20efficiency.%20Join%20us%20to%20be%20a%20part%20of%20this%20revolutionary%20journey%20%23RT1%20%23Robotics&source=)\n![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F999382051935506503)\n[![Share on Reddit](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Reddit-orange)](https:\u002F\u002Fwww.reddit.com\u002Fsubmit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&title=Exciting%20Times%20Ahead%20with%20RT-2%2C%20the%20All-New%20Robotics%20Model%20%23RT1%20%23Robotics)\n[![Share on Hacker News](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Hacker%20News-orange)](https:\u002F\u002Fnews.ycombinator.com\u002Fsubmitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&t=Exciting%20Times%20Ahead%20with%20RT-2%2C%20the%20All-New%20Robotics%20Model%20%23RT1%20%23Robotics)\n[![Share on Pinterest](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Pinterest-red)](https:\u002F\u002Fpinterest.com\u002Fpin\u002Fcreate\u002Fbutton\u002F?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=RT-2%2C%20the%20Revolutionary%20Robotics%20Model%20that%20will%20Change%20the%20Way%20We%20Work%20%23RT1%20%23Robotics)\n[![Share on WhatsApp](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20WhatsApp-green)](https:\u002F\u002Fapi.whatsapp.com\u002Fsend?text=I%20just%20discovered%20RT-2,%20the%20all-new%20robotics%20model%20that%20promises%20to%20revolutionize%20automation.%20Join%20me%20on%20this%20exciting%20journey%20towards%20a%20smarter%20future.%20%23RT1%20%23Robotics%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)\n\n\u003C\u002Fdiv>\n\n---\n\n\nThis is my implementation of the model behind RT-2. RT-2 leverages PALM-E as the backbone with a Vision encoder and language backbone where images are embedded and concatenated in the same space as the language embeddings. This architecture is quite easy to implement but suffers from a lack of deep understanding of both the unified multi-modal representation and the individual modality representations.\n\n[CLICK HERE FOR THE PAPER](https:\u002F\u002Frobotics-transformer2.github.io\u002Fassets\u002Frt2.pdf)\n\n\n## Installation\n\nRT-2 can be easily installed using pip:\n\n```bash\npip install rt2\n```\n# Usage\n\n\nThe `RT2` class is a PyTorch module that integrates the PALM-E model into the RT-2 class. 
Here are some examples of how to use it:\n\n#### Initialization\n\nFirst, you need to initialize the `RT2` class. You can do this by providing the necessary parameters to the constructor:\n\n```python\n\nimport torch\nfrom rt2.model import RT2\n\n# img: (batch_size, 3, 256, 256)\n# caption: (batch_size, 1024)\nimg = torch.randn(1, 3, 256, 256)\ncaption = torch.randint(0, 20000, (1, 1024))\n\n# model: RT2\nmodel = RT2()\n\n# Run model on img and caption\noutput = model(img, caption)\nprint(output)  # (1, 1024, 20000)\n\n\n```\n\n\n## Benefits\n\nRT-2 stands at the intersection of vision, language, and action, delivering unmatched capabilities and significant benefits for the world of robotics.\n\n- Leveraging web-scale datasets and firsthand robotic data, RT-2 provides exceptional performance in understanding and translating visual and semantic cues into robotic control actions.\n- RT-2's architecture is based on well-established models, offering a high chance of success in diverse applications.\n- With clear installation instructions and well-documented examples, you can integrate RT-2 into your systems quickly.\n- RT-2 simplifies the complexities of multi-modal understanding, reducing the burden on your data processing and action prediction pipeline.\n\n## Model Architecture\n\nRT-2 integrates a high-capacity Vision-Language model (VLM), initially pre-trained on web-scale data, with robotics data from RT-2. The VLM uses images as input to generate a sequence of tokens representing natural language text. To adapt this for robotic control, RT-2 outputs actions represented as tokens in the model’s output.\n\nRT-2 is fine-tuned using both web and robotics data. The resultant model interprets robot camera images and predicts direct actions for the robot to execute. In essence, it converts visual and language patterns into action-oriented instructions, a remarkable feat in the field of robotic control.\n\n\n## Datasets\nDatasets used in the paper\n\n\n| Dataset | Description | Source | Percentage in Training Mixture (RT-2-PaLI-X) | Percentage in Training Mixture (RT-2-PaLM-E) |\n|---------|-------------|--------|----------------------------------------------|----------------------------------------------|\n| WebLI | Around 10B image-text pairs across 109 languages, filtered to the top 10% scoring cross-modal similarity examples to give 1B training examples. | Chen et al. (2023b), Driess et al. (2023) | N\u002FA | N\u002FA |\n| Episodic WebLI | Not used in co-fine-tuning RT-2-PaLI-X. | Chen et al. (2023a) | N\u002FA | N\u002FA |\n| Robotics Dataset | Demonstration episodes collected with a mobile manipulation robot. Each demonstration is annotated with a natural language instruction from one of seven skills. | Brohan et al. (2022) | 50% | 66% |\n| Language-Table | Used for training on several prediction tasks. | Lynch et al. (2022) | N\u002FA | N\u002FA |\n\n\n\n\n## Commercial Use Cases\n\nThe unique capabilities of RT-2 open up numerous commercial applications:\n\n- **Automated Factories**: RT-2 can significantly enhance automation in factories by understanding and responding to complex visual and language cues.\n- **Healthcare**: In robotic surgeries or patient care, RT-2 can assist in understanding and performing tasks based on both visual and verbal instructions.\n- **Smart Homes**: Integration of RT-2 in smart home systems can lead to improved automation, understanding homeowner instructions in a much more nuanced manner.\n\n\n## Contributing\n\nContributions to RT-2 are always welcome! 
Feel free to open an issue or pull request on the GitHub repository.\n\n## Contact\n\nFor any queries or issues, kindly open a GitHub issue or get in touch with [kyegomez](https:\u002F\u002Fgithub.com\u002Fkyegomez).\n\n## Citation\n\n```bibtex\n@inproceedings{RT-2,2023,\n  title={},\n  author={Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski,\nTianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu,\nMontse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog,\nJasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang,\nIsabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch,\nKarl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi,\nPierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong,\nAyzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu,\nand Brianna Zitkovich},\n  year={2024}\n}\n```\n\n\n## License\n\nRT-2 is provided under the MIT License. See the LICENSE file for details.","[![多模态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkyegomez_RT-2_readme_641983e15637.png)](https:\u002F\u002Fdiscord.gg\u002FqUtxnK2NMf)\n\n\n# 机器人Transformer 2 (RT-2)：视觉-语言-动作模型\n![rt gif](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkyegomez_RT-2_readme_962b06ab4cdd.gif)\n\n\u003Cdiv align=\"center\">\n\n[![GitHub 问题](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues) \n[![GitHub 分支](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fnetwork) \n[![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fstargazers) \n[![GitHub 许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fkyegomez\u002FRT-2)](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fblob\u002Fmaster\u002FLICENSE)\n[![分享到 Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fcloudposse.svg?style=social&label=Share%20%40kyegomez\u002FRT-2)](https:\u002F\u002Ftwitter.com\u002Fintent\u002Ftweet?text=Excited%20to%20introduce%20RT-2,%20the%20all-new%20robotics%20model%20with%20the%20potential%20to%20revolutionize%20automation.%20Join%20us%20on%20this%20journey%20towards%20a%20smarter%20future.%20%23RT1%20%23Robotics&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)\n[![分享到 Facebook](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FShare-%20facebook-blue)](https:\u002F\u002Fwww.facebook.com\u002Fsharer\u002Fsharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)[![分享到 LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FShare-%20linkedin-blue)](https:\u002F\u002Fwww.linkedin.com\u002FshareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&title=Introducing%20RT-2%2C%20the%20All-New%20Robotics%20Model&summary=RT-2%20is%20the%20next-generation%20robotics%20model%20that%20promises%20to%20transform%20industries%20with%20its%20intelligence%20and%20efficiency.%20Join%20us%20to%20be%20a%20part%20of%20this%20revolutionary%20journey%20%23RT1%20%23Robotics&source=)\n![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F999382051935506503)\n[![分享到 
Reddit](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Reddit-orange)](https:\u002F\u002Fwww.reddit.com\u002Fsubmit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&title=Exciting%20Times%20Ahead%20with%20RT-2%2C%20the%20All-New%20Robotics%20Model%20%23RT1%20%23Robotics)\n[![分享到 Hacker News](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Hacker%20News-orange)](https:\u002F\u002Fnews.ycombinator.com\u002Fsubmitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&t=Exciting%20Times%20Ahead%20with%20RT-2%2C%20the%20All-New%20Robotics%20Model%20%23RT1%20%23Robotics)\n[![分享到 Pinterest](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20Pinterest-red)](https:\u002F\u002Fpinterest.com\u002Fpin\u002Fcreate\u002Fbutton\u002F?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=RT-2%2C%20the%20Revolutionary%20Robotics%20Model%20that%20will%20Change%20the%20Way%20We%20Work%20%23RT1%20%23Robotics)\n[![分享到 WhatsApp](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Share%20on%20WhatsApp-green)](https:\u002F\u002Fapi.whatsapp.com\u002Fsend?text=I%20just%20discovered%20RT-2,%20the%20all-new%20robotics%20model%20that%20promises%20to%20revolutionize%20automation.%20Join%20me%20on%20this%20exciting%20journey%20towards%20a%20smarter%20future.%20%23RT1%20%23Robotics%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2FRT-2)\n\n\u003C\u002Fdiv>\n\n---\n\n\n这是我实现的 RT-2 背后的模型。RT-2 以 PALM-E 作为骨干网络，结合视觉编码器和语言骨干网络，将图像嵌入并与语言嵌入在同一空间中拼接。这种架构设计起来相当简单，但缺乏对统一多模态表示以及各个模态表示的深入理解。\n\n[点击此处查看论文](https:\u002F\u002Frobotics-transformer2.github.io\u002Fassets\u002Frt2.pdf)\n\n\n## 安装\n\n可以通过 pip 轻松安装 RT-2：\n\n```bash\npip install rt2\n```\n# 使用方法\n\n\n`RT2` 类是一个 PyTorch 模块，它将 PALM-E 模型集成到 RT-2 类中。以下是一些使用示例：\n\n#### 初始化\n\n首先，你需要初始化 `RT2` 类。可以通过向构造函数提供必要的参数来完成：\n\n```python\n\nimport torch\nfrom rt2.model import RT2\n\n# img: (batch_size, 3, 256, 256)\n# caption: (batch_size, 1024)\nimg = torch.randn(1, 3, 256, 256)\ncaption = torch.randint(0, 20000, (1, 1024))\n\n# model: RT2\nmodel = RT2()\n\n# 在 img 和 caption 上运行模型\noutput = model(img, caption)\nprint(output)  # (1, 1024, 20000)\n\n\n```\n\n\n## 优势\n\nRT-2 处于视觉、语言和动作的交汇处，为机器人领域提供了无与伦比的能力和显著的优势。\n\n- 利用大规模网络数据集和第一手机器人数据，RT-2 在理解和将视觉及语义线索转化为机器人控制动作方面表现出色。\n- RT-2 的架构基于成熟的模型，因此在各种应用中具有很高的成功率。\n- 凭借清晰的安装说明和完善的文档示例，您可以快速将 RT-2 集成到您的系统中。\n- RT-2 简化了多模态理解的复杂性，减轻了您在数据处理和动作预测流程中的负担。\n\n## 模型架构\n\nRT-2 将一个高容量的视觉-语言模型（VLM）与 RT-2 的机器人数据相结合，该 VLM 最初是在大规模网络数据上预训练的。VLM 使用图像作为输入，生成代表自然语言文本的标记序列。为了适应机器人控制，RT-2 在模型输出中以标记形式输出动作。\n\nRT-2 同时使用网络数据和机器人数据进行微调。最终得到的模型能够解释机器人摄像头拍摄的图像，并预测机器人需要执行的直接动作。本质上，它将视觉和语言模式转换为面向行动的指令，这在机器人控制领域是一项非凡的成就。\n\n\n## 数据集\n论文中使用的数据集\n\n\n| 数据集 | 描述 | 来源 | 在训练混合物中的百分比（RT-2-PaLI-X） | 在训练混合物中的百分比（RT-2-PaLM-E） |\n|---------|-------------|--------|----------------------------------------------|----------------------------------------------|\n| WebLI | 约 100 亿个跨 109 种语言的图像-文本对，筛选出跨模态相似度最高的前 10%，得到 10 亿个训练样本。 | Chen 等（2023b），Driess 等（2023） | 不适用 | 不适用 |\n| Episodic WebLI | 未用于 RT-2-PaLI-X 的联合微调。 | Chen 等（2023a） | 不适用 | 不适用 |\n| 机器人数据集 | 使用移动操作机器人收集的演示片段。每个演示都配有来自七种技能之一的自然语言指令。 | Brohan 等（2022） | 50% | 66% |\n| Language-Table | 用于多个预测任务的训练。 | Lynch 等（2022） | 不适用 | 不适用 |\n\n## 商业应用场景\n\nRT-2 的独特能力为其带来了众多商业应用：\n\n- **自动化工厂**：RT-2 可以通过理解和响应复杂的视觉与语言提示，显著提升工厂的自动化水平。\n- **医疗健康**：在机器人手术或患者护理中，RT-2 能够根据视觉和语言指令理解并执行任务。\n- **智能家居**：将 RT-2 集成到智能家居系统中，可以实现更高效的自动化，并以更加细腻的方式理解用户指令。\n\n\n## 贡献方式\n\n我们始终欢迎对 RT-2 的贡献！请随时在 GitHub 仓库中提交问题或拉取请求。\n\n## 联系方式\n\n如有任何疑问或问题，请在 GitHub 上提交 issue，或联系 
[kyegomez](https:\u002F\u002Fgithub.com\u002Fkyegomez)。\n\n## 引用信息\n\n```bibtex\n@inproceedings{RT-2,2023,\n  title={},\n  author={Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski,\nTianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu,\nMontse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog,\nJasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang,\nIsabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch,\nKarl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi,\nPierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong,\nAyzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu,\nand Brianna Zitkovich},\n  year={2024}\n}\n```\n\n\n## 许可协议\n\nRT-2 基于 MIT 许可协议提供。详情请参阅 LICENSE 文件。","# RT-2 快速上手指南\n\nRT-2 (Robotic Transformer 2) 是一个视觉 - 语言 - 动作模型，旨在将视觉和语义线索转化为机器人控制动作。本指南基于 `kyegomez\u002FRT-2` 的开源实现，帮助开发者快速部署和测试。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: Python 3.8 或更高版本\n*   **核心依赖**:\n    *   PyTorch (需预先安装与您的 CUDA 版本匹配的 PyTorch)\n    *   pip 包管理工具\n\n> **提示**：建议先安装 PyTorch。如果您在中国大陆，推荐使用清华源加速安装：\n> ```bash\n> pip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n使用 pip 直接安装 RT-2 库：\n\n```bash\npip install rt2\n```\n\n> **国内加速方案**：如果官方源下载缓慢，请使用以下命令通过清华镜像源安装：\n> ```bash\n> pip install rt2 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n以下是最简单的代码示例，展示如何初始化模型并运行一次前向推理。\n\n1.  创建一个新的 Python 文件（例如 `demo.py`）。\n2.  复制并运行以下代码：\n\n```python\nimport torch\nfrom rt2.model import RT2\n\n# 1. 准备模拟输入数据\n# img: 模拟图像输入 (batch_size, 通道数，高度，宽度)\nimg = torch.randn(1, 3, 256, 256)\n\n# caption: 模拟文本指令输入 (batch_size, 序列长度)\n# 这里生成随机整数作为 token ID\ncaption = torch.randint(0, 20000, (1, 1024))\n\n# 2. 初始化 RT2 模型\nmodel = RT2()\n\n# 3. 执行前向传播\n# 输入图像和文本指令，输出动作预测\noutput = model(img, caption)\n\n# 4. 
查看输出形状\n# 预期输出形状: (batch_size, 序列长度, vocab_size)\nprint(output.shape)  # 输出示例: torch.Size([1, 1024, 20000])\n```\n\n**说明**：\n*   该示例使用了随机生成的张量来模拟真实的图像和文本输入。\n*   在实际应用中，您需要将真实摄像头捕获的图像预处理为 `(B, 3, 256, 256)` 格式，并将自然语言指令分词为对应的 Token ID 序列。\n*   模型输出代表了预测的动作 Token 序列概率分布。","某智能仓储团队正致力于让机械臂在无需硬编码的情况下，自主识别并分拣货架上从未见过的异形商品。\n\n### 没有 RT-2 时\n- **泛化能力极差**：机械臂只能识别训练数据中明确标注过的物品，一旦遇到新包装或未见过的物体，必须重新采集数据并耗时数天重新训练模型。\n- **指令理解僵化**：无法理解“把那个易碎的红色盒子拿给我”这类包含语义属性和逻辑推理的自然语言指令，仅能执行固定的坐标移动代码。\n- **视觉与动作割裂**：视觉系统识别出物体后，需通过复杂的中间件将坐标转换为机械臂动作，流程繁琐且容易在动态环境中产生误差。\n- **场景适应性弱**：当仓库灯光变化或货物摆放角度微调时，系统极易失效，需要工程师频繁现场调试参数。\n\n### 使用 RT-2 后\n- **零样本泛化强大**：依托 PALM-E 架构，RT-2 能将视觉与语言嵌入同一空间，机械臂可直接根据“拿起那个像可乐罐的新饮料”的指令，成功操作从未训练过的物体。\n- **自然语言直控动作**：开发人员直接用人类语言下达复杂指令，RT-2 自动将语义理解转化为具体的机械臂控制信号，无需编写底层运动规划代码。\n- **端到端智能决策**：模型直接输出动作令牌，消除了传统流水线中视觉识别到动作执行的转换损耗，显著提升了在动态干扰下的抓取成功率。\n- **环境鲁棒性提升**：凭借大规模预训练知识，RT-2 能自适应不同的光照条件和物体姿态，大幅减少了现场维护和数据重采样的需求。\n\nRT-2 通过将大模型的通用常识注入机器人控制，实现了从“专用自动化设备”到“具身智能助手”的跨越，让机器人真正学会了像人一样思考并行动。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkyegomez_RT-2_641983e1.png","kyegomez","Kye Gomez","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkyegomez_0b95c3bb.jpg","Founder of swarms.ai","Swarms","Palo Alto",null,"KyeGomezB","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002Fswarms","https:\u002F\u002Fgithub.com\u002Fkyegomez",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,556,69,"2026-04-01T01:14:18","MIT","未说明","未说明 (模型基于 PyTorch 和 PALM-E 架构，通常推理需要 GPU，但 README 未指定具体型号或显存)",{"notes":97,"python":94,"dependencies":98},"README 仅提供了通过 pip 安装 'rt2' 包的指令及简单的 PyTorch 张量输入示例。文中未列出具体的系统硬件要求（如 GPU 型号、内存大小）、操作系统兼容性或详细的依赖库版本列表。该实现声称使用 PALM-E 作为骨干网络，将视觉编码器和语言骨干网结合，但未提供预训练模型的下载链接或具体的环境配置指南。",[99,100],"torch","rt2",[54,15,26],[103,104,105,106,107,108,109],"artificial-intelligence","attention-mechanism","embodied-agent","gpt4","multi-modal","robotics","transformer","2026-03-27T02:49:30.150509","2026-04-06T08:18:28.874142",[113,118,123,128,133,138],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},15987,"运行 example.py 时遇到张量尺寸不匹配（RuntimeError: The size of tensor a must match the size of tensor b）或初始化参数缺失错误，如何解决？","维护者已修复了代码中张量维度不匹配的问题，并补充了 RT2.init() 函数中缺失的 'vit' 关键字参数。如果遇到此类错误，请确保拉取最新的主分支代码（main branch），因为旧版本存在这些 Bug。如果仍然报错，请检查调用 RT2 初始化时是否显式提供了 'vit' 参数。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F7",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},15988,"安装 rt2 时 deepspeed 依赖包编译失败，提示需要预先安装 torch 或找不到 Rust 编译器，怎么办？","此问题通常是因为缺少编译依赖。对于 deepspeed 报错 'Unable to pre-compile ops without torch installed'，请先单独安装 PyTorch，然后再安装 rt2。对于 'tokenizers' 构建失败提示 'can't find Rust compiler'（常见于 Windows），需要先安装 Rust 编译器。建议在安装 rt2 之前，先执行 `pip install torch` 并确保系统已配置好 Rust 环境。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F10",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},15989,"导入模块时出现 ImportError: cannot import name 'PALME' from 'rt2.experimental.rt2_palme'，如何修正？","这是因为类名发生了变化。请将导入语句中的 'PALME' 修改为 'PalmE'。正确的导入代码应为：`from rt2.experimental.rt2_palme import PalmE, RT2`。如果修改后仍有问题，请尝试拉取最新的主分支代码，因为该问题可能在后续提交中已被修复。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F8",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},15990,"运行 test.py 或 example.py 时遇到 'RT2 object is not callable' 或 'tuple object has no attribute shape' 错误，如何解决？","这是由于模型返回值的结构处理问题导致的。维护者已经修复了该 
Bug，主要涉及对元组（tuple）中嵌入（embeddings）的解包处理。请务必更新到最新的代码版本再试。如果问题依旧，请检查是否在调用模型后正确获取了返回值，避免直接对返回的元组对象进行形状检查。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F18",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},15991,"运行 example.py 时出现矩阵乘法形状错误（RuntimeError: mat1 and mat2 shapes cannot be multiplied），该如何解决？","该错误通常是由于依赖包版本过旧导致的。请升级 'zetascale' 包来解决此兼容性问题。执行命令：`pip3 install --upgrade zetascale`。升级后重新运行脚本即可。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F19",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},15992,"如何准备数据集并对 RT-2 模型进行微调（Fine-tune）？","项目文档文件夹（docs folder）中提供了数据集列表。准备数据集最简单的方式是使用 HuggingFace 上的数据集，只需按照需求预处理特定的列即可。关于机器人动作的分词器（tokenizer）和去分词化（De-Tokenize）逻辑，维护者表示已添加到代码中。建议参考 docs 文件夹中的示例来构建输入数据并进行微调。","https:\u002F\u002Fgithub.com\u002Fkyegomez\u002FRT-2\u002Fissues\u002F4",[144,149,153],{"id":145,"version":146,"summary_zh":147,"released_at":148},90696,"0.0.3","### 更改日志：\n\n1. **封装**：\n    - 将 `MaxViT` 和 `RT2` 的功能封装到一个名为 `RT2` 的类中，以简化初始化和使用流程。\n\n2. **默认参数**：\n    - 为 `RT2` 类中的多个参数设置了默认值。这样，用户无需提供所有参数即可实例化该类，除非他们需要非默认配置。\n\n3. **训练与评估模式**：\n    - 引入了 `train()` 和 `eval()` 方法，用于在 `RT2` 模型的训练模式和评估模式之间轻松切换，这符合 PyTorch 的标准做法。\n\n4. **统一的前向传播方法**：\n    - 创建了一个 `__call__` 方法，用于包装 `RT2` 模型的前向传播方法。这提供了一种直观的方式来处理视频和指令，只需直接调用 `RoboticTransformer` 类的实例即可。\n\n5. **条件执行**：\n    - 修改了前向传播过程（通过 `__call__` 方法），使其在提供了 `cond_scale` 参数时有条件地使用该参数，从而确保仅在评估阶段使用，正如提供的代码中所暗示的那样。\n\n6. **示例用法**：\n    - 在末尾添加了一个示例，演示如何使用新的 `RT2` 类进行训练和评估。\n\n总体而言，这些更改旨在使代码更加易用和模块化，封装复杂细节，并为用户提供更符合 Python 风格的接口。","2023-08-10T03:46:46",{"id":150,"version":151,"summary_zh":81,"released_at":152},90697,"0.0.2","2023-07-28T19:10:24",{"id":154,"version":155,"summary_zh":81,"released_at":156},90698,"0.0.1","2023-07-28T15:28:39"]