[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mlverse--torch":3,"tool-mlverse--torch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":79,"languages":80,"stars":110,"forks":111,"last_commit_at":112,"license":113,"difficulty_score":10,"env_os":114,"env_gpu":115,"env_ram":116,"env_deps":117,"category_tags":120,"github_topics":121,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":125,"updated_at":126,"faqs":127,"releases":148},1224,"mlverse\u002Ftorch","torch","R Interface to Torch","torch是R语言的深度学习接口包，专为R用户设计，让数据科学家和研究人员能直接在R环境中使用Torch深度学习框架。它解决了R语言在深度学习领域功能薄弱的问题——无需切换到Python，就能轻松创建张量、进行自动微分计算和构建神经网络模型。torch的核心亮点包括：自动微分功能（自动计算梯度，简化模型训练流程）、GPU加速支持（通过CUDA 11.8~12.4提升计算效率），以及与R生态系统的无缝集成。安装简单，只需`install.packages(\"torch\")`，首次加载会自动配置依赖。特别适合R开发者、机器学习研究人员和数据科学家使用，尤其在需要快速实验AI模型或结合R统计分析的场景中。它让深度学习门槛更低，让R用户也能高效参与AI开发。","\n\u003C!-- README.md is generated from README.Rmd. 
Please edit that file -->\n\n# torch \u003Ca href='https:\u002F\u002Ftorch.mlverse.org'>\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_e239c4c3de61.png' align=\"right\" height=\"139\" \u002F>\u003C\u002Fa>\n\n[![Lifecycle:\nexperimental](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flifecycle-experimental-orange.svg)](https:\u002F\u002Flifecycle.r-lib.org\u002Farticles\u002Fstages.html)\n[![Test](https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Factions\u002Fworkflows\u002Fmain.yaml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Factions\u002Fworkflows\u002Fmain.yaml)\n[![CRAN\nstatus](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_4b8c9eaa73ff.png)](https:\u002F\u002FCRAN.R-project.org\u002Fpackage=torch)\n[![cuda](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcuda-11.8~12.4-green)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n[![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_45426eba8368.png)](https:\u002F\u002Fcran.r-project.org\u002Fpackage=torch)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F837019024499277855?logo=discord)](https:\u002F\u002Fdiscord.com\u002Finvite\u002Fs3D5cKhBkx)\n\n## Installation\n\ntorch can be installed from CRAN with:\n\n``` r\ninstall.packages(\"torch\")\n```\n\nYou can also install the development version with:\n\n``` r\nremotes::install_github(\"mlverse\u002Ftorch\")\n```\n\nAt the first package load additional software will be installed. 
See\nalso the full [installation\nguide](https:\u002F\u002Ftorch.mlverse.org\u002Fdocs\u002Farticles\u002Finstallation.html) here.\n\n## Examples\n\nYou can create torch tensors from R objects with the `torch_tensor`\nfunction and convert them back to R objects with `as_array`.\n\n``` r\nlibrary(torch)\nx \u003C- array(runif(8), dim = c(2, 2, 2))\ny \u003C- torch_tensor(x, dtype = torch_float64())\ny\n#> torch_tensor\n#> (1,.,.) = \n#>   0.6192  0.5800\n#>   0.2488  0.3681\n#> \n#> (2,.,.) = \n#>   0.0042  0.9206\n#>   0.4388  0.5664\n#> [ CPUDoubleType{2,2,2} ]\nidentical(x, as_array(y))\n#> [1] TRUE\n```\n\n### Simple Autograd Example\n\nIn the following snippet we let torch, using the autograd feature,\ncalculate the derivatives:\n\n``` r\nx \u003C- torch_tensor(1, requires_grad = TRUE)\nw \u003C- torch_tensor(2, requires_grad = TRUE)\nb \u003C- torch_tensor(3, requires_grad = TRUE)\ny \u003C- w * x + b\ny$backward()\nx$grad\n#> torch_tensor\n#>  2\n#> [ CPUFloatType{1} ]\nw$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\nb$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\n```\n\n## Contributing\n\nNo matter your current skills it’s possible to contribute to `torch`\ndevelopment. 
See the [contributing\nguide](https:\u002F\u002Ftorch.mlverse.org\u002Fdocs\u002Fcontributing) for more\ninformation.\n","\u003C!-- README.md 由 README.Rmd 生成。请编辑该文件 -->\n\n# torch \u003Ca href='https:\u002F\u002Ftorch.mlverse.org'>\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_e239c4c3de61.png' align=\"right\" height=\"139\" \u002F>\u003C\u002Fa>\n\n[![生命周期：\n实验性](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flifecycle-experimental-orange.svg)](https:\u002F\u002Flifecycle.r-lib.org\u002Farticles\u002Fstages.html)\n[![测试](https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Factions\u002Fworkflows\u002Fmain.yaml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Factions\u002Fworkflows\u002Fmain.yaml)\n[![CRAN\n状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_4b8c9eaa73ff.png)](https:\u002F\u002FCRAN.R-project.org\u002Fpackage=torch)\n[![CUDA](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcuda-11.8~12.4-green)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n[![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_readme_45426eba8368.png)](https:\u002F\u002Fcran.r-project.org\u002Fpackage=torch)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F837019024499277855?logo=discord)](https:\u002F\u002Fdiscord.com\u002Finvite\u002Fs3D5cKhBkx)\n\n## 安装\n\n可以通过 CRAN 安装 `torch`：\n\n``` r\ninstall.packages(\"torch\")\n```\n\n你也可以通过以下命令安装开发版本：\n\n``` r\nremotes::install_github(\"mlverse\u002Ftorch\")\n```\n\n首次加载包时，会自动安装一些额外的软件。更多信息请参阅完整的\n[安装指南](https:\u002F\u002Ftorch.mlverse.org\u002Fdocs\u002Farticles\u002Finstallation.html)。\n\n## 示例\n\n你可以使用 `torch_tensor` 函数将 R 对象转换为 torch 张量，并使用\n`as_array` 将其转换回 R 对象。\n\n``` r\nlibrary(torch)\nx \u003C- array(runif(8), dim = c(2, 2, 2))\ny \u003C- torch_tensor(x, dtype = torch_float64())\ny\n#> torch_tensor\n#> (1,.,.) = \n#>   0.6192  0.5800\n#>   0.2488  0.3681\n#> \n#> (2,.,.) 
= \n#>   0.0042  0.9206\n#>   0.4388  0.5664\n#> [ CPUDoubleType{2,2,2} ]\nidentical(x, as_array(y))\n#> [1] TRUE\n```\n\n### 简单的自动求导示例\n\n在下面的代码片段中，我们利用 torch 的自动求导功能计算导数：\n\n``` r\nx \u003C- torch_tensor(1, requires_grad = TRUE)\nw \u003C- torch_tensor(2, requires_grad = TRUE)\nb \u003C- torch_tensor(3, requires_grad = TRUE)\ny \u003C- w * x + b\ny$backward()\nx$grad\n#> torch_tensor\n#>  2\n#> [ CPUFloatType{1} ]\nw$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\nb$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\n```\n\n## 贡献\n\n无论你的技能水平如何，都可以为 `torch` 的开发做出贡献。更多详情请参阅\n[贡献指南](https:\u002F\u002Ftorch.mlverse.org\u002Fdocs\u002Fcontributing)。","# torch 快速上手指南\n\n## 环境准备\n\n- **系统要求**：R 语言环境（推荐版本 ≥ 4.0）\n- **前置依赖**：\n  - CPU 版本：无需额外依赖（推荐快速上手使用）\n  - GPU 版本：需安装 NVIDIA CUDA 11.8~12.4（如需 GPU 加速）\n- **中国加速**：建议设置 R 的 CRAN 镜像为清华大学镜像，加速包下载\n\n## 安装步骤\n\n1. 打开 R 或 RStudio\n2. 设置国内镜像（推荐）：\n   ```r\n   options(repos = c(CRAN = \"https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002FCRAN\u002F\"))\n   ```\n3. 安装 torch 包：\n   ```r\n   install.packages(\"torch\")\n   ```\n   > 第一次加载包时会自动安装 PyTorch 依赖（约 1-2 分钟）\n\n## 基本使用\n\n### 创建张量\n```r\nlibrary(torch)\nx \u003C- array(runif(8), dim = c(2, 2, 2))\ny \u003C- torch_tensor(x, dtype = torch_float64())\ny\n#> torch_tensor\n#> (1,.,.) = \n#>   0.6192  0.5800\n#>   0.2488  0.3681\n#> \n#> (2,.,.) 
= \n#>   0.0042  0.9206\n#>   0.4388  0.5664\n#> [ CPUDoubleType{2,2,2} ]\nidentical(x, as_array(y))\n#> [1] TRUE\n```\n\n### 简单自动微分\n```r\nx \u003C- torch_tensor(1, requires_grad = TRUE)\nw \u003C- torch_tensor(2, requires_grad = TRUE)\nb \u003C- torch_tensor(3, requires_grad = TRUE)\ny \u003C- w * x + b\ny$backward()\nx$grad\n#> torch_tensor\n#>  2\n#> [ CPUFloatType{1} ]\nw$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\nb$grad\n#> torch_tensor\n#>  1\n#> [ CPUFloatType{1} ]\n```","某量化基金的数据分析师小李，正为股票价格波动预测模型开发新功能，需在R中快速实现LSTM神经网络并迭代优化。\n\n### 没有 torch 时\n- 手动实现反向传播算法，需编写数十行梯度计算代码，每次调整网络结构都需重写逻辑，错误率高且维护成本大。\n- 依赖R基础数组处理时间序列张量，内存占用大，处理10万条数据时卡顿严重，训练速度慢如蜗牛。\n- 模型调试依赖大量`print`语句输出中间变量，参数更新路径难以追踪，定位问题耗时数小时。\n- 无法利用GPU加速，单次训练需6小时以上，阻碍快速实验验证新策略。\n\n### 使用 torch 后\n- torch自动微分机制一键计算梯度，代码行数减少60%，网络结构调整后无需重写梯度逻辑。\n- torch_tensor高效处理多维数据，结合GPU加速，训练时间从6小时骤降至30分钟。\n- 通过`grad`属性直接查看梯度变化，调试过程直观高效，问题定位时间缩短至10分钟内。\n- 无缝集成R工作流，快速尝试不同LSTM层数，实验周期从天级压缩到小时级。\n\ntorch让R深度学习开发从手动编码的低效泥潭中解放，实现真正可迭代的现代化模型构建流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmlverse_torch_e239c4c3.png","mlverse","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmlverse_cb111391.png","Open source libraries to scale Data Science",null,"https:\u002F\u002Fgithub.com\u002Fmlverse",[81,85,89,93,97,101,104,107],{"name":82,"color":83,"percentage":84},"C++","#f34b7d",72,{"name":86,"color":87,"percentage":88},"R","#198CE7",25.7,{"name":90,"color":91,"percentage":92},"Python","#3572A5",2.1,{"name":94,"color":95,"percentage":96},"CMake","#DA3434",0.1,{"name":98,"color":99,"percentage":100},"Cuda","#3A4E3A",0,{"name":102,"color":103,"percentage":100},"C","#555555",{"name":105,"color":106,"percentage":100},"Shell","#89e051",{"name":108,"color":109,"percentage":100},"CSS","#663399",563,91,"2026-04-02T15:13:35","NOASSERTION","Linux, macOS","需要NVIDIA GPU，CUDA 
11.8~12.4","未说明",{"notes":118,"python":116,"dependencies":119},"首次加载会自动安装额外依赖，需网络下载约5GB文件",[116],[52,13,51,26,55,14,53,54,15],[67,122,123,124],"autograd","deep-learning","r","2026-03-27T02:49:30.150509","2026-04-06T08:10:36.026635",[128,133,138,143],{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},5564,"在 Windows 11 上安装 torch 0.13.0 时无法下载 lantern 如何解决？","设置环境变量 TORCH_URL 和 LANTERN_URL 指向正确的下载链接：`Sys.setenv(TORCH_URL=\"https:\u002F\u002Fdownload.pytorch.org\u002Flibtorch\u002Fcu124\u002Flibtorch-win-shared-with-deps-2.5.1%2Bcu124.zip\")` 和 `Sys.setenv(LANTERN_URL=\"https:\u002F\u002Ftorch-cdn.mlverse.org\u002Fbinaries\u002Frefs\u002Fheads\u002Fcran\u002Fv0.14.1\u002Flatest\u002Flantern-0.14.1+cu124-win64.zip\")`，然后运行 `remotes::install_github(\"mlverse\u002Ftorch\")` 和 `torch::install_torch(cuda_version=\"12.4\")`。","https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Fissues\u002F1172",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},5565,"R 会崩溃（R Session Aborted）在初始化训练时如何解决？","安装 cuDNN 9.7.1.26 for CUDA 12，下载链接：https:\u002F\u002Fdeveloper.download.nvidia.com\u002Fcompute\u002Fcudnn\u002Fredist\u002Fcudnn\u002Fwindows-x86_64\u002Fcudnn-windows-x86_64-9.7.1.26_cuda12-archive.zip。确保 cuDNN 版本与 CUDA 12 匹配。","https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Fissues\u002F1275",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},5566,"在 Apple M1 芯片上如何使用 R torch？","使用开发版本（dev version），当前 dev 版本已支持 M1 芯片，不再链接到 openMP。安装方法：`remotes::install_github(\"mlverse\u002Ftorch\")`。","https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Fissues\u002F595",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},5567,"Torch 无法在 Mac 或 Windows 上启动如何解决？","尝试从 GitHub 安装最新版本：`remotes::install_github(\"mlverse\u002Ftorch\")`。安装后重启 R 
会话。","https:\u002F\u002Fgithub.com\u002Fmlverse\u002Ftorch\u002Fissues\u002F1021",[149,153,158,163,168,172,177,182,187,192,197,202,207,212,217,222,227,232,237,242],{"id":150,"version":151,"summary_zh":78,"released_at":152},114827,"v0.16.3","2025-11-03T11:19:31",{"id":154,"version":155,"summary_zh":156,"released_at":157},114828,"v0.16.2","More fixes related to CRAN submissions","2025-10-31T08:49:51",{"id":159,"version":160,"summary_zh":161,"released_at":162},114829,"v0.16.1","- Fixed issue with Windows CRAN checks. (#1366)\r\n","2025-10-16T13:31:27",{"id":164,"version":165,"summary_zh":166,"released_at":167},114830,"v0.16.0","- Support for CUDA 11.8 was dropped on Windows. (#1342)\r\n- Dropped support for CUDA 12.4 (#1348)\r\n- Added support for CUDA 12.6 and 12.8 (#1348)\r\n- Updated to LibTorch 2.7.1 (#1348)\r\n- Fixed a bug causing repeat_interleave to not work properly on CUDA (#1353)\r\n","2025-08-21T14:43:29",{"id":169,"version":170,"summary_zh":78,"released_at":171},114831,"v0.15.1","2025-07-11T16:55:31",{"id":173,"version":174,"summary_zh":175,"released_at":176},114832,"v0.15.0","- Add ROC AUM loss with `nn_aum_loss()` (#1310 @cregouby)\r\n- Add translation of install messages in French (@cregouby #1317)\r\n- Fix installation checks for r2u users (@cregouby #1317)\r\n","2025-06-24T10:42:32",{"id":178,"version":179,"summary_zh":180,"released_at":181},114833,"v0.14.2","- Fixed a regression causing torch to crash on Windows if used within RStudio.\r\n","2025-02-14T22:48:59",{"id":183,"version":184,"summary_zh":185,"released_at":186},114834,"v0.14.1","Bug fixes:\r\n\r\n- Fixed issue when compiling on non-glibc Linux distributions that don't implement `RTLD_DEEPBIND` (#1268)\r\n","2025-02-04T11:55:21",{"id":188,"version":189,"summary_zh":190,"released_at":191},114835,"v0.14.0","## Breaking changes\r\n\r\n- Updated to LibTorch v2.5.1 (#1204) -- potentially breaking change!\r\n\r\n## New features\r\n\r\n- Feature: Faster optimizers (`optim_ignite_\u003Cname>()`) 
are available: Adam, AdamW, Adagrad, RMSprop, SGD.\r\n  These can be used as drop-in replacements for `optim_\u003Cname>` but are considerably\r\n  faster as they wrap the LibTorch implementation of the optimizer.\r\n  The biggest speed differences can be observed for complex optimizers such as `AdamW`.\r\n\r\n## Bug fixes\r\n\r\n- `torch_iinfo()` now supports all integer dtypes (#1190 @cregouby)\r\n- Fixed float key_padding_mask in `nnf_multi_head_attention_forward()` (#1205)\r\n- Fix French translation (#1176 @cregouby)\r\n- Trace jitted modules now respect 'train' and 'eval' mode (#1211)\r\n- Fix: Avoid name clashes between multiple calls to `jit_trace` (#1246)\r\n","2025-01-30T21:23:06",{"id":193,"version":194,"summary_zh":195,"released_at":196},114842,"v0.8.1","## Breaking changes\r\n\r\n- We now prompt the user before installing torch additional dependencies in interactive environments. This was requested by CRAN maintainers. (#864)\r\n\r\n## New features\r\n\r\n- Dataloaders can now handle logical values. (#858, @ryan-heslin)\r\n- We now provide builds for the pre-CXX11 ABI version of LibTorch. They can be used by setting the environment variable `PRECXX11ABI=1`. This can be useful in environments with older versions of GLIBC. (#870)\r\n\r\n## Bug fixes\r\n\r\n- Fixed the way errors are passed from dataloader workers to the main process. Now using new rlang error chaining. (#864)\r\n\r\n## Internal\r\n\r\n- We can now call GC even from a backward call (i.e., from a different thread), which allows for better memory management. (#853)\r\n- Fix HTML5 manual information as requested by CRAN (#869)\r\n","2022-08-19T11:21:26",{"id":198,"version":199,"summary_zh":200,"released_at":201},114836,"v0.13.0","## Breaking changes\r\n\r\n- lantern is now distributed over a different URL (https:\u002F\u002Ftorch-cdn.mlverse.org). \r\n  For most users this shouldn't have any effect, unless you need special authorization\r\n  to access some URLs. 
(#1162)\r\n\r\n## New features\r\n\r\n- Added support for a private `$finalize_deep_clone()` method for `nn_module` which\r\n allows running some code after cloning a module.\r\n- A `compare_proxy` method for the `torch_tensor` type was added;\r\n  it allows comparing torch tensors using `testthat::expect_equal()`.\r\n- Converting a torch tensor to an R array now works when the tensor has a 'cuda' device (#1130)\r\n\r\n## Bug fixes\r\n\r\n- Fix a bug when using input projection initialization bias in `nnf_multi_head_attention_forward` (#1154 @cregouby)\r\n- Bugfix: calling `$detach()` on a tensor now preserves attributes (#1136)\r\n- Make sure deep cloning of tensor and nn_module preserves class attributes and the requires_grad field. (#1129)\r\n- Fixed that parameters and buffers of children of nn_modules were not cloned\r\n- Cloned objects no longer reference the object from which they were cloned\r\n- Fixed bug where nn_module's patched clone method was invalid after a call to\r\n  the internal `create_nn_module_callable()`\r\n- Printing of `grad_fn` now appends a new line at the end.\r\n- Make sure deep cloning preserves state dict attributes. (#1129)\r\n- Added separate setter and unsetter for the autocast context instead of only allowing `local_autocast()`. (#1142)\r\n- Fixed a bug in `torch_arange()` causing it to return 1:(n-1) values when specifically requesting `dtype = torch_int64()` (#1160)\r\n","2024-05-22T10:48:47",{"id":203,"version":204,"summary_zh":205,"released_at":206},114837,"v0.12.0","## Breaking changes\r\n\r\n- New `torch_save` serialization format. It's ~10x faster and since it's based on safetensors, files can be read with any safetensors implementation. (#1071)\r\n- Updated to LibTorch 2.0.1. (#1085)\r\n- `torch_load` no longer supports `device=NULL` to load weights in the same device they were saved. (#1085)\r\n- Lantern binaries and torch pre-built binaries are now built on Ubuntu 20.04. (#1124)\r\n\r\n## New features\r\n\r\n- Added support for CUDA 11.8. 
(#1089)\r\n- Added support for iterable datasets. (#1095)\r\n\r\n## Bug fixes\r\n\r\n- Fix printer of torch device (add new line at the end)\r\n- `as.array` now moves tensors to the CPU before copying data into R. (#1080)\r\n- Fixed segfault caused by comparing a `dtype` with a `NULL`. (#1090)\r\n- Fixed incorrect naming of complex data type names, such as `torch_cfloat64`. (#1091)\r\n- Fixed name of the `out_features` attribute in the `nn_linear` module. (#1097)\r\n- Fixed issues when loading the state dict of optimizers and learning rate schedulers. (#1100)\r\n- Fixed bug when cloning `nn_module`s with empty state dicts. (#1108)\r\n- `distr_multivariate_normal` now correctly handles precision matrices. (#1110)\r\n- Moved `length.torch_tensor` implementation to R7 to avoid problems when a torch dataset has the `torch_tensor` class. (#1111)\r\n- Fixed problem when deep cloning a `nn_module`. (#1123)\r\n","2024-01-24T11:22:29",{"id":208,"version":209,"summary_zh":210,"released_at":211},114838,"v0.11.0","## Breaking changes\r\n\r\n- `load_state_dict()` for optimizers now defaults to cloning the tensors in the state dict, so they don't keep references to objects in the dict. (#1041)\r\n\r\n## New features\r\n\r\n- Added `nn_utils_weight_norm` (#1025)\r\n- Added support for reading from ordered state dicts serialized with PyTorch. (#1031)\r\n- Added `jit_ops`, allowing access to JIT operators. (#1023)\r\n- Added `with_device` and `local_device` to allow temporarily modifying the default device tensors get initialized on. (#1034)\r\n- `nnf_gelu()` and `nn_gelu()` gained the `approximate` argument. (#1043)\r\n- Implemented `!=` for torch devices. (#1042)\r\n- Allows setting the dtype with a string. (#1045)\r\n- You can now create a named list of modules using `nn_module_dict()`. (#1046)\r\n- Faster `load_state_dict()`, also using less memory. It's possible to use the legacy implementation if required, see PR. 
(#1051)\r\n- Export helpers for handling RNG state, and temporarily modifying it. (#1057)\r\n- Added support for converting half tensors into R with `as.numeric()`. (#1056)\r\n- Added new `torch_tensor_from_buffer()` and `buffer_from_torch_tensor()` that allow low-level creation of torch tensors. (#1061, #1062)\r\n\r\n## Documentation\r\n\r\n- Improved documentation for LBFGS optimizer. (#1035)\r\n- Added a message asking the user to restart the session after a manual installation with `install_torch()`. (#1055)\r\n\r\n## Bug fixes\r\n\r\n- Fixed bug related to handling of non-persistent buffers. They would get added to the `state_dict()` even if they should not. (#1036)\r\n- Fixed a typo in the `optim_adamw` class name.\r\n- Fixed `nn_cross_entropy_loss` class name. (#1043)\r\n- Fixed bug in LBFGS w\u002F line search. (#1048)\r\n- Correctly handle the installation when `RemoteSha` is a package version. (#1058)\r\n\r\n## Internal\r\n\r\n- Started building LibLantern on macOS 11 instead of macOS 12 for maximum compatibility. (#1026)\r\n- Added `CXXSTD` to Makevars to enable C++11 compilation options.\r\n- Refactored codepath for TensorOptions; now all tensor initialization is handled by the same codepath. (#1033)\r\n- Added internal argument `.refer_to_state_dict` to the `load_state_dict()` `nn_module()` method. Allows loading the state dict into the model keeping parameters as references to that state dict. (#1036)\r\n","2023-06-06T17:41:54",{"id":213,"version":214,"summary_zh":215,"released_at":216},114839,"v0.10.0","## Breaking changes\r\n\r\n- Updated to LibTorch v1.13.1 (#977)\r\n\r\n## New features\r\n\r\n- Provide pre-built binaries for torch using a GH Action workflow. (#975)\r\n- Added `nn_silu()` and `nnf_silu()`. (#985)\r\n- Added support for deep cloning `nn_module`s. (#986)\r\n- Added `local_no_grad()` and `local_enable_grad()` as alternatives for the `with_` functions. (#990)\r\n- Added `optim_adamw` optimizer. 
(#991)\r\n- Added support for automatic mixed precision (#996)\r\n- Added functionality to temporarily modify the torch seed. (#999)\r\n- Support for creating torch tensors from raw vectors and back. (#1003)\r\n\r\n## Bug fixes\r\n\r\n- Dataloaders now preserve the batch dimension when `batch_size=1` is used. (#994)\r\n\r\n## Internal\r\n\r\n- Large refactoring of the build system. (#964)\r\n- Use native symbol registration instead of dynamic lookup. (#976)\r\n- Returning lists of tensors to R is now much faster. (#993)\r\n","2023-04-14T06:07:27",{"id":218,"version":219,"summary_zh":220,"released_at":221},114840,"v0.9.1","## Breaking changes\r\n\r\n- `torch_where` now returns 1-based indices when it's called with the `condition` argument only. (#951, @skeydan)\r\n\r\n## New features\r\n\r\n- Added support for nonABI builds on CUDA 11.6. (#919)\r\n- The `torch_fft_fftfreq()` function is now exported. (#950, @skeydan)\r\n\r\n## Bug fixes\r\n\r\n- Fixed bug that caused `distr_normal$sample()` not being able to generate reproducible results after setting seeds. (#938)\r\n- `torch_cat` error message now correctly reflects 1-based indexing. (#952, @skeydan)\r\n\r\n## Internal\r\n\r\n- Fixed warnings in R CMD Check generated by the unsafe use of `sprintf`. (#959, @shikokuchuo)\r\n- Import, not suggest `glue` (#960)\r\n","2023-01-23T20:02:52",{"id":223,"version":224,"summary_zh":225,"released_at":226},114841,"v0.9.0","## Breaking changes\r\n\r\n- Updated to LibTorch v1.12.1. (#889, #893, #899)\r\n- `torch_bincount` is now 1-based indexed. (#896)\r\n- `torch_movedim()` and `$movedim()` are now both 1-based indexed. (#905)\r\n\r\n## New features\r\n\r\n- Added `cuda_synchronize()` to allow synchronization of CUDA operations. (#887)\r\n- Added support for M1 Macs, including creating Tensors in the MPS device. (#890)\r\n- Added support for CUDA 11.6 on Linux. (#902)\r\n- Added `cuda_empty_cache()` to allow freeing memory from the caching allocator to the system. 
(#903)\r\n- Added `$is_sparse()` method to check whether a Tensor is sparse or not. (#903)\r\n- `dataset_subset` now adds a class to the modified dataset that is the same as the original dataset classes postfixed with `_subset`. (#904)\r\n- Added `torch_serialize()` to allow creating a raw vector from torch objects. (#908)\r\n\r\n## Bug fixes\r\n\r\n- Fixed bug in `torch_arange` that caused the `end` value not to be included in the result. (#885, @skeydan)\r\n- Fixed bug in window functions by setting a default dtype. (#886, @skeydan)\r\n- Fixed bug when using `install_torch(reinstall = TRUE)`. (#883)\r\n- The `dims` argument in `torch_tile()` is no longer modified, as it's not meant to be a 1-based dimension. (#905)\r\n- `nn_module$state_dict()` now detaches output tensors by default. (#916)\r\n\r\n## Internal\r\n\r\n- Re-implemented the `$` method for R7 classes in C\u002FC++ to improve speed when calling methods. (#873)\r\n- Re-implemented garbage collection logic when calling it from inside a `backward()` call. This improves speed because we no longer need to call GC\r\nevery time backward is called. (#873)\r\n- We now use a thread pool instead of launching a new thread for backward calls. (#883)\r\n- Implemented options to allow configuring the activation of garbage collection when allocating more CUDA memory. (#883)\r\n- Some `nnf_` functions have been updated to use a single `torch_` kernel instead of the custom implementation. (#896)\r\n- Improved performance of dataloaders. (#900)\r\n- We now let LibTorch query the default generator; this allows one to use `torch_bernoulli()` with `device=\"gpu\"`. (#906)\r\n","2022-10-24T18:23:48",{"id":228,"version":229,"summary_zh":230,"released_at":231},114843,"v0.8.0","## Breaking changes\r\n\r\n- Serialization is now much faster because we avoid base64 encoding the serialized tensors. As a result, files serialized with newer versions of torch can't be opened with older versions of torch. 
Set `options(torch.serialization_version = 1)` if you want your file to be readable by older versions. (#803)\r\n- Deprecated support for CUDA 10.2 on Windows. (#835)\r\n- `linalg_matrix_rank` and `linalg_pinv` gained `atol` and `rtol` arguments while deprecating `tol` and `rcond`. (#835)\r\n\r\n## New features\r\n\r\n- Improved auto-detection of CUDA version on Windows. (#798, @SvenVw)\r\n- Improved parallel dataloader performance by using a socket connection to transfer data between workers and the main process. (#803)\r\n- `keep_graph` now defaults to the value of `create_graph` when calling `$backward()`. We also renamed it to `retain_graph` to match PyTorch. (#811)\r\n- Optimizers created with `optimizer` now carry the classname in the generator and in instances. Optimizer generators now have the class `torch_optimizer_generator`. The class of torch optimizers has been renamed from `torch_Optimizer` to `torch_optimizer`. (#814)\r\n- New utility function `nn_prune_head()` to prune top layer(s) of a network (#819 @cregouby)\r\n- `torch_kron()` is now exported (#818).\r\n- Added `nn_embedding_bag`. (#827, @egillax)\r\n- `nn_multihead_attention` now supports the `batch_first` option. (#828, @jonthegeek)\r\n- It's now possible to modify the gradient of a tensor using the syntax `x$grad \u003C- new_grad`. (#832)\r\n- `sampler()` is now exported, allowing the creation of custom samplers that can be passed to `dataloader()`. (#833)\r\n- Creating `nn_module`s without an `initialize` method is now supported. (#834)\r\n- Added `lr_reduce_on_plateau` learning rate scheduler. (#836, @egillax)\r\n- `torch_tensor(NULL)` no longer fails. It now returns a tensor with no dimensions and no data. (#839)\r\n- Improved complex number handling, including better printing and support for casting from and to R. (#844)\r\n\r\n## Bug fixes\r\n\r\n- Fixed bug in weight decay handling in the Adam optimizer. (#824, @egillax)\r\n- Fixed bug in `nn_l1_loss`. 
(#825, @sebffischer)\r\n\r\n## Documentation\r\n\r\n- Nice error message when `embed_dim` is not divisible by `num_heads` in `nn_multihead_attention`. (#828)\r\n\r\n## Internal\r\n\r\n- Updated to LibTorch v1.11.0. (#835)\r\n- Moved error message translations into R, making it easier to add new ones and update existing ones. (#841)\r\n","2022-06-10T13:04:07",{"id":233,"version":234,"summary_zh":235,"released_at":236},114844,"v0.7.2","## Bug fix\r\n\r\n- Fixed vignette building on Windows.\r\n","2022-03-02T11:19:25",{"id":238,"version":239,"summary_zh":240,"released_at":241},114845,"v0.7.1","## New features\r\n\r\n- Added `cuda_runtime_version()` to query the CUDA Toolkit version that torch is using. (#790)\r\n","2022-02-25T09:57:53",{"id":243,"version":244,"summary_zh":245,"released_at":246},114846,"v0.7.0","## Breaking changes\r\n\r\n- `torch_sort` and `Tensor$sort` now return 1-indexed results. (#709, @mohamed-180)\r\n- Support for LibTorch 1.10.2. See also the [release notes](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch\u002Freleases\u002Ftag\u002Fv1.10.0) for PyTorch v1.10. (#758, #763, #775, @hsbadr).\r\n- Changed default `dim` from `1` to `2` in `nnf_cosine_similarity`. (#769)\r\n- The default values for arguments of various functions have changed. A bug in the code generation was truncating the default values, especially if they were float values that needed more than 6-digit precision. (#770)\r\n\r\n## New features\r\n\r\n- `jit_save_for_mobile` allows saving a traced model in bytecode form, to be loaded by a `LiteModuleLoader`. (#713)\r\n- Exported `is_torch_tensor` to check whether an object is a tensor or not. (#730, @rdinnager)\r\n- Adds `cuda_get_device_properties(device)` that allows one to query device capability and other properties. (#734, @rdinnager)\r\n- Implemented `call_torch_function()` to allow calling potentially unexported torch core functions. 
(#743, @rdinnager)\r\n- Now when installing torch, all LibTorch and Lantern headers will be installed within the `inst` directory. This allows packages extending torch to bind directly to its C++ library. (#718)\r\n- `dataset_subset` will use the `.getbatch` method of the wrapped dataset if one is available. (#742, @egillax)\r\n- Added `nn_flatten` and `nn_unflatten` modules. (#773)\r\n- Added `cuda_memory_stats()` and `cuda_memory_summary()` to verify the amount of memory torch is using from the GPU. (#774)\r\n- Added `backends_cudnn_version()` to query the CuDNN version found by torch. (#774)\r\n\r\n## Bug fixes\r\n\r\n- Fixed a bug in `.validate_sample` for the `Distribution` class that would incorrectly check for tensors. (#739, @hsbadr)\r\n- Fixed memory leak when applying custom `autograd_function`s. (#750)\r\n- Fixed a bug that caused `autograd_grad` to deadlock when used with custom autograd functions. (#771)\r\n- Fixed a bug in `torch_max` and `torch_min` that would fail with `length=2` Tensors. (#772)\r\n\r\n## Documentation\r\n\r\n- Improved the 'Loading data' vignette and datasets documentation. (#780, @jnolis)\r\n\r\n## Internal\r\n\r\n- Refactored the internal Lantern types and Rcpp types and made clearer which are the exported types that can be used in the C++ extensions. (#718)\r\n- Simplified concurrency-related constructs in autograd. (#755, @yitao-li)\r\n- R and C++ code cleanup, styling, and formatting. (#753, @hsbadr)\r\n- Dataloaders are slightly faster with a new transpose function. (#783)\r\n- `torch_tensor` is now a C++-only function, slightly increasing performance in a few situations. (#784)\r\n","2022-02-18T11:41:48"]