[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bytedance--Protenix":3,"tool-bytedance--Protenix":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,2,"2026-04-10T11:13:16",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 
解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75832,"2026-04-17T21:58:25",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 
等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":78,"owner_avatar_url":79,"owner_bio":80,"owner_company":81,"owner_location":81,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":112,"forks":113,"last_commit_at":114,"license":115,"difficulty_score":46,"env_os":116,"env_gpu":117,"env_ram":116,"env_deps":118,"category_tags":122,"github_topics":123,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":22,"created_at":126,"updated_at":127,"faqs":128,"releases":158},8681,"bytedance\u002FProtenix","Protenix","Toward High-Accuracy Open-Source Biomolecular Structure Prediction.","Protenix 是一款致力于实现高精度生物分子结构预测的开源人工智能工具。它核心解决了科研领域中对蛋白质及其复合物（如抗体 - 抗原、蛋白 - 配体）三维结构进行快速、准确建模的难题，旨在为计算生物学社区提供易用且可扩展的研究基础设施。\n\n这款工具特别适合生物信息学研究人员、药物发现科学家以及 AI 开发者使用。无论是需要解析复杂生物大分子结构的学者，还是希望基于高质量基础模型开发新算法的工程师，Protenix 都能提供强有力的支持。此外，其配套的 Web 服务器也让非代码背景的生物学家能够便捷地获取预测结果。\n\nProtenix 的技术亮点显著：它不仅发布了针对抗体和配体结合场景优化的 v2 版本，还拥有一套名为 PXMeter 的开源评估工具包，确保模型性能评价的公正与可复现。在效率方面，Protenix 提供了轻量级的\"Mini\"版本，大幅降低了推理成本；同时支持原子级接触约束等物理先验条件，进一步提升了预测的准确性。作为生态的一部分，基于 Protenix 构建的 PXDesign 模型在新药设计任务中展现了超越现有主流方法的实验成功率。通过开放完整的训练数据流水线和兼容 ","Protenix 是一款致力于实现高精度生物分子结构预测的开源人工智能工具。它核心解决了科研领域中对蛋白质及其复合物（如抗体 - 抗原、蛋白 - 配体）三维结构进行快速、准确建模的难题，旨在为计算生物学社区提供易用且可扩展的研究基础设施。\n\n这款工具特别适合生物信息学研究人员、药物发现科学家以及 AI 开发者使用。无论是需要解析复杂生物大分子结构的学者，还是希望基于高质量基础模型开发新算法的工程师，Protenix 都能提供强有力的支持。此外，其配套的 Web 服务器也让非代码背景的生物学家能够便捷地获取预测结果。\n\nProtenix 的技术亮点显著：它不仅发布了针对抗体和配体结合场景优化的 v2 版本，还拥有一套名为 PXMeter 
的开源评估工具包，确保模型性能评价的公正与可复现。在效率方面，Protenix 提供了轻量级的\"Mini\"版本，大幅降低了推理成本；同时支持原子级接触约束等物理先验条件，进一步提升了预测的准确性。作为生态的一部分，基于 Protenix 构建的 PXDesign 模型在新药设计任务中展现了超越现有主流方法的实验成功率。通过开放完整的训练数据流水线和兼容 ColabFold 的搜索功能，Protenix 正推动着生物分子结构预测技术的开放与进步。","# Protenix: Protein + X\n\n\u003Cdiv align=\"center\" style=\"margin: 20px 0;\">\n  \u003Cspan style=\"margin: 0 10px;\">⚡ \u003Ca href=\"https:\u002F\u002Fprotenix-server.com\">Protenix Web Server\u003C\u002Fa>\u003C\u002Fspan>\n  &bull; \u003Cspan style=\"margin: 0 10px;\">📄 \u003Ca href=\"docs\u002FPTX_V1_Technical_Report_202602042356.pdf\">Protenix-v1\u003C\u002Fa>\u003C\u002Fspan>\n  &bull; \u003Cspan style=\"margin: 0 10px;\">📄 \u003Ca href=\"docs\u002FPX2.pdf\">Protenix-v2\u003C\u002Fa>\u003C\u002Fspan>\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTwitter-Follow-blue?logo=x)](https:\u002F\u002Fx.com\u002Fai4s_protenix)\n[![Slack](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSlack-Join-yellow?logo=slack)](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fprotenixworkspace\u002Fshared_invite\u002Fzt-3drypwagk-zRnDF2VtOQhpWJqMrIveMw)\n[![Wechat](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWechat-Join-brightgreen?logo=wechat)](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F52)\n[![Email](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmail-Contact-lightgrey?logo=gmail)](#contact-us)\n\u003C\u002Fdiv>\n\nWe’re excited to introduce **Protenix** — Toward High-Accuracy Open-Source Biomolecular Structure Prediction.\n\nProtenix is built for high-accuracy structure prediction. 
It serves as an initial step in our journey toward advancing accessible and extensible research tools for the computational biology community.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_547d1e77b108.gif\" style=\"width: 100%; height: auto;\" alt=\"Protenix predictions\">\n\n## 🌟 Related Projects\n- **[PXDesign](https:\u002F\u002Fprotenix.github.io\u002Fpxdesign\u002F)** is a model suite for de novo protein-binder design built on the Protenix foundation model. PXDesign achieves 20–73% experimental success rates across multiple targets — 2–6× higher than prior SOTA methods such as AlphaProteo and RFdiffusion. The framework is freely accessible via the Protenix Server.\n\n- **[PXMeter](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FPXMeter\u002F)** is an open-source toolkit designed for reproducible evaluation of structure prediction models, released with high-quality benchmark dataset that has been manually reviewed to remove experimental artifacts and non-biological interactions. The associated study presents an in-depth comparative analysis of state-of-the-art models, drawing insights from extensive metric data and detailed case studies. The evaluation of Protenix is based on PXMeter.\n\n- **[Protenix-Dock](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix-Dock)**: Our implementation of a classical protein-ligand docking framework that leverages empirical scoring functions. 
Without using deep neural networks, Protenix-Dock delivers competitive performance in rigid docking tasks.\n\n## 🎉 Latest Updates\n- **2026-04-08: Protenix-v2 Released** 💪💪 [[Protenix-v2 Technical Report](docs\u002FPX2.pdf)]\n  - Protenix-v2 shows clear gains on antibody-antigen structure prediction, together with an additional update in ligand-related plausibility.\n- **2026-02-05: Protenix-v1 Released** 💪 [[Protenix-v1 Technical Report](docs\u002FPTX_V1_Technical_Report_202602042356.pdf)]\n  - Supported Template\u002FRNA MSA features and improved training dynamics, along with further Inference-time model performance enhancements.\n- **2025-11-05: Protenix-v0.7.0 Released** 🚀\n  - Introduced advanced diffusion inference optimizations: Shared variable caching, efficient kernel fusion, and TF32 acceleration. See our [performance analysis](.\u002Fassets\u002Finference_time_vs_ntoken.png).\n- **2025-07-17: Protenix-Mini & Constraint Features**\n  - Released lightweight model variants ([Protenix-Mini](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.11839)) that drastically reduce inference costs with minimal accuracy loss.\n  - Added support for [atom-level contact and pocket constraints](docs\u002Finfer_json_format.md#constraint), enhancing prediction accuracy through physical priors.\n- **2025-01-16: Pipeline Enhancements**\n  - Open-sourced the full [training data pipeline](.\u002Fdocs\u002Fprepare_training_data.md) and [MSA pipeline](.\u002Fdocs\u002Fmsa_template_pipeline.md).\n  - Integrated local [ColabFold-compatible search](.\u002Fdocs\u002Fcolabfold_compatible_msa.md) for streamlined MSA generation.\n\n\n## 🚀 Getting Started\n\n### 🛠 Quick Installation\n\n```bash\npip install protenix\n```\n\n### 🧬 Quick Prediction\n\n```bash\n# Predict structure using a JSON input\nprotenix pred -i examples\u002Finput.json -o .\u002Foutput -n protenix_base_default_v1.0.0\n```\n\n#### Key Model Descriptions\n| Model Name | MSA | RNA MSA | Template | Params | Training Data 
Cutoff | Model Release Date |\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\n| `protenix-v2` | ✅ | ✅ | ✅ | 464 M | 2021-09-30 | 2026-04-08 |\n| `protenix_base_default_v1.0.0` | ✅ | ✅ | ✅ | 368 M | 2021-09-30 | 2026-02-05 |\n| `protenix_base_20250630_v1.0.0` | ✅ | ✅ | ✅ | 368 M | 2025-06-30 | 2026-02-05 |\n| `protenix_base_default_v0.5.0` | ✅ | ❌ | ❌ | 368 M | 2021-09-30 | 2025-05-30 |\n\n- **protenix-v2**: An enhanced-capacity version of the base model, featuring increased representation dimensionality and expanded parameter space (~464M), along with substantial training and optimization improvements.\n- **protenix_base_default_v1.0.0**: Base model, trained with a data cutoff aligned with AlphaFold3 (2021-09-30). The total parameter count of protenix_base_default_v1.0.0 is close to that of AlphaFold3.\n- **protenix_base_20250630_v1.0.0**: Applied model, trained with an updated data cutoff (2025-06-30) for better practical performance. This model can be used for practical application scenarios.\n- **protenix_base_default_v0.5.0**: Previous version of the model, maintained primarily for backward compatibility with users who developed based on v0.5.0.\n\nFor a complete list of supported models, please refer to [Supported Models](docs\u002Fsupported_models.md).\n\nFor detailed instructions on installation, data preprocessing, inference, and training, please refer to the [Training and Inference Instructions](docs\u002Ftraining_inference_instructions.md). We recommend users refer to [inference_demo.sh](inference_demo.sh) for detailed inference methods and input explanations.\n\n\n### 📊 Benchmark\n\n#### Protenix-v2\n\nProtenix-v2 (refers to the `protenix-v2` model) shows clear gains on antibody-antigen structure prediction, together with an additional update in ligand-related plausibility. Compared to baselines and the earlier Protenix-v1, Protenix-v2 demonstrates a substantial improvement trend. 
At the DockQ > 0.23 threshold, Protenix-v2 achieves absolute success rate gains of 9 to 13 percentage points over Protenix-v1 across three collections. Remarkably, Protenix-v2 at only 5 seeds already exceeds the performance of Protenix-v1 at 1000 seeds, indicating a clear gain in efficiency.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_4ae0f37e0bec.png\" style=\"width: 100%; height: auto;\" alt=\"Protenix-v2 model Metrics\">\n\n\n#### Protenix-v1\n\nProtenix-v1 (refers to the `protenix_base_default_v1.0.0` model), the first fully open-source model that outperforms AlphaFold3 across diverse benchmark sets while adhering to the same training data cutoff, model scale, and inference budget as AlphaFold3. For challenging targets, such as antigen-antibody complexes, the prediction accuracy of Protenix-v1 can be further enhanced through inference-time scaling – increasing the sampling budget from several to hundreds of candidates leads to consistent log-linear gains.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_ffde03fa502c.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 model Metrics\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_e509beeaa4e8.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 model Metrics 2\">\n\nFor detailed benchmark metrics on each dataset, please refer to [docs\u002Fmodel_1.0.0_benchmark.md](docs\u002Fmodel_1.0.0_benchmark.md).\n\n## Citing Protenix\n\nIf you use Protenix in your research, please cite the following:\n\n```\n@article {Zhang2026.04.10.717613,\n\tauthor = {Zhang, Yuxuan and Gong, Chengyue and Sun, Jinyuan and Guan, Jiaqi and Ren, Milong and Xue, Song and Zhang, Hanyu and Ma, Wenzhi and Liu, Zhenyu and Chen, Xinshi and Xiao, Wenzhi},\n\ttitle = {Protenix-v2: Broadening the Reach of Structure Prediction and Biomolecular Design},\n\telocation-id = 
{2026.04.10.717613},\n\tyear = {2026},\n\tdoi = {10.64898\u002F2026.04.10.717613},\n\tpublisher = {Cold Spring Harbor Laboratory},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F04\u002F11\u002F2026.04.10.717613},\n\teprint = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F04\u002F11\u002F2026.04.10.717613.full.pdf},\n\tjournal = {bioRxiv}\n}\n\n@article {Zhang2026.02.05.703733,\n\tauthor = {Zhang, Yuxuan and Gong, Chengyue and Zhang, Hanyu and Ma, Wenzhi and Liu, Zhenyu and Chen, Xinshi and Guan, Jiaqi and Wang, Lan and Yang, Yanping and Xia, Yu and Xiao, Wenzhi},\n\ttitle = {Protenix-v1: Toward High-Accuracy Open-Source Biomolecular Structure Prediction},\n\telocation-id = {2026.02.05.703733},\n\tyear = {2026},\n\tdoi = {10.64898\u002F2026.02.05.703733},\n\tpublisher = {Cold Spring Harbor Laboratory},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F02\u002F22\u002F2026.02.05.703733.1},\n\teprint = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F02\u002F22\u002F2026.02.05.703733.1.full.pdf},\n\tjournal = {bioRxiv}\n}\n\n@article {2025.01.08.631967,\n\tauthor = {ByteDance AML AI4Science Team and Chen, Xinshi and Zhang, Yuxuan and Lu, Chan and Ma, Wenzhi and Guan, Jiaqi and Gong, Chengyue and Yang, Jincai and Zhang, Hanyu and Zhang, Ke and Wu, Shenghao and Zhou, Kuangqi and Yang, Yanping and Liu, Zhenyu and Wang, Lan and Shi, Bo and Shi, Shaochen and Xiao, Wenzhi},\n\ttitle = {Protenix - Advancing Structure Prediction Through a Comprehensive AlphaFold3 Reproduction},\n\telocation-id = {2025.01.08.631967},\n\tyear = {2025},\n\tdoi = {10.1101\u002F2025.01.08.631967},\n\tpublisher = {Cold Spring Harbor Laboratory},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2025\u002F01\u002F11\u002F2025.01.08.631967},\n\teprint = 
{https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2025\u002F01\u002F11\u002F2025.01.08.631967.full.pdf},\n\tjournal = {bioRxiv}\n}\n```\n\n### 📚 Citing Related Work\nProtenix is built upon and inspired by several influential projects. If you use Protenix in your research, we also encourage citing the following foundational works where appropriate:\n```\n@article{abramson2024accurate,\n  title={Accurate structure prediction of biomolecular interactions with AlphaFold 3},\n  author={Abramson, Josh and Adler, Jonas and Dunger, Jack and Evans, Richard and Green, Tim and Pritzel, Alexander and Ronneberger, Olaf and Willmore, Lindsay and Ballard, Andrew J and Bambrick, Joshua and others},\n  journal={Nature},\n  volume={630},\n  number={8016},\n  pages={493--500},\n  year={2024},\n  publisher={Nature Publishing Group UK London}\n}\n@article{ahdritz2024openfold,\n  title={OpenFold: Retraining AlphaFold2 yields new insights into its learning mechanisms and capacity for generalization},\n  author={Ahdritz, Gustaf and Bouatta, Nazim and Floristean, Christina and Kadyan, Sachin and Xia, Qinghui and Gerecke, William and O’Donnell, Timothy J and Berenberg, Daniel and Fisk, Ian and Zanichelli, Niccol{\\`o} and others},\n  journal={Nature Methods},\n  volume={21},\n  number={8},\n  pages={1514--1524},\n  year={2024},\n  publisher={Nature Publishing Group US New York}\n}\n@article{mirdita2022colabfold,\n  title={ColabFold: making protein folding accessible to all},\n  author={Mirdita, Milot and Sch{\\\"u}tze, Konstantin and Moriwaki, Yoshitaka and Heo, Lim and Ovchinnikov, Sergey and Steinegger, Martin},\n  journal={Nature methods},\n  volume={19},\n  number={6},\n  pages={679--682},\n  year={2022},\n  publisher={Nature Publishing Group US New York}\n}\n```\n\n## Contributing to Protenix\n\nWe welcome contributions from the community to help improve Protenix!\n\n📄 Check out the [Contributing Guide](CONTRIBUTING.md) to get started.\n\n✅ Code Quality: \nWe use 
`pre-commit` hooks to ensure consistency and code quality. Please install them before making commits:\n\n```bash\npip install pre-commit\npre-commit install\n```\n\n🐞 Found a bug or have a feature request? [Open an issue](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues).\n\n\n\n## Acknowledgements\n\n\nThe implementation of LayerNorm operators refers to both [OneFlow](https:\u002F\u002Fgithub.com\u002FOneflow-Inc\u002Foneflow) and [FastFold](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FFastFold).\nWe also adopted several [module](protenix\u002Fopenfold_local\u002F) implementations from [OpenFold](https:\u002F\u002Fgithub.com\u002Faqlaboratory\u002Fopenfold), except for [`LayerNorm`](protenix\u002Fmodel\u002Flayer_norm\u002F), which is implemented independently.\n\n\n## Code of Conduct\n\nWe are committed to fostering a welcoming and inclusive environment.\nPlease review our [Code of Conduct](CODE_OF_CONDUCT.md) for guidelines on how to participate respectfully.\n\n\n## Security\n\nIf you discover a potential security issue in this project, or think you may\nhave discovered a security issue, we ask that you notify Bytedance Security via our [security center](https:\u002F\u002Fsecurity.bytedance.com\u002Fsrc) or [vulnerability reporting email](sec@bytedance.com).\n\nPlease do **not** create a public GitHub issue.\n\n## License\n\nThe Protenix project including both code and model parameters is released under the [Apache 2.0 License](.\u002FLICENSE). It is free for both academic research and commercial use.\n\n## Contact Us\n\nWe welcome inquiries and collaboration opportunities for advanced applications of our model, such as developing new features, fine-tuning for specific use cases, and more. 
Please feel free to contact us at ai4s-bio@bytedance.com.","# Protenix：蛋白质 + X\n\n\u003Cdiv align=\"center\" style=\"margin: 20px 0;\">\n  \u003Cspan style=\"margin: 0 10px;\">⚡ \u003Ca href=\"https:\u002F\u002Fprotenix-server.com\">Protenix 网络服务器\u003C\u002Fa>\u003C\u002Fspan>\n  &bull; \u003Cspan style=\"margin: 0 10px;\">📄 \u003Ca href=\"docs\u002FPTX_V1_Technical_Report_202602042356.pdf\">Protenix-v1\u003C\u002Fa>\u003C\u002Fspan>\n  &bull; \u003Cspan style=\"margin: 0 10px;\">📄 \u003Ca href=\"docs\u002FPX2.pdf\">Protenix-v2\u003C\u002Fa>\u003C\u002Fspan>\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTwitter-Follow-blue?logo=x)](https:\u002F\u002Fx.com\u002Fai4s_protenix)\n[![Slack](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSlack-Join-yellow?logo=slack)](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fprotenixworkspace\u002Fshared_invite\u002Fzt-3drypwagk-zRnDF2VtOQhpWJqMrIveMw)\n[![Wechat](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWechat-Join-brightgreen?logo=wechat)](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F52)\n[![Email](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmail-Contact-lightgrey?logo=gmail)](#contact-us)\n\u003C\u002Fdiv>\n\n我们非常高兴地推出 **Protenix** —— 朝着高精度开源生物分子结构预测迈进。\n\nProtenix 专为高精度结构预测而设计。它是我们致力于为计算生物学界提供可访问且可扩展的研究工具这一旅程中的第一步。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_547d1e77b108.gif\" style=\"width: 100%; height: auto;\" alt=\"Protenix 预测结果\">\n\n## 🌟 相关项目\n- **[PXDesign](https:\u002F\u002Fprotenix.github.io\u002Fpxdesign\u002F)** 是基于 Protenix 基础模型构建的从头设计蛋白质结合剂的模型套件。PXDesign 在多个靶标上实现了 20–73% 的实验成功率，比 AlphaProteo 和 RFdiffusion 等先前的 SOTA 方法高出 2–6 倍。该框架可通过 Protenix 服务器免费获取。\n\n- **[PXMeter](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FPXMeter\u002F)** 是一个用于结构预测模型可重复性评估的开源工具包，并附带高质量的基准数据集。该数据集经过人工审核，去除了实验伪影和非生物相互作用。相关研究对当前最先进的模型进行了深入比较分析，基于大量指标数据和详细案例研究得出见解。Protenix 
的评估正是基于 PXMeter 进行的。\n\n- **[Protenix-Dock](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix-Dock)**：我们实现的一个经典蛋白质-配体对接框架，利用经验评分函数。在不使用深度神经网络的情况下，Protenix-Dock 在刚性对接任务中表现出具有竞争力的性能。\n\n## 🎉 最新动态\n- **2026年4月8日：Protenix-v2 发布** 💪💪 [[Protenix-v2 技术报告](docs\u002FPX2.pdf)]\n  - Protenix-v2 在抗体-抗原结构预测方面取得了明显提升，并进一步优化了配体相关合理性的判断。\n- **2026年2月5日：Protenix-v1 发布** 💪 [[Protenix-v1 技术报告](docs\u002FPTX_V1_Technical_Report_202602042356.pdf)]\n  - 支持模板\u002FRNA 多序列比对功能，改进了训练动态，并进一步提升了推理时的模型性能。\n- **2025年11月5日：Protenix-v0.7.0 发布** 🚀\n  - 引入了先进的扩散推理优化：共享变量缓存、高效内核融合以及 TF32 加速。请参阅我们的 [性能分析](.\u002Fassets\u002Finference_time_vs_ntoken.png)。\n- **2025年7月17日：Protenix-Mini 及约束特征**\n  - 发布了轻量级模型变体（[Protenix-Mini](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.11839)），在几乎不损失精度的情况下大幅降低了推理成本。\n  - 新增了对 [原子级接触与口袋约束](docs\u002Finfer_json_format.md#constraint) 的支持，通过物理先验信息提高了预测准确性。\n- **2025年1月16日：管道优化**\n  - 开源了完整的 [训练数据管道](.\u002Fdocs\u002Fprepare_training_data.md) 和 [MSA 管道](.\u002Fdocs\u002Fmsa_template_pipeline.md)。\n  - 集成了本地 [兼容 ColabFold 的搜索](.\u002Fdocs\u002Fcolabfold_compatible_msa.md)，以简化 MSA 的生成过程。\n\n\n## 🚀 快速入门\n\n### 🛠 快速安装\n\n```bash\npip install protenix\n```\n\n### 🧬 快速预测\n\n```bash\n# 使用 JSON 输入进行结构预测\nprotenix pred -i examples\u002Finput.json -o .\u002Foutput -n protenix_base_default_v1.0.0\n```\n\n#### 关键模型说明\n| 模型名称 | MSA | RNA MSA | 模板 | 参数量 | 训练数据截止日期 | 模型发布日期 |\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: |\n| `protenix-v2` | ✅ | ✅ | ✅ | 4.64亿 | 2021年9月30日 | 2026年4月8日 |\n| `protenix_base_default_v1.0.0` | ✅ | ✅ | ✅ | 3.68亿 | 2021年9月30日 | 2026年2月5日 |\n| `protenix_base_20250630_v1.0.0` | ✅ | ✅ | ✅ | 3.68亿 | 2025年6月30日 | 2026年2月5日 |\n| `protenix_base_default_v0.5.0` | ✅ | ❌ | ❌ | 3.68亿 | 2021年9月30日 | 2025年5月30日 |\n\n- **protenix-v2**：基础模型的增强版，具有更高的表示维度和更大的参数空间（约 4.64 亿），并进行了大量的训练和优化改进。\n- **protenix_base_default_v1.0.0**：基础模型，采用与 AlphaFold3 一致的数据截止日期（2021年9月30日）进行训练。其总参数量接近 AlphaFold3。\n- 
**protenix_base_20250630_v1.0.0**：应用型模型，采用更新的数据截止日期（2025年6月30日），以获得更好的实际性能。该模型可用于实际应用场景。\n- **protenix_base_default_v0.5.0**：之前的模型版本，主要为保持向后兼容性而保留，供基于 v0.5.0 版本开发的用户使用。\n\n有关支持的所有模型的完整列表，请参阅 [支持的模型](docs\u002Fsupported_models.md)。\n\n有关安装、数据预处理、推理和训练的详细说明，请参阅 [训练与推理指南](docs\u002Ftraining_inference_instructions.md)。我们建议用户参考 [inference_demo.sh](inference_demo.sh)，以了解详细的推理方法和输入说明。\n\n### 📊 基准测试\n\n#### Protenix-v2\n\nProtenix-v2（指 `protenix-v2` 模型）在抗体-抗原结构预测方面表现出显著提升，同时在配体相关合理性方面也有所改进。与基准模型及早期的 Protenix-v1 相比，Protenix-v2 展现出了大幅的性能提升趋势。在 DockQ > 0.23 的阈值下，Protenix-v2 在三个数据集上相对于 Protenix-v1 的绝对成功率提升了 9 至 13 个百分点。值得注意的是，Protenix-v2 仅使用 5 个种子就已超越了 Protenix-v1 使用 1000 个种子时的性能，这表明其效率有了明显提高。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_4ae0f37e0bec.png\" style=\"width: 100%; height: auto;\" alt=\"Protenix-v2 模型指标\">\n\n\n#### Protenix-v1\n\nProtenix-v1（指 `protenix_base_default_v1.0.0` 模型）是首个完全开源且在多种基准测试中表现优于 AlphaFold3 的模型，同时其训练数据截止时间、模型规模和推理预算均与 AlphaFold3 保持一致。对于诸如抗原-抗体复合物等具有挑战性的目标，Protenix-v1 的预测准确度可通过推理时的扩展进一步提升——将采样预算从几个候选者增加到数百个，可以带来持续的对数线性增长。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_ffde03fa502c.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 模型指标\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_readme_e509beeaa4e8.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 模型指标 2\">\n\n有关各数据集的详细基准测试指标，请参阅 [docs\u002Fmodel_1.0.0_benchmark.md](docs\u002Fmodel_1.0.0_benchmark.md)。\n\n## 引用 Protenix\n\n如果您在研究中使用了 Protenix，请引用以下文献：\n\n```\n@article {Zhang2026.04.10.717613,\n\tauthor = {Zhang, Yuxuan and Gong, Chengyue and Sun, Jinyuan and Guan, Jiaqi and Ren, Milong and Xue, Song and Zhang, Hanyu and Ma, Wenzhi and Liu, Zhenyu and Chen, Xinshi and Xiao, Wenzhi},\n\ttitle = {Protenix-v2: 扩展结构预测与生物分子设计的应用范围},\n\telocation-id = {2026.04.10.717613},\n\tyear = {2026},\n\tdoi = 
{10.64898\u002F2026.04.10.717613},\n\tpublisher = {冷泉港实验室},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F04\u002F11\u002F2026.04.10.717613},\n\teprint = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F04\u002F11\u002F2026.04.10.717613.full.pdf},\n\tjournal = {bioRxiv}\n}\n\n@article {Zhang2026.02.05.703733,\n\tauthor = {Zhang, Yuxuan and Gong, Chengyue and Zhang, Hanyu and Ma, Wenzhi and Liu, Zhenyu and Chen, Xinshi and Guan, Jiaqi and Wang, Lan and Yang, Yanping and Xia, Yu and Xiao, Wenzhi},\n\ttitle = {Protenix-v1: 向高精度开源生物分子结构预测迈进},\n\telocation-id = {2026.02.05.703733},\n\tyear = {2026},\n\tdoi = {10.64898\u002F2026.02.05.703733},\n\tpublisher = {冷泉港实验室},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F02\u002F22\u002F2026.02.05.703733.1},\n\teprint = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2026\u002F02\u002F22\u002F2026.02.05.703733.1.full.pdf},\n\tjournal = {bioRxiv}\n}\n\n@article {2025.01.08.631967,\n\tauthor = {字节跳动 AML AI4Science 团队 和 Chen, Xinshi、Zhang, Yuxuan、Lu, Chan、Ma, Wenzhi、Guan, Jiaqi、Gong, Chengyue、Yang, Jincai、Zhang, Hanyu、Zhang, Ke、Wu, Shenghao、Zhou, Kuangqi、Yang, Yanping、Liu, Zhenyu、Wang, Lan、Shi, Bo、Shi, Shaochen、Xiao, Wenzhi},\n\ttitle = {Protenix - 通过全面复现 AlphaFold3 推进结构预测},\n\telocation-id = {2025.01.08.631967},\n\tyear = {2025},\n\tdoi = {10.1101\u002F2025.01.08.631967},\n\tpublisher = {冷泉港实验室},\n\tURL = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2025\u002F01\u002F11\u002F2025.01.08.631967},\n\teprint = {https:\u002F\u002Fwww.biorxiv.org\u002Fcontent\u002Fearly\u002F2025\u002F01\u002F11\u002F2025.01.08.631967.full.pdf},\n\tjournal = {bioRxiv}\n}\n```\n\n### 📚 引用相关工作\nProtenix 是基于并受到多个有影响力项目启发而构建的。如果您在研究中使用了 Protenix，我们也鼓励您在适当的情况下引用以下基础性工作：\n```\n@article{abramson2024accurate,\n  title={利用 AlphaFold 3 实现生物分子相互作用的精确结构预测},\n  author={Abramson, Josh 和 Adler, Jonas 以及 Dunger, Jack 等人},\n  
journal={Nature},\n  volume={630},\n  number={8016},\n  pages={493--500},\n  year={2024},\n  publisher={自然出版集团英国伦敦分部}\n}\n@article{ahdritz2024openfold,\n  title={OpenFold：重新训练 AlphaFold2 揭示其学习机制与泛化能力的新见解},\n  author={Ahdritz, Gustaf 和 Bouatta, Nazim 以及 Floristean, Christina 等人},\n  journal={Nature Methods},\n  volume={21},\n  number={8},\n  pages={1514--1524},\n  year={2024},\n  publisher={自然出版集团美国纽约分部}\n}\n@article{mirdita2022colabfold,\n  title={ColabFold：让蛋白质折叠触手可及},\n  author={Mirdita, Milot 和 Sch{\\\"u}tze, Konstantin 以及 Moriwaki, Yoshitaka 等人},\n  journal={Nature Methods},\n  volume={19},\n  number={6},\n  pages={679--682},\n  year={2022},\n  publisher={自然出版集团美国纽约分部}\n}\n```\n\n## 参与 Protenix 的贡献\n\n我们欢迎社区成员为改进 Protenix 贡献力量！\n\n📄 请查看 [贡献指南](CONTRIBUTING.md)，开始您的贡献之旅。\n\n✅ 代码质量：\n我们使用 `pre-commit` 钩子来确保代码的一致性和质量。请在提交代码前安装它们：\n\n```bash\npip install pre-commit\npre-commit install\n```\n\n🐞 发现了 bug 或有功能需求？[提交问题](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues)。\n\n\n\n## 致谢\n\n\nLayerNorm 运算符的实现参考了 [OneFlow](https:\u002F\u002Fgithub.com\u002FOneflow-Inc\u002Foneflow) 和 [FastFold](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FFastFold)。\n此外，我们还采用了来自 [OpenFold](https:\u002F\u002Fgithub.com\u002Faqlaboratory\u002Fopenfold) 的若干 [模块](protenix\u002Fopenfold_local\u002F) 实现，但 [`LayerNorm`](protenix\u002Fmodel\u002Flayer_norm\u002F) 是独立实现的。\n\n\n## 行为准则\n\n我们致力于营造一个友好且包容的环境。\n请查阅我们的 [行为准则](CODE_OF_CONDUCT.md)，了解如何以尊重的态度参与其中。\n\n\n## 安全\n如果您在此项目中发现了潜在的安全问题，或认为自己可能发现了安全漏洞，请通过我们的 [安全中心](https:\u002F\u002Fsecurity.bytedance.com\u002Fsrc) 或 [漏洞报告邮箱](sec@bytedance.com) 通知字节跳动安全团队。\n\n请**不要**创建公开的 GitHub 问题。\n\n## 许可证\nProtenix 项目，包括代码和模型参数，均采用 [Apache 2.0 许可证](.\u002FLICENSE) 发布。无论是学术研究还是商业用途，均可免费使用。\n\n## 联系我们\n\n我们欢迎关于模型高级应用的咨询与合作机会，例如开发新功能、针对特定场景进行微调等。如有兴趣，请随时通过 ai4s-bio@bytedance.com 与我们联系。","# Protenix 快速上手指南\n\nProtenix 是一个高精度的开源生物分子结构预测工具，旨在为计算生物学社区提供可扩展的研究工具。本指南将帮助您快速完成环境配置并运行首个预测任务。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下基本要求：\n\n*   **操作系统**：推荐 Linux (Ubuntu 20.04+) 或 macOS。\n*   **Python 版本**：Python 3.8 或更高版本。\n*   **硬件要求**：\n    *   推荐使用 NVIDIA GPU 进行加速推理（需安装对应的 CUDA 驱动）。\n    *   虽然支持 CPU 运行，但推理速度会显著降低。\n*   **前置依赖**：\n    *   确保已安装 `pip` 包管理工具。\n    *   若需生成多序列比对 (MSA)，建议预先配置好本地 MSA 搜索工具或确保网络通畅以使用内置的 ColabFold 兼容搜索功能。\n\n> **国内开发者提示**：如果遇到 PyPI 源下载缓慢的问题，建议使用国内镜像源加速安装（见下文安装步骤）。\n\n## 2. 安装步骤\n\n### 标准安装\n使用 pip 直接安装最新版本的 Protenix：\n\n```bash\npip install protenix\n```\n\n### 使用国内镜像加速（推荐）\n在中国大陆地区，建议使用清华大学或阿里云镜像源以提升下载速度和稳定性：\n\n```bash\npip install protenix -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 代码质量检查工具（可选）\n如果您计划贡献代码或需要严格的格式检查，请安装 `pre-commit` 钩子：\n\n```bash\npip install pre-commit\npre-commit install\n```\n\n## 3. 基本使用\n\n安装完成后，您可以立即通过命令行进行生物分子结构预测。\n\n### 最简单的预测示例\n\n准备一个符合格式的 JSON 输入文件（可参考项目自带的 `examples\u002Finput.json`），然后运行以下命令：\n\n```bash\nprotenix pred -i examples\u002Finput.json -o .\u002Foutput -n protenix_base_default_v1.0.0\n```\n\n**参数说明：**\n*   `-i`: 输入文件路径（JSON 格式，包含序列、模板或约束信息）。\n*   `-o`: 输出结果目录。\n*   `-n`: 指定使用的模型名称。\n\n### 常用模型选择\n\n根据您的需求选择合适的模型版本：\n\n| 模型名称 | 特点描述 | 适用场景 |\n| :--- | :--- | :--- |\n| `protenix-v2` | **最新最强**。参数量约 464M，在抗体 - 抗原预测及配体合理性上表现最佳。 | 追求最高精度，特别是涉及抗体或复杂配体的任务。 |\n| `protenix_base_default_v1.0.0` | **基准模型**。参数量约 368M，训练数据截止至 2021-09-30，性能对标 AlphaFold3。 | 通用结构预测，复现研究结果。 |\n| `protenix_base_20250630_v1.0.0` | **应用模型**。使用更新的数据集（截止 2025-06-30）训练。 | 实际生产环境，需要利用最新已知结构信息的场景。 |\n\n**示例：使用最新的 v2 模型进行预测**\n```bash\nprotenix pred -i examples\u002Finput.json -o .\u002Foutput_v2 -n protenix-v2\n```\n\n更多详细的输入格式说明（如原子级接触约束、口袋约束等）请参考官方文档 `docs\u002Finfer_json_format.md` 或查看 `inference_demo.sh` 脚本。","某生物医药公司的计算生物学团队正致力于开发一种针对新型病毒变异株的高亲和力中和抗体，急需快速预测抗体 - 抗原复合物的精确三维结构以指导后续实验。\n\n### 没有 Protenix 时\n- **复合物预测精度不足**：依赖传统工具难以准确模拟抗体与抗原结合时的复杂构象变化，导致预测结构与真实情况偏差较大。\n- **迭代周期漫长**：由于初始模型可信度低，团队需进行多轮耗时的湿实验验证和手动修正，严重拖慢药物研发进度。\n- **缺乏针对性优化**：通用模型未针对抗体 - 抗原相互作用进行专门训练，无法有效捕捉关键的界面接触特征。\n- 
**评估标准不统一**：缺乏经过人工清洗的高质量基准数据集，难以客观量化模型性能，导致选型困难。\n\n### 使用 Protenix 后\n- **高精度结构解析**：利用 Protenix-v2 专为抗体 - 抗原预测优化的能力，直接生成高置信度的复合物结构，显著提升了界面预测的准确性。\n- **研发效率倍增**：基于更可靠的初始结构，湿实验验证成功率大幅提高，将原本数周的“预测 - 验证”循环缩短至几天。\n- **物理约束增强**：通过引入原子级接触和口袋约束功能，结合物理先验知识进一步微调模型，确保生成的结构在化学上更加合理。\n- **科学评估闭环**：借助配套的 PXMeter 工具包，团队能在去除实验伪影的高质量基准上复现并验证结果，确保决策依据科学可靠。\n\nProtenix 通过提供高精度的开源生物分子结构预测能力，将抗体药物发现中的关键结构解析环节从“盲目试错”转变为“精准设计”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_Protenix_ffde03fa.png","bytedance","Bytedance Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbytedance_7fee2b15.png","",null,"ByteDanceOSS","https:\u002F\u002Fopensource.bytedance.com","https:\u002F\u002Fgithub.com\u002Fbytedance",[86,90,94,98,101,105,109],{"name":87,"color":88,"percentage":89},"Python","#3572A5",95.3,{"name":91,"color":92,"percentage":93},"Cuda","#3A4E3A",2.1,{"name":95,"color":96,"percentage":97},"Shell","#89e051",1.4,{"name":99,"color":100,"percentage":29},"C++","#f34b7d",{"name":102,"color":103,"percentage":104},"Jupyter Notebook","#DA5B0B",0.2,{"name":106,"color":107,"percentage":108},"Dockerfile","#384d54",0,{"name":110,"color":111,"percentage":108},"C","#555555",1821,261,"2026-04-17T07:58:16","Apache-2.0","未说明","未说明 (模型涉及扩散推理优化、TF32 加速及高效内核融合，通常暗示需要支持这些特性的现代 NVIDIA GPU)",{"notes":119,"python":116,"dependencies":120},"README 中未明确列出具体的操作系统、GPU 型号、显存大小、内存需求、Python 版本及底层依赖库（如 PyTorch 版本）。仅提供了通过 pip 安装 'protenix' 包的命令。技术细节提及支持 TF32 加速、共享变量缓存和内核融合，建议参考官方文档 'docs\u002Ftraining_inference_instructions.md' 获取详细的运行环境配置。",[121],"protenix",[18],[124,125],"ai4science","research","2026-03-27T02:49:30.150509","2026-04-18T09:20:11.621393",[129,134,139,144,149,154],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},38882,"如何使用本地生成的 A3M 文件作为 Protenix 的输入，而不经过 MSA 服务器？","可以使用 LocalColabFold 生成本地兼容 Protenix 的 MSA 文件。具体步骤是：首先运行 colab_search 生成本地数据库扫描结果，然后使用 A3MProcessor 解析 a3m 文件以生成配对和非配对的 MSA 文件（此过程不会再次调用 colab_search）。详细操作指南请参考官方文档：[Using Local Colabfold_search to 
Generate Protenix-Compatible MSA](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fblob\u002Fmain\u002Fdocs\u002Fcolabfold_compatiable_msa.md)。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F39",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},38883,"运行示例时遇到共享内存（Shared Memory）不足的错误如何解决？","如果遇到共享内存问题，建议将输入的 FASTA 文件拆分为单个序列文件，然后通过 shell 循环逐个处理。以下是一个参考脚本：\n```bash\n#!\u002Fbin\u002Fbash\ninput_fasta=\"input.fasta\"\ndb_path=\"\u003Cpath\u002Fto\u002Fcolabfold_db>\"\nmmseqs_path=\"\u003Cpath\u002Fto\u002Fmmseqs>\"\noutput_dir=\"dimer_colabfold_msa\"\n\nmkdir -p split_fasta\nmkdir -p \"$output_dir\"\n\n# 拆分 FASTA 文件\nawk '\u002F^>\u002F{f=\"split_fasta\u002Fseq\"++i\".fasta\"} {print > f}' \"$input_fasta\"\n\n# 循环处理每个文件\nfor file in split_fasta\u002F*.fasta; do\n    echo \"Processing $file with colabfold_msa.py...\"\n    python3 scripts\u002Fcolabfold_msa.py \"$file\" \"$db_path\" \"$output_dir\" \\\n        --db1 uniref30_2103_db \\\n        --db3 colabfold_envdb_202108_db \\\n        --mmseqs_path \"$mmseqs_path\"\ndone\n```","https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F74",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},38884,"在 Docker 或 Conda 环境中安装 Protenix 时遇到 `pip install -e .` 报错怎么办？","Docker 镜像已包含推理和训练所需的全部环境和依赖，它不是 Conda 环境。\n1. 如果使用 Docker 环境，可以在容器内运行 `pip install -e .` 或直接安装发布版 `pip install protenix==0.6.1`。\n2. 
如果仅打算运行推理（inference），推荐创建一个干净的 Conda 环境，这样更简单且足够满足需求。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F182",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},38885,"约束模型（with constraint）和无约束模型（without constraint）的权重关系及微调策略是什么？","带约束的模型是在无约束预训练模型的基础上进行微调得到的。微调时整个模型处于解冻状态，但新增的约束部分和其余部分会使用不同的学习率调度策略。推荐的学习率设置为 5e-4，批量大小（batch size）为 64，微调步数约为 15,000 到 20,000 步。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F75",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},38886,"运行 inference_demo.sh 时遇到关于 MET 残基解析的 'NoneType' 错误如何处理？","该错误通常与 CCD CIF 文件中 MET 残基的解析有关。虽然本地文件中存在 MET，但解析器可能未能正确识别。请确保使用的是最新版本的 Protenix（如 v0.4.0 或更高），因为新版本修复了相关的解析逻辑。如果问题依旧，请检查 `json_parser.py` 或 `infer_data_pipeline.py` 中关于 `get_component_atom_array()` 的处理逻辑，并确认 CCD 缓存数据路径配置正确。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fissues\u002F43",{"id":155,"question_zh":156,"answer_zh":157,"source_url":138},38887,"如何批量处理多个相互作用对（interactors）的 MSA 生成任务？","可以将包含多个序列对的 FASTA 文件拆分为独立的文件，然后使用循环调用 `colabfold_msa.py` 脚本进行批量处理。对于大规模预测，建议先在本地使用 `colabfold_search` 命令一次性生成所有对的 MSA，然后再转换格式。这样可以避免重复扫描数据库，提高效率。",[159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254],{"id":160,"version":161,"summary_zh":162,"released_at":163},314813,"v2.0.0","## 变更内容\r\n\r\n### 🚀 重点概览\r\n\r\n  • 发布 Protenix-v2 模型：推出了增强版模型 protenix-v2（4.64亿参数）。该模型在预测具有挑战性的抗体-抗原复合物结构方面显著提升了准确性，并更新了配体相关合理性的评估。\r\n  • 无训练引导模块（TFG）：引入了一个强大的新引导模块，在扩散采样过程中强制执行几何和物理约束（如空间位阻、扭转角、键长等），而无需重新训练。\r\n\r\n### ✨ 新功能与改进\r\n\r\n  • 推理效率突破：protenix-v2 在效率上取得了显著提升。仅使用 5 个采样种子，其性能便成功超越了使用 1000 个种子的 protenix-v1。\r\n  • TFG 功能可配置：通过 --use_tfg_guidance CLI 标志进行启用。支持的几何约束包括 VinaStericPotential、ExperimentalTorsionPotential 和 PairwiseDistancePotential。\r\n\r\n### 📖 文档与资源\r\n  • 将 Protenix 版本升级至 2.0.0。\r\n  • 发布了新的 Protenix-v2 技术报告（docs\u002FPX2.pdf）。","2026-04-07T18:33:52",{"id":165,"version":166,"summary_zh":167,"released_at":168},314814,"v1.1.0","## 变更内容\n  - 在编译 layer_norm 的 CUDA 扩展之前，检查当前目录是否可写\n  - 如果当前目录不可写，则回退到 PyTorch 
的缓存目录","2026-03-23T04:34:13",{"id":170,"version":171,"summary_zh":172,"released_at":173},314815,"v1.0.9","## 变更内容\n\n- 为 `protenix json` 命令添加了一个 `include_discont_poly_poly_bonds` 开关。","2026-03-17T06:54:57",{"id":175,"version":176,"summary_zh":177,"released_at":178},314816,"v1.0.8","## 变更内容\n\n- 用于 dropout 残差连接的 Triton 融合算子现在默认已禁用，因为它们会导致 foldbench 性能略有下降。可以通过设置 FUSED_DROPOUT_RESIDUAL 环境变量来启用这些算子。","2026-03-16T04:32:18",{"id":180,"version":181,"summary_zh":182,"released_at":183},314817,"v1.0.7","## 变更内容\n\n - 添加 `protenix\u002Fversion.py` 以存储版本字符串\n - 更新 `setup.py` 以动态读取版本\n - 更新 `runner\u002Fbatch_inference.py` 以使用共享的版本\n - 在 `protenix\u002F__init__.py` 中暴露版本","2026-03-15T04:45:41",{"id":185,"version":186,"summary_zh":187,"released_at":188},314818,"v1.0.6","## 变更内容\n* 自动检测 CUDA 架构以编译层归一化内核，由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F252 中实现\n* 特征提取器：零拷贝张量创建 + NumPy 向量化，由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F246 中实现\n* MSA 编码：向量化序列解析，由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F247 中实现\n* 数据集：向量化 Pandas\u002FNumPy 操作，由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F248 中实现\n* 注意力精度修复 + LDDT 融合阈值 + 损失缓存，由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F249 中实现\n* 融合 Triton Dropout + 残差相加内核（v2，修复 cueq 问题），由 @longleo17 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F256 中实现\n* 修复 Blackwell GPU 兼容性，由 @giulioisac 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F257 中实现\n* 链索引重排，由 @giulioisac 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F260 中实现\n* 修复当 opposite 为 False 时 eye_mask 返回单位矩阵的问题，由 @mrzzmrzz 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F261 中实现\n* 移除无效的 AdamW 参数，并处理 None 类型的 param_names，由 @mrzzmrzz 在 
https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F262 中实现\n* 支持输入 JSON 中的自定义 id 字段\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv1.0.5...v1.0.6","2026-03-11T04:54:54",{"id":190,"version":191,"summary_zh":192,"released_at":193},314819,"v1.0.5","## 变更内容\n* 推理：修复 MSA 断言，并加强输入和模型名称的校验，由 @ullahsamee 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F235 中完成。\n* 按需远程获取 mmCIF 模板文件，由 @giulioisac 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F243 中实现。\n* 在 Protenix CLI 中新增 `need_atom_confidence` 选项。\n## 新贡献者\n* @giulioisac 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F243 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv1.0.4...v1.0.5","2026-02-25T07:29:37",{"id":195,"version":196,"summary_zh":197,"released_at":198},314820,"v1.0.4","## 变更内容\n* 修复：在 colab_request_utils.py 中添加了正确的文件句柄管理，由 @hobostay 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F228 中完成\n* 三角形：修复了 PyTorch 2.9 中张量索引已弃用的警告，由 @ullahsamee 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F229 中完成\n* 如果使用模板但未找到 kalign，则抛出异常\n\n## 新贡献者\n* @hobostay 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F228 中完成了首次贡献\n* @ullahsamee 在 https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F229 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv1.0.3...v1.0.4","2026-02-09T14:06:33",{"id":200,"version":201,"summary_zh":202,"released_at":203},314821,"v1.0.3","## 变更内容\n- 修复 mini_esm 模型的边缘情况推断问题\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv1.0.2...v1.0.3","2026-02-09T11:04:32",{"id":205,"version":206,"summary_zh":207,"released_at":208},314822,"v1.0.2","## 变更内容\n- 发布 RNA 多序列比对数据及相应配置文档。\n- 对最终输出的 CIF 文件中的链 ID 
进行了小幅修改。","2026-02-07T03:07:55",{"id":210,"version":211,"summary_zh":212,"released_at":213},314823,"v1.0.1","## What's changed\r\n- Fix inference with ion","2026-02-07T03:03:01",{"id":215,"version":216,"summary_zh":217,"released_at":218},314824,"v1.0.0","## What's changed\r\n🔹 Verified inference-time scaling behavior\r\n🔹 RNA MSA & protein template support\r\n🔹 Additional release: Protenix-v1-20250630 trained on a larger dataset\r\n🔹 PXMeter v1.0.0 for transparent evaluation (6k+ complexes, time-split & domain-specific subsets)\r\n\r\n\u003Cimg src=\".\u002Fassets\u002Fprotenix_base_default_v1.0.0_metrics.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 model Metrics\">\r\n\r\n\u003Cimg src=\".\u002Fassets\u002Fprotenix_base_default_v1.0.0_metrics2.png\" style=\"width: 100%; height: auto;\" alt=\"protenix-v1 model Metrics 2\">","2026-02-04T17:04:15",{"id":220,"version":221,"summary_zh":222,"released_at":223},314825,"v0.7.3","## What's changed\r\n- Fix the bug in the code where ref_space_uid was mistakenly written as ref_mask in the cache computation. commit [855973d](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcommit\u002F855973dc8a2505997b6510cbf57874ae2f994898).\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv0.7.2...v0.7.3","2025-11-27T07:10:28",{"id":225,"version":226,"summary_zh":227,"released_at":228},314826,"v0.7.2","## What's changed\r\n\r\n- If the directory specified by `precomputed_msa_dir` under the `msa` field in the inference file does not contain the pairing.a3m file, no error will be thrown; instead, only the non_pairing.a3m file will be used for inference. 
In previous versions, this would have caused an immediate error.\r\n\r\nexample.json\r\n\r\n```json\r\n[{\r\n    \"sequences\": [\r\n        {\r\n            \"proteinChain\": {\r\n                \"sequence\": \"MGSSHHHHHHSSGLVPRGSHMSGKIQHKAVVPAPSRIPLTLSEIEDLRRKGFNQTEIAELYGVTRQAVSWHKKTYGGRLTTRQIVQQNWPWDTRKPHDKSKAFQRLRDHGEYMRVGSFRTMSEDKKKRLLSWWKMLRDNDLVLEFDPSIEPYEGMAGGGFRYVPRDISDDDLLIRVNEHTQLTAEGELLWSWPDDIEELLSEP\",\r\n                \"count\": 1,\r\n                \"msa\": {\r\n                    \"precomputed_msa_dir\": \".\u002Fexamples\u002F7r6r\u002Fmsa\u002F1\",\r\n                    \"pairing_db\": \"uniref100\"\r\n                }\r\n            }\r\n        },\r\n        {\r\n            \"dnaSequence\": {\r\n                \"sequence\": \"TTTCGGTGGCTGTCAAGCGGG\",\r\n                \"count\": 1\r\n            }\r\n        },\r\n        {\r\n            \"dnaSequence\": {\r\n                \"sequence\": \"CCCGCTTGACAGCCACCGAAA\",\r\n                \"count\": 1\r\n            }\r\n        }\r\n    ],\r\n    \"name\": \"7r6r\"\r\n}\r\n]\r\n```","2025-11-20T09:51:03",{"id":230,"version":231,"summary_zh":232,"released_at":233},314827,"v0.7.1","## What's Changed\r\n- Add a dtype parameter to the Protenix CLI for inference, enabling FP32 inference via the -d flag.\r\n- For inference on V100 GPUs, certain configurations are forcibly adjusted — for example, BF16 precision and unsupported optimized kernels are disabled by default.","2025-11-12T08:38:22",{"id":235,"version":236,"summary_zh":237,"released_at":238},314828,"v0.7.0","## What's Changed\r\nWe’re excited to announce the open-source release of Protenix v0.7.0, supported by @yangyanpinghpc,  featuring several performance optimizations for diffusion inference. 
This version introduces three new optional acceleration flags (enabled by default in inference stage) and improved support for batched inference:\r\n- --enable_cache\r\nPrecomputes and caches shared intermediate variables (pair_z, p_lm, c_l) across the N_sample and N_step dimensions. \r\n- --enable_fusion\r\nFuses bias transformations and normalization in the 24-layer diffusion transformer blocks at compile time.\r\n- --enable_tf32\r\nEnables TF32 precision for matrix multiplications when using FP32 computation, trading slight numerical accuracy for speed.\r\n- Batched Diffusion Support (N_sample > 1)\r\nShares s_trunk and z_pair across the N_sample dimension during diffusion, reducing memory and compute overhead without affecting results. \r\n\r\nYou can run it using the following example command:\r\n(Note: if not specified, --enable_cache, --enable_fusion, and --enable_tf32 default to true.)\r\n```\r\nprotenix predict -i examples\u002Fexample.json -o  .\u002Ftest_outputs\u002Fcmd\u002Foutput_mini -s 105,106 -n \"protenix_mini_default_v0.5.0\" --triatt_kernel \"torch\" --trimul_kernel \"torch\" --enable_cache true --enable_fusion true --enable_tf32 true\r\n```\r\n![v0.7.0 performance](assets\u002Finference_time_vs_ntoken.png)\r\n","2025-11-05T07:25:23",{"id":240,"version":241,"summary_zh":242,"released_at":243},314829,"v0.6.3","## What's Changed\r\n- Polymer–polymer bond input at inference. Inference can now read user-specified polymer–polymer covalent bonds from JSON and incorporate them into features. This supports cyclic peptides formed by head-to-tail amide linkage or disulfide bonds.\r\n- CIF output quality. Cleaned and optimized fields in the generated CIF files for better downstream compatibility.\r\n- msa_pairing.py: remove an assertion with deprecated np.string_ for improved NumPy compatibility.\r\n- Updated inference README. 
Clarifies how to specify polymer–polymer bonds in JSON, the supported cyclic-peptide cases, and current limitations.","2025-10-30T06:06:44",{"id":245,"version":246,"summary_zh":247,"released_at":248},314830,"v0.6.2","## What's Changed\r\n* minor modification by @OccupyMars2025 in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F177\r\n* add compatibility with colabfold mmseqs server api by @JinyuanSun in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F178\r\n* tests: Add test cases for installation and compatibility issues by @ShadNygren in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F192\r\n* fix: Resolve DeepSpeed\u002FPydantic compatibility issue (#182) by @ShadNygren in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F193\r\n* Fix #185: Enable consumer GPU support (RTX 3090\u002F4090) with Triton fallback by @ShadNygren in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F194\r\n* minor modification: switch `residue_index` to `token_index` by @OccupyMars2025 in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F195\r\n* fix typo for get_atom_permutation_list function by @mrzzmrzz in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F196\r\n* update cuequivariance to 0.6.1\r\n* update constraint api and Protenix web server\r\n\r\n## New Contributors\r\n* @JinyuanSun made their first contribution in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F178\r\n* @ShadNygren made their first contribution in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F192\r\n* @mrzzmrzz made their first contribution in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F196\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv0.6.1...v0.6.2","2025-09-18T04:11:49",{"id":250,"version":251,"summary_zh":252,"released_at":253},314831,"v0.6.1","- Fixed ESM model loading compatibility with PyTorch 2.6 and later versions.\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv0.6.0...v0.6.1","2025-08-20T05:57:34",{"id":255,"version":256,"summary_zh":257,"released_at":258},314832,"v0.6.0","## What's Changed\r\n1. Optimized the custom LayerNorm kernel, further boosting end-to-end inference and training speed.\r\n2. Integrated a custom Triton-based implementation of the TriangleAttention operator (triattention), improving computational efficiency.\r\n3. Integrated the cuEquivariance operator from [NVIDIA\u002FcuEquivariance ](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FcuEquivariance)to accelerate equivariant operations, with notable efficiency gains in the TriangleAttention and TriangleMultiplication modules.\r\n4. Upgraded the container image and dependencies to resolve efficiency bottlenecks in PyTorch 2.4 and later versions; Supported Biotite 1.2 and above.\r\n\r\n## New Contributors\r\n* @chaitjo made their first contribution in https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fpull\u002F175\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbytedance\u002FProtenix\u002Fcompare\u002Fv0.5.5...v0.6.0","2025-08-19T07:35:54"]