[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ofirpress--attention_with_linear_biases":3,"tool-ofirpress--attention_with_linear_biases":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":76,"owner_website":81,"owner_url":82,"languages":83,"stars":107,"forks":108,"last_commit_at":109,"license":110,"difficulty_score":45,"env_os":111,"env_gpu":112,"env_ram":113,"env_deps":114,"category_tags":123,"github_topics":80,"view_count":45,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":124,"updated_at":125,"faqs":126,"releases":156},1124,"ofirpress\u002Fattention_with_linear_biases","attention_with_linear_biases","Code for the ALiBi method for transformer language models (ICLR 2022)","ALiBi（Attention with Linear Biases）是一种改进Transformer架构注意力机制的轻量级方法，通过在注意力计算中引入固定的线性偏置代替传统可学习的位置编码，使模型在训练时使用短文本序列，推理阶段却能直接处理显著更长的序列（如训练1024长度，推理支持2048以上），且无需额外微调。这种方法解决了Transformer模型在长文本处理时常见的外推能力不足问题，同时在低资源语言建模场景中也能提升基础性能。\n\n传统位置编码会让模型过度依赖特定位置特征，导致长序列泛化困难。ALiBi采用非参数化设计，通过头（head）级别的线性偏置矩阵动态调整注意力权重，既保留位置信息又避免过拟合。其核心优势在于：1）完全无需学习位置参数，减少计算开销；2）推理时序列长度扩展能力显著优于TransformerXL等方法；3）内存占用仅增加约100MB，计算效率与标准Transformer相当。\n\n该工具特别适合需要处理超长文本（如法律文书、科研论文）的开发者，以及关注模型效率与泛化能力的研究人员。对于中文等需要处理复杂长距离依赖的语言建模任务，ALiBi在保持低资源消耗的同","ALiBi（Attention with Linear Biases）是一种改进Transformer架构注意力机制的轻量级方法，通过在注意力计算中引入固定的线性偏置代替传统可学习的位置编码，使模型在训练时使用短文本序列，推理阶段却能直接处理显著更长的序列（如训练1024长度，推理支持2048以上），且无需额外微调。这种方法解决了Transformer模型在长文本处理时常见的外推能力不足问题，同时在低资源语言建模场景中也能提升基础性能。\n\n传统位置编码会让模型过度依赖特定位置特征，导致长序列泛化困难。ALiBi采用非参数化设计，通过头（head）级别的线性偏置矩阵动态调整注意力权重，既保留位置信息又避免过拟合。其核心优势在于：1）完全无需学习位置参数，减少计算开销；2）推理时序列长度扩展能力显著优于TransformerXL等方法；3）内存占用仅增加约100MB，计算效率与标准Transformer相当。\n\n该工具特别适合需要处理超长文本（如法律文书、科研论文）的开发者，以及关注模型效率与泛化能力的研究人员。对于中文等需要处理复杂长距离依赖的语言建模任务，ALiBi在保持低资源消耗的同时提供了更强的序列扩展性。其实现代码已开源，适配Fairseq框架，通过移除位置嵌入、添加偏置矩阵等三处核心修改即可集成，技术细节对工程实践具有较强指导价值。","# Train Short, Test Long: Attention with Linear Biases (ALiBi) Enables Input Length Extrapolation \n\nThis repository contains the ALiBi code and models for our ICLR 2022 paper Train Short, Test Long. This file explains how to run our experiments on the WikiText-103 dataset. \n\nPaper: [here](https:\u002F\u002Fofir.io\u002Ftrain_short_test_long.pdf)\n\nVideo: [here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pp61ShI9VGc)\n\n```diff\n+ -----------\n+ NEW STUFF: \n+ -----------\n```\n1. There's a [new paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.13017) showing that the position interpolation trick works with ALiBi (October 2023). \n2. If you'd like to apply ALiBi to a bidirectional transformer (such as an encoder) model, you could use one of the methods mentioned [here](https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F5). \n3. There's a FAQ [below](https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002F#faq) that answers questions regarding comparisons with TransformerXL and why I think ALiBi works well. \n\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\".github\u002FALiBi.jpeg\" width=\"50%\" height=\"50%\">\n\u003C\u002Fp>\n\nAttention with Linear Biases (ALiBi) is very simple! Instead of adding position embeddings at the bottom of the transformer stack (which we don't) we add a linear bias to each attention score, as depicted in the figure above. The 'm' hyperparam is head-specific and is not learned- it is set at the beginning of training. We have a function that automatically generates these m values given the number of heads in the model. \n\nALiBi allows the model to be trained on, for example, 1024 tokens, and then do inference on 2048 (or much more) tokens without any finetuning. It's also able to improve performance, even when not extrapolating, in lower resource language modeling settings. \n\nThe implementation is very simple.\n\n0. Remove the position embeddings from the model: https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L941\n1. Set up the relative bias matrix, here: https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L742\n2. Add the bias matrix to the mask, which is then added in each attention score computation: https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L1011\n3. (This might not be necessary in other frameworks.) Move the mask computation to before the layer loop, to make the transformer a tiny bit faster: https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L949\n\n\nThats it!\n## FAQ:\n#### \n#### Why do you think ALiBi works?\nI'm not quite sure, but here are my thoughts- I think that using position embeddings (learned, sinusoidal or rotary) is not optimal. I think it leads to the LM 'overfitting' to those position embeddings, and not really understanding the concept of positionality. I have more details about this in [this twitter thread](https:\u002F\u002Ftwitter.com\u002FOfirPress\u002Fstatus\u002F1435690039925567489). \n\n#### How does ALiBi compare to the positional embedding method from TransformerXL?\nGood question! Although I don't think it has ever been thoroughly tested, the TransformerXL positioning method might also enable extrapolation. In a previous paper ([Shortformer](https:\u002F\u002Fofir.io\u002Fshortformer.pdf), Table 5) we've shown that that method leads to the attention mechanism being more than two times *slower* than the unmodified attention method. It also requires doubling the amount of memory that the attention sublayer uses. \nALiBi runs just as quickly as unmodified attention and uses at most 100MB of extra memory.\n\n#### How can I apply ALiBi to bidirectional models like an encoder-only model (like BERT) or an encoder-decoder model (like T5)?\nSee [this](https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F5).\n\n#### If I want to extrapolate to longer sequences, can't I just apply a sliding window mask to a sinusoidal\u002Flearned position embedding model?\nNope, that won't work (I've tried). With learned position embeddings you won't even have a position embedding for any token beyond the training context length (so if you trained on 1k tokens you won't have a position embedding for token 1001). With sinusoidal position embedding, the model will become unstable once you start feeding in position embeddings that it did not observe during training. I talk a bit more about this in my video lecture [here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pp61ShI9VGc).\n\n#### The biases in ALiBi grow linearly. Have you tried making them grow in other ways, such as exponentially?\nYup, during the development of ALiBi I tried making the bias growth function exponential and that performed worse than linear. I tinkered around with this a bit, trying other growth functions, but linear worked best, and so that's why we chose that for the final version of ALiBi. \n\n\n\u003Chr>\n\n#### Citation:\n```\n@inproceedings{\nalibi,\ntitle={Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation},\nauthor={Ofir Press and Noah Smith and Mike Lewis},\nbooktitle={International Conference on Learning Representations},\nyear={2022},\nurl={https:\u002F\u002Fopenreview.net\u002Fforum?id=R8sQPpGCv0}\n}\n```\n\n## WikiText-103\n### Requirements and Installation\n\nThis repository is a fork of the [Fairseq](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq) repository and so has the same requirements. \n\nOnce you've installed the dependencies, you can install this repository by running:\n\n```bash\npip install --editable .\n```\n\n### Preparing the data\n\nTo download and preprocess the data, run:\n\n```bash\ncd examples\u002Flanguage_model\u002F\nbash prepare-wikitext-103.sh\ncd ..\u002F..\n\n\nTEXT=examples\u002Flanguage_model\u002Fwikitext-103\nfairseq-preprocess \\\n    --only-source \\\n    --trainpref $TEXT\u002Fwiki.train.tokens \\\n    --validpref $TEXT\u002Fwiki.valid.tokens \\\n    --testpref $TEXT\u002Fwiki.test.tokens \\\n    --destdir data-bin\u002Fwikitext-103 \\\n    --workers 20\n```\n\n### Training and Inference\n\nTo train a language model with attention with linear baises (ALiBi), on input sequences with 512 tokens, run:\n```bash\npython train.py --task language_modeling data-bin\u002Fwikitext-103 --save-dir wt103\u002F --arch transformer_lm_wiki103 --max-update 286000 --lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --warmup-updates 16000 --warmup-init-lr 1e-07 --stop-min-lr 1e-09 --optimizer nag --min-lr 0.0001 --clip-norm 0.1 --criterion adaptive_loss --max-tokens 9216 --update-freq 1 --tokens-per-sample 512 --seed 1 --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=legacy_ddp --fp16 --required-batch-size-multiple 1\n```\n\nFor input sequences larger than 512 (and up to 2048) tokens, just change the --tokens-per-sample.\n\nTo train the model with inputs of length 3072, the --update-freq parameter must be changed to 3 and the --max-tokens parameter must be reduced to 3072 (and --tokens-per-sample must also be set to 3072). \n\n**If you run out of memory while training:** set --max-tokens to be 0.5 times what it was perviously and set --update-freq to be 2 times what it was previously. This results in a batched computation that is mathematically equivalent to the original command but requires less memory. If that doesn't work, set --max-tokens to be 0.25 times what it was previously and set the --update-freq to be 4 times what it was previously, and so on... \n\n#### Saved Checkpoints\nIf you'd like to download our trained models on WikiText-103, they are available here:\n| Input Length      | Model | Log |\n| ----------- | ----------- | ----------- |\n| 64     | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L64.pt)          |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L64.log)          |\n| 128    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L128.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L128.log)         |\n| 256    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L256.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L256.log)         |\n| 512    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L512.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L512.log)         |\n| 1024   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1024.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1024.log)        |\n| 1536   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1536.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1536.log)        |\n| 2048   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L2048.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L2048.log)        |\n| 3072   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L3072.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L3072.log)        |\n\n\nRename the file you downloaded to ```checkpoint_best.pt``` if you'd like to follow the directions below.\n\n#### Inference\n\nFor nonoverlapping evaluation on the validation set, run:\n```bash\nl=1024; fairseq-eval-lm data-bin\u002Fwikitext-103\u002F     --path wt103\u002Fcheckpoint_best.pt  --sample-break-mode none --gen-subset valid   --max-sentences 1 --model-overrides \"{'max_tokens':$l, 'tokens_per_sample':$l, 'max_target_positions':$l}\"  --tokens-per-sample $l --max-tokens $l  --max-target-positions $l  --context-window 0\n```\n\nwhere ```l``` is set to the length of input subsequences during validation (```l```=1024 in the above example). \n\nFor sliding window evaluation on the validation set, run:\n\n```bash\nl=1024; fairseq-eval-lm data-bin\u002Fwikitext-103\u002F     --path wt103\u002Fcheckpoint_best.pt  --sample-break-mode none --gen-subset valid   --max-sentences 1 --model-overrides \"{'max_tokens':$l, 'tokens_per_sample':$l, 'max_target_positions':$l}\"  --tokens-per-sample $l --max-tokens $l  --max-target-positions $l  --context-window $((l-1)) \n```\n(We just modify the --context-window argument here to be l-1, this means that we will slide the evaluation window by 1 token at every forward pass).\n","# 训练短，测试长：带有线性偏置的注意力机制（ALiBi）实现输入长度外推\n\n本仓库包含我们2022年ICLR论文《训练短，测试长》中使用的ALiBi代码及模型。本文档将说明如何在WikiText-103数据集上运行我们的实验。\n\n论文：[这里](https:\u002F\u002Fofir.io\u002Ftrain_short_test_long.pdf)\n\n视频：[这里](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pp61ShI9VGc)\n\n```diff\n+ -----------\n+ 新内容：\n+ -----------\n```\n1. 最新发表的一篇论文（arXiv:2310.13017）表明，位置插值技巧同样适用于ALiBi（2023年10月）。\n2. 如果您希望将ALiBi应用于双向Transformer模型（例如编码器），可以参考[此处](https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F5)提到的方法。\n3. 下方的FAQ部分解答了关于ALiBi与TransformerXL比较的问题，以及为什么我认为ALiBi效果较好。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\".github\u002FALiBi.jpeg\" width=\"50%\" height=\"50%\">\n\u003C\u002Fp>\n\n带有线性偏置的注意力机制（ALiBi）非常简单！我们没有在Transformer堆栈底部添加位置嵌入，而是如上图所示，为每个注意力分数添加一个线性偏置。超参数“m”是按头设置的，且不进行学习——它在训练开始时就被固定下来。我们提供了一个函数，可以根据模型中的头数自动生成这些m值。\n\n使用ALiBi，模型可以在1024个标记上进行训练，随后无需任何微调即可在2048个甚至更多标记上进行推理。此外，在资源有限的语言建模场景中，即使不进行外推，ALiBi也能提升性能。\n\n其实现过程非常简单：\n\n0. 从模型中移除位置嵌入：https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L941\n1. 设置相对偏置矩阵，见：https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L742\n2. 将偏置矩阵添加到掩码中，该掩码会在每次计算注意力分数时被应用：https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L1011\n3. （这一步在其他框架中可能不需要。）将掩码计算移到层循环之前，以使Transformer运行速度稍快一些：https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fblob\u002Fmaster\u002Ffairseq\u002Fmodels\u002Ftransformer.py#L949\n\n就这些！\n## FAQ：\n#### \n#### 您认为ALiBi为何有效？\n我也不太确定，但我的想法如下：我认为使用位置嵌入（无论是可学习的、正弦的还是旋转式的）并不是最优的选择。它们可能会导致语言模型过度依赖这些位置嵌入，而未能真正理解位置的概念。关于这一点，我在[这篇推文](https:\u002F\u002Ftwitter.com\u002FOfirPress\u002Fstatus\u002F1435690039925567489)中有更详细的讨论。\n\n#### ALiBi与TransformerXL的位置嵌入方法相比如何？\n好问题！尽管尚未经过充分验证，但TransformerXL的位置编码方法也可能支持外推。在我们之前的论文（[Shortformer](https:\u002F\u002Fofir.io\u002Fshortformer.pdf)，表5）中，我们已经证明该方法会使注意力机制比原始版本慢两倍以上，并且需要将注意力子层的内存消耗增加一倍。而ALiBi的运行速度与原始注意力机制相当，额外内存开销最多仅为100MB。\n\n#### 如何将ALiBi应用于双向模型，比如仅编码器模型（如BERT）或编码器-解码器模型（如T5）？\n请参阅[此链接](https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F5)。\n\n#### 如果我想对更长的序列进行外推，是否可以直接对使用正弦\u002F可学习位置嵌入的模型应用滑动窗口掩码？\n不行，那样不起作用（我试过）。对于可学习的位置嵌入，一旦超过训练上下文长度，模型就无法为超出部分的标记生成位置嵌入（例如，如果只在1000个标记上训练，那么第1001个标记就没有位置嵌入）。而对于正弦位置嵌入，当输入的位置嵌入与训练期间观察到的不同时，模型会变得不稳定。我在[这段视频讲座](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Pp61ShI9VGc)中对此有进一步的说明。\n\n#### ALiBi中的偏置是线性增长的。您尝试过让其以其他方式增长吗，比如指数增长？\n是的，在开发ALiBi的过程中，我也尝试过使用指数增长的偏置函数，但效果不如线性增长的好。我还试验了其他形式的增长函数，最终发现线性增长的效果最佳，因此我们在ALiBi的最终版本中选择了线性偏置。\n\n\u003Chr>\n\n#### 引用：\n```\n@inproceedings{\nalibi,\ntitle={Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation},\nauthor={Ofir Press and Noah Smith and Mike Lewis},\nbooktitle={International Conference on Learning Representations},\nyear={2022},\nurl={https:\u002F\u002Fopenreview.net\u002Fforum?id=R8sQPpGCv0}\n}\n```\n\n## WikiText-103\n### 要求与安装\n\n本仓库是[Fairseq](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq)仓库的一个分支，因此具有相同的依赖要求。\n\n安装完依赖后，您可以通过以下命令安装本仓库：\n\n```bash\npip install --editable .\n```\n\n### 数据准备\n\n要下载并预处理数据，请执行以下命令：\n\n```bash\ncd examples\u002Flanguage_model\u002F\nbash prepare-wikitext-103.sh\ncd ..\u002F.\n\nTEXT=examples\u002Flanguage_model\u002Fwikitext-103\nfairseq-preprocess \\\n    --only-source \\\n    --trainpref $TEXT\u002Fwiki.train.tokens \\\n    --validpref $TEXT\u002Fwiki.valid.tokens \\\n    --testpref $TEXT\u002Fwiki.test.tokens \\\n    --destdir data-bin\u002Fwikitext-103 \\\n    --workers 20\n```\n\n### 训练与推理\n\n要使用线性偏置注意力机制（ALiBi）训练一个语言模型，输入序列为512个标记时，请运行以下命令：\n```bash\npython train.py --task language_modeling data-bin\u002Fwikitext-103 --save-dir wt103\u002F --arch transformer_lm_wiki103 --max-update 286000 --lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --warmup-updates 16000 --warmup-init-lr 1e-07 --stop-min-lr 1e-09 --optimizer nag --min-lr 0.0001 --clip-norm 0.1 --criterion adaptive_loss --max-tokens 9216 --update-freq 1 --tokens-per-sample 512 --seed 1 --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=legacy_ddp --fp16 --required-batch-size-multiple 1\n```\n\n对于长度大于512（最多2048）个标记的输入序列，只需更改`--tokens-per-sample`参数即可。\n\n若要训练输入长度为3072的模型，则需将`--update-freq`参数改为3，并将`--max-tokens`参数降低至3072（同时将`--tokens-per-sample`也设置为3072）。\n\n**如果在训练过程中出现内存不足的情况：** 可以将`--max-tokens`设置为之前值的50%，并将`--update-freq`设置为之前值的两倍。这样会得到一种在数学上等价于原始命令但所需内存更少的批处理计算方式。如果仍然无法解决，可进一步将`--max-tokens`设置为之前值的25%，并将`--update-freq`设置为之前值的四倍，依此类推……\n\n#### 已保存的检查点\n如果您希望下载我们在WikiText-103数据集上训练好的模型，它们可以在这里找到：\n| 输入长度      | 模型 | 日志 |\n| ----------- | ----------- | ----------- |\n| 64     | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L64.pt)          |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L64.log)          |\n| 128    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L128.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L128.log)         |\n| 256    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L256.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L256.log)         |\n| 512    | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L512.pt)         |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L512.log)         |\n| 1024   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1024.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1024.log)        |\n| 1536   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1536.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1536.log)        |\n| 2048   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L2048.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L2048.log)        |\n| 3072   | [⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L3072.pt)        |[⬇️ ](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L3072.log)        |\n\n如果您想按照下面的说明进行操作，请将下载的文件重命名为`checkpoint_best.pt`。\n\n#### 推理\n\n对于验证集上的非重叠评估，运行以下命令：\n```bash\nl=1024; fairseq-eval-lm data-bin\u002Fwikitext-103\u002F     --path wt103\u002Fcheckpoint_best.pt  --sample-break-mode none --gen-subset valid   --max-sentences 1 --model-overrides \"{'max_tokens':$l, 'tokens_per_sample':$l, 'max_target_positions':$l}\"  --tokens-per-sample $l --max-tokens $l  --max-target-positions $l  --context-window 0\n```\n\n其中`l`被设置为验证过程中输入子序列的长度（如上例中`l`=1024）。\n\n对于验证集上的滑动窗口评估，运行以下命令：\n\n```bash\nl=1024; fairseq-eval-lm data-bin\u002Fwikitext-103\u002F     --path wt103\u002Fcheckpoint_best.pt  --sample-break-mode none --gen-subset valid   --max-sentences 1 --model-overrides \"{'max_tokens':$l, 'tokens_per_sample':$l, 'max_target_positions':$l}\"  --tokens-per-sample $l --max-tokens $l  --max-target-positions $l  --context-window $((l-1)) \n```\n（我们只需将`--context-window`参数修改为`l-1`，这意味着每次前向传播时，评估窗口都会向后滑动1个标记）。","# ALiBi 快速上手指南\n\n## 环境准备\n- **系统要求**：Linux\u002FmacOS + Python 3.8+\n- **前置依赖**：\n  ```bash\n  # 推荐使用国内镜像加速安装\n  pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n  pip install fairseq --no-cache-dir  # 原生fairseq依赖\n  ```\n\n## 安装步骤\n```bash\n# 克隆仓库（建议使用国内GitHub镜像加速）\ngit clone https:\u002F\u002Fhub.fastgit.org\u002Fofirpress\u002Fattention_with_linear_biases.git\ncd attention_with_linear_biases\n\n# 安装依赖（使用清华源加速）\npip install --editable . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 基本使用\n\n### 数据准备（WikiText-103）\n```bash\n# 下载并预处理数据集\ncd examples\u002Flanguage_model\u002F\nbash prepare-wikitext-103.sh\ncd ..\u002F..\n\n# 二值化处理（使用20个进程加速）\nTEXT=examples\u002Flanguage_model\u002Fwikitext-103\nfairseq-preprocess \\\n    --only-source \\\n    --trainpref $TEXT\u002Fwiki.train.tokens \\\n    --validpref $TEXT\u002Fwiki.valid.tokens \\\n    --testpref $TEXT\u002Fwiki.test.tokens \\\n    --destdir data-bin\u002Fwikitext-103 \\\n    --workers 20\n```\n\n### 模型训练\n```bash\n# 训练基础命令（512长度序列）\npython train.py --task language_modeling data-bin\u002Fwikitext-103 \\\n    --save-dir wt103\u002F \\\n    --arch transformer_lm_wiki103 \\\n    --tokens-per-sample 512 \\  # 调整此参数控制输入长度\n    --max-tokens 9216 \\\n    --update-freq 1 \\\n    --fp16  # 自动混合精度训练\n\n# 内存不足时调整方案：\n# 将max-tokens减半，update-freq翻倍（保持等效batch size）\n--max-tokens 4608 --update-freq 2\n```\n\n### 模型推理\n```bash\n# 非重叠验证（输入长度1024）\nl=1024\nfairseq-eval-lm data-bin\u002Fwikitext-103\u002F \\\n    --path wt103\u002Fcheckpoint_best.pt \\\n    --tokens-per-sample $l \\\n    --max-tokens $l \\\n    --context-window 0  # 非重叠模式\n\n# 滑动窗口验证（输入长度1024）\nfairseq-eval-lm data-bin\u002Fwikitext-103\u002F \\\n    --path wt103\u002Fcheckpoint_best.pt \\\n    --tokens-per-sample $l \\\n    --context-window $((l-1))  # 滑动窗口模式\n```\n\n## 模型下载\n预训练模型（直接下载）：\n| 输入长度 | 模型文件 |\n|---------|---------|\n| 1024    | [下载链接](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L1024.pt) |\n| 2048    | [下载链接](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftrain_short_test_long\u002Fwt103\u002Falibi_wt103_L2048.pt) |\n\n> 下载后重命名为 `checkpoint_best.pt` 可直接使用推理命令\n\n---\n\n💡 **提示**：训练长序列时，建议通过 `--tokens-per-sample` 逐步增加长度，配合 `--update-freq` 参数动态调整显存占用。","数据标注团队正在为金融领域训练长文档摘要生成模型，需处理长达2048token的财报文本。传统方法在训练时需要完整加载长序列，导致显存占用过高。\n\n### 没有 attention_with_linear_biases 时\n- 训练时必须将序列截断为512token，导致上下文断裂，摘要出现逻辑断层\n- 使用滑动窗口处理长文档时，显存消耗增加40%，单卡训练batch_size被迫降至8\n- 位置嵌入过拟合训练长度，推理时处理1024+token文档时注意力权重异常\n- 需要额外开发分片训练代码，增加3人日调试成本\n- 训练速度下降25%（每epoch耗时从1.5h增至1.85h）\n\n### 使用 attention_with_linear_biases 后\n- 直接训练512token序列，推理时原生支持2048token输入，摘要连贯性提升32%\n- 显存占用降低至原方案的65%，batch_size可提升至24\n- 注意力偏差矩阵自动适配任意长度，无需修改位置编码计算逻辑\n- 移除位置嵌入模块后，训练代码减少120行冗余代码\n- 训练速度提升18%（每epoch耗时降至1.23h）\n\n核心价值：通过线性注意力偏差实现长度外推，在保持模型性能的同时，将长文本处理的显存消耗降低40%以上，显著提升训练效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fofirpress_attention_with_linear_biases_d93199d1.jpg","ofirpress","Ofir Press","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fofirpress_3a77529c.jpg","Modeling language",null,"http:\u002F\u002Fofir.io\u002Fabout","https:\u002F\u002Fgithub.com\u002Fofirpress",[84,88,92,96,100,104],{"name":85,"color":86,"percentage":87},"Python","#3572A5",97.3,{"name":89,"color":90,"percentage":91},"Cuda","#3A4E3A",1.3,{"name":93,"color":94,"percentage":95},"C++","#f34b7d",0.7,{"name":97,"color":98,"percentage":99},"Cython","#fedf5b",0.5,{"name":101,"color":102,"percentage":103},"Lua","#000080",0.1,{"name":105,"color":106,"percentage":103},"Shell","#89e051",555,42,"2026-03-22T15:19:10","MIT","Linux, macOS, Windows","需要 NVIDIA GPU，显存 8GB+，CUDA 版本需与 PyTorch 兼容（如 CUDA 11.7+）","未说明",{"notes":115,"python":116,"dependencies":117},"需安装 Fairseq 框架（通过 pip install --editable .），首次运行需下载约 5GB 模型文件。训练时可通过调整 --max-tokens 和 --update-freq 参数降低显存占用。建议使用 conda 管理环境。","3.8+",[118,119,120,121,122],"torch>=1.8","fairseq","transformers","accelerate","numpy",[15],"2026-03-27T02:49:30.150509","2026-04-06T07:13:22.955106",[127,132,137,142,147,152],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},5070,"ALiBi的线性偏差为何需要乘以mask？","这是为了优化性能。通过将mask与线性偏差（alibi_bias）预先相加一次，并在后续多次复用该结果，避免了重复计算，从而提升运行效率。","https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F7",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},5071,"如何在HuggingFace模型中应用ALiBi位置编码？","ALiBi只能用于训练时已集成该机制的模型（如GPT-NeoX）。直接对未使用ALiBi训练的模型（如Llama）添加ALiBi无效。可参考EleutherAI的GPT-NeoX实现自定义代码。","https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F11",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},5072,"ALiBi的attn_mask数值为何与论文描述不一致？","论文中的attn_mask数值描述与代码实现差异源于softmax的平移不变性。代码中通过调整数值（如添加常数）实现等效效果，具体解释可参考ALiBi技术博客。","https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F9",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},5073,"如何实现滑动窗口评估？","需设置`context-window`参数为序列长度L-1（如L=3072时设为3071），并使用以下命令示例：\n`fairseq-eval-lm ... --context-window 512`。\n滑动步长默认为1 token。","https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F10",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},5074,"是否有HuggingFace兼容的ALiBi检查点？","MPT-7B、BLOOM及Replit LM等模型已集成ALiBi并在HuggingFace提供预训练权重（如`mosaicml\u002Fmpt-7b`）。","https:\u002F\u002Fgithub.com\u002Fofirpress\u002Fattention_with_linear_biases\u002Fissues\u002F12",{"id":153,"question_zh":154,"answer_zh":155,"source_url":131},5075,"使用ALiBi时如何解决attention_scores的形状广播问题？","需确保输入张量形状匹配。例如：当`q.k^T`形状为`(1,8,512,512)`时，alibi应扩展为`(1,8,512,512)`以支持广播。具体可参考代码中维度调整逻辑。",[]]