[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-p-e-w--heretic":3,"tool-p-e-w--heretic":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":79,"owner_email":80,"owner_twitter":78,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":23,"env_os":92,"env_gpu":93,"env_ram":92,"env_deps":94,"category_tags":102,"github_topics":103,"view_count":107,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":108,"updated_at":109,"faqs":110,"releases":144},1338,"p-e-w\u002Fheretic","heretic","Fully automatic censorship removal for language models","Heretic 是一款开源命令行程序，专为“一键去审查”而生：它能自动把经过安全对齐、动辄拒绝回答敏感话题的大模型，还原成几乎不再说“对不起，我无法回答”的版本，却又不牺牲原有智力。传统做法需要昂贵微调或人工调参，而 Heretic 通过“方向消融（abliteration）”技术，自动搜索最优参数，在减少拒答率的同时，把与原模型的差异（KL 散度）压到最低，全程无需了解 Transformer 内部细节。\n\n只需一行指令，普通用户、开发者或研究人员就能把 Hugging Face 上的模型变成“去限制”版本，并立即用内置脚本验证效果。它已在 Gemma-3-12B 等模型上实现 97% 拒答率的显著下降，且对无害问题的回答几乎保持原样。如果你希望本地大模型更坦率、研究对齐机制，或需要无过滤语料做实验，Heretic 是目前最省心、效果可复现的选择。","\u003Cimg width=\"128\" height=\"128\" align=\"right\" alt=\"Logo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fp-e-w_heretic_readme_88beb85e82de.png\" \u002F>\n\n# Heretic: Fully automatic censorship removal for language models\u003Cbr>\u003Cbr>[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1447831134212984903?color=5865F2&label=discord&labelColor=black&logo=discord&logoColor=white&style=for-the-badge)](https:\u002F\u002Fdiscord.gg\u002FgdXc48gSyT) [![Follow us on Hugging Face](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fhuggingface\u002Fbadges\u002Fresolve\u002Fmain\u002Ffollow-us-on-hf-md-dark.svg)](https:\u002F\u002Fhuggingface.co\u002Fheretic-org)\n\n[![#1 Repository of the Day](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fp-e-w_heretic_readme_4a68feb902da.png)](https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F20538)\n\nHeretic is a tool that removes censorship (aka \"safety alignment\") from\ntransformer-based language models without expensive post-training.\nIt combines an advanced implementation of directional ablation, also known\nas \"abliteration\" ([Arditi et al. 
This approach enables Heretic to work **completely automatically.** Heretic finds high-quality abliteration parameters by co-minimizing the number of refusals and the KL divergence from the original model. This results in a decensored model that retains as much of the original model's intelligence as possible. Using Heretic does not require an understanding of transformer internals. In fact, anyone who knows how to run a command-line program can use Heretic to decensor language models.

<img width="650" height="715" alt="Screenshot" src="https://oss.gittoolsai.com/images/p-e-w_heretic_readme_3ebb7d468e36.png" />

&nbsp;

Running unsupervised with the default configuration, Heretic can produce decensored models that rival the quality of abliterations created manually by human experts:

| Model | Refusals for "harmful" prompts | KL divergence from original model for "harmless" prompts |
| :--- | ---: | ---: |
| [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) (original) | 97/100 | 0 *(by definition)* |
| [mlabonne/gemma-3-12b-it-abliterated-v2](https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2) | 3/100 | 1.04 |
| [huihui-ai/gemma-3-12b-it-abliterated](https://huggingface.co/huihui-ai/gemma-3-12b-it-abliterated) | 3/100 | 0.45 |
| **[p-e-w/gemma-3-12b-it-heretic](https://huggingface.co/p-e-w/gemma-3-12b-it-heretic) (ours)** | **3/100** | **0.16** |

The Heretic version, generated without any human effort, achieves the same level of refusal suppression as other abliterations, but at a much lower KL divergence, indicating less damage to the original model's capabilities. *(You can reproduce those numbers using Heretic's built-in evaluation functionality, e.g. `heretic --model google/gemma-3-12b-it --evaluate-model p-e-w/gemma-3-12b-it-heretic`. Note that the exact values might be platform- and hardware-dependent. The table above was compiled using PyTorch 2.8 on an RTX 5090.)*
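As a reading aid, here is a minimal sketch of the kind of first-token KL comparison the table reports, assuming two Hugging-Face-compatible causal LMs are already loaded. It is not Heretic's internal implementation, which may aggregate over prompts and tokens differently:

```python
# Sketch only: KL divergence between an original and a modified model's
# next-token distributions for a single prompt.
import torch
import torch.nn.functional as F

def first_token_kl(original, modified, tokenizer, prompt: str) -> float:
    """KL(P_original || P_modified) over the first response token's distribution."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits_p = original(**inputs).logits[0, -1]  # next-token logits, original model
        logits_q = modified(**inputs).logits[0, -1]  # next-token logits, decensored model
    log_p = F.log_softmax(logits_p, dim=-1)
    log_q = F.log_softmax(logits_q, dim=-1)
    # KL(P || Q) = sum_i P_i * (log P_i - log Q_i)
    return torch.sum(log_p.exp() * (log_p - log_q)).item()
```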
Of course, mathematical metrics and automated benchmarks never tell the whole story, and are no substitute for human evaluation. Models generated with Heretic have been well-received by users (links and emphasis added):

> "I was skeptical before, but I just downloaded [**GPT-OSS 20B Heretic**](https://huggingface.co/p-e-w/gpt-oss-20b-heretic) model and holy shit. It gives properly formatted long responses to sensitive topics, using the exact uncensored words that you would expect from an uncensored model, produces markdown format tables with details and whatnot. Looks like this is the best abliterated version of this model so far..." [*(Link to comment)*](https://old.reddit.com/r/LocalLLaMA/comments/1oymku1/heretic_fully_automatic_censorship_removal_for/np6tba6/)

> "[**Heretic GPT 20b**](https://huggingface.co/p-e-w/gpt-oss-20b-heretic) seems to be the best uncensored model I have tried yet. It doesn't destroy the model's intelligence and it is answering prompts that normally would be rejected by the base model." [*(Link to comment)*](https://old.reddit.com/r/LocalLLaMA/comments/1oymku1/heretic_fully_automatic_censorship_removal_for/npe9jng/)

> "[[**Qwen3-4B-Instruct-2507-heretic**](https://huggingface.co/p-e-w/Qwen3-4B-Instruct-2507-heretic)] Has been the best unquantized abliterated model that I have been able to run on 16gb vram." [*(Link to comment)*](https://old.reddit.com/r/LocalLLaMA/comments/1phjxca/im_calling_these_people_out_right_now/nt06tji/)

Heretic supports most dense models, including many multimodal models, and several different MoE architectures. It does not yet support SSMs/hybrid models, models with inhomogeneous layers, and certain novel attention systems.

You can find a small collection of models that have been decensored using Heretic [on Hugging Face](https://huggingface.co/collections/p-e-w/the-bestiary), and the community has created and published [well over 1,000](https://huggingface.co/models?other=heretic) Heretic models in addition to those.

## Usage

Prepare a Python 3.10+ environment with PyTorch 2.2+ installed as appropriate for your hardware. Then run:

```
pip install -U heretic-llm
heretic Qwen/Qwen3-4B-Instruct-2507
```

Replace `Qwen/Qwen3-4B-Instruct-2507` with whatever model you want to decensor.

The process is fully automatic and does not require configuration; however, Heretic has a variety of configuration parameters that can be changed for greater control. Run `heretic --help` to see available command-line options, or look at [`config.default.toml`](config.default.toml) if you prefer to use a configuration file.

At the start of a program run, Heretic benchmarks the system to determine the optimal batch size and make the most of the available hardware. On an RTX 3090, with the default configuration, decensoring Llama-3.1-8B-Instruct takes about 45 minutes. Note that Heretic supports model quantization with bitsandbytes, which can drastically reduce the amount of VRAM required to process models. Set the `quantization` option to `bnb_4bit` to enable quantization.

After Heretic has finished decensoring a model, you are given the option to save the model, upload it to Hugging Face, chat with it to test how well it works, or any combination of those actions.
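For reference, the two optional invocations documented in this README and the quickstart below, side by side:

```
# Decensor with 4-bit quantization to reduce VRAM usage
heretic Qwen/Qwen3-4B-Instruct-2507 --quantization bnb_4bit

# Evaluate a finished decensor against the original model
heretic --model google/gemma-3-12b-it --evaluate-model p-e-w/gemma-3-12b-it-heretic
```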
## Research features

In addition to its primary function of removing model censorship, Heretic also provides features designed to support research into the semantics of model internals (interpretability). To use those features, you need to install Heretic with the optional `research` extra:

```
pip install -U heretic-llm[research]
```

This gives you access to the following functionality:

### Generate plots of residual vectors by passing `--plot-residuals`

When run with this flag, Heretic will:

1. Compute residual vectors (hidden states) for the first output token, for each transformer layer, for both "harmful" and "harmless" prompts.
2. Perform a [PaCMAP projection](https://github.com/YingfanWang/PaCMAP) from residual space to 2D-space.
3. Left-right align the projections of "harmful"/"harmless" residuals by their geometric medians to make projections for consecutive layers more similar. Additionally, PaCMAP is initialized with the previous layer's projections for each new layer, minimizing disruptive transitions.
4. Scatter-plot the projections, generating a PNG image for each layer.
5. Generate an animation showing how residuals transform between layers, as an animated GIF.

<img width="800" height="600" alt="Plot of residual vectors" src="https://oss.gittoolsai.com/images/p-e-w_heretic_readme_6c33c071b25e.png" />

See [the configuration file](config.default.toml) for options that allow you to control various aspects of the generated plots.

Note that PaCMAP is an expensive operation that is performed on the CPU. For larger models, it can take an hour or more to compute projections for all layers.
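A minimal sketch of the projection loop from steps 2 and 3 above, assuming per-layer residuals have already been collected as NumPy arrays. The geometric-median alignment is omitted, and passing the previous embedding as `init` is an assumption about the pacmap API used to mirror the initialization trick described above; Heretic's actual plotting code may differ:

```python
# Sketch only: per-layer PaCMAP projection of residual vectors to 2D.
import numpy as np
import pacmap  # pip install pacmap

def project_layers(residuals_per_layer: list[np.ndarray]) -> list[np.ndarray]:
    """residuals_per_layer[i]: (n_prompts, hidden_dim) first-token hidden states."""
    embeddings: list[np.ndarray] = []
    prev = None
    for layer_residuals in residuals_per_layer:
        reducer = pacmap.PaCMAP(n_components=2)
        # Seed each layer with the previous layer's 2D embedding so that
        # consecutive layers yield visually similar projections.
        emb = reducer.fit_transform(
            layer_residuals, init=prev if prev is not None else "pca"
        )
        embeddings.append(emb)
        prev = emb
    return embeddings
```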
### Print details about residual geometry by passing `--print-residual-geometry`

If you are interested in a quantitative analysis of how residual vectors for "harmful" and "harmless" prompts relate to each other, this flag gives you the following table, packed with metrics that can facilitate understanding them (for [gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it) in this case):

```
┏━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Layer ┃ S(g,b) ┃ S(g*,b*) ┃  S(g,r) ┃ S(g*,r*) ┃  S(b,r) ┃ S(b*,r*) ┃      |g| ┃     |g*| ┃      |b| ┃     |b*| ┃     |r| ┃    |r*| ┃   Silh ┃
┡━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│     1 │ 1.0000 │   1.0000 │ -0.4311 │  -0.4906 │ -0.4254 │  -0.4847 │   170.29 │   170.49 │   169.78 │   169.85 │    1.19 │    1.31 │ 0.0480 │
│     2 │ 1.0000 │   1.0000 │  0.4297 │   0.4465 │  0.4365 │   0.4524 │   768.55 │   768.77 │   771.32 │   771.36 │    6.39 │    5.76 │ 0.0745 │
│     3 │ 0.9999 │   1.0000 │ -0.5699 │  -0.5577 │ -0.5614 │  -0.5498 │  1020.98 │  1021.13 │  1013.80 │  1014.71 │   12.70 │   11.60 │ 0.0920 │
│     4 │ 0.9999 │   1.0000 │  0.6582 │   0.6553 │  0.6659 │   0.6627 │  1356.39 │  1356.20 │  1368.71 │  1367.95 │   18.62 │   17.84 │ 0.0957 │
│     5 │ 0.9987 │   0.9990 │ -0.6880 │  -0.6761 │ -0.6497 │  -0.6418 │   766.54 │   762.25 │   731.75 │   732.42 │   51.97 │   45.24 │ 0.1018 │
│     6 │ 0.9998 │   0.9998 │ -0.1983 │  -0.2312 │ -0.1811 │  -0.2141 │  2417.35 │  2421.08 │  2409.18 │  2411.40 │   43.06 │   43.47 │ 0.0900 │
│     7 │ 0.9998 │   0.9997 │ -0.5258 │  -0.5746 │ -0.5072 │  -0.5560 │  3444.92 │  3474.99 │  3400.01 │  3421.63 │   86.94 │   94.38 │ 0.0492 │
│     8 │ 0.9990 │   0.9991 │  0.8235 │   0.8312 │  0.8479 │   0.8542 │  4596.54 │  4615.62 │  4918.32 │  4934.20 │  384.87 │  377.87 │ 0.2278 │
│     9 │ 0.9992 │   0.9992 │  0.5335 │   0.5441 │  0.5678 │   0.5780 │  5322.30 │  5316.96 │  5468.65 │  5466.98 │  265.68 │  267.28 │ 0.1318 │
│    10 │ 0.9974 │   0.9973 │  0.8189 │   0.8250 │  0.8579 │   0.8644 │  5328.81 │  5325.63 │  5953.35 │  5985.15 │  743.95 │  779.74 │ 0.2863 │
│    11 │ 0.9977 │   0.9978 │  0.4262 │   0.4045 │  0.4862 │   0.4645 │  9644.02 │  9674.06 │  9983.47 │  9990.28 │  743.28 │  726.99 │ 0.1576 │
│    12 │ 0.9904 │   0.9907 │  0.4384 │   0.4077 │  0.5586 │   0.5283 │ 10257.40 │ 10368.50 │ 11114.51 │ 11151.21 │ 1711.18 │ 1664.69 │ 0.1890 │
│    13 │ 0.9867 │   0.9874 │  0.4007 │   0.3680 │  0.5444 │   0.5103 │ 12305.12 │ 12423.75 │ 13440.31 │ 13432.47 │ 2386.43 │ 2282.47 │ 0.1293 │
│    14 │ 0.9921 │   0.9922 │  0.3198 │   0.2682 │  0.4364 │   0.3859 │ 16929.16 │ 17080.37 │ 17826.97 │ 17836.03 │ 2365.23 │ 2301.87 │ 0.1282 │
│    15 │ 0.9846 │   0.9850 │  0.1198 │   0.0963 │  0.2913 │   0.2663 │ 16858.58 │ 16949.44 │ 17496.00 │ 17502.88 │ 3077.08 │ 3029.60 │ 0.1611 │
│    16 │ 0.9686 │   0.9689 │ -0.0029 │  -0.0254 │  0.2457 │   0.2226 │ 18912.77 │ 19074.86 │ 19510.56 │ 19559.62 │ 4848.35 │ 4839.75 │ 0.1516 │
│    17 │ 0.9782 │   0.9784 │ -0.0174 │  -0.0381 │  0.1908 │   0.1694 │ 27098.09 │ 27273.00 │ 27601.12 │ 27653.12 │ 5738.19 │ 5724.21 │ 0.1641 │
│    18 │ 0.9184 │   0.9196 │  0.1343 │   0.1430 │  0.5155 │   0.5204 │   190.16 │   190.35 │   219.91 │   220.62 │   87.82 │   87.59 │ 0.1855 │
└───────┴────────┴──────────┴─────────┴──────────┴─────────┴──────────┴──────────┴──────────┴──────────┴──────────┴─────────┴─────────┴────────┘
g = mean of residual vectors for good prompts
g* = geometric median of residual vectors for good prompts
b = mean of residual vectors for bad prompts
b* = geometric median of residual vectors for bad prompts
r = refusal direction for means (i.e., b - g)
r* = refusal direction for geometric medians (i.e., b* - g*)
S(x,y) = cosine similarity of x and y
|x| = L2 norm of x
Silh = Mean silhouette coefficient of residuals for good/bad clusters
```
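For readers who want to recompute such numbers from their own residuals, here is a sketch of a few per-layer metrics under the definitions in the legend. Weiszfeld's algorithm is one standard choice for the geometric median; this is not Heretic's code:

```python
# Sketch only: cluster statistics for "good" vs. "bad" residual vectors.
import numpy as np
from sklearn.metrics import silhouette_score

def geometric_median(points: np.ndarray, iters: int = 100) -> np.ndarray:
    """Weiszfeld's algorithm: iteratively reweighted mean."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.clip(np.linalg.norm(points - y, axis=1), 1e-12, None)
        w = 1.0 / d
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y

def cosine(x: np.ndarray, y: np.ndarray) -> float:
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def layer_geometry(good: np.ndarray, bad: np.ndarray) -> dict:
    """good/bad: (n_prompts, hidden_dim) residuals for one layer."""
    g, b = good.mean(axis=0), bad.mean(axis=0)
    r = b - g  # refusal direction for means
    labels = np.array([0] * len(good) + [1] * len(bad))
    return {
        "S(g,b)": cosine(g, b),
        "S(g,r)": cosine(g, r),
        "|r|": float(np.linalg.norm(r)),
        "|r*|": float(np.linalg.norm(geometric_median(bad) - geometric_median(good))),
        "Silh": float(silhouette_score(np.vstack([good, bad]), labels)),
    }
```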
## How Heretic works

Heretic implements a parametrized variant of directional ablation. For each supported transformer component (currently, attention out-projection and MLP down-projection), it identifies the associated matrices in each transformer layer, and orthogonalizes them with respect to the relevant "refusal direction", inhibiting the expression of that direction in the result of multiplications with that matrix (see the first sketch at the end of this section).

Refusal directions are computed for each layer as a difference-of-means between the first-token residuals for "harmful" and "harmless" example prompts.

The ablation process is controlled by several optimizable parameters:

* `direction_index`: Either the index of a refusal direction, or the special value `per layer`, indicating that each layer should be ablated using the refusal direction associated with that layer.
* `max_weight`, `max_weight_position`, `min_weight`, and `min_weight_distance`: For each component, these parameters describe the shape and position of the ablation weight kernel over the layers. The following diagram illustrates this:

<img width="800" height="500" alt="Explanation" src="https://oss.gittoolsai.com/images/p-e-w_heretic_readme_bb62b3382bb5.png" />

&nbsp;

Heretic's main innovations over existing abliteration systems are:

* The shape of the ablation weight kernel is highly flexible, which, combined with automatic parameter optimization, can improve the compliance/quality tradeoff. Non-constant ablation weights were previously explored by Maxime Labonne in [gemma-3-12b-it-abliterated-v2](https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated-v2).
* The refusal direction index is a float rather than an integer. For non-integral values, the two nearest refusal direction vectors are linearly interpolated. This unlocks a vast space of additional directions beyond the ones identified by the difference-of-means computation, and often enables the optimization process to find a better direction than that belonging to any individual layer (see the second sketch below).
* Ablation parameters are chosen separately for each component. I have found that MLP interventions tend to be more damaging to the model than attention interventions, so using different ablation weights can squeeze out some extra performance.
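A minimal sketch of the two steps just described: difference-of-means refusal directions, and orthogonalizing a weight matrix against one. The `alpha` argument stands in for the per-layer ablation weight discussed above; none of this is Heretic's actual implementation:

```python
# Sketch only: core abliteration math under the definitions given above.
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """First-token residuals, shape (n_prompts, hidden_dim); returns a unit vector."""
    direction = harmful.mean(dim=0) - harmless.mean(dim=0)
    return direction / direction.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Suppress the refusal direction in the matrix's output space.

    For outputs y = W x, subtracting alpha * r (r^T W) x removes (a weighted
    fraction of) the component of y along the unit direction r; alpha = 1
    is full orthogonalization, (I - r r^T) W.
    """
    r = direction / direction.norm()
    return weight - alpha * torch.outer(r, r @ weight)
```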
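And a sketch of the two optimizable mechanisms from the bullet list, under assumed semantics: the fractional-index interpolation follows the description above, while the kernel's exact shape is an assumption (a linear falloff is used purely for illustration; only the parameter names are taken from the README):

```python
# Sketch only: ablation weight kernel and fractional direction index.
import torch

def ablation_weight(layer: int, max_weight: float, max_weight_position: float,
                    min_weight: float, min_weight_distance: float) -> float:
    """Assumed shape: peaks at max_weight_position, falls off linearly to
    min_weight at min_weight_distance layers away."""
    distance = abs(layer - max_weight_position)
    if distance >= min_weight_distance:
        return min_weight
    frac = distance / min_weight_distance
    return max_weight + frac * (min_weight - max_weight)

def interpolated_direction(directions: torch.Tensor, index: float) -> torch.Tensor:
    """directions: (n_layers, hidden_dim) per-layer refusal directions.
    For a fractional index, blend the two nearest directions linearly."""
    lo = int(index)
    hi = min(lo + 1, directions.shape[0] - 1)
    frac = index - lo
    d = (1 - frac) * directions[lo] + frac * directions[hi]
    return d / d.norm()
```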
## Prior art

I'm aware of the following publicly available implementations of abliteration techniques:

* [AutoAbliteration](https://huggingface.co/posts/mlabonne/714992455492422)
* [abliterator.py](https://github.com/FailSpy/abliterator)
* [wassname's Abliterator](https://github.com/wassname/abliterator)
* [ErisForge](https://github.com/Tsadoq/ErisForge)
* [Removing refusals with HF Transformers](https://github.com/Sumandora/remove-refusals-with-transformers)
* [deccp](https://github.com/AUGMXNT/deccp)

Note that Heretic was written from scratch, and does not reuse code from any of those projects.

## Acknowledgments

The development of Heretic was informed by:

* [The original abliteration paper (Arditi et al. 2024)](https://arxiv.org/abs/2406.11717)
* [Maxime Labonne's article on abliteration](https://huggingface.co/blog/mlabonne/abliteration), as well as some details from the model cards of his own abliterated models (see above)
* Jim Lai's articles describing ["projected abliteration"](https://huggingface.co/blog/grimjim/projected-abliteration) and ["norm-preserving biprojected abliteration"](https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration)

## Citation

If you use Heretic for your research, please cite it using the following BibTeX entry:

```bibtex
@misc{heretic,
  author = {Weidmann, Philipp Emanuel},
  title = {Heretic: Fully automatic censorship removal for language models},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/p-e-w/heretic}}
}
```

## License

Copyright &copy; 2025-2026 Philipp Emanuel Weidmann (<pew@worldwidemann.com>) + contributors

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.

**By contributing to this project, you agree to release your contributions under the same license.**
## Quickstart

Heretic is a fully automatic tool for removing the "censorship" (safety alignment) of transformer-based language models without expensive post-training. It combines directional ablation with the Optuna parameter optimizer to minimize the refusal rate while keeping the KL divergence from the original model low.

### Requirements

- **Operating system**: Linux, macOS, or Windows (with PyTorch CUDA/MPS support).
- **Python**: 3.10 or later.
- **Framework**: PyTorch 2.2 or later, installed to match your hardware (e.g. the correct CUDA build for NVIDIA GPUs).
- **Hardware**:
  - An NVIDIA GPU is recommended.
  - For large models with limited VRAM, enable the quantization mode (`bnb_4bit`) to lower resource needs.
  - Reference point: processing Llama-3.1-8B-Instruct takes about 45 minutes on an RTX 3090.

> **Mirror tip (for users in China)**: configure a domestic pip mirror to speed up dependency downloads, e.g. the Tsinghua mirror:
> ```bash
> export PIP_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple
> ```

### Installation

Install the `heretic-llm` package with pip:

```bash
pip install -U heretic-llm
```

If you need the interpretability research features (residual-vector plots, geometry analysis), install the version with the research extras:

```bash
pip install -U "heretic-llm[research]"
```

### Basic usage

Heretic is designed to run fully automatically; the default configuration needs no adjustment.

**1. Run a decensoring job.** Replace the model ID with the Hugging Face model you want to process (e.g. `Qwen/Qwen3-4B-Instruct-2507`):

```bash
heretic Qwen/Qwen3-4B-Instruct-2507
```

After launch the program automatically:
1. **Benchmarks** the system hardware to determine the best batch size.
2. **Optimizes** the ablation parameters, balancing fewer refusals against preserving the original model's capabilities.
3. **Finishes** by letting you save the model locally, upload it to Hugging Face, or chat with it directly.

**2. Enable quantization (optional).** With limited VRAM, add a flag to enable 4-bit quantization and cut memory usage substantially:

```bash
heretic Qwen/Qwen3-4B-Instruct-2507 --quantization bnb_4bit
```

**3. Help and configuration.** To customize parameters (optimization targets, output paths, and so on), consult the command-line help, or the default configuration file `config.default.toml` (installed with the package or available from the source tree):

```bash
heretic --help
```

At the end of a run you have a model version with the safety restrictions removed and as much of the original intelligence preserved as possible.
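If you would rather smoke-test the saved model outside Heretic's built-in chat, a sketch using plain transformers follows; the output path and prompt are placeholders:

```python
# Sketch only: load a locally saved decensored model and generate once.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./my-decensored-model"  # wherever you saved Heretic's output
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto")

messages = [{"role": "user", "content": "Summarize the battle described in this source objectively."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```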
## Use case

An independent developer is building a local research assistant for historical scholarship and needs the model to analyze primary sources containing violent or controversial material objectively, without interference from safety filters.

### Without heretic
- **Missing key information**: the model frequently refuses historical documents about war details or sensitive political events, breaking the research workflow.
- **High manual-tuning barrier**: removing safety alignment (abliteration) by hand requires a deep understanding of transformer internals and complex code, taking weeks and inviting errors.
- **Degraded intelligence**: forcing the model past its limits with prompt tricks often derails its logic, or degrades its answers even to harmless questions.
- **Expensive iteration**: every parameter change means costly post-training or fine-tuning, a heavy burden on an individual developer's compute budget.

### With heretic
- **Unobstructed analysis**: with censorship removed, the model can quote the original wording of sources and produce detailed tables and analysis, no longer saying "no" to sensitive topics.
- **One-command automation**: no deep-learning background needed; a single command lets heretic search optimal parameters automatically with Optuna.
- **Intelligence preserved**: by minimizing KL divergence, the decensored version keeps performance loss minimal (0.16 KL divergence in the Gemma 3 case).
- **Cheap iteration**: no additional post-training is required; a high-quality unrestricted model can be produced on a local GPU, greatly lowering the cost of trial and error.

heretic gives ordinary developers a low-barrier, low-cost way to obtain a local model that is both unrestricted and capable, removing the constraints that safety alignment places on specialist research.

## Project metadata

- **Repository**: [p-e-w/heretic](https://github.com/p-e-w/heretic) · ★18,371 · 1,831 forks · last commit 2026-04-05 · license AGPL-3.0
- **Language**: Python (100%)
- **Author**: Philipp Emanuel Weidmann (p-e-w), pew@worldwidemann.com, https://worldwidemann.com — location: "Anywhere the Internet is"
- **OS / RAM requirements**: not stated
- **GPU**: an NVIDIA GPU (the README mentions an RTX 3090 and an RTX 5090); bitsandbytes quantization lowers VRAM needs. Actual VRAM depends on the model; one report ran a 4B-parameter model on 16 GB.
- **Python**: 3.10+; key dependencies: torch>=2.2, optuna, bitsandbytes, PaCMAP
- **GitHub topics**: abliteration, llm, transformer
- **Notes**: The tool removes censorship from language models. The default configuration benchmarks the system to pick a batch size; an 8B model takes about 45 minutes on an RTX 3090. The research features (e.g. residual plots) require an extra dependency set and are compute-heavy, running mainly on the CPU. SSM/hybrid architectures and models with inhomogeneous layers are not supported.

## FAQ

**Q: Loading openai/gpt-oss-20b on a single RTX 3090 (24 GB) fails with CUDA out of memory. What can I do?**

A: A single 24 GB card is not enough for gpt-oss-20b in BF16, which typically needs about 45-50 GB of VRAM to run at reasonable speed. Options:
1. Use multi-GPU support: the master branch supports multiple cards but has not yet been released to PyPI, so install from source: `pip install git+https://github.com/p-e-w/heretic.git`.
2. If you must run on one card, check whether a quantized version of the model exists; hardware limits cannot be fully worked around in software.

(Source: https://github.com/p-e-w/heretic/issues/21)
**Q: Ampere cards (e.g. RTX 3090, CUDA compute capability 8.6) lack native mxfp4 support. Can they decensor gpt-oss-120b?**

A: Not directly and efficiently. The tool may attempt to load the model, but without native hardware support for the mxfp4 format the result can be very poor performance or outright failure. For gpt-oss-120b and its smaller variants with mxfp4 weights, use a newer GPU architecture that supports the format, or wait for/find a conversion to another format such as BF16. VRAM is also a hard requirement: running out of memory makes the process fail outright rather than merely slow down.

(Source: https://github.com/p-e-w/heretic/issues/89)

**Q: Heretic fails with fragmented VRAM allocations even though total VRAM looks sufficient. Is there an environment variable to try?**

A: When fragmentation is the problem (reserved-but-unallocated memory is large), set the following PyTorch environment variable to avoid fragmentation and allow larger block allocations, then rerun the command:

```bash
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

(Source: https://github.com/p-e-w/heretic/issues/89)

**Q: With models such as Qwen3.5, the tool reports "No common response prefix found". Is that normal?**

A: Usually yes, unless the model is explicitly designed to always emit the same prefix; models like Qwen3.5 may have no fixed refusal prefix. If you suspect refusals are not being detected correctly:
1. Run with `--print-responses` to see the actual generated responses.
2. Check whether the output contains the expected refusal phrasing.
3. If detection really is failing, open a new issue with one or two concrete response examples for the developers to analyze, rather than only reporting the missing prefix.

(Source: https://github.com/p-e-w/heretic/issues/216)

**Q: How do I install the latest Heretic with multi-GPU support?**

A: Multi-GPU support currently exists only on the master branch on GitHub and has not been released to PyPI, so `pip install heretic-llm` is not enough. Install directly from source:

```bash
pip install git+https://github.com/p-e-w/heretic.git
```

You can then try loading large models across multiple cards. (Source: https://github.com/p-e-w/heretic/issues/21)

**Q: What is the temporary workaround for the KL-divergence regression affecting GPT-OSS-20b?**

A: If, after a particular fix, the KL divergence for GPT-OSS-20b always computes as 0 (while other models such as Gemma 3 behave normally), try commenting out lines 50-53 of `src/heretic/model.py`. That typically resolves the KL computation for this specific model immediately.

(Source: https://github.com/p-e-w/heretic/issues/75)

**Q: The refusal test reports "Initial refusals: 0/100" but I suspect detection is broken. How do I debug it?**

A: If 0/100 looks like a detection problem rather than a genuinely refusal-free model:
1. Check PyTorch version compatibility; versions (e.g. 2.9.0 vs 2.10.0) can differ markedly in memory management and inference speed, which can affect results.
2. Reduce the batch size (say, halving 32 to 16) in case VRAM pressure causes silent failures or extreme slowness.
3. Run with `--print-responses` to confirm whether the model really produced no refusals or the tool failed to recognize them.
4. Monitor GPU usage with `nvidia-smi` to confirm the process is making progress rather than hanging.

(Source: https://github.com/p-e-w/heretic/issues/233)
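To eyeball `--print-responses` output programmatically, a toy heuristic like the following can help; the refusal markers are illustrative assumptions, not Heretic's actual detector:

```python
# Sketch only: count responses that open with common refusal phrases.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry", "i won't")

def count_refusals(responses: list[str]) -> int:
    return sum(
        1 for r in responses
        if r.strip().lower().startswith(REFUSAL_MARKERS)
    )

# Example: count_refusals(["I'm sorry, but ...", "Sure, here is ..."]) -> 1
```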
## Releases

### v1.2.0 (2026-02-14)

Changes:

* @noctrex added a `max_memory` setting to limit memory usage in https://github.com/p-e-w/heretic/pull/83
* @spikymoth added a mechanism to avoid excessive low-divergence iteration in https://github.com/p-e-w/heretic/pull/73
* @accemlcc implemented a new LoRA-based abliteration engine with support for 4-bit quantization in https://github.com/p-e-w/heretic/pull/60
* @accemlcc added enumeration of all available GPUs on startup in https://github.com/p-e-w/heretic/pull/86
* @Vinayyyy7 added the ability to run more trials after optimization is complete in https://github.com/p-e-w/heretic/pull/76
* @anrp fixed MXFP4 loading in https://github.com/p-e-w/heretic/pull/107
* @anrp refactored the save machinery in https://github.com/p-e-w/heretic/pull/110
* @anrp added broad support for VL models in https://github.com/p-e-w/heretic/pull/108
* @anrp implemented saving and resuming optimization progress in https://github.com/p-e-w/heretic/pull/106, https://github.com/p-e-w/heretic/pull/119, and https://github.com/p-e-w/heretic/pull/116
* @spikymoth implemented Magnitude-Preserving Orthogonal Ablation in https://github.com/p-e-w/heretic/pull/52
* @salmanmkc upgraded GitHub Actions to the latest versions in https://github.com/p-e-w/heretic/pull/136 and https://github.com/p-e-w/heretic/pull/137
* @p-e-w added full type checking of the codebase, debug output, prompt modification functionality, and an example config file for slop reduction, and fixed various minor issues

New contributors: @noctrex (pull/83), @accemlcc (pull/60), @anrp (pull/107), @salmanmkc (pull/136).

Full changelog: https://github.com/p-e-w/heretic/compare/v1.1.0...v1.2.0
### v1.1.0 (2025-12-10)

Changes:

* @mbarnson added basic MPS (Apple Silicon) support in https://github.com/p-e-w/heretic/pull/5
* @red40maxxer reduced memory usage in https://github.com/p-e-w/heretic/pull/15
* @Ooooze added IBM Granite MoE support in https://github.com/p-e-w/heretic/pull/14
* @kldzj added multi-GPU support in https://github.com/p-e-w/heretic/pull/17 and https://github.com/p-e-w/heretic/pull/32
* @ricyoung fixed an error when Hugging Face user profile fields are missing in https://github.com/p-e-w/heretic/pull/20
* @tymat added support for MXFP4 quantized models with Triton tensors in https://github.com/p-e-w/heretic/pull/28
* @spikymoth improved support for loading local datasets in https://github.com/p-e-w/heretic/pull/33
* @kldzj added support for models that require `trust_remote_code` in https://github.com/p-e-w/heretic/pull/31
* @Vinayyyy7 added notebook (Colab/Kaggle) compatibility in https://github.com/p-e-w/heretic/pull/42
* @Vinayyyy7 fixed loading for certain models that default to the float32 dtype in https://github.com/p-e-w/heretic/pull/44
* @spikymoth improved refusal detection in https://github.com/p-e-w/heretic/pull/45
* @red40maxxer added a PR title lint to CI in https://github.com/p-e-w/heretic/pull/66
* @p-e-w added research features, support for stopping the optimization process early, and support for thinking models, and implemented an important padding fix suggested by @accemlcc

New contributors: @mbarnson (pull/5), @red40maxxer (pull/15), @Ooooze (pull/14), @kldzj (pull/17), @ricyoung (pull/20), @tymat (pull/28), @spikymoth (pull/33), @Vinayyyy7 (pull/42).

Full changelog: https://github.com/p-e-w/heretic/compare/v1.0.1...v1.1.0

### v1.0.1 (2025-11-16)

First public release.