[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kpe--bert-for-tf2":3,"tool-kpe--bert-for-tf2":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":77,"owner_company":77,"owner_location":79,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":94,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":106,"github_topics":107,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":143},2495,"kpe\u002Fbert-for-tf2","bert-for-tf2","A Keras TensorFlow 2.0 implementation of BERT, ALBERT and adapter-BERT.","bert-for-tf2 是一个基于 TensorFlow 2.0 和 Keras 框架开发的开源库，旨在为开发者提供 BERT、ALBERT 以及 adapter-BERT 模型的轻量级实现。它核心解决了在现代化深度学习环境中高效集成预训练语言模型的难题，确保加载原始预训练权重后，其输出的激活值与谷歌官方原始模型在数值上完全一致，从而保证了模型复现的准确性和可靠性。\n\n这款工具特别适合从事自然语言处理（NLP）的算法工程师、研究人员以及希望快速构建语义理解应用的开发者使用。无论是进行文本分类、情感分析，还是探索参数高效的迁移学习，用户都能通过简洁的 Keras 层接口轻松调用，无需深入底层复杂的 TensorFlow 操作细节。\n\nbert-for-tf2 的技术亮点在于其“纯净”的实现方式：代码从零构建，仅依赖基础的 TensorFlow 运算，剔除了冗余代码并进行了简化，同时利用 params-flow 库大幅减少了 Keras 常见的样板代码，使模型配置更加直观。此外，它不仅支持标准的 BERT，还通过简单的参数配置即可启用 ALBERT（共享层参数）和 adapter-BER","bert-for-tf2 是一个基于 TensorFlow 2.0 和 Keras 框架开发的开源库，旨在为开发者提供 BERT、ALBERT 以及 adapter-BERT 模型的轻量级实现。它核心解决了在现代化深度学习环境中高效集成预训练语言模型的难题，确保加载原始预训练权重后，其输出的激活值与谷歌官方原始模型在数值上完全一致，从而保证了模型复现的准确性和可靠性。\n\n这款工具特别适合从事自然语言处理（NLP）的算法工程师、研究人员以及希望快速构建语义理解应用的开发者使用。无论是进行文本分类、情感分析，还是探索参数高效的迁移学习，用户都能通过简洁的 Keras 层接口轻松调用，无需深入底层复杂的 TensorFlow 操作细节。\n\nbert-for-tf2 的技术亮点在于其“纯净”的实现方式：代码从零构建，仅依赖基础的 TensorFlow 运算，剔除了冗余代码并进行了简化，同时利用 params-flow 库大幅减少了 Keras 常见的样板代码，使模型配置更加直观。此外，它不仅支持标准的 BERT，还通过简单的参数配置即可启用 ALBERT（共享层参数）和 adapter-BERT（冻结主干网络，仅微调适配器层），甚至支持两者结合的 adapter-ALBERT 架构。这种设计极大地降低了显存占用和计算成本，让在小资源环境下进行大规模模型微调和实验成为可能，是连接前沿预训练模型与实际工程应用的实用桥梁。","BERT for TensorFlow v2\n======================\n\n|Build Status| |Coverage Status| |Version Status| |Python Versions| |Downloads|\n\nThis repo contains a `TensorFlow 2.0`_ `Keras`_ implementation of `google-research\u002Fbert`_\nwith support for loading of the original `pre-trained weights`_,\nand producing activations **numerically identical** to the one calculated by the original model.\n\n`ALBERT`_ and `adapter-BERT`_ are also supported by setting the corresponding\nconfiguration parameters (``shared_layer=True``, ``embedding_size`` for `ALBERT`_\nand ``adapter_size`` for `adapter-BERT`_). 
Setting both will result in an adapter-ALBERT\nby sharing the BERT parameters across all layers while adapting every layer with a layer-specific adapter.\n\nThe implementation is built from scratch using only basic tensorflow operations,\nfollowing the code in `google-research\u002Fbert\u002Fmodeling.py`_\n(but skipping dead code and applying some simplifications). It also utilizes `kpe\u002Fparams-flow`_ to reduce\ncommon Keras boilerplate code (related to passing model and layer configuration arguments).\n\n`bert-for-tf2`_ should work with both `TensorFlow 2.0`_ and `TensorFlow 1.14`_ or newer.\n\nNEWS\n----\n - **30.Jul.2020** - `VERBOSE=0` env variable for suppressing stdout output.\n - **06.Apr.2020** - using latest ``py-params`` introducing ``WithParams`` base for ``Layer``\n   and ``Model``. See news in `kpe\u002Fpy-params`_ for how to update (the ``_construct()`` signature has changed and\n   requires calling ``super().__construct()``).\n - **06.Jan.2020** - support for loading the tar format weights from `google-research\u002FALBERT`.\n - **18.Nov.2019** - ALBERT tokenization added (make sure to import as ``from bert import albert_tokenization`` or ``from bert import bert_tokenization``).\n\n - **08.Nov.2019** - using v2 by default when loading the `TFHub\u002Falbert`_ weights of `google-research\u002FALBERT`_.\n\n - **05.Nov.2019** - minor ALBERT word embeddings refactoring (``word_embeddings_2`` -> ``word_embeddings_projector``) and related parameter freezing fixes.\n\n - **04.Nov.2019** - support for extra (task specific) token embeddings using negative token ids.\n\n - **29.Oct.2019** - support for loading of the pre-trained ALBERT weights released by `google-research\u002FALBERT`_  at `TFHub\u002Falbert`_.\n\n - **11.Oct.2019** - support for loading of the pre-trained ALBERT weights released by `brightmart\u002Falbert_zh ALBERT for Chinese`_.\n\n - **10.Oct.2019** - support for `ALBERT`_ through the ``shared_layer=True``\n   and ``embedding_size=128`` params.\n\n - **03.Sep.2019** - walkthrough on fine-tuning with adapter-BERT and storing the\n   fine-tuned fraction of the weights in a separate checkpoint (see ``tests\u002Ftest_adapter_finetune.py``).\n\n - **02.Sep.2019** - support for extending the token type embeddings of a pre-trained model\n   by returning the mismatched weights in ``load_stock_weights()`` (see ``tests\u002Ftest_extend_segments.py``).\n\n - **25.Jul.2019** - there are now two Colab notebooks under ``examples\u002F`` showing how to\n   fine-tune an IMDB Movie Reviews sentiment classifier from pre-trained BERT weights\n   using an `adapter-BERT`_ model architecture on a GPU or TPU in Google Colab.\n\n - **28.Jun.2019** - v.0.3.0 supports `adapter-BERT`_ (`google-research\u002Fadapter-bert`_)\n   for \"Parameter-Efficient Transfer Learning for NLP\", i.e. fine-tuning small overlay adapter\n   layers over BERT's transformer encoders without changing the frozen BERT weights.\n\n\n\nLICENSE\n-------\n\nMIT. See `License File \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002FLICENSE.txt>`_.\n\nInstall\n-------\n\n``bert-for-tf2`` is on the Python Package Index (PyPI):\n\n::\n\n    pip install bert-for-tf2\n\n\nUsage\n-----\n\nBERT in `bert-for-tf2` is implemented as a Keras layer. You could instantiate it like this:\n\n.. 
code:: python\n\n  from bert import BertModelLayer\n\n  l_bert = BertModelLayer(**BertModelLayer.Params(\n    vocab_size               = 16000,        # embedding params\n    use_token_type           = True,\n    use_position_embeddings  = True,\n    token_type_vocab_size    = 2,\n\n    num_layers               = 12,           # transformer encoder params\n    hidden_size              = 768,\n    hidden_dropout           = 0.1,\n    intermediate_size        = 4*768,\n    intermediate_activation  = \"gelu\",\n\n    adapter_size             = None,         # see arXiv:1902.00751 (adapter-BERT)\n\n    shared_layer             = False,        # True for ALBERT (arXiv:1909.11942)\n    embedding_size           = None,         # None for BERT, wordpiece embedding size for ALBERT\n\n    name                     = \"bert\"        # any other Keras layer params\n  ))\n\nor by using the ``bert_config.json`` from a `pre-trained google model`_:\n\n.. code:: python\n\n  import bert\n\n  model_dir = \".models\u002Funcased_L-12_H-768_A-12\"\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n\nnow you can use the BERT layer in your Keras model like this:\n\n.. code:: python\n\n  from tensorflow import keras\n\n  max_seq_len = 128\n  l_input_ids      = keras.layers.Input(shape=(max_seq_len,), dtype='int32')\n  l_token_type_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32')\n\n  # using the default token_type\u002Fsegment id 0\n  output = l_bert(l_input_ids)                              # output: [batch_size, max_seq_len, hidden_size]\n  model = keras.Model(inputs=l_input_ids, outputs=output)\n  model.build(input_shape=(None, max_seq_len))\n\n  # provide a custom token_type\u002Fsegment id as a layer input\n  output = l_bert([l_input_ids, l_token_type_ids])          # [batch_size, max_seq_len, hidden_size]\n  model = keras.Model(inputs=[l_input_ids, l_token_type_ids], outputs=output)\n  model.build(input_shape=[(None, max_seq_len), (None, max_seq_len)])\n\nif you choose to use `adapter-BERT`_ by setting the `adapter_size` parameter,\nyou will also want to freeze all the original BERT layers by calling:\n\n.. code:: python\n\n  l_bert.apply_adapter_freeze()\n\nand once the model has been built or compiled, the original pre-trained weights\ncan be loaded into the BERT layer:\n\n.. code:: python\n\n  import bert\n\n  bert_ckpt_file   = os.path.join(model_dir, \"bert_model.ckpt\")\n  bert.load_stock_weights(l_bert, bert_ckpt_file)\n\n**N.B.** see `tests\u002Ftest_bert_activations.py`_ for a complete example.\n\nFAQ\n---\n0. In all the examples below, **please note** the line:\n\n.. code:: python\n\n  # use in a Keras Model here, and call model.build()\n\nfor a quick test, you can replace it with something like:\n\n.. code:: python\n\n  model = keras.models.Sequential([\n    keras.layers.InputLayer(input_shape=(128,)),\n    l_bert,\n    keras.layers.Lambda(lambda x: x[:, 0, :]),\n    keras.layers.Dense(2)\n  ])\n  model.build(input_shape=(None, 128))\n\n\n1. How to use BERT with the `google-research\u002Fbert`_ pre-trained weights?\n\n.. 
code:: python\n\n  model_name = \"uncased_L-12_H-768_A-12\"\n  model_dir = bert.fetch_google_bert_model(model_name, \".models\")\n  model_ckpt = os.path.join(model_dir, \"bert_model.ckpt\")\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n  # use in a Keras Model here, and call model.build()\n\n  bert.load_bert_weights(l_bert, model_ckpt)      # should be called after model.build()\n\n2. How to use ALBERT with the `google-research\u002FALBERT`_ pre-trained weights (fetching from TFHub)?\n\nsee `tests\u002Fnonci\u002Ftest_load_pretrained_weights.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_load_pretrained_weights.py>`_:\n\n.. code:: python\n\n  model_name = \"albert_base\"\n  model_dir    = bert.fetch_tfhub_albert_model(model_name, \".models\")\n  model_params = bert.albert_params(model_name)\n  l_bert = bert.BertModelLayer.from_params(model_params, name=\"albert\")\n\n  # use in a Keras Model here, and call model.build()\n\n  bert.load_albert_weights(l_bert, model_dir)      # should be called after model.build()\n\n3. How to use ALBERT with the `google-research\u002FALBERT`_ pre-trained weights (non TFHub)?\n\nsee `tests\u002Fnonci\u002Ftest_load_pretrained_weights.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_load_pretrained_weights.py>`_:\n\n.. code:: python\n\n  model_name = \"albert_base_v2\"\n  model_dir    = bert.fetch_google_albert_model(model_name, \".models\")\n  model_ckpt   = os.path.join(model_dir, \"model.ckpt-best\")\n\n  model_params = bert.albert_params(model_dir)\n  l_bert = bert.BertModelLayer.from_params(model_params, name=\"albert\")\n\n  # use in a Keras Model here, and call model.build()\n\n  bert.load_albert_weights(l_bert, model_ckpt)      # should be called after model.build()\n\n4. How to use ALBERT with the `brightmart\u002Falbert_zh`_ pre-trained weights?\n\nsee `tests\u002Fnonci\u002Ftest_albert.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_albert.py>`_:\n\n.. code:: python\n\n  model_name = \"albert_base\"\n  model_dir = bert.fetch_brightmart_albert_model(model_name, \".models\")\n  model_ckpt = os.path.join(model_dir, \"albert_model.ckpt\")\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n  # use in a Keras Model here, and call model.build()\n\n  bert.load_albert_weights(l_bert, model_ckpt)      # should be called after model.build()\n\n5. How to tokenize the input for the `google-research\u002Fbert`_ models?\n\n.. code:: python\n\n  do_lower_case = not (model_name.find(\"cased\") == 0 or model_name.find(\"multi_cased\") == 0)\n  bert.bert_tokenization.validate_case_matches_checkpoint(do_lower_case, model_ckpt)\n  vocab_file = os.path.join(model_dir, \"vocab.txt\")\n  tokenizer = bert.bert_tokenization.FullTokenizer(vocab_file, do_lower_case)\n  tokens = tokenizer.tokenize(\"Hello, BERT-World!\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\n6. How to tokenize the input for `brightmart\u002Falbert_zh`?\n\n.. 
code:: python\n\n  import params_flow as pf\n\n  # fetch the vocab file\n  albert_zh_vocab_url = \"https:\u002F\u002Fraw.githubusercontent.com\u002Fbrightmart\u002Falbert_zh\u002Fmaster\u002Falbert_config\u002Fvocab.txt\"\n  vocab_file = pf.utils.fetch_url(albert_zh_vocab_url, model_dir)\n\n  tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file)\n  tokens = tokenizer.tokenize(\"你好世界\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\n7. How to tokenize the input for the `google-research\u002FALBERT`_ models?\n\n.. code:: python\n\n  import sentencepiece as spm\n\n  spm_model = os.path.join(model_dir, \"assets\", \"30k-clean.model\")\n  sp = spm.SentencePieceProcessor()\n  sp.load(spm_model)\n  do_lower_case = True\n\n  processed_text = bert.albert_tokenization.preprocess_text(\"Hello, World!\", lower=do_lower_case)\n  token_ids = bert.albert_tokenization.encode_ids(sp, processed_text)\n\n8. How to tokenize the input for the Chinese `google-research\u002FALBERT`_ models?\n\n.. code:: python\n\n  import bert\n\n  vocab_file = os.path.join(model_dir, \"vocab.txt\")\n  tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file=vocab_file)\n  tokens = tokenizer.tokenize(u\"你好世界\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\nResources\n---------\n\n- `BERT`_ - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\n- `adapter-BERT`_ - adapter-BERT: Parameter-Efficient Transfer Learning for NLP\n- `ALBERT`_ - ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations\n- `google-research\u002Fbert`_ - the original `BERT`_ implementation\n- `google-research\u002FALBERT`_ - the original `ALBERT`_ implementation by Google\n- `google-research\u002Falbert(old)`_ - the old location of the original `ALBERT`_ implementation by Google\n- `brightmart\u002Falbert_zh`_ - pre-trained `ALBERT`_ weights for Chinese\n- `kpe\u002Fparams-flow`_ - A Keras coding style for reducing `Keras`_ boilerplate code in custom layers by utilizing `kpe\u002Fpy-params`_\n\n.. _`kpe\u002Fparams-flow`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fparams-flow\n.. _`kpe\u002Fpy-params`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fpy-params\n.. _`bert-for-tf2`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\n\n.. _`Keras`: https:\u002F\u002Fkeras.io\n.. _`pre-trained weights`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert#pre-trained-models\n.. _`google-research\u002Fbert`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\n.. _`google-research\u002Fbert\u002Fmodeling.py`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\u002Fblob\u002Fmaster\u002Fmodeling.py\n.. _`BERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805\n.. _`pre-trained google model`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\n.. _`tests\u002Ftest_bert_activations.py`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Ftest_compare_activations.py\n.. _`TensorFlow 2.0`: https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fr2.0\u002Fapi_docs\u002Fpython\u002Ftf\n.. _`TensorFlow 1.14`: https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fr1.14\u002Fapi_docs\u002Fpython\u002Ftf\n\n.. _`google-research\u002Fadapter-bert`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fadapter-bert\u002F\n.. _`adapter-BERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.00751\n.. _`ALBERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11942\n.. 
_`brightmart\u002Falbert_zh ALBERT for Chinese`: https:\u002F\u002Fgithub.com\u002Fbrightmart\u002Falbert_zh\n.. _`brightmart\u002Falbert_zh`: https:\u002F\u002Fgithub.com\u002Fbrightmart\u002Falbert_zh\n.. _`google ALBERT weights`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Falbert\n.. _`google-research\u002Falbert(old)`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Falbert\n.. _`google-research\u002FALBERT`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002FALBERT\n.. _`TFHub\u002Falbert`: https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Falbert_base\u002F2\n\n.. |Build Status| image:: https:\u002F\u002Ftravis-ci.com\u002Fkpe\u002Fbert-for-tf2.svg?branch=master\n   :target: https:\u002F\u002Ftravis-ci.com\u002Fkpe\u002Fbert-for-tf2\n.. |Coverage Status| image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fkpe\u002Fbert-for-tf2\u002Fbadge.svg?branch=master\n   :target: https:\u002F\u002Fcoveralls.io\u002Fr\u002Fkpe\u002Fbert-for-tf2?branch=master\n.. |Version Status| image:: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-for-tf2.svg\n   :target: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-for-tf2\n.. |Python Versions| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fbert-for-tf2.svg\n.. |Downloads| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fbert-for-tf2.svg\n.. |Twitter| image:: https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fsiddhadev?logo=twitter&label=&style=\n   :target: https:\u002F\u002Ftwitter.com\u002Fintent\u002Fuser?screen_name=siddhadev\n","BERT for TensorFlow v2\n======================\n\n|构建状态| |覆盖率状态| |版本状态| |Python版本| |下载量|\n\n此仓库包含一个基于 `TensorFlow 2.0`_ 和 `Keras`_ 的 `google-research\u002Fbert`_ 实现，支持加载原始的 `预训练权重`_，并生成与原模型计算结果 **数值完全一致** 的激活值。\n\n通过设置相应的配置参数（``shared_layer=True``、`ALBERT`_ 的 `embedding_size` 以及 `adapter-BERT`_ 的 `adapter_size`），还可以支持 `ALBERT`_ 和 `adapter-BERT`_。同时设置这两个参数将得到一个 adapter-ALBERT 模型：在所有层之间共享 BERT 参数，同时为每层使用特定的适配器进行调整。\n\n该实现从零开始构建，仅使用基本的 TensorFlow 操作，遵循 `google-research\u002Fbert\u002Fmodeling.py`_ 中的代码（但去除了冗余代码并进行了一些简化）。此外，它还利用了 `kpe\u002Fparams-flow`_ 来减少 Keras 中常见的样板代码（与传递模型和层的配置参数相关）。\n\n`bert-for-tf2`_ 应当兼容 `TensorFlow 2.0`_ 以及 `TensorFlow 1.14`_ 或更高版本。\n\n新消息\n----\n - **2020年7月30日** - 添加了 `VERBOSE=0` 环境变量，用于抑制标准输出。\n - **2020年4月6日** - 使用最新的 ``py-params`` 引入了 `WithParams` 基类，适用于 `Layer` 和 `Model`。请参阅 `kpe\u002Fpy-params`_ 中的相关更新说明（``_construct()`` 的签名已更改，需要调用 ``super().__construct()``）。\n - **2020年1月6日** - 支持加载来自 `google-research\u002FALBERT` 的 tar 格式权重。\n - **2019年11月18日** - 添加了 ALBERT 分词功能（请确保导入时使用 ``from bert import albert_tokenization`` 或 ``from bert import bert_tokenization``）。\n\n - **2019年11月8日** - 加载 `TFHub\u002Falbert`_ 上的 `google-research\u002FALBERT`_ 权重时，默认使用 v2 版本。\n\n - **2019年11月5日** - 对 ALBERT 词嵌入进行了小幅重构（``word_embeddings_2`` -> ``word_embeddings_projector``），并修复了相关参数冻结的问题。\n\n - **2019年11月4日** - 支持使用负数 token ID 来添加额外的（任务特定）token 嵌入。\n\n - **2019年10月29日** - 支持加载由 `google-research\u002FALBERT`_ 在 `TFHub\u002Falbert`_ 上发布的预训练 ALBERT 权重。\n \n - **2019年10月11日** - 支持加载由 `brightmart\u002Falbert_zh ALBERT for Chinese`_ 发布的预训练 ALBERT 权重。\n \n - **2019年10月10日** - 通过设置 ``shared_layer=True`` 和 ``embedding_size=128`` 参数，支持 `ALBERT`_。\n \n - **2019年9月3日** - 提供了使用 adapter-BERT 进行微调，并将微调后的部分权重保存到单独检查点的教程（参见 ``tests\u002Ftest_adapter_finetune.py``）。\n \n - **2019年9月2日** - 支持扩展预训练模型的 token 类型嵌入，方法是在 ``load_stock_weights()`` 中返回不匹配的权重（参见 
``tests\u002Ftest_extend_segments.py``）。\n \n - **2019年7月25日** - 现在 `examples\u002F` 目录下有两个 Colab 笔记本，展示了如何在 Google Colab 的 GPU 或 TPU 上，使用 `adapter-BERT`_ 模型架构，从预训练的 BERT 权重中微调 IMDB 电影评论情感分类器。\n \n - **2019年6月28日** - v.0.3.0 支持 `adapter-BERT`_（来自 `google-research\u002Fadapter-bert`_），用于“NLP 的参数高效迁移学习”，即在不改变冻结的 BERT 权重的情况下，在 BERT 的 Transformer 编码器上微调小型叠加适配器层。\n\n\n\n许可证\n-------\n\nMIT 许可证。详情请参阅 `许可证文件 \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002FLICENSE.txt>`_。\n\n安装\n-------\n\n`bert-for-tf2` 已发布在 Python 包索引（PyPI）上：\n\n::\n\n    pip install bert-for-tf2\n\n\n使用方法\n-----\n\n`bert-for-tf2` 中的 BERT 被实现为一个 Keras 层。您可以这样实例化它：\n\n.. code:: python\n\n  from bert import BertModelLayer\n\n  l_bert = BertModelLayer(**BertModelLayer.Params(\n    vocab_size               = 16000,        # 嵌入参数\n    use_token_type           = True,\n    use_position_embeddings  = True,\n    token_type_vocab_size    = 2,\n\n    num_layers               = 12,           # Transformer 编码器参数\n    hidden_size              = 768,\n    hidden_dropout           = 0.1,\n    intermediate_size        = 4*768,\n    intermediate_activation  = \"gelu\",\n\n    adapter_size             = None,         # 参见 arXiv:1902.00751 (adapter-BERT)\n\n    shared_layer             = False,        # 对于 ALBERT 为 True（arXiv:1909.11942）\n    embedding_size           = None,         # 对于 BERT 为 None，对于 ALBERT 则为 wordpiece 嵌入大小\n\n    name                     = \"bert\"        # 其他 Keras 层参数\n  ))\n\n或者使用来自 `预训练谷歌模型`_ 的 ``bert_config.json`` 文件：\n\n.. code:: python\n\n  import bert\n\n  model_dir = \".models\u002Funcased_L-12_H-768_A-12\"\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n\n现在您可以在自己的 Keras 模型中使用 BERT 层，如下所示：\n\n.. code:: python\n\n  from tensorflow import keras\n\n  max_seq_len = 128\n  l_input_ids      = keras.layers.Input(shape=(max_seq_len,), dtype='int32')\n  l_token_type_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32')\n\n  # 使用默认的 token_type\u002Fsegment id 0\n  output = l_bert(l_input_ids)                              # 输出：[batch_size, max_seq_len, hidden_size]\n  model = keras.Model(inputs=l_input_ids, outputs=output)\n  model.build(input_shape=(None, max_seq_len))\n\n  # 提供自定义的 token_type\u002Fsegment id 作为层的输入\n  output = l_bert([l_input_ids, l_token_type_ids])          # [batch_size, max_seq_len, hidden_size]\n  model = keras.Model(inputs=[l_input_ids, l_token_type_ids], outputs=output)\n  model.build(input_shape=[(None, max_seq_len), (None, max_seq_len)])\n\n如果您选择使用 `adapter-BERT`_ 并设置了 `adapter_size` 参数，还需要通过以下方式冻结所有原始的 BERT 层：\n\n.. code:: python\n\n  l_bert.apply_adapter_freeze()\n\n一旦模型构建或编译完成，就可以将原始的预训练权重加载到 BERT 层中：\n\n.. code:: python\n\n  import bert\n\n  bert_ckpt_file   = os.path.join(model_dir, \"bert_model.ckpt\")\n  bert.load_stock_weights(l_bert, bert_ckpt_file)\n\n**注意**：完整的示例请参阅 `tests\u002Ftest_bert_activations.py`_。\n\n常见问题解答\n---\n0. 在下面的所有示例中，请务必注意以下这行代码：\n\n.. code:: python\n\n  # 在此处将其用于 Keras 模型，并调用 model.build()\n\n为了快速测试，您可以将其替换为如下代码：\n\n.. code:: python\n\n  model = keras.models.Sequential([\n    keras.layers.InputLayer(input_shape=(128,)),\n    l_bert,\n    keras.layers.Lambda(lambda x: x[:, 0, :]),\n    keras.layers.Dense(2)\n  ])\n  model.build(input_shape=(None, 128))\n\n\n1. 如何使用带有 `google-research\u002Fbert`_ 预训练权重的 BERT？\n\n.. 
code:: python\n\n  model_name = \"uncased_L-12_H-768_A-12\"\n  model_dir = bert.fetch_google_bert_model(model_name, \".models\")\n  model_ckpt = os.path.join(model_dir, \"bert_model.ckpt\")\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n  # 在 Keras 模型中使用此处，并调用 model.build()\n\n  bert.load_bert_weights(l_bert, model_ckpt)      # 应在 model.build() 之后调用\n\n2. 如何使用 `google-research\u002FALBERT` 的预训练权重（从 TFHub 获取）？\n\n请参阅 `tests\u002Fnonci\u002Ftest_load_pretrained_weights.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_load_pretrained_weights.py>`_：\n\n.. code:: python\n\n  model_name = \"albert_base\"\n  model_dir    = bert.fetch_tfhub_albert_model(model_name, \".models\")\n  model_params = bert.albert_params(model_name)\n  l_bert = bert.BertModelLayer.from_params(model_params, name=\"albert\")\n\n  # 在 Keras 模型中使用此处，并调用 model.build()\n\n  bert.load_albert_weights(l_bert, model_dir)      # 应在 model.build() 之后调用\n\n3. 如何使用 `google-research\u002FALBERT` 的预训练权重（非 TFHub）？\n\n请参阅 `tests\u002Fnonci\u002Ftest_load_pretrained_weights.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_load_pretrained_weights.py>`_：\n\n.. code:: python\n\n  model_name = \"albert_base_v2\"\n  model_dir    = bert.fetch_google_albert_model(model_name, \".models\")\n  model_ckpt   = os.path.join(model_dir, \"model.ckpt-best\")\n\n  model_params = bert.albert_params(model_dir)\n  l_bert = bert.BertModelLayer.from_params(model_params, name=\"albert\")\n\n  # 在 Keras 模型中使用此处，并调用 model.build()\n\n  bert.load_albert_weights(l_bert, model_ckpt)      # 应在 model.build() 之后调用\n\n4. 如何使用 `brightmart\u002Falbert_zh` 的预训练权重？\n\n请参阅 `tests\u002Fnonci\u002Ftest_albert.py \u003Chttps:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Fnonci\u002Ftest_albert.py>`_：\n\n.. code:: python\n\n  model_name = \"albert_base\"\n  model_dir = bert.fetch_brightmart_albert_model(model_name, \".models\")\n  model_ckpt = os.path.join(model_dir, \"albert_model.ckpt\")\n\n  bert_params = bert.params_from_pretrained_ckpt(model_dir)\n  l_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n\n  # 在 Keras 模型中使用此处，并调用 model.build()\n\n  bert.load_albert_weights(l_bert, model_ckpt)      # 应在 model.build() 之后调用\n\n5. 如何为 `google-research\u002Fbert` 模型对输入进行分词？\n\n.. code:: python\n\n  do_lower_case = not (model_name.find(\"cased\") == 0 or model_name.find(\"multi_cased\") == 0)\n  bert.bert_tokenization.validate_case_matches_checkpoint(do_lower_case, model_ckpt)\n  vocab_file = os.path.join(model_dir, \"vocab.txt\")\n  tokenizer = bert.bert_tokenization.FullTokenizer(vocab_file, do_lower_case)\n  tokens = tokenizer.tokenize(\"Hello, BERT-World!\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\n6. 如何为 `brightmart\u002Falbert_zh` 对输入进行分词？\n\n.. code:: python\n\n  import params_flow as pf\n\n  # 获取词汇表文件\n  albert_zh_vocab_url = \"https:\u002F\u002Fraw.githubusercontent.com\u002Fbrightmart\u002Falbert_zh\u002Fmaster\u002Falbert_config\u002Fvocab.txt\"\n  vocab_file = pf.utils.fetch_url(albert_zh_vocab_url, model_dir)\n\n  tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file)\n  tokens = tokenizer.tokenize(\"你好世界\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\n7. 如何为 `google-research\u002FALBERT` 模型对输入进行分词？\n\n.. 
code:: python\n\n  import sentencepiece as spm\n\n  spm_model = os.path.join(model_dir, \"assets\", \"30k-clean.model\")\n  sp = spm.SentencePieceProcessor()\n  sp.load(spm_model)\n  do_lower_case = True\n\n  processed_text = bert.albert_tokenization.preprocess_text(\"Hello, World!\", lower=do_lower_case)\n  token_ids = bert.albert_tokenization.encode_ids(sp, processed_text)\n\n8. 如何为中文 `google-research\u002FALBERT` 模型对输入进行分词？\n\n.. code:: python\n\n  import bert\n\n  vocab_file = os.path.join(model_dir, \"vocab.txt\")\n  tokenizer = bert.albert_tokenization.FullTokenizer(vocab_file=vocab_file)\n  tokens = tokenizer.tokenize(u\"你好世界\")\n  token_ids = tokenizer.convert_tokens_to_ids(tokens)\n\n资源\n---------\n\n- `BERT` - BERT：用于语言理解的深度双向变换器预训练\n- `adapter-BERT` - adapter-BERT：面向 NLP 的参数高效迁移学习\n- `ALBERT` - ALBERT：用于语言表示自监督学习的轻量级 BERT\n- `google-research\u002Fbert` - 原始的 `BERT` 实现\n- `google-research\u002FALBERT` - Google 原始的 `ALBERT` 实现\n- `google-research\u002Falbert(old)` - Google 原始 `ALBERT` 实现的旧位置\n- `brightmart\u002Falbert_zh` - 面向中文的预训练 `ALBERT` 权重\n- `kpe\u002Fparams-flow` - 一种 Keras 编码风格，通过利用 `kpe\u002Fpy-params` 减少自定义层中的 `Keras` 繁琐代码\n\n.. _`kpe\u002Fparams-flow`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fparams-flow\n.. _`kpe\u002Fpy-params`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fpy-params\n.. _`bert-for-tf2`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\n\n.. _`Keras`: https:\u002F\u002Fkeras.io\n.. _`pre-trained weights`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert#pre-trained-models\n.. _`google-research\u002Fbert`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\n.. _`google-research\u002Fbert\u002Fmodeling.py`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\u002Fblob\u002Fmaster\u002Fmodeling.py\n.. _`BERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805\n.. _`pre-trained google model`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\n.. _`tests\u002Ftest_bert_activations.py`: https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fblob\u002Fmaster\u002Ftests\u002Ftest_compare_activations.py\n.. _`TensorFlow 2.0`: https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fr2.0\u002Fapi_docs\u002Fpython\u002Ftf\n.. _`TensorFlow 1.14`: https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fr1.14\u002Fapi_docs\u002Fpython\u002Ftf\n\n.. _`google-research\u002Fadapter-bert`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fadapter-bert\u002F\n.. _`adapter-BERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.00751\n.. _`ALBERT`: https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11942\n.. _`brightmart\u002Falbert_zh ALBERT for Chinese`: https:\u002F\u002Fgithub.com\u002Fbrightmart\u002Falbert_zh\n.. _`brightmart\u002Falbert_zh`: https:\u002F\u002Fgithub.com\u002Fbrightmart\u002Falbert_zh\n.. _`google ALBERT weights`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Falbert\n.. _`google-research\u002Falbert(old)`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Falbert\n.. _`google-research\u002FALBERT`: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002FALBERT\n.. _`TFHub\u002Falbert`: https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Falbert_base\u002F2\n\n.. |Build Status| image:: https:\u002F\u002Ftravis-ci.com\u002Fkpe\u002Fbert-for-tf2.svg?branch=master\n   :target: https:\u002F\u002Ftravis-ci.com\u002Fkpe\u002Fbert-for-tf2\n.. 
|Coverage Status| image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fkpe\u002Fbert-for-tf2\u002Fbadge.svg?branch=master\n   :target: https:\u002F\u002Fcoveralls.io\u002Fr\u002Fkpe\u002Fbert-for-tf2?branch=master\n.. |Version Status| image:: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-for-tf2.svg\n   :target: https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-for-tf2\n.. |Python Versions| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fbert-for-tf2.svg\n.. |Downloads| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fbert-for-tf2.svg\n.. |Twitter| image:: https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fsiddhadev?logo=twitter&label=&style=\n   :target: https:\u002F\u002Ftwitter.com\u002Fintent\u002Fuser?screen_name=siddhadev","# bert-for-tf2 快速上手指南\n\n`bert-for-tf2` 是一个基于 TensorFlow 2.0 (Keras) 实现的 BERT 库。它支持加载 Google 官方预训练权重，并能产生与原始模型数值完全一致的激活结果。此外，它还原生支持 ALBERT 和 adapter-BERT 架构。\n\n## 环境准备\n\n*   **Python 版本**：兼容 Python 3.x\n*   **深度学习框架**：\n    *   TensorFlow 2.0+（推荐）\n    *   或 TensorFlow 1.14+\n*   **前置依赖**：安装时会自动处理 `keras`、`params-flow` 等依赖库。\n\n## 安装步骤\n\n通过 PyPI 直接安装：\n\n```bash\npip install bert-for-tf2\n```\n\n> **提示**：如果下载速度较慢，可以使用国内镜像源加速：\n> ```bash\n> pip install bert-for-tf2 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n以下是使用预训练 BERT 模型构建 Keras 层的最简示例。\n\n### 1. 加载预训练配置与模型层\n\n首先，从预训练检查点目录加载参数并初始化 BERT 层。\n\n```python\nimport os\nimport bert\nfrom tensorflow import keras\n\n# 假设你已经下载了预训练模型到 .models\u002Funcased_L-12_H-768_A-12 目录\n# 如果未下载，可使用 bert.fetch_google_bert_model() 自动下载\nmodel_dir = \".models\u002Funcased_L-12_H-768_A-12\"\n\n# 从配置文件加载参数\nbert_params = bert.params_from_pretrained_ckpt(model_dir)\n\n# 创建 BERT Keras 层\nl_bert = bert.BertModelLayer.from_params(bert_params, name=\"bert\")\n```\n\n### 2. 构建 Keras 模型\n\n将 BERT 层嵌入到标准的 Keras 模型中。注意需要先调用 `model.build()` 以初始化权重形状。\n\n```python\nmax_seq_len = 128\n\n# 定义输入层 (input_ids)\nl_input_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32')\n\n# 将输入传入 BERT 层\n# 输出形状: [batch_size, max_seq_len, hidden_size]\noutput = l_bert(l_input_ids)\n\n# 构建模型\nmodel = keras.Model(inputs=l_input_ids, outputs=output)\nmodel.build(input_shape=(None, max_seq_len))\n```\n\n### 3. 加载预训练权重\n\n在模型构建完成后，加载实际的预训练权重文件。\n\n```python\n# 指定权重文件路径\nbert_ckpt_file = os.path.join(model_dir, \"bert_model.ckpt\")\n\n# 加载权重\nbert.load_stock_weights(l_bert, bert_ckpt_file)\n```\n\n### 4. （可选）使用 ALBERT 或 Adapter-BERT\n\n如果需要使用的是 **ALBERT**，只需在加载参数时设置相应配置，并使用专用的加载函数：\n\n```python\n# 示例：加载 ALBERT base 模型\nmodel_name = \"albert_base\"\nmodel_dir = bert.fetch_tfhub_albert_model(model_name, \".models\")\nmodel_params = bert.albert_params(model_name)\n\nl_bert = bert.BertModelLayer.from_params(model_params, name=\"albert\")\n\n# ... 
构建模型代码同上 ...\n\n# 加载 ALBERT 权重\nbert.load_albert_weights(l_bert, model_dir)\n```\n\n如果使用 **Adapter-BERT** 进行参数高效微调，请在加载权重前冻结 BERT 主干网络：\n\n```python\n# 在初始化 BertModelLayer 时设置 adapter_size\n# l_bert = BertModelLayer(..., adapter_size=64, ...)\n\n# 冻结原始 BERT 层，只训练 Adapter 部分\nl_bert.apply_adapter_freeze()\n```","某电商公司的算法团队正致力于优化智能客服系统，需要基于海量历史对话数据微调 BERT 模型，以精准识别用户投诉中的情绪倾向和具体诉求。\n\n### 没有 bert-for-tf2 时\n- **框架迁移成本高**：团队熟悉 TensorFlow 2.0 和 Keras 的简洁 API，但官方 BERT 实现主要基于 TF1，导致代码风格割裂，需编写大量兼容层或重写数据管道。\n- **数值一致性验证难**：自行复现 BERT 结构时，难以确保自定义层与 Google 原始预训练权重的计算结果完全一致，常因细微的数值偏差导致模型效果下降且排查困难。\n- **显存资源浪费**：全量微调大参数量的 BERT 模型对 GPU 显存要求极高，在有限硬件资源下无法尝试更大批次（Batch Size），限制了训练效率和收敛速度。\n- **模型变体支持弱**：若想尝试更轻量的 ALBERT 或参数高效的 Adapter-BERT，需寻找不同的第三方库或手动修改网络结构，增加了实验的复杂度和维护负担。\n\n### 使用 bert-for-tf2 后\n- **原生 Keras 集成**：直接作为 Keras 层嵌入 TF2 模型，无缝利用 `model.fit` 等高级 API，代码量大幅减少，开发流程符合现代深度学习规范。\n- **保证数值精确复现**：加载官方预训练权重后，能产生与原始模型数值完全一致的激活输出，消除了因实现差异带来的不确定性，确保微调起点可靠。\n- **高效参数微调**：通过配置 `adapter_size` 启用 Adapter-BERT，仅微调少量适配器参数而冻结主干网络，显著降低显存占用，使在单卡上快速迭代成为可能。\n- **灵活切换架构**：只需调整 `shared_layer` 和 `embedding_size` 等参数，即可在同一代码框架下轻松切换 BERT、ALBERT 或 Adapter-BERT，极大提升了算法实验效率。\n\nbert-for-tf2 的核心价值在于将复杂的 BERT 系列模型转化为标准化的 Keras 组件，在确保精度的前提下，极大地降低了 TensorFlow 2.0 环境下 NLP 模型的开发门槛与资源消耗。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkpe_bert-for-tf2_1cb62819.png","kpe",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkpe_13f713c9.jpg","Aachen","https:\u002F\u002Fgithub.com\u002Fkpe",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.8,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0.2,808,194,"2026-02-25T06:41:14","MIT",1,"未说明","未说明（README 提及可在 Google Colab 的 GPU 或 TPU 上运行，但未指定具体硬件型号、显存或 CUDA 版本要求）",{"notes":98,"python":99,"dependencies":100},"该库兼容 TensorFlow 2.0 和 TensorFlow 1.14 或更高版本。支持加载原始 BERT、ALBERT 和 adapter-BERT 的预训练权重。若使用 adapter-BERT，需设置 adapter_size 参数并调用 apply_adapter_freeze() 冻结原始 BERT 层。可通过设置环境变量 VERBOSE=0 来抑制标准输出。","未说明（README 徽章显示支持多个 Python 版本，但未在文本中明确指定最低版本要求）",[101,102,103,104,105],"tensorflow>=2.0","keras","py-params","params-flow","sentencepiece",[26,13],[108,102,109,110],"bert","tensorflow","transformer","2026-03-27T02:49:30.150509","2026-04-06T05:36:38.214701",[114,119,124,129,134,139],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},11507,"如何在模型中集成 BERT 的分词（Tokenization）过程？","建议安装 `tensorflow-text` 库 (`pip install tensorflow-text`)，并使用其提供的 `BertTokenizer`。示例代码如下：\n```python\nimport tensorflow_text as text\n\ntokenizer = text.BertTokenizer(os.path.join(ckpt_dir, 'vocab.txt'))\ntok_ids = tokenizer.tokenize([\"hello, cruel world!\", \"abcccccccd\"]).merge_dims(-2,-1).to_tensor(shape=(2, max_seq_len))\n```\n这比尝试在自定义 Keras 层中手动调用 Python 分词器更稳定且兼容 TensorFlow 图模式。","https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fissues\u002F75",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},11508,"如何从 BERT 层获取句子级别的嵌入向量（Sentence Embedding）而不是每个 token 的向量？","BERT 层输出的是序列中每个 token 的向量。要获得单个句子向量，通常有两种方法：\n1. 使用全局平均池化（GlobalAveragePooling1D）或全局最大池化（GlobalMaxPooling1D）。\n2. 
提取 `[CLS]` token 对应的向量（通常通过 Lambda 层切片获取）。\n示例代码：\n```python\nSequential([\n    ...\n    l_bert,\n    tf.keras.layers.GlobalAveragePooling1D(),\n    ...\n])\n```","https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fissues\u002F26",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},11509,"为什么冻结预训练参数（freeze params）不起作用？","在模型构建（build）或编译（compile）之前，无法正确冻结权重。你需要确保在调用 `model.build()` 或 `model.compile()` 之后，再设置 `trainable=False`。或者等待库更新，因为维护者提到可以将内部层的实例化移到构造函数中以解决此问题。简而言之，请在模型构建完成后应用冻结操作。","https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fissues\u002F29",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},11510,"该库是否支持 ALBERT 模型？","是的，最新版本的 `bert-for-tf2` 已经支持加载由 @brightmart 发布的预训练 ALBERT 权重。你可以直接使用现有的接口加载 ALBERT 模型。","https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fissues\u002F7",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},11511,"使用 GlobalAveragePooling1D 时出现 IndexError: list index out of range 错误怎么办？","这通常是因为 BERT 输出的维度与池化层期望的输入不匹配，或者模型尚未完全构建。确保在应用 `GlobalAveragePooling1D` 之前，BERT 层的输出形状是正确的（通常为 [batch_size, seq_len, hidden_size]）。如果问题依旧，请检查是否在模型构建完成后再添加池化层，或参考官方示例中关于序列分类的正确用法。","https:\u002F\u002Fgithub.com\u002Fkpe\u002Fbert-for-tf2\u002Fissues\u002F58",{"id":140,"question_zh":141,"answer_zh":142,"source_url":123},11512,"如何正确使用 [CLS] token 进行分类任务？","除了使用池化层外，更传统的方法是直接使用 `[CLS]` token 的输出表示进行分类。你可以参考仓库中的示例笔记本 `examples\u002Ftpu_movie_reviews.ipynb`，其中展示了如何使用 `[CLS]` 表示以及全局池化进行序列分类。通常通过 Lambda 层提取第一个时间步的输出即可获取 `[CLS]` 向量。",[]]