[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-BrikerMan--Kashgari":3,"tool-BrikerMan--Kashgari":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",155373,2,"2026-04-14T11:34:08",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":32,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":104,"github_topics":107,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":121,"updated_at":122,"faqs":123,"releases":153},7551,"BrikerMan\u002FKashgari","Kashgari","Kashgari is a production-level NLP Transfer learning framework built on top of tf.keras for text-labeling and text-classification, includes Word2Vec, BERT, and GPT2 Language Embedding.","Kashgari 是一个基于 tf.keras 构建的生产级自然语言处理（NLP）迁移学习框架，专为文本分类和序列标注任务（如命名实体识别、词性标注）而设计。它旨在解决开发者在构建高精度 NLP 模型时面临的代码复杂、重复造轮子以及难以快速落地等痛点，让用户仅需几分钟即可搭建出业界领先的模型。\n\n无论是希望快速验证假设的研究人员、想要学习高质量代码的 NLP 初学者，还是急需部署模型的开发工程师，Kashgari 都能提供极大的便利。其代码结构清晰、文档完善且经过充分测试，极具亲和力。\n\n在技术亮点方面，Kashgari 内置了 Word2Vec、BERT 和 GPT-2 
等主流预训练语言模型嵌入，让迁移学习变得异常简单。它不仅支持灵活的实验环境，方便尝试不同的嵌入方式和模型结构，还具备强大的生产就绪能力：支持将模型导出为 TensorFlow Serving 所需的 SavedModel 格式，可直接部署至云端。随着 2.0 版本的发布，Kashgari 已全面支持 TensorFlow 2，继续以简洁高效的特性助力用户轻松应对各类文本处理挑战。","\u003C!-- prettier-ignore-start -->\n\u003C!-- markdownlint-disable -->\n\u003Ch1 align=\"center\">\n    \u003Ca href='https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMahmud_al-Kashgari'>Kashgari\u003C\u002Fa>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBrikerMan\u002Fkashgari\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg alt=\"GitHub\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FBrikerMan\u002Fkashgari.svg?color=blue&style=popout\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkashgari\u002Fshared_invite\u002FenQtODU4OTEzNDExNjUyLTY0MzI4MGFkZmRkY2VmMzdmZjRkZTYxMmMwNjMyOTI1NGE5YzQ2OTZkYzA1YWY0NTkyMDdlZGY5MGI5N2U4YzM\">\n        \u003Cimg alt=\"Slack\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-Slack-blueviolet?logo=Slack&style=popout\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Ftravis-ci.com\u002FBrikerMan\u002FKashgari\">\n        \u003Cimg src=\"https:\u002F\u002Ftravis-ci.com\u002FBrikerMan\u002FKashgari.svg?branch=master\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href='https:\u002F\u002Fcoveralls.io\u002Fgithub\u002FBrikerMan\u002FKashgari?branch=master'>\n        \u003Cimg src='https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002FBrikerMan\u002FKashgari\u002Fbadge.svg?branch=master' alt='Coverage Status'\u002F>\n    \u003C\u002Fa>\n     \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkashgari\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FBrikerMan_Kashgari_readme_447db3d07051.png\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fkashgari\u002F\">\n        \u003Cimg alt=\"PyPI\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fkashgari.svg\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch4 align=\"center\">\n    \u003Ca href=\"#overview\">Overview\u003C\u002Fa> |\n    \u003Ca href=\"#performance\">Performance\u003C\u002Fa> |\n    \u003Ca href=\"#installation\">Installation\u003C\u002Fa> |\n    \u003Ca href=\"https:\u002F\u002Fkashgari.readthedocs.io\u002F\">Documentation\u003C\u002Fa> |\n    \u003Ca href=\"https:\u002F\u002Fkashgari.readthedocs.io\u002Fabout\u002Fcontributing\u002F\">Contributing\u003C\u002Fa>\n\u003C\u002Fh4>\n\n\u003C!-- markdownlint-enable -->\n\u003C!-- prettier-ignore-end -->\n\n🎉🎉🎉 We released the 2.0.0 version with TF2 Support. 🎉🎉🎉\n\nIf you use this project for your research, please cite:\n\n```\n@misc{Kashgari\n  author = {Eliyar Eziz},\n  title = {Kashgari},\n  year = {2019},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari}}\n}\n```\n\n## Overview\n\nKashgari is a simple and powerful NLP Transfer learning framework, build a state-of-art model in 5 minutes for named entity recognition (NER), part-of-speech tagging (PoS), and text classification tasks.\n\n- **Human-friendly**. Kashgari's code is straightforward, well documented and tested, which makes it very easy to understand and modify.\n- **Powerful and simple**. Kashgari allows you to apply state-of-the-art natural language processing (NLP) models to your text, such as named entity recognition (NER), part-of-speech tagging (PoS) and classification.\n- **Built-in transfer learning**. Kashgari built-in pre-trained BERT and Word2vec embedding models, which makes it very simple to transfer learning to train your model.\n- **Fully scalable**. Kashgari provides a simple, fast, and scalable environment for fast experimentation, train your models and experiment with new approaches using different embeddings and model structure.\n- **Production Ready**. 
Kashgari could export model with `SavedModel` format for tensorflow serving, you could directly deploy it on the cloud.\n\n## Our Goal\n\n- **Academic users** Easier experimentation to prove their hypothesis without coding from scratch.\n- **NLP beginners** Learn how to build an NLP project with production level code quality.\n- **NLP developers** Build a production level classification\u002Flabeling model within minutes.\n\n## Performance\n\nWelcome to add performance report.\n\n| Task                       | Language | Dataset                     | Score |\n| -------------------------- | -------- | --------------------------- | ----- |\n| [Named Entity Recognition] | Chinese  | [People's Daily Ner Corpus] | 95.57 |\n| [Text Classification]      | Chinese  | [SMP2018ECDTCorpus]         | 94.57 |\n\n## Installation\n\nThe project is based on Python 3.6+, because it is 2019 and type hinting is cool.\n\n| Backend          | kashgari version                       | desc                  |\n| ---------------- | -------------------------------------- | --------------------- |\n| TensorFlow 2.2+  | `pip install 'kashgari>=2.0.2'`        | TF2.10+ with tf.keras |\n| TensorFlow 1.14+ | `pip install 'kashgari>=1.0.0,\u003C2.0.0'` | TF1.14+ with tf.keras |\n| Keras            | `pip install 'kashgari\u003C1.0.0'`         | keras version         |\n\nYou also need to install `tensorflow_addons` with TensorFlow.\n\n| TensorFlow Version       | tensorflow_addons version               |\n| ------------------------ | --------------------------------------- |\n| TensorFlow 2.1           | `pip install tensorflow_addons==0.9.1`  |\n| TensorFlow 2.2           | `pip install tensorflow_addons==0.11.2` |\n| TensorFlow 2.3, 2.4, 2.5 | `pip install tensorflow_addons==0.13.0` |\n\n## Tutorials\n\nHere is a set of quick tutorials to get you started with the library:\n\n- [Tutorial 1: Text Classification](.\u002Fdocs\u002Ftutorial\u002Ftext-classification.md)\n- [Tutorial 2: Text 
Labeling](.\u002Fdocs\u002Ftutorial\u002Ftext-labeling.md)\n- [Tutorial 3: Seq2Seq](.\u002Fdocs\u002Ftutorial\u002Fseq2seq.md)\n- [Tutorial 4: Language Embedding](.\u002Fdocs\u002Fembeddings\u002Findex.md)\n\nThere are also articles and posts that illustrate how to use Kashgari:\n\n- [基于 Kashgari 2 的短文本分类: 数据分析和预处理](https:\u002F\u002Feliyar.biz\u002Fshort_text_classificaion_with_kashgari_v2_part_1\u002Findex.html)\n- [基于 Kashgari 2 的短文本分类: 训练模型和调优](https:\u002F\u002Feliyar.biz\u002Fnlp\u002Fshort_text_classificaion_with_kashgari_v2_part_2\u002Findex.html)\n- [基于 Kashgari 2 的短文本分类: 模型部署](https:\u002F\u002Feliyar.biz\u002Fnlp\u002Fshort_text_classificaion_with_kashgari_v2_part_3\u002Findex.html)\n- [15 分钟搭建中文文本分类模型](https:\u002F\u002Feliyar.biz\u002Fnlp_chinese_text_classification_in_15mins\u002F)\n- [基于 BERT 的中文命名实体识别（NER)](https:\u002F\u002Feliyar.biz\u002Fnlp_chinese_bert_ner\u002F)\n- [BERT\u002FERNIE 文本分类和部署](https:\u002F\u002Feliyar.biz\u002Fnlp_train_and_deploy_bert_text_classification\u002F)\n- [五分钟搭建一个基于BERT的NER模型](https:\u002F\u002Fwww.jianshu.com\u002Fp\u002F1d6689851622)\n- [Multi-Class Text Classification with Kashgari in 15 minutes](https:\u002F\u002Fmedium.com\u002F@BrikerMan\u002Fmulti-class-text-classification-with-kashgari-in-15mins-c3e744ce971d)\n\nExamples:\n\n- [Neural machine translation with Seq2Seq](.\u002Fexamples\u002Ftranslate_with_seq2seq.ipynb)\n\n## Contributors ✨\n\nThanks goes to these wonderful people. 
And there are many ways to get involved.\nStart with the [contributor guidelines](.\u002Fdocs\u002Fabout\u002Fcontributing.md) and then check these open issues for specific tasks.\n\n[Named Entity Recognition]: \u002Ftutorial\u002Ftext-labeling\u002F#chinese-ner-performance\n[People's Daily Ner Corpus]: \u002Fapis\u002Fcorpus\u002F#kashgari.corpus.ChineseDailyNerCorpus\n[Text Classification]: \u002Ftutorial\u002Ftext-classification\u002F#short-sentence-classification-performance\n[SMP2018ECDTCorpus]: \u002Fapis\u002Fcorpus\u002F#kashgari.corpus.SMP2018ECDTCorpus\n\n","\u003C!-- prettier-ignore-start -->\n\u003C!-- markdownlint-disable -->\n\u003Ch1 align=\"center\">\n    \u003Ca href='https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMahmud_al-Kashgari'>喀什噶里\u003C\u002Fa>\n\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FBrikerMan\u002Fkashgari\u002Fblob\u002Fmaster\u002FLICENSE\">\n        \u003Cimg alt=\"GitHub\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FBrikerMan\u002Fkashgari.svg?color=blue&style=popout\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Fkashgari\u002Fshared_invite\u002FenQtODU4OTEzNDExNjUyLTY0MzI4MGFkZmRkY2VmMzdmZjRkZTYxMmMwNjMyOTI1NGE5YzQ2OTZkYzA1YWY0NTkyMDdlZGY5MGI5N2U4YzM\">\n        \u003Cimg alt=\"Slack\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fchat-Slack-blueviolet?logo=Slack&style=popout\">\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Ftravis-ci.com\u002FBrikerMan\u002FKashgari\">\n        \u003Cimg src=\"https:\u002F\u002Ftravis-ci.com\u002FBrikerMan\u002FKashgari.svg?branch=master\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href='https:\u002F\u002Fcoveralls.io\u002Fgithub\u002FBrikerMan\u002FKashgari?branch=master'>\n        \u003Cimg src='https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002FBrikerMan\u002FKashgari\u002Fbadge.svg?branch=master' alt='Coverage Status'\u002F>\n    
\u003C\u002Fa>\n     \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Fkashgari\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FBrikerMan_Kashgari_readme_447db3d07051.png\"\u002F>\n    \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fkashgari\u002F\">\n        \u003Cimg alt=\"PyPI\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fkashgari.svg\">\n    \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch4 align=\"center\">\n    \u003Ca href=\"#overview\">概述\u003C\u002Fa> |\n    \u003Ca href=\"#performance\">性能\u003C\u002Fa> |\n    \u003Ca href=\"#installation\">安装\u003C\u002Fa> |\n    \u003Ca href=\"https:\u002F\u002Fkashgari.readthedocs.io\u002F\">文档\u003C\u002Fa> |\n    \u003Ca href=\"https:\u002F\u002Fkashgari.readthedocs.io\u002Fabout\u002Fcontributing\u002F\">贡献\u003C\u002Fa>\n\u003C\u002Fh4>\n\n\u003C!-- markdownlint-enable -->\n\u003C!-- prettier-ignore-end -->\n\n🎉🎉🎉 我们发布了支持 TF2 的 2.0.0 版本。🎉🎉🎉\n\n如果您在研究中使用了本项目，请引用以下文献：\n\n```\n@misc{Kashgari\n  author = {Eliyar Eziz},\n  title = {Kashgari},\n  year = {2019},\n  publisher = {GitHub},\n  journal = {GitHub 仓库},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari}}\n}\n```\n\n## 概述\n\nKashgari 是一个简单而强大的 NLP 迁移学习框架，能够在 5 分钟内为命名实体识别 (NER)、词性标注 (PoS) 和文本分类任务构建最先进的模型。\n\n- **易于上手**。Kashgari 的代码简洁明了，文档齐全且经过充分测试，因此非常容易理解和修改。\n- **强大且简单**。Kashgari 允许您将最先进的自然语言处理 (NLP) 模型应用于您的文本数据，例如命名实体识别 (NER)、词性标注 (PoS) 和分类任务。\n- **内置迁移学习**。Kashgari 内置了预训练的 BERT 和 Word2vec 嵌入模型，使得迁移学习和训练自定义模型变得非常简单。\n- **完全可扩展**。Kashgari 提供了一个简单、快速且可扩展的环境，方便您进行快速实验、训练模型以及尝试不同的嵌入方式和模型结构。\n- **生产就绪**。Kashgari 可以将模型导出为 TensorFlow Serving 所需的 `SavedModel` 格式，您可以直接将其部署到云端。\n\n## 我们的目标\n\n- **学术用户**：无需从头编写代码即可轻松验证假设。\n- **NLP 初学者**：学习如何以生产级代码质量构建 NLP 项目。\n- **NLP 开发人员**：在几分钟内构建生产级别的分类\u002F标注模型。\n\n## 性能\n\n欢迎添加性能报告。\n\n| 任务                       | 语言 | 数据集                     | 分数 |\n| -------------------------- | -------- | 
--------------------------- | ----- |\n| [命名实体识别] | 中文  | [人民日报 NER 语料库] | 95.57 |\n| [文本分类]      | 中文  | [SMP2018ECDTCorpus]         | 94.57 |\n\n## 安装\n\n该项目基于 Python 3.6+，因为现在是 2019 年，类型提示很酷。\n\n| 后端          | kashgari 版本                       | 描述                  |\n| ---------------- | -------------------------------------- | --------------------- |\n| TensorFlow 2.2+  | `pip install 'kashgari>=2.0.2'`        | TF2.10+ 使用 tf.keras |\n| TensorFlow 1.14+ | `pip install 'kashgari>=1.0.0,\u003C2.0.0'` | TF1.14+ 使用 tf.keras |\n| Keras            | `pip install 'kashgari\u003C1.0.0'`         | keras 版本         |\n\n您还需要为 TensorFlow 安装 `tensorflow_addons`。\n\n| TensorFlow 版本       | tensorflow_addons 版本               |\n| ------------------------ | --------------------------------------- |\n| TensorFlow 2.1           | `pip install tensorflow_addons==0.9.1`  |\n| TensorFlow 2.2           | `pip install tensorflow_addons==0.11.2` |\n| TensorFlow 2.3, 2.4, 2.5 | `pip install tensorflow_addons==0.13.0` |\n\n## 教程\n\n这里有一系列快速教程，帮助您入门该库：\n\n- [教程 1：文本分类](.\u002Fdocs\u002Ftutorial\u002Ftext-classification.md)\n- [教程 2：文本标注](.\u002Fdocs\u002Ftutorial\u002Ftext-labeling.md)\n- [教程 3：Seq2Seq](.\u002Fdocs\u002Ftutorial\u002Fseq2seq.md)\n- [教程 4：语言嵌入](.\u002Fdocs\u002Fembeddings\u002Findex.md)\n\n此外，还有一些文章和帖子介绍了如何使用 Kashgari：\n\n- [基于 Kashgari 2 的短文本分类：数据分析和预处理](https:\u002F\u002Feliyar.biz\u002Fshort_text_classificaion_with_kashgari_v2_part_1\u002Findex.html)\n- [基于 Kashgari 2 的短文本分类：训练模型和调优](https:\u002F\u002Feliyar.biz\u002Fnlp\u002Fshort_text_classificaion_with_kashgari_v2_part_2\u002Findex.html)\n- [基于 Kashgari 2 的短文本分类：模型部署](https:\u002F\u002Feliyar.biz\u002Fnlp\u002Fshort_text_classificaion_with_kashgari_v2_part_3\u002Findex.html)\n- [15 分钟搭建中文文本分类模型](https:\u002F\u002Feliyar.biz\u002Fnlp_chinese_text_classification_in_15mins\u002F)\n- [基于 BERT 的中文命名实体识别（NER)](https:\u002F\u002Feliyar.biz\u002Fnlp_chinese_bert_ner\u002F)\n- [BERT\u002FERNIE 
文本分类和部署](https:\u002F\u002Feliyar.biz\u002Fnlp_train_and_deploy_bert_text_classification\u002F)\n- [五分钟搭建一个基于BERT的NER模型](https:\u002F\u002Fwww.jianshu.com\u002Fp\u002F1d6689851622)\n- [使用 Kashgari 在15分钟内完成多分类文本分类](https:\u002F\u002Fmedium.com\u002F@BrikerMan\u002Fmulti-class-text-classification-with-kashgari-in-15mins-c3e744ce971d)\n\n示例：\n\n- [使用 Seq2Seq 进行神经机器翻译](.\u002Fexamples\u002Ftranslate_with_seq2seq.ipynb)\n\n## 贡献者 ✨\n\n感谢这些杰出的人士。参与的方式有很多。请先阅读[贡献指南](.\u002Fdocs\u002Fabout\u002Fcontributing.md)，然后查看这些开放的问题以获取具体的任务。\n\n[命名实体识别]: \u002Ftutorial\u002Ftext-labeling\u002F#chinese-ner-performance\n[人民日报 NER 语料库]: \u002Fapis\u002Fcorpus\u002F#kashgari.corpus.ChineseDailyNerCorpus\n[文本分类]: \u002Ftutorial\u002Ftext-classification\u002F#short-sentence-classification-performance\n[SMP2018ECDTCorpus]: \u002Fapis\u002Fcorpus\u002F#kashgari.corpus.SMP2018ECDTCorpus","# Kashgari 快速上手指南\n\nKashgari 是一个简单且强大的 NLP 迁移学习框架，旨在帮助开发者在 5 分钟内构建用于命名实体识别（NER）、词性标注（PoS）和文本分类任务的先进模型。它内置了 BERT 和 Word2vec 等预训练嵌入模型，支持 TensorFlow 2.x，并可直接导出为 `SavedModel` 格式用于生产部署。\n\n## 环境准备\n\n- **操作系统**：Linux, macOS, Windows\n- **Python 版本**：3.6 或更高版本\n- **后端依赖**：\n  - 推荐使用 **TensorFlow 2.2+** (配合 `tf.keras`)\n  - 需额外安装 `tensorflow_addons`，版本需与 TensorFlow 严格对应\n\n**版本对应关系表：**\n\n| TensorFlow 版本 | tensorflow_addons 版本 | Kashgari 版本要求 |\n| :--- | :--- | :--- |\n| TensorFlow 2.1 | `tensorflow_addons==0.9.1` | `>=2.0.2` |\n| TensorFlow 2.2 | `tensorflow_addons==0.11.2` | `>=2.0.2` |\n| TensorFlow 2.3 - 2.5 | `tensorflow_addons==0.13.0` | `>=2.0.2` |\n\n> **提示**：国内用户建议使用清华源或阿里源加速 pip 安装。\n\n## 安装步骤\n\n请根据你的 TensorFlow 版本选择对应的安装命令。以下以 **TensorFlow 2.2+** 为例：\n\n1. **安装 TensorFlow 和 addons**\n   ```bash\n   pip install tensorflow==2.2.0 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   pip install tensorflow_addons==0.11.2 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n2. 
**安装 Kashgari**\n   ```bash\n   pip install 'kashgari>=2.0.2' -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n*注：若使用其他 TF 版本，请参照上方表格调整 `tensorflow` 和 `tensorflow_addons` 的版本号。*\n\n## 基本使用\n\n以下是一个最简单的**文本分类**示例，展示如何在几分钟内完成数据加载、模型构建、训练和评估。\n\n```python\nimport kashgari\nfrom kashgari.corpus import SMP2018ECDTCorpus\nfrom kashgari.tasks.classification import BiLSTM_Model\n\n# 1. 准备数据 (使用内置的中文短文本分类数据集)\ntrain_x, train_y = SMP2018ECDTCorpus.load_data('train')\nvalid_x, valid_y = SMP2018ECDTCorpus.load_data('valid')\n\n# 2. 构建模型 (使用 BiLSTM 架构，自动处理嵌入层)\nmodel = BiLSTM_Model()\n\n# 3. 训练模型\nmodel.fit(train_x, train_y, valid_x, valid_y)\n\n# 4. 评估模型\nmodel.evaluate(valid_x, valid_y)\n\n# 5. 预测新数据\nsample_data = [\"这书真不错，推荐购买\", \"质量太差了，非常失望\"]\npredictions = model.predict(sample_data)\nprint(predictions)\n\n# 6. 保存模型 (可用于 TensorFlow Serving 部署)\nmodel.save('my_text_classification_model')\n```\n\n**核心特点说明：**\n- **开箱即用**：`SMP2018ECDTCorpus` 是内置数据集，实际使用时可替换为自己的列表数据。\n- **自动迁移学习**：模型默认支持加载预训练权重（如 BERT），只需在初始化时指定 `embedding_model='bert'` 即可。\n- **生产就绪**：通过 `model.save()` 导出的模型可直接部署到云端或本地服务。","某电商公司的数据团队需要快速构建一个中文评论分析系统，以自动识别用户反馈中的产品实体（如“电池”、“屏幕”）并判断其情感倾向。\n\n### 没有 Kashgari 时\n- **开发周期漫长**：工程师需从零搭建 TensorFlow 模型架构，手动处理 Word2Vec 或 BERT 的复杂嵌入逻辑，仅原型验证就耗时数周。\n- **技术门槛过高**：团队成员若缺乏深厚的 NLP 算法背景，难以复现学术界最新的转移学习成果，代码调试困难重重。\n- **部署流程繁琐**：训练好的模型格式不统一，转换为生产环境所需的 SavedModel 格式往往需要额外编写大量适配代码。\n- **实验迭代缓慢**：尝试更换不同的预训练模型（如从 BERT 切换到 GPT2）涉及到底层代码的大规模重构，试错成本极高。\n\n### 使用 Kashgari 后\n- **分钟级建模**：利用 Kashgari 内置的迁移学习框架，只需几行代码即可调用预训练的 BERT 模型，5 分钟内完成命名实体识别模型的构建与训练。\n- **开箱即用**：直接集成高质量的中文预训练词向量，无需关心底层嵌入细节，让初级开发者也能轻松上手构建生产级应用。\n- **无缝部署**：Kashgari 支持一键导出标准的 SavedModel 格式，可直接对接 TensorFlow Serving 进行云端部署，大幅简化上线流程。\n- **灵活实验**：通过简单的参数配置即可切换不同的嵌入模型和网络结构，团队能迅速对比多种方案效果，加速算法迭代。\n\nKashgari 将复杂的 NLP 转移学习过程封装为简洁易用的接口，让企业能以最低的成本和最快的速度落地高精度的文本分析服务。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FBrikerMan_Kashgari_90f03526.png","BrikerMan","Eliyar 
Eziz","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FBrikerMan_39593982.png","Google Developer Experts @ ML. \r\nLove NLP, Love Python.\r\nAuthor of《TensorFlow 2 实战》","Yodo1, Ltd.","Beijing","eliyar917@gmail.com",null,"https:\u002F\u002Feliyar.biz","https:\u002F\u002Fgithub.com\u002FBrikerMan",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",99.4,{"name":88,"color":89,"percentage":90},"Shell","#89e051",0.6,2386,432,"2026-04-02T08:38:22","Apache-2.0","未说明","未说明 (取决于所选后端 TensorFlow 及具体模型，如 BERT)",{"notes":98,"python":99,"dependencies":100},"该工具支持 TensorFlow 2.x (kashgari>=2.0.2)、TensorFlow 1.14+ (1.0.0\u003C=kashgari\u003C2.0.0) 或纯 Keras (kashgari\u003C1.0.0) 作为后端。内置支持预训练的 BERT 和 Word2vec 嵌入模型。若使用 TensorFlow 2.10+，需确保使用 tf.keras。","3.6+",[101,102,103],"tensorflow>=2.2 (推荐版本 2.2-2.5)","tensorflow_addons (版本需与 TF 匹配：TF2.1->0.9.1, TF2.2->0.11.2, TF2.3-2.5->0.13.0)","tf.keras (随 TensorFlow 安装)",[35,15,105,14,106],"视频","音频",[108,109,110,111,112,113,114,115,116,117,118,119,120],"nlp","sequence-labeling","text-classification","bert-model","ner","machine-learning","nlp-framework","named-entity-recognition","gpt-2","transfer-learning","seq2seq","bert","text-labeling","2026-03-27T02:49:30.150509","2026-04-15T06:50:33.639201",[124,129,134,139,144,148],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},33852,"为什么使用 model.predict 预测的结果中包含了 [BOS] 和 [EOS] 标记？","在 0.2.0 版本及以后，默认配置已移除输出中的 `BOS` 和 `EOS` 标记。如果您仍看到这些标记，可能是版本较旧或配置被修改。输入序列中依然包含 `BOS` 和 `EOS` 作为模型信息，但输出标签默认为 'O'。如果需要显式控制，可以在构建模型前调用以下代码：\n```python\nimport kashgari\n# 确保输出不包含 BOS\u002FEOS (默认行为)\n# 如果需要强制给 label 加上 BOS\u002FEOS，可设置为 True\nkashgari.config.sequence_labeling_tokenize_add_bos_eos = False\n```","https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F31",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},33853,"如何在 CPU 环境部署在 GPU 上训练的模型时避免 'CudnnRNN' 报错？","该错误是因为 GPU 训练时使用了 CuDNN 优化的 RNN 层，而 CPU 环境无法支持该算子。维护者已在 0.5.3 版本中修复了此问题。\n解决方案：\n1. 
升级 Kashgari 到最新版本（至少 0.5.3）：\n```bash\npip install -U kashgari-tf\n```\n2. 重新训练并保存模型。更新后，库会自动处理兼容性，无需更改代码逻辑，即可在 CPU 服务器上正常加载和运行模型。","https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F198",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},33854,"如何使用 BERT Embedding 时解决 'tuple' object has no attribute 'layer' 错误？","此错误通常发生在 Google Colab 等环境中，原因是 `bert-embedding` 库返回的对象结构与预期不符（返回了 tuple 而非对象）。\n建议检查 `bert-embedding` 的版本兼容性，或者尝试重新安装依赖。如果问题依旧，请确保使用的是与 Kashgari 版本匹配的 `bert-embedding` 版本。在某些情况下，重启 Notebook 内核（Kernel）并重新安装最新版的 Kashgari 也能解决此类属性访问错误：\n```bash\n!pip uninstall -y kashgari-tf bert-embedding\n!pip install kashgari-tf bert-embedding\n```","https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F152",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},33855,"tf.keras 版本训练时出现验证集损失不下降（过拟合）而旧版本正常，如何解决？","这是 tf.keras 后端在处理生成器数据时的一个已知问题。维护者已修复了 `fit_generator` 的相关逻辑。\n解决方案：\n1. 升级到包含修复的最新版本。\n2. 现在可以将 `fit_with_generator` 作为默认方法来节省内存并解决验证集指标异常的问题。确保在训练时使用正确的数据加载方式，避免直接使用旧的 fit 接口导致的数据流问题。","https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F96",{"id":145,"question_zh":146,"answer_zh":147,"source_url":128},33856,"如何启用 cuDNN 层以加快模型训练速度？","在构建模型之前，可以通过设置全局配置来启用 cuDNN 优化的细胞单元，从而稍微提高训练速度。代码如下：\n```python\nimport kashgari\nkashgari.config.use_CuDNN_cell = True\n```\n注意：这需要您的环境支持 CUDA 和 cuDNN。设置完成后，再初始化并训练您的模型即可生效。",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},33857,"项目计划迁移到 tf.keras 吗？会有什么新功能？","是的，项目计划从原生 Keras 迁移到 `tf.keras`。这次迁移旨在提供更好的性能、更好的服务支持以及添加 TPU 支持。计划新增的功能包括：\n1. 多 GPU\u002FTPU 支持；\n2. 导出模型以支持 TensorFlow Serving；\n3. 
为 W2V 和 BERT 模型增加微调（Fine-tune）能力。\n迁移可能涉及代码重构和文档完善，建议关注官方发布的最新版本以获取这些特性。","https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F77",[154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249],{"id":155,"version":156,"summary_zh":157,"released_at":158},263689,"v2.0.2","- 🐛 修复了自定义模型加载问题。\n- 🐛 修复了 Windows 系统下的模型保存问题。\n- 🐛 修复了多标签模型加载问题。\n- 🐛 修复了 CRF 模型加载问题。\n- 🐛 修复了对 TensorFlow 2.3 及以上版本的支持问题。","2021-07-04T10:44:36",{"id":160,"version":161,"summary_zh":162,"released_at":163},263690,"v2.0.1","- ✨ 添加用于 TensorFlow Serving 场景的 `convert_to_saved_model` API。\n- ✨ 添加 TensorFlow Serving 相关文档。","2020-10-30T12:37:01",{"id":165,"version":166,"summary_zh":167,"released_at":168},263691,"v2.0.0","这是一个使用 TF2 完全重写的版本。\n\n- ✨ 嵌入\n- ✨ 文本分类任务\n- ✨ 文本标注任务\n- ✨ 序列到序列任务\n- ✨ 示例\n    - ✨ 基于序列到序列的神经机器翻译\n    - ✨ 基准测试","2020-09-10T10:28:41",{"id":170,"version":171,"summary_zh":172,"released_at":173},263692,"v1.1.5","- 🐛 修复 Transformer 嵌入加载自定义对象时的错误。（[#358]）","2020-04-25T05:05:59",{"id":175,"version":176,"summary_zh":177,"released_at":178},263693,"v1.1.4","- 🐛 修复 BERT 嵌入 v2 的错误，将默认设置为不可训练。","2020-03-30T10:56:10",{"id":180,"version":181,"summary_zh":182,"released_at":183},263694,"v1.1.3","- 🐛 修复 vocab_path 拼写错误。","2020-03-29T01:12:34",{"id":185,"version":186,"summary_zh":187,"released_at":188},263695,"v1.1.2","- ✨ 添加保存最佳模型的回调 `KashgariModelCheckpoint`。\n- ⬆️ 将 `bert4keras` 版本升级至 `0.6.5`。","2020-03-27T03:43:11",{"id":190,"version":191,"summary_zh":192,"released_at":193},263696,"v1.1.1","- ✨ 添加 BERTEmbeddingV2。\n- 💥 将文档迁移到 https:\u002F\u002Freadthedocs.org 进行版本控制。","2020-03-13T03:27:20",{"id":195,"version":196,"summary_zh":197,"released_at":198},263697,"v1.1.0","- ✨ 添加评分任务。([#303])\n- ✨ 添加分词器。\n- 🐛 修复多标签分类模型加载问题。#304","2019-12-27T09:06:13",{"id":200,"version":201,"summary_zh":202,"released_at":203},263698,"v1.0.0","很遗憾，为了保持一致性和清晰性，我们再次进行了重命名。以下是新的命名规范。\n\n| 后端          | PyPI 版本   | 描述           |\n| -------------- | -------------- | -------------- |\n| TensorFlow 
2.x | kashgari 2.x.x | 即将推出    |\n| TensorFlow 1.14+ | kashgari 1.x.x | 当前版本 |\n| Keras            | kashgari 0.x.x | 遗留版本 |\n\n如果您正在使用 `kashgari-tf` 版本，只需运行以下命令即可安装新版本：\n\n```bash\npip uninstall -y kashgari-tf\npip install kashgari\n```\n\n现有版本的变化如下：\n\n| 支持的后端 | Kashgari 版本 | Kashgari-tf 版本 |\n| ----------- | -------------- | ----------------- |\n| TensorFlow 2.x    | kashgari 2.x.x    | -                   |\n| TensorFlow 1.14+  | kashgari 1.0.1    | -                   |\n| TensorFlow 1.14+  | kashgari 1.0.0    | 0.5.5               |\n| TensorFlow 1.14+  | -                 | 0.5.4               |\n| TensorFlow 1.14+  | -                 | 0.5.3               |\n| TensorFlow 1.14+  | -                 | 0.5.2               |\n| TensorFlow 1.14+  | -                 | 0.5.1               |\n| Keras（遗留）    | kashgari 0.2.6    | -                   |\n| Keras（遗留）    | kashgari 0.2.5    | -                   |\n| Keras（遗留）    | kashgari 0.x.x    | -                   |\n\n- 💥 将 PyPI 包名重命名为 `kashgari`。\n- ✨ 支持自定义平均类型，并将日志记录为数组，便于访问最后一个 epoch 的结果。\n- ✨ 在 `base_processor` 中添加了 `min_count` 参数。\n- ✨ 添加了 `disable_auto_summary` 配置。","2019-10-18T08:09:11",{"id":205,"version":206,"summary_zh":207,"released_at":208},263699,"v0.5.4","- ✨ Add shuffle parameter to fit function (#249 )\r\n- ✨ Improved type hinting for the loaded model (#248)\r\n- 🐛 Fix loading models with CRF layers (#244, #228)\r\n- 🐛 Fix the configuration changes during embedding save\u002Fload (#224)\r\n- 🐛 Fix stacked embedding save\u002Fload (#224)\r\n- 🐛 Fix evaluate function where the list has int instead of str (#222)\r\n- 💥 Renaming `model.pre_processor` to `model.processor`\r\n- 🚨 Removing TensorFlow and numpy warnings\r\n- 📝 Add docs how to specify which CPU or GPU\r\n- 📝 Add docs how to compile model with custom optimizer\r\n","2019-09-30T14:21:45",{"id":210,"version":211,"summary_zh":212,"released_at":213},263700,"v0.5.3","- 🐛 Fixing CuDNN Error 
(#198)","2019-08-11T15:39:13",{"id":215,"version":216,"summary_zh":217,"released_at":218},263701,"v0.5.2","- 💥 Add CuDNN Cell config, disable auto CuDNN cell. (#182, #198)","2019-08-10T06:25:21",{"id":220,"version":221,"summary_zh":222,"released_at":223},263702,"v0.5.1","- 📝 Rewrite documents with mkdocs\r\n- 📝 Add Chinese documents\r\n- ✨ Add `predict_top_k_class` for classification model to get predict probabilities ([#146](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F146))\r\n- 🚸 Add `label2idx`, `token2idx` properties to Embeddings and Models\r\n- 🚸 Add `tokenizer` property for BERT Embedding. ([#136](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F136))\r\n- 🚸 Add `predict_kwargs` for models `predict()` function\r\n- ⚡️ Change multi-label classification's default loss function to binary_crossentropy ([#151](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F151))","2019-07-15T11:00:35",{"id":225,"version":226,"summary_zh":227,"released_at":228},263703,"v0.2.6","- 📝 Add tf.keras version info\r\n- 🐛 Fixing lstm issue in labeling model ([#125](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F125))\r\n\r\n[Code Compare](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fcompare\u002Fv0.2.4...v0.2.5)","2019-07-12T10:07:47",{"id":230,"version":231,"summary_zh":232,"released_at":233},263704,"v0.5.0","🎉🎉 tf.keras version 🎉🎉\r\n\r\n- 🎉 Rewrite Kashgari using `tf.keras`. Discussion: [#77](https:\u002F\u002Fgithub.com\u002FBrikerMan\u002FKashgari\u002Fissues\u002F77)\r\n- 🎉 Rewrite Documents.\r\n- ✨ Add TPU support.\r\n- ✨ Add TF-Serving support.\r\n- ✨ Add advance customization support, like multi-input model.\r\n- 🐎 Performance optimization.","2019-07-11T10:24:31",{"id":235,"version":236,"summary_zh":237,"released_at":238},263705,"v0.2.4","* Add BERT output feature layer finetune support. 
Discussion: #103\r\n* Add BERT output feature layer number selection, default 4 according to BERT paper.\r\n* Fix BERT embedding token index offset issue #104.","2019-06-06T09:55:27",{"id":240,"version":241,"summary_zh":242,"released_at":243},263706,"v0.2.1","* fix missing `sequence_labeling_tokenize_add_bos_eos` config","2019-03-05T09:01:20",{"id":245,"version":246,"summary_zh":247,"released_at":248},263707,"v0.2.0","* multi-label classification for all classification models\r\n* support cuDNN cell for sequence labeling\r\n* add option for output `BOS` and `EOS` in sequence labeling result, fix #31 ","2019-03-05T08:54:57",{"id":250,"version":251,"summary_zh":252,"released_at":253},263708,"v0.1.9","* add `AVCNNModel`, `KMaxCNNModel`, `RCNNModel`, `AVRNNModel`, `DropoutBGRUModel`, `DropoutAVRNNModel` model to classification task.\r\n* fix several small bugs","2019-02-28T03:15:04"]