[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-amaiya--ktrain":3,"tool-amaiya--ktrain":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":96,"env_os":97,"env_gpu":97,"env_ram":97,"env_deps":98,"category_tags":112,"github_topics":113,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":153},1113,"amaiya\u002Fktrain","ktrain","ktrain is a Python library that makes deep learning and AI more accessible and easier to apply","ktrain 是一个基于 Python 的深度学习工具库，致力于让 AI 技术更易用。它通过封装 TensorFlow Keras 等底层框架，为开发者提供了一套简洁高效的接口，可快速完成神经网络的构建、训练与部署。对于常见的文本、图像、表格等数据任务，ktrain 内置了多种预训练模型（如 BERT、DistilBERT、fastText 等），用户只需少量代码即可实现文本分类、命名实体识别、文档相似度分析等复杂功能。\n\n传统深度学习流程中，数据预处理、模型调参和部署往往需要大量技术积累。ktrain 通过模块化设计和自动化工具链，显著降低了使用门槛。例如其内置的「预测器」可一键调用模型，而「学习率查找器」等实用工具则能辅助超参数优化。对于需要生成式问答等高级功能的用户，还可通过配套的 OnPrem.LLM 工具包实现。\n\n该工具适合希望快速验证 AI 方案的开发者、需要高效实验的研究人员，以及希望将深度学习融入业务的数据科学家。其核心优势在于：1）多模态支持，覆盖文本、视觉、图网络等场景；2）开箱即用的预训练模型库；3）与前沿技术（如 LDA 主题建模、BiLSTM+CRF 序列标","ktrain 是一个基于 Python 的深度学习工具库，致力于让 AI 技术更易用。它通过封装 TensorFlow Keras 等底层框架，为开发者提供了一套简洁高效的接口，可快速完成神经网络的构建、训练与部署。对于常见的文本、图像、表格等数据任务，ktrain 内置了多种预训练模型（如 
BERT、DistilBERT、fastText 等），用户只需少量代码即可实现文本分类、命名实体识别、文档相似度分析等复杂功能。\n\n传统深度学习流程中，数据预处理、模型调参和部署往往需要大量技术积累。ktrain 通过模块化设计和自动化工具链，显著降低了使用门槛。例如其内置的「预测器」可一键调用模型，而「学习率查找器」等实用工具则能辅助超参数优化。对于需要生成式问答等高级功能的用户，还可通过配套的 OnPrem.LLM 工具包实现。\n\n该工具适合希望快速验证 AI 方案的开发者、需要高效实验的研究人员，以及希望将深度学习融入业务的数据科学家。其核心优势在于：1）多模态支持，覆盖文本、视觉、图网络等场景；2）开箱即用的预训练模型库；3）与前沿技术（如 LDA 主题建模、BiLSTM+CRF 序列标注）的紧密集成。用户可通过 pip 安装，结合官方示例和 API 文档快速上手。","### [Overview](#overview) | [Tutorials](#tutorials) | [Examples](#examples) |  [Installation](#installation) | [FAQ](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002FFAQ.md) | [API Docs](https:\u002F\u002Famaiya.github.io\u002Fktrain\u002Findex.html) |  [How to Cite](#how-to-cite)\n[![PyPI Status](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fktrain.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fktrain) [![ktrain python compatibility](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fktrain.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fktrain) [![license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002FLICENSE) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Famaiya_ktrain_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fktrain)\n\u003C!--[![Twitter URL](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttps\u002Ftwitter.com\u002Fktrain_ai.svg?style=social&label=Follow%20%40ktrain_ai)](https:\u002F\u002Ftwitter.com\u002Fktrain_ai)-->\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Famaiya_ktrain_readme_1ce1eedba13b.png\" width=\"200\"\u002F>\n\u003C\u002Fp>\n\n# Welcome to ktrain\n> a \"Swiss Army knife\" for machine learning\n\n\n\n### News and Announcements\n- **2024-02-20**\n  - **ktrain 0.41.x** is released and removes the `ktrain.text.qa.generative_qa` module.  
Our [OnPrem.LLM](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fonprem) package should be used for Generative Question-Answering tasks. See [example notebook](https:\u002F\u002Famaiya.github.io\u002Fonprem\u002Fexamples_rag.html).\n----\n\n### Overview\n\n**ktrain** is a lightweight wrapper for the deep learning library [TensorFlow Keras](https:\u002F\u002Fwww.tensorflow.org\u002Fguide\u002Fkeras\u002Foverview) (and other libraries) to help build, train, and deploy neural networks and other machine learning models.  Inspired by ML framework extensions like *fastai* and *ludwig*, **ktrain** is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. With only a few lines of code, **ktrain** allows you to easily and quickly:\n\n- employ fast, accurate, and easy-to-use pre-canned models for  `text`, `vision`, `graph`, and `tabular` data:\n  - `text` data:\n     - **Text Classification**: [BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805), [DistilBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01108), [NBSVM](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP12-2018), [fastText](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.01759), and other models \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FIMDb-BERT.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Text Regression**: [BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04805), [DistilBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01108), Embedding-based linear text regression, [fastText](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.01759), and other models \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Ftext_regression_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Sequence 
Labeling (NER)**:  Bidirectional LSTM with optional [CRF layer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01360) and various embedding schemes such as pretrained [BERT](https:\u002F\u002Fhuggingface.co\u002Ftransformers\u002Fpretrained_models.html) and [fasttext](https:\u002F\u002Ffasttext.cc\u002Fdocs\u002Fen\u002Fcrawl-vectors.html) word embeddings and character embeddings \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FCoNLL2002_Dutch-BiLSTM.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Ready-to-Use NER models for English, Chinese, and Russian** with no training required \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fshallownlp-examples.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Sentence Pair Classification**  for tasks like paraphrase detection \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FMRPC-BERT.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Unsupervised Topic Modeling** with [LDA](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume3\u002Fblei03a\u002Fblei03a.pdf)  \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002F20newsgroups-topic_modeling.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Document Similarity with One-Class Learning**:  given some documents of interest, find and score new documents that are thematically similar to them using [One-Class Text Classification](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FOne-class_classification) \u003Csub>\u003Csup>[[example 
notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002F20newsgroups-document_similarity_scorer.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Document Recommendation Engines and Semantic Searches**:  given a text snippet from a sample document, recommend documents that are semantically-related from a larger corpus  \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002F20newsgroups-recommendation_engine.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Text Summarization**:  summarize long documents - no training required \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Ftext_summarization.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Extractive Question-Answering**:  ask a large text corpus questions and receive exact answers using BERT \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fquestion_answering_with_bert.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Generative Question-Answering**:  ask a large text corpus questions and receive answers with citations using local or OpenAI models \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Famaiya.github.io\u002Fonprem\u002Fexamples_rag.html)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Easy-to-Use Built-In Search Engine**:  perform keyword searches on large collections of documents \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fquestion_answering_with_bert.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Zero-Shot Learning**:  classify documents into user-provided 
topics **without** training examples \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fzero_shot_learning_with_nli.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Language Translation**:  translate text from one language to another \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Flanguage_translation_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Text Extraction**: Extract text from PDFs, Word documents, etc. \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Ftext_extraction_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Speech Transcription**: Extract text from audio files \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fspeech_transcription_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Universal Information Extraction**:  extract any kind of information from documents by simply phrasing it in the form of a question \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fqa_information_extraction.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Keyphrase Extraction**:  extract keywords from documents \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fkeyword_extraction_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n     - **Sentiment Analysis**: easy-to-use wrapper around a pretrained sentiment analysis model \u003Csub>\u003Csup>[[example 
notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fsentiment_analysis_example.ipynb)]\u003C\u002Fsup>\n     - **Generative AI with GPT**: Provide instructions to a lightweight ChatGPT-like model running on your own machine to solve various tasks. \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Famaiya.github.io\u002Fonprem\u002Fexamples.html)]\u003C\u002Fsup>\n  - `vision` data:\n    - **image classification** (e.g., [ResNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385), [Wide ResNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07146), [Inception](https:\u002F\u002Fwww.cs.unc.edu\u002F~wliu\u002Fpapers\u002FGoogLeNet.pdf)) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **image regression** for predicting numerical targets from photos (e.g., age prediction) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Fvision\u002Futk_faces_age_prediction-resnet50.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **image captioning** with a pretrained model \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Fvision\u002Fimage_captioning_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **object detection** with a pretrained model \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Fvision\u002Fobject_detection_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n  - `graph` data:\n    - **node classification** with graph neural networks 
([GraphSAGE](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fjure\u002Fpubs\u002Fgraphsage-nips17.pdf)) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Fgraphs\u002Fpubmed_node_classification-GraphSAGE.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **link prediction** with graph neural networks ([GraphSAGE](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fjure\u002Fpubs\u002Fgraphsage-nips17.pdf)) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Fgraphs\u002Fcora_link_prediction-GraphSAGE.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n  - `tabular` data:\n    - **tabular classification** (e.g., Titanic survival prediction) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-08-tabular_classification_and_regression.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **tabular regression** (e.g., predicting house prices) \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftabular\u002FHousePricePrediction-MLP.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n    - **causal inference** using meta-learners \u003Csub>\u003Csup>[[example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftabular\u002Fcausal_inference_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n\n- estimate an optimal learning rate for your model given your data using a Learning Rate Finder\n- utilize learning rate schedules such as the [triangular policy](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.01186), the [1cycle policy](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.09820), and 
[SGDR](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.03983) to effectively minimize loss and improve generalization\n- build text classifiers for any language (e.g., [Arabic Sentiment Analysis with BERT](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FArabicHotelReviews-AraBERT.ipynb), [Chinese Sentiment Analysis with NBSVM](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FChineseHotelReviews-nbsvm.ipynb))\n- easily train NER models for any language (e.g., [Dutch NER](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FCoNLL2002_Dutch-BiLSTM.ipynb) )\n- load and preprocess text and image data from a variety of formats\n- inspect data points that were misclassified and [provide explanations](https:\u002F\u002Feli5.readthedocs.io\u002Fen\u002Flatest\u002F) to help improve your model\n- leverage a simple prediction API for saving and deploying both models and data-preprocessing steps to make predictions on new raw data\n- built-in support for exporting models to [ONNX](https:\u002F\u002Fonnx.ai\u002F) and  [TensorFlow Lite](https:\u002F\u002Fwww.tensorflow.org\u002Flite) (see [example notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fktrain-ONNX-TFLite-examples.ipynb) for more information)\n\n\n\n### Tutorials\nPlease see the following tutorial notebooks for a guide on how to use **ktrain** on your projects:\n* Tutorial 1:  [Introduction](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-01-introduction.ipynb)\n* Tutorial 2:  [Tuning Learning 
Rates](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-02-tuning-learning-rates.ipynb)\n* Tutorial 3: [Image Classification](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-03-image-classification.ipynb)\n* Tutorial 4: [Text Classification](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-04-text-classification.ipynb)\n* Tutorial 5: [Learning from Unlabeled Text Data](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-05-learning_from_unlabeled_text_data.ipynb)\n* Tutorial 6: [Text Sequence Tagging](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-06-sequence-tagging.ipynb) for Named Entity Recognition\n* Tutorial 7: [Graph Node Classification](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-07-graph-node_classification.ipynb) with Graph Neural Networks\n* Tutorial 8: [Tabular Classification and Regression](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-08-tabular_classification_and_regression.ipynb)\n* Tutorial A1: [Additional tricks](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-A1-additional-tricks.ipynb), which covers topics such as previewing data augmentation schemes, inspecting intermediate output of Keras models for debugging, setting global weight decay, and use of built-in and custom callbacks.\n* Tutorial A2: [Explaining Predictions and 
Misclassifications](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-A2-explaining-predictions.ipynb)\n* Tutorial A3: [Text Classification with Hugging Face Transformers](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Ftutorials\u002Ftutorial-A3-hugging_face_transformers.ipynb)\n* Tutorial A4: [Using Custom Data Formats and Models: Text Regression with Extra Regressors](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-A4-customdata-text_regression_with_extra_regressors.ipynb)\n\n\nSome blog tutorials and other guides about **ktrain** are shown below:\n\n> [**ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks**](https:\u002F\u002Ftowardsdatascience.com\u002Fktrain-a-lightweight-wrapper-for-keras-to-help-train-neural-networks-82851ba889c)\n\n\n> [**BERT Text Classification in 3 Lines of Code**](https:\u002F\u002Ftowardsdatascience.com\u002Fbert-text-classification-in-3-lines-of-code-using-keras-264db7e7a358)\n\n> [**Text Classification with Hugging Face Transformers in  TensorFlow 2 (Without Tears)**](https:\u002F\u002Fmedium.com\u002F@asmaiya\u002Ftext-classification-with-hugging-face-transformers-in-tensorflow-2-without-tears-ee50e4f3e7ed)\n\n> [**Build an Open-Domain Question-Answering System With BERT in 3 Lines of Code**](https:\u002F\u002Ftowardsdatascience.com\u002Fbuild-an-open-domain-question-answering-system-with-bert-in-3-lines-of-code-da0131bc516b)\n\n> [**Finetuning BERT using ktrain for Disaster Tweets Classification**](https:\u002F\u002Fmedium.com\u002Fanalytics-vidhya\u002Ffinetuning-bert-using-ktrain-for-disaster-tweets-classification-18f64a50910b) by Hamiz Ahmed\n\n> [**Indonesian NLP Examples with ktrain**](https:\u002F\u002Fgithub.com\u002Filos-vigil\u002Fktrain-assessment-study) by Sandy Khosasi\n\n\n\n\n\n\n\n\n\n### 
Examples\n\nUsing **ktrain** on **Google Colab**?  See these Colab examples:\n-  **text classification:** [a simple demo of Multiclass Text Classification with BERT](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1AH3fkKiEqBpVpO5ua00scp7zcHs5IDLK)\n-  **text classification:** [a simple demo of Multiclass Text Classification with Hugging Face Transformers](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1YxcceZxsNlvK35pRURgbwvkgejXwFxUt)\n- **sequence-tagging (NER):** [NER example using `transformer` word embeddings](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1whrnmM7ElqbaEhXf760eiOMiYk5MNO-Z?usp=sharing)\n- **question-answering:** [End-to-End Question-Answering](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1tcsEQ7igx7lw_R0Pfpmsg9Wf3DEXyOvk?usp=sharing) using the 20newsgroups dataset.\n-  **image classification:** [image classification with Cats vs. Dogs](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)\n\n\n\nTasks such as text classification and image classification can be accomplished easily with\nonly a few lines of code.\n\n#### Example: Text Classification of [IMDb Movie Reviews](https:\u002F\u002Fai.stanford.edu\u002F~amaas\u002Fdata\u002Fsentiment\u002F) Using [BERT](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.04805.pdf) \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FIMDb-BERT.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import text as txt\n\n# load data\n(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_folder('data\u002FaclImdb', maxlen=500,\n                                                                     preprocess_mode='bert',\n                                                                     train_test_names=['train', 'test'],\n                                                                     
classes=['pos', 'neg'])\n\n# load model\nmodel = txt.text_classifier('bert', (x_train, y_train), preproc=preproc)\n\n# wrap model and data in ktrain.Learner object\nlearner = ktrain.get_learner(model,\n                             train_data=(x_train, y_train),\n                             val_data=(x_test, y_test),\n                             batch_size=6)\n\n# find good learning rate\nlearner.lr_find()             # briefly simulate training to find good learning rate\nlearner.lr_plot()             # visually identify best learning rate\n\n# train using 1cycle learning rate schedule for 3 epochs\nlearner.fit_onecycle(2e-5, 3)\n```\n\n\n#### Example: Classifying Images of [Dogs and Cats](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fdogs-vs-cats) Using a Pretrained [ResNet50](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385) model \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import vision as vis\n\n# load data\n(train_data, val_data, preproc) = vis.images_from_folder(\n                                              datadir='data\u002Fdogscats',\n                                              data_aug = vis.get_data_aug(horizontal_flip=True),\n                                              train_test_names=['train', 'valid'],\n                                              target_size=(224,224), color_mode='rgb')\n\n# load model\nmodel = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=80)\n\n# wrap model and data in ktrain.Learner object\nlearner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,\n                             workers=8, use_multiprocessing=False, batch_size=64)\n\n# find good learning rate\nlearner.lr_find()             # briefly simulate training to find good learning rate\nlearner.lr_plot()             # visually identify best 
learning rate\n\n# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping\nlearner.autofit(1e-4, checkpoint_folder='\u002Ftmp\u002Fsaved_weights')\n```\n\n#### Example: Sequence Labeling for [Named Entity Recognition](https:\u002F\u002Fwww.kaggle.com\u002Fabhinavwalia95\u002Fentity-annotated-corpus\u002Fversion\u002F2) using a randomly initialized [Bidirectional LSTM CRF](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01360) model \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002FCoNLL2003-BiLSTM_CRF.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import text as txt\n\n# load data\n(trn, val, preproc) = txt.entities_from_txt('data\u002Fner_dataset.csv',\n                                            sentence_column='Sentence #',\n                                            word_column='Word',\n                                            tag_column='Tag',\n                                            data_format='gmb',\n                                            use_char=True) # enable character embeddings\n\n# load model\nmodel = txt.sequence_tagger('bilstm-crf', preproc)\n\n# wrap model and data in ktrain.Learner object\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val)\n\n\n# conventional training for 1 epoch using a learning rate of 0.001 (Keras default for Adam optimizer)\nlearner.fit(1e-3, 1)\n```\n\n\n#### Example: Node Classification on [Cora Citation Graph](https:\u002F\u002Flinqs-data.soe.ucsc.edu\u002Fpublic\u002Flbc\u002Fcora.tgz) using a [GraphSAGE](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02216) model \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Fgraphs\u002Fcora_node_classification-GraphSAGE.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import graph as 
gr\n\n# load data with supervision ratio of 10%\n(trn, val, preproc)  = gr.graph_nodes_from_csv(\n                                               'cora.content', # node attributes\u002Flabels\n                                               'cora.cites',   # edge list\n                                               sample_size=20,\n                                               holdout_pct=None,\n                                               holdout_for_inductive=False,\n                                              train_pct=0.1, sep='\\t')\n\n# load model\nmodel=gr.graph_node_classifier('graphsage', trn)\n\n# wrap model and data in ktrain.Learner object\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=64)\n\n\n# find good learning rate\nlearner.lr_find(max_epochs=100) # briefly simulate training to find good learning rate\nlearner.lr_plot()               # visually identify best learning rate\n\n# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping\nlearner.autofit(0.01, checkpoint_folder='\u002Ftmp\u002Fsaved_weights')\n```\n\n\n#### Example: Text Classification with [Hugging Face Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) on [20 Newsgroups Dataset](https:\u002F\u002Fscikit-learn.org\u002Fstable\u002Ftutorial\u002Ftext_analytics\u002Fworking_with_text_data.html) Using [DistilBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01108) \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-A3-hugging_face_transformers.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\n# load text data\ncategories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']\nfrom sklearn.datasets import fetch_20newsgroups\ntrain_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)\ntest_b = 
fetch_20newsgroups(subset='test',categories=categories, shuffle=True)\n(x_train, y_train) = (train_b.data, train_b.target)\n(x_test, y_test) = (test_b.data, test_b.target)\n\n# build, train, and validate model (Transformer is wrapper around transformers library)\nimport ktrain\nfrom ktrain import text\nMODEL_NAME = 'distilbert-base-uncased'\nt = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)\ntrn = t.preprocess_train(x_train, y_train)\nval = t.preprocess_test(x_test, y_test)\nmodel = t.get_classifier()\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)\nlearner.fit_onecycle(5e-5, 4)\nlearner.validate(class_names=t.get_classes()) # class_names must be string values\n\n# Output from learner.validate()\n#                        precision    recall  f1-score   support\n#\n#           alt.atheism       0.92      0.93      0.93       319\n#         comp.graphics       0.97      0.97      0.97       389\n#               sci.med       0.97      0.95      0.96       396\n#soc.religion.christian       0.96      0.96      0.96       398\n#\n#              accuracy                           0.96      1502\n#             macro avg       0.95      0.96      0.95      1502\n#          weighted avg       0.96      0.96      0.96      1502\n```\n\n\u003C!--\n#### Example: NER With [BioBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.08746) Embeddings\n```python\n# NER with BioBERT embeddings\nimport ktrain\nfrom ktrain import text as txt\nx_train= [['IL-2', 'responsiveness', 'requires', 'three', 'distinct', 'elements', 'within', 'the', 'enhancer', '.'], ...]\ny_train=[['B-protein', 'O', 'O', 'O', 'O', 'B-DNA', 'O', 'O', 'B-DNA', 'O'], ...]\n(trn, val, preproc) = txt.entities_from_array(x_train, y_train)\nmodel = txt.sequence_tagger('bilstm-bert', preproc, bert_model='monologg\u002Fbiobert_v1.1_pubmed')\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=128)\nlearner.fit(0.01, 1, 
cycle_len=5)\n```\n-->\n\n#### Example: Tabular Classification for [Titanic Survival Prediction](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Ftitanic) Using an MLP  \u003Csub>\u003Csup>[[see notebook](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftabular\u002Ftabular_classification_and_regression_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import tabular\nimport pandas as pd\ntrain_df = pd.read_csv('train.csv', index_col=0)\ntrain_df = train_df.drop(['Name', 'Ticket', 'Cabin'], 1)\ntrn, val, preproc = tabular.tabular_from_df(train_df, label_columns=['Survived'], random_state=42)\nlearner = ktrain.get_learner(tabular.tabular_classifier('mlp', trn), train_data=trn, val_data=val)\nlearner.lr_find(show_plot=True, max_epochs=5) # estimate learning rate\nlearner.fit_onecycle(5e-3, 10)\n\n# evaluate held-out labeled test set\ntst = preproc.preprocess_test(pd.read_csv('heldout.csv', index_col=0))\nlearner.evaluate(tst, class_names=preproc.get_classes())\n```\n\n\n\n\n\n\n\n#### Additional examples can be found [here](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Ftree\u002Fmaster\u002Fexamples).\n\n\n\n### Installation\n\n1. Make sure pip is up-to-date with: `pip install -U pip`\n\n2. [Install TensorFlow 2](https:\u002F\u002Fwww.tensorflow.org\u002Finstall) if it is not already installed (e.g., `pip install tensorflow`). \n\n3. Install *ktrain*: `pip install ktrain`\n\n4. If using `tensorflow>=2.16`:\n    - Install **tf_keras**: `pip install tf_keras`\n    - Set the environment variable `TF_USE_LEGACY_KERAS` to true before importing **ktrain**\n\n\nThe above should be all you need on Linux systems and cloud computing environments like Google Colab and AWS EC2.  
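Step 4 above is order-sensitive: the variable must be set before **ktrain** (and therefore TensorFlow/Keras) is first imported. A minimal sketch; the `import ktrain` line is commented out here only so the snippet runs even without TensorFlow installed:

```python
import os

# must be set BEFORE ktrain/TensorFlow is imported (only needed for tensorflow>=2.16)
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# import ktrain  # safe to import now that the variable is set
print(os.environ["TF_USE_LEGACY_KERAS"])  # prints: 1
```

Setting the variable after TensorFlow has already been imported has no effect, which is why the docs suggest `.bashrc` or the very top of your script.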
If you are using **ktrain** on a **Windows computer**, you can follow these
[more detailed instructions](https://github.com/amaiya/ktrain/blob/master/FAQ.md#how-do-i-install-ktrain-on-a-windows-machine) that include some extra steps.

#### Notes about TensorFlow Versions
- As of `tensorflow>=2.11`, you must only use legacy optimizers such as `tf.keras.optimizers.legacy.Adam`. The newer `tf.keras.optimizers.Optimizer` base class is not supported at this time. For instance, when using TensorFlow 2.11 and above, please use `tf.keras.optimizers.legacy.Adam()` instead of the string `"adam"` in `model.compile`. **ktrain** does this automatically when using out-of-the-box models (e.g., models from the `transformers` library).
- As mentioned above, due to breaking changes in TensorFlow 2.16, you will need to install the `tf_keras` package and also set the environment variable `TF_USE_LEGACY_KERAS=True` before importing **ktrain** (e.g., add `export TF_USE_LEGACY_KERAS=1` to `.bashrc`, or add `os.environ['TF_USE_LEGACY_KERAS']="1"` at the top of your code).

#### Additional Notes About Installation

- Some optional, extra libraries used for some operations can be installed as needed.
(Notice that **ktrain** uses forked versions of the `eli5` and `stellargraph` libraries in order to support TensorFlow 2.)
```sh
# for graph module:
pip install https://github.com/amaiya/stellargraph/archive/refs/heads/no_tf_dep_082.zip
# for text.TextPredictor.explain and vision.ImagePredictor.explain:
pip install https://github.com/amaiya/eli5-tf/archive/refs/heads/master.zip
# for tabular.TabularPredictor.explain:
pip install shap
# for text.zsl (ZeroShotClassifier), text.summarization, text.translation, text.speech:
pip install torch
# for text.speech:
pip install librosa
# for tabular.causal_inference_model:
pip install causalnlp
# for text.summarization.core.LexRankSummarizer:
pip install sumy
# for text.kw.KeywordExtractor:
pip install textblob
# for text.generative_ai:
pip install onprem
```
- **ktrain** purposely pins to a lower version of **transformers** to include support for older versions of TensorFlow. If you need a newer version of `transformers`, it is usually safe to upgrade it, as long as you do so **after** installing **ktrain**.

- As of v0.30.x, TensorFlow installation is optional and only required if training neural networks.
Although **ktrain** uses TensorFlow for neural network training, it also includes a variety of useful pretrained PyTorch models and sklearn models, which
can be used out-of-the-box **without** having TensorFlow installed, as summarized in this table:

| Feature | TensorFlow | PyTorch | Sklearn |
| --- | :-: | :-: | :-: |
| [training](https://towardsdatascience.com/ktrain-a-lightweight-wrapper-for-keras-to-help-train-neural-networks-82851ba889c) any neural network (e.g., text or image classification) | ✅ | ❌ | ❌ |
| [End-to-End Question-Answering](https://nbviewer.org/github/amaiya/ktrain/blob/master/examples/text/question_answering_with_bert.ipynb) (pretrained) | ✅ | ✅ | ❌ |
| [QA-Based Information Extraction](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/qa_information_extraction.ipynb) (pretrained) | ✅ | ✅ | ❌ |
| [Zero-Shot Classification](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/zero_shot_learning_with_nli.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Language Translation](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/language_translation_example.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Summarization](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/text_summarization_with_bart.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Speech Transcription](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/speech_transcription_example.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Image Captioning](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/vision/image_captioning_example.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Object Detection](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/vision/object_detection_example.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [Sentiment Analysis](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/sentiment_analysis_example.ipynb) (pretrained) | ❌ | ✅ | ❌ |
| [GenerativeAI](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/generative_ai_example.ipynb) (sentence-transformers) | ❌ | ✅ | ❌ |
| [Topic Modeling](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-05-learning_from_unlabeled_text_data.ipynb) (sklearn) | ❌ | ❌ | ✅ |
| [Keyphrase Extraction](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/keyword_extraction_example.ipynb) (textblob/nltk/sklearn) | ❌ | ❌ | ✅ |

As noted above, end-to-end question-answering and information extraction in **ktrain** can be used with either TensorFlow (using `framework='tf'`) or PyTorch (using `framework='pt'`).

<!--
pip install pdoc3==0.9.2
pdoc3 --html -o docs ktrain
diff -qr docs/ktrain/ /path/to/repo/ktrain/docs
-->

### How to Cite

Please cite the [following paper](https://arxiv.org/abs/2004.10703) when using **ktrain**:
```
@article{maiya2020ktrain,
    title={ktrain: A Low-Code Library for Augmented Machine Learning},
    author={Arun S. 
Maiya},
    year={2020},
    eprint={2004.10703},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    journal={arXiv preprint arXiv:2004.10703},
}
```

<!--
### Requirements

The following software/libraries should be installed:

- [Python 3.6+](https://www.python.org/) (tested on 3.6.7)
- [Keras](https://keras.io/) (tested on 2.2.4)
- [TensorFlow](https://www.tensorflow.org/) (tested on 1.10.1)
- [scikit-learn](https://scikit-learn.org/stable/) (tested on 0.20.0)
- [matplotlib](https://matplotlib.org/) (tested on 3.0.0)
- [pandas](https://pandas.pydata.org/) (tested on 0.24.2)
- [keras_bert](https://github.com/CyberZHG/keras-bert/tree/master/keras_bert)
- [fastprogress](https://github.com/fastai/fastprogress)
-->

----
**Creator: [Arun S. Maiya](http://arun.maiya.net)**

**Email:** arun [at] maiya [dot] net

### [Overview](#overview) | [Tutorials](#tutorials) | [Examples](#examples) | [Installation](#installation) | [FAQ](https://github.com/amaiya/ktrain/blob/master/FAQ.md) | [API Documentation](https://amaiya.github.io/ktrain/index.html) | [How to Cite](#how-to-cite)
[![PyPI Status](https://badge.fury.io/py/ktrain.svg)](https://badge.fury.io/py/ktrain) [![ktrain Python compatibility](https://img.shields.io/pypi/pyversions/ktrain.svg)](https://pypi.python.org/pypi/ktrain) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/amaiya/ktrain/blob/master/LICENSE) [![Downloads](https://oss.gittoolsai.com/images/amaiya_ktrain_readme_f617c87ed07f.png)](https://pepy.tech/project/ktrain)
<!--[![Twitter 
URL](https://img.shields.io/twitter/url/https/twitter.com/ktrain_ai.svg?style=social&label=Follow%20%40ktrain_ai)](https://twitter.com/ktrain_ai)-->

<p align="center">
<img src="https://oss.gittoolsai.com/images/amaiya_ktrain_readme_1ce1eedba13b.png" width="200"/>
</p>

# Welcome to ktrain
> a "Swiss Army knife" for machine learning

### News and Announcements
- **2024-02-20**
  - **ktrain 0.41.x** is released and removes the `ktrain.text.qa.generative_qa` module. For generative question-answering, please use the [OnPrem.LLM](https://github.com/amaiya/onprem) package. See the [example notebook](https://amaiya.github.io/onprem/examples_rag.html).
----

### Overview

**ktrain** is a lightweight wrapper for the deep learning library [TensorFlow Keras](https://www.tensorflow.org/guide/keras/overview) (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like *fastai* and *ludwig*, **ktrain** is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. With only a few lines of code, **ktrain** allows you to easily and quickly:

- employ fast, accurate, and easy-to-use pre-canned models for `text`, `vision`, `graph`, and `tabular` data:
  - `text` data:
     - **Text Classification**: [BERT](https://arxiv.org/abs/1810.04805), [DistilBERT](https://arxiv.org/abs/1910.01108), [NBSVM](https://www.aclweb.org/anthology/P12-2018), [fastText](https://arxiv.org/abs/1607.01759), and other models <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/IMDb-BERT.ipynb)]</sup></sub>
     - **Text Regression**: [BERT](https://arxiv.org/abs/1810.04805), [DistilBERT](https://arxiv.org/abs/1910.01108), embedding-based linear text regression, [fastText](https://arxiv.org/abs/1607.01759), and other models 
<sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/text_regression_example.ipynb)]</sup></sub>
     - **Sequence Labeling (NER)**: Bidirectional LSTM with an optional [CRF layer](https://arxiv.org/abs/1603.01360) and various embedding schemes such as pretrained [BERT](https://huggingface.co/transformers/pretrained_models.html) and [fasttext](https://fasttext.cc/docs/en/crawl-vectors.html) word embeddings and character embeddings <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/CoNLL2002_Dutch-BiLSTM.ipynb)]</sup></sub>
     - **Ready-to-use NER models for English, Chinese, and Russian** with no training required <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/shallownlp-examples.ipynb)]</sup></sub>
     - **Sentence Pair Classification** for tasks such as paraphrase detection <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/MRPC-BERT.ipynb)]</sup></sub>
     - **Unsupervised Topic Modeling** with [LDA](http://www.jmlr.org/papers/volume3/blei03a/blei03a.pdf) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/20newsgroups-topic_modeling.ipynb)]</sup></sub>
     - **Document Similarity with One-Class Learning**: given some documents of interest, find and score new documents that are thematically similar to them using [one-class text classification](https://en.wikipedia.org/wiki/One-class_classification) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/20newsgroups-document_similarity_scorer.ipynb)]</sup></sub>
     - **Document Recommendation Engines and Semantic Search**: given a text snippet from a sample document, recommend documents that are semantically related to it from a larger corpus 
<sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/20newsgroups-recommendation_engine.ipynb)]</sup></sub>
     - **Text Summarization**: summarize long documents, no training required <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/text_summarization.ipynb)]</sup></sub>
     - **Extractive Question-Answering**: ask questions of a large text corpus and receive exact answers using BERT <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/question_answering_with_bert.ipynb)]</sup></sub>
     - **Generative Question-Answering**: ask questions of a large text corpus and receive complete answers with citations using local or OpenAI models <sub><sup>[[example notebook](https://amaiya.github.io/onprem/examples_rag.html)]</sup></sub>
     - **Easy-to-use built-in search engine**: perform keyword searches over large collections of documents <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/question_answering_with_bert.ipynb)]</sup></sub>
     - **Zero-Shot Learning**: classify documents into user-provided topics without any training examples <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/zero_shot_learning_with_nli.ipynb)]</sup></sub>
     - **Language Translation**: translate text from one language to another <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/language_translation_example.ipynb)]</sup></sub>
     - **Text Extraction**: extract text from PDFs, Word documents, etc. <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/text_extraction_example.ipynb)]</sup></sub>
     - **Speech Transcription**: extract text from audio files 
<sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/speech_transcription_example.ipynb)]</sup></sub>
     - **Universal Information Extraction**: extract any kind of information from documents by phrasing it as a question <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/qa_information_extraction.ipynb)]</sup></sub>
     - **Keyphrase Extraction**: extract keyphrases from documents <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/keyword_extraction_example.ipynb)]</sup></sub>
     - **Sentiment Analysis**: an easy-to-use wrapper around pretrained sentiment analysis models <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/sentiment_analysis_example.ipynb)]</sup></sub>
     - **GPT-Based Generative AI**: solve a wide range of tasks by issuing instructions to a lightweight ChatGPT-like model running locally <sub><sup>[[example notebook](https://amaiya.github.io/onprem/examples.html)]</sup></sub>
  - `vision` data:
    - **image classification** (e.g., [ResNet](https://arxiv.org/abs/1512.03385), [Wide ResNet](https://arxiv.org/abs/1605.07146), [Inception](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf)) <sub><sup>[[example notebook](https://colab.research.google.com/drive/1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)]</sup></sub>
    - **image regression** for predicting numerical targets from photos (e.g., age prediction) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/vision/utk_faces_age_prediction-resnet50.ipynb)]</sup></sub>
    - **image captioning** with a pretrained model 
<sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/vision/image_captioning_example.ipynb)]</sup></sub>
    - **object detection** with a pretrained model <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/vision/object_detection_example.ipynb)]</sup></sub>
  - `graph` data:
    - **node classification** with graph neural networks ([GraphSAGE](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf)) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/graphs/pubmed_node_classification-GraphSAGE.ipynb)]</sup></sub>
    - **link prediction** with graph neural networks ([GraphSAGE](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf)) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/graphs/cora_link_prediction-GraphSAGE.ipynb)]</sup></sub>
  - `tabular` data:
    - **tabular classification** (e.g., Titanic survival prediction) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-08-tabular_classification_and_regression.ipynb)]</sup></sub>
    - **tabular regression** (e.g., predicting house prices) <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/tabular/HousePricePrediction-MLP.ipynb)]</sup></sub>
    - **causal inference** using meta-learners <sub><sup>[[example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/tabular/causal_inference_example.ipynb)]</sup></sub>

### Main Features

- estimate an optimal learning rate for your model given your data using a Learning Rate Finder
- employ learning rate schedules such as the [triangular policy](https://arxiv.org/abs/1506.01186), the [1cycle policy](https://arxiv.org/abs/1803.09820), and [SGDR](https://arxiv.org/abs/1608.03983) (stochastic gradient descent with warm restarts) to effectively minimize loss and improve generalization
- build text classifiers for any language (e.g., [Arabic sentiment analysis with BERT](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/ArabicHotelReviews-AraBERT.ipynb), [Chinese sentiment analysis with NBSVM](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/ChineseHotelReviews-nbsvm.ipynb))
- easily train NER models for any language (e.g., [Dutch NER](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/examples/text/CoNLL2002_Dutch-BiLSTM.ipynb))
- load and preprocess text and image data from a variety of formats
- inspect misclassified data points and provide [explanations](https://eli5.readthedocs.io/en/latest/) to help improve your model
- save and deploy both models and data-preprocessing steps through a simple prediction API to make predictions on new raw data
- built-in support for exporting models to [ONNX](https://onnx.ai/) and [TensorFlow Lite](https://www.tensorflow.org/lite) (see the [example notebook](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/examples/text/ktrain-ONNX-TFLite-examples.ipynb))

### Tutorials
Please view the following tutorial notebooks for a guide on how to use **ktrain** in your projects:
* Tutorial 1: [Introduction](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-01-introduction.ipynb)
* Tutorial 2: [Tuning Learning Rates](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-02-tuning-learning-rates.ipynb)
* Tutorial 3: [Image Classification](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-03-image-classification.ipynb)
* Tutorial 4: [Text Classification](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-04-text-classification.ipynb)
* Tutorial 5: [Learning from Unlabeled Text Data](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-05-learning_from_unlabeled_text_data.ipynb)
* Tutorial 6: [Text Sequence Tagging](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-06-sequence-tagging.ipynb) for Named Entity Recognition
* Tutorial 7: [Graph Node Classification](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-07-graph-node_classification.ipynb) with Graph Neural Networks
* Tutorial 8: [Tabular Classification and Regression](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-08-tabular_classification_and_regression.ipynb)
* Tutorial A1: [Additional Tricks](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-A1-additional-tricks.ipynb), covering topics such as previewing data augmentation schemes, inspecting intermediate output of Keras models for debugging, setting global weight decay, and using built-in and custom callbacks
* Tutorial A2: [Explaining Predictions and Misclassifications](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-A2-explaining-predictions.ipynb)
* Tutorial A3: [Text Classification with Hugging Face Transformers](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/develop/tutorials/tutorial-A3-hugging_face_transformers.ipynb)
* Tutorial A4: [Using Custom Data Formats and Models: Text Regression with Extra Regressors](https://nbviewer.jupyter.org/github/amaiya/ktrain/blob/master/tutorials/tutorial-A4-customdata-text_regression_with_extra_regressors.ipynb)

Some blog tutorials and other guides about **ktrain** are shown below:

> [**ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks**](https://towardsdatascience.com/ktrain-a-lightweight-wrapper-for-keras-to-help-train-neural-networks-82851ba889c)

> [**BERT Text Classification in 3 Lines of Code**](https://towardsdatascience.com/bert-text-classification-in-3-lines-of-code-using-keras-264db7e7a358)

> [**Text Classification with Hugging Face Transformers in TensorFlow 2 (Without Tears)**](https://medium.com/@asmaiya/text-classification-with-hugging-face-transformers-in-tensorflow-2-without-tears-ee50e4f3e7ed)

> [**Build an Open-Domain Question-Answering System With BERT in 3 Lines of Code**](https://towardsdatascience.com/build-an-open-domain-question-answering-system-with-bert-in-3-lines-of-code-da0131bc516b)

> [**Finetuning BERT using ktrain for Disaster Tweets Classification**](https://medium.com/analytics-vidhya/finetuning-bert-using-ktrain-for-disaster-tweets-classification-18f64a50910b) by Hamiz Ahmed

> [**Indonesian NLP Examples with ktrain**](https://github.com/ilos-vigil/ktrain-assessment-study) by Sandy Khosasi

### Examples

Using **ktrain** on **Google Colab**? See these Colab examples:
- **text classification:** [a simple demo of multiclass text classification with BERT](https://colab.research.google.com/drive/1AH3fkKiEqBpVpO5ua00scp7zcHs5IDLK)
- **text classification:** [a simple demo of multiclass text classification with Hugging Face Transformers](https://colab.research.google.com/drive/1YxcceZxsNlvK35pRURgbwvkgejXwFxUt)
- **sequence tagging (NER):** [an NER example using `transformer` word embeddings](https://colab.research.google.com/drive/1whrnmM7ElqbaEhXf760eiOMiYk5MNO-Z?usp=sharing)
- **question-answering:** [an end-to-end question-answering demo using the 20newsgroups dataset](https://colab.research.google.com/drive/1tcsEQ7igx7lw_R0Pfpmsg9Wf3DEXyOvk?usp=sharing)
- **image classification:** [a Cats vs. Dogs image classification demo](https://colab.research.google.com/drive/1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)

With **ktrain**, tasks such as text classification and image classification can be accomplished in only a few lines of code.

#### Example: Text Classification of [IMDb Movie Reviews](https://ai.stanford.edu/~amaas/data/sentiment/) Using [BERT](https://arxiv.org/pdf/1810.04805.pdf) 
<sub><sup>[[see notebook](https://github.com/amaiya/ktrain/blob/master/examples/text/IMDb-BERT.ipynb)]</sup></sub>
```python
import ktrain
from ktrain import text as txt

# load data
(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_folder('data/aclImdb', maxlen=500,
                                                                     preprocess_mode='bert',
                                                                     train_test_names=['train', 'test'],
                                                                     classes=['pos', 'neg'])

# load model
model = txt.text_classifier('bert', (x_train, y_train), preproc=preproc)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model,
                             train_data=(x_train, y_train),
                             val_data=(x_test, y_test),
                             batch_size=6)

# find good learning rate
learner.lr_find()             # briefly simulate training to find good learning rate
learner.lr_plot()             # visually identify best learning rate

# train for three epochs with the 1cycle learning rate policy
learner.fit_onecycle(2e-5, 3)
```

#### Example: Classifying [Dogs vs. Cats Images](https://www.kaggle.com/c/dogs-vs-cats) Using a Pretrained [ResNet50](https://arxiv.org/abs/1512.03385) Model <sub><sup>[[see notebook](https://colab.research.google.com/drive/1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)]</sup></sub>
```python
import ktrain
from ktrain import vision as vis

# load data
(train_data, val_data, preproc) = vis.images_from_folder(
                                              datadir='data/dogscats',
                                              data_aug = vis.get_data_aug(horizontal_flip=True),
                                              train_test_names=['train', 'valid'],
                                              target_size=(224,224), color_mode='rgb')

# load model
model = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=80)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                             workers=8, use_multiprocessing=False, batch_size=64)

# find good learning rate
learner.lr_find()             # briefly simulate training to find good learning rate
learner.lr_plot()             # visually identify best learning rate

# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(1e-4, checkpoint_folder='/tmp/saved_weights')
```

#### Example: Sequence Labeling for [Named Entity Recognition](https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus/version/2) using a randomly initialized [Bidirectional LSTM CRF](https://arxiv.org/abs/1603.01360) model <sub><sup>[[see notebook](https://github.com/amaiya/ktrain/blob/master/examples/text/CoNLL2003-BiLSTM_CRF.ipynb)]</sup></sub>
```python
import ktrain
from ktrain import text as txt

# load data
(trn, val, preproc) = txt.entities_from_txt('data/ner_dataset.csv',
                                            sentence_column='Sentence #',
                                            word_column='Word',
                                            tag_column='Tag',
                                            data_format='gmb',
                                            use_char=True) # enable character embeddings

# load model
model = txt.sequence_tagger('bilstm-crf', preproc)

# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val)

# conventional training for 1 epoch using a learning rate of 0.001 (Keras default for the Adam optimizer)
learner.fit(1e-3, 1)
```

#### Example: Node Classification on [Cora Citation Graph](https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz) using a [GraphSAGE](https://arxiv.org/abs/1706.02216) model 
\u003Csub>\u003Csup>[[查看笔记本](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Fgraphs\u002Fcora_node_classification-GraphSAGE.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import graph as gr\n\n# 加载数据（监督比例为10%）\n(trn, val, preproc)  = gr.graph_nodes_from_csv(\n                                               'cora.content', # 节点属性\u002F标签\n                                               'cora.cites',   # 边列表\n                                               sample_size=20,\n                                               holdout_pct=None,\n                                               holdout_for_inductive=False,\n                                              train_pct=0.1, sep='\\t')\n\n# 加载模型\nmodel=gr.graph_node_classifier('graphsage', trn)\n\n# 将模型和数据封装进ktrain.Learner对象\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=64)\n\n\n# 寻找最佳学习率\nlearner.lr_find(max_epochs=100) # 通过简短的训练模拟寻找合适学习率\nlearner.lr_plot()               # 通过可视化方式确定最佳学习率\n\n# 使用三角波策略训练，包含ModelCheckpoint和隐式的ReduceLROnPlateau和EarlyStopping\nlearner.autofit(0.01, checkpoint_folder='\u002Ftmp\u002Fsaved_weights')\n```\n\n\n#### 示例：使用[Hugging Face Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)在[20新闻组数据集](https:\u002F\u002Fscikit-learn.org\u002Fstable\u002Ftutorial\u002Ftext_analytics\u002Fworking_with_text_data.html)上进行文本分类（使用[DistilBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.01108)模型） \u003Csub>\u003Csup>[[查看笔记本](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-A3-hugging_face_transformers.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\n# 加载文本数据\ncategories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']\nfrom sklearn.datasets import fetch_20newsgroups\ntrain_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)\ntest_b = 
fetch_20newsgroups(subset='test',categories=categories, shuffle=True)\n(x_train, y_train) = (train_b.data, train_b.target)\n(x_test, y_test) = (test_b.data, test_b.target)\n\n# 构建、训练和验证模型（Transformer是对transformers库的封装）\nimport ktrain\nfrom ktrain import text\nMODEL_NAME = 'distilbert-base-uncased'\nt = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)\ntrn = t.preprocess_train(x_train, y_train)\nval = t.preprocess_test(x_test, y_test)\nmodel = t.get_classifier()\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)\nlearner.fit_onecycle(5e-5, 4)\nlearner.validate(class_names=t.get_classes()) # class_names必须为字符串类型\n\n# learner.validate()的输出结果\n#                        precision    recall  f1-score   support\n#\n#           alt.atheism       0.92      0.93      0.93       319\n#         comp.graphics       0.97      0.97      0.97       389\n#               sci.med       0.97      0.95      0.96       396\n#soc.religion.christian       0.96      0.96      0.96       398\n#\n#              accuracy                           0.96      1502\n#             macro avg       0.95      0.96      0.95      1502\n#          weighted avg       0.96      0.96      0.96      1502\n```\n\n#### 示例：使用 [BioBERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.08746) 嵌入进行命名实体识别（NER）\n```python\nimport ktrain\nfrom ktrain import text as txt\nx_train = [['IL-2', 'responsiveness', 'requires', 'three', 'distinct', 'elements', 'within', 'the', 'enhancer', '.'], ...]\ny_train = [['B-protein', 'O', 'O', 'O', 'O', 'B-DNA', 'O', 'O', 'B-DNA', 'O'], ...]\n(trn, val, preproc) = txt.entities_from_array(x_train, y_train)\nmodel = txt.sequence_tagger('bilstm-bert', preproc, bert_model='monologg\u002Fbiobert_v1.1_pubmed')\nlearner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=128)\nlearner.fit(0.01, 1, cycle_len=5)\n```\n\n#### 示例：使用多层感知机（MLP）进行[Titanic 
生存预测](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Ftitanic) \u003Csub>\u003Csup>[[查看笔记本](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftabular\u002Ftabular_classification_and_regression_example.ipynb)]\u003C\u002Fsup>\u003C\u002Fsub>\n```python\nimport ktrain\nfrom ktrain import tabular\nimport pandas as pd\ntrain_df = pd.read_csv('train.csv', index_col=0)\ntrain_df = train_df.drop(['Name', 'Ticket', 'Cabin'], axis=1)\ntrn, val, preproc = tabular.tabular_from_df(train_df, label_columns=['Survived'], random_state=42)\nlearner = ktrain.get_learner(tabular.tabular_classifier('mlp', trn), train_data=trn, val_data=val)\nlearner.lr_find(show_plot=True, max_epochs=5) # 估计学习率\nlearner.fit_onecycle(5e-3, 10)\n\n# 评估保留的带标签测试集\ntst = preproc.preprocess_test(pd.read_csv('heldout.csv', index_col=0))\nlearner.evaluate(tst, class_names=preproc.get_classes())\n```\n\n#### 更多示例请访问[此处](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Ftree\u002Fmaster\u002Fexamples)\n\n### 安装指南\n\n1. 确保 pip 是最新版本：`pip install -U pip`\n\n2. 如果尚未安装，请[安装 TensorFlow 2](https:\u002F\u002Fwww.tensorflow.org\u002Finstall)（例如：`pip install tensorflow`）\n\n3. 安装 *ktrain*：`pip install ktrain`\n\n4. 
如果使用 `tensorflow>=2.16`：\n    - 安装 **tf_keras**：`pip install tf_keras`\n    - 在导入 **ktrain** 前设置环境变量 `TF_USE_LEGACY_KERAS=True`\n\n\n在 Linux 系统和 Google Colab\u002FAWS EC2 等云环境中，只需以上步骤即可完成安装。如果在**Windows 系统**上使用 **ktrain**，请参考这些[详细安装指南](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002FFAQ.md#how-do-i-install-ktrain-on-a-windows-machine)（包含额外步骤）。\n\n#### TensorFlow 版本注意事项\n- 从 `tensorflow>=2.11` 开始，必须使用旧版优化器如 `tf.keras.optimizers.legacy.Adam`。当前不支持新版 `tf.keras.optimizers.Optimizer` 基类。例如在 TensorFlow 2.11 及以上版本中，请在 `model.compile` 中使用 `tf.keras.optimizers.legacy.Adam()` 而非字符串 `\"adam\"`。**ktrain** 在使用开箱即用模型时（如 `transformers` 库的模型）会自动处理此问题。\n- 如上所述，由于 TensorFlow 2.16 的破坏性变更，需要安装 `tf_keras` 包并在导入 **ktrain** 前设置环境变量 `TF_USE_LEGACY_KERAS=True`（例如在 `.bashrc` 中添加 `export TF_USE_LEGACY_KERAS=1`，或在代码顶部添加 `os.environ['TF_USE_LEGACY_KERAS']=\"1\"`）。\n\n#### 安装附加说明\n\n- 某些可选扩展库需要按需安装（注意：**ktrain** 使用了支持 TensorFlow 2 的 `eli5` 和 `stellargraph` 定制版本）：\n```bash\n# 用于图模块：\npip install https:\u002F\u002Fgithub.com\u002Famaiya\u002Fstellargraph\u002Farchive\u002Frefs\u002Fheads\u002Fno_tf_dep_082.zip\n# 用于 text.TextPredictor.explain 和 vision.ImagePredictor.explain：\npip install https:\u002F\u002Fgithub.com\u002Famaiya\u002Feli5-tf\u002Farchive\u002Frefs\u002Fheads\u002Fmaster.zip\n# 用于 tabular.TabularPredictor.explain：\npip install shap\n# 用于 text.zsl（零样本分类器）、text.summarization、text.translation、text.speech：\npip install torch\n# 用于 text.speech：\npip install librosa\n# 用于 tabular.causal_inference_model：\npip install causalnlp\n# 用于 text.summarization.core.LexRankSummarizer：\npip install sumy\n# 用于 text.kw.KeywordExtractor：\npip install textblob\n# 用于生成式人工智能（generative_ai）：\npip install onprem\n```\n\n- **ktrain** 有意将 **transformers** 锁定在较低版本，以兼容旧版 TensorFlow。如果您需要更新版本的 `transformers`，通常可以在安装 **ktrain** **之后**再安全地升级 `transformers`。\n\n- 
从v0.30.x版本开始，TensorFlow安装是可选的，仅在训练神经网络时需要。虽然**ktrain**使用TensorFlow进行神经网络训练，但它也包含多种有用的预训练PyTorch模型和sklearn模型，这些模型可以在不安装TensorFlow的情况下直接使用，如下表所示：\n\n| 功能  | TensorFlow |  PyTorch | Sklearn\n| --- | :-: | :-: | :-: |\n| [训练](https:\u002F\u002Ftowardsdatascience.com\u002Fktrain-a-lightweight-wrapper-for-keras-to-help-train-neural-networks-82851ba889c) 任意神经网络（如文本或图像分类）  |  ✅  | ❌  | ❌  |\n| [端到端问答](https:\u002F\u002Fnbviewer.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fquestion_answering_with_bert.ipynb)（预训练）             |  ✅  | ✅  | ❌  |\n| [基于问答的信息抽取](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fqa_information_extraction.ipynb)（预训练）      |  ✅  | ✅  | ❌  |\n| [零样本分类](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Fzero_shot_learning_with_nli.ipynb)（预训练）   |  ❌  | ✅  | ❌  |\n| [语言翻译](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Flanguage_translation_example.ipynb)（预训练）      |  ❌  | ✅  | ❌  |\n| [文本摘要](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Fexamples\u002Ftext\u002Ftext_summarization_with_bart.ipynb)（预训练）             |  ❌  | ✅  | ❌  |\n| [语音转录](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fspeech_transcription_example.ipynb)（预训练）     |  ❌  | ✅  |❌   |\n| [图像描述生成](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Fvision\u002Fimage_captioning_example.ipynb)（预训练）     |  ❌  | ✅  |❌   |\n| [目标检测](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Fvision\u002Fobject_detection_example.ipynb)（预训练）     |  ❌  | ✅  |❌   |\n| 
[情感分析](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fsentiment_analysis_example.ipynb)（预训练）     |  ❌  | ✅  |❌   |\n| [生成式人工智能](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fgenerative_ai_example.ipynb)（sentence-transformers）     |  ❌  | ✅  |❌   |\n| [主题建模](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fmaster\u002Ftutorials\u002Ftutorial-05-learning_from_unlabeled_text_data.ipynb)（sklearn）  |  ❌  | ❌  | ✅  |\n| [关键词抽取](https:\u002F\u002Fnbviewer.jupyter.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fkeyword_extraction_example.ipynb)（textblob\u002Fnltk\u002Fsklearn）   |  ❌  | ❌  | ✅  |\n\n如上所述，**ktrain**中的端到端问答和信息抽取功能既可使用TensorFlow（`framework='tf'`）也可使用PyTorch（`framework='pt'`）。\n\n### 如何引用\n\n使用**ktrain**时请引用[以下论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.10703)：\n```\n@article{maiya2020ktrain,\n    title={ktrain: A Low-Code Library for Augmented Machine Learning},\n    author={Arun S. Maiya},\n    year={2020},\n    eprint={2004.10703},\n    archivePrefix={arXiv},\n    primaryClass={cs.LG},\n    journal={arXiv preprint arXiv:2004.10703},\n}\n```\n\n----\n**创建者：[Arun S. 
Maiya](http:\u002F\u002Farun.maiya.net)**\n\n**邮箱：** arun [at] maiya [dot] net","# ktrain 中文快速上手指南\n\n---\n\n## 环境准备\n- **系统要求**：Python 3.6+（建议使用 Python 3.8+）\n- **前置依赖**：\n  - TensorFlow（自动安装）\n  - 常用科学计算库（numpy, pandas, scikit-learn 等）\n  - 文本\u002F图像处理依赖（根据任务自动安装）\n\n---\n\n## 安装步骤\n```bash\n# 推荐使用国内镜像加速安装\npip install ktrain -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n---\n\n## 基本使用\n### 文本分类示例（使用 BERT）\n```python\nimport ktrain\nfrom ktrain import text\n\n# 加载数据（以 IMDb 电影评论数据集为例）\n(x_train, y_train), (x_test, y_test), preproc = text.texts_from_array(\n    x_train=['I love this movie', 'This film was terrible'],\n    y_train=[1, 0],\n    x_test=['Great acting', 'Poor script'],\n    y_test=[1, 0],\n    class_names=['negative', 'positive'],\n    preprocess_mode='bert'\n)\n\n# 构建 BERT 模型\nmodel = text.text_classifier('bert', (x_train, y_train), preproc=preproc)\n\n# 训练模型\nlearner = ktrain.get_learner(model, train_data=(x_train, y_train), val_data=(x_test, y_test))\nlearner.fit_onecycle(2e-5, 3)  # 学习率 2e-5，训练 3 轮\n\n# 评估模型\nlearner.validate()\n\n# 预测新数据\npredictor = ktrain.get_predictor(model, preproc)\npredictor.predict(['The movie was fantastic!'])  # 输出: ['positive']\n```\n\n---\n\n### 图像分类示例（使用 ResNet50）\n```python\nimport ktrain\nfrom ktrain import vision\n\n# 加载图像数据（需替换为实际路径）\ntrain_data, val_data, preproc = vision.images_from_folder(\n    datadir='path\u002Fto\u002Fimages',\n    train_test_names=['train', 'test'],\n    target_size=(224, 224),\n    color_mode='rgb'\n)\n\n# 构建 ResNet50 模型\nmodel = vision.image_classifier('resnet50', train_data, val_data, preproc=preproc)\n\n# 训练模型\nlearner = ktrain.get_learner(model, train_data=train_data, val_data=val_data)\nlearner.fit_onecycle(1e-3, 5)  # 学习率 1e-3，训练 5 轮\n\n# 预测新图像\npredictor = ktrain.get_predictor(model, preproc)\npredictor.predict('path\u002Fto\u002Ftest_image.jpg')\n```\n\n---\n\n**提示**：完整示例可参考官方 [Colab 
教程](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WipQJUPL7zqyvLT10yekxf_HNMXDDtyR)","某电商平台需要快速构建一个高精度的商品评论情感分析系统，以实时识别用户评论中的负面情绪并触发客服响应。\n\n### 没有 ktrain 时\n- 数据预处理需要手动编写代码清洗文本、构建词表并处理序列长度不一致问题，耗时且容易出错\n- 需要从头选择模型架构（如LSTM\u002FTransformer）并手动调整超参数，对团队的NLP经验要求较高\n- 训练过程缺乏可视化监控，难以及时发现过拟合或学习率设置不当等问题\n- 部署模型需要额外开发API服务代码，涉及模型序列化、推理优化等复杂步骤\n- 最终模型准确率波动较大，需要反复迭代特征工程和模型调参\n\n### 使用 ktrain 后\n- 通过`ktrain.text.texts_from_array`自动完成文本标准化、分词和填充，数据准备时间缩短70%\n- 直接调用预置的BERT或DistilBERT模型（如`ktrain.text.text_classifier('distilbert')`），无需设计网络结构即可获得SOTA级基线\n- 内置的`Learner`类自动添加训练可视化、学习率预估和早停机制，训练效率提升3倍\n- 使用`predictor.save`导出模型后，通过`ktrain.load_predictor`即可在 Flask 等 Web 服务中直接加载并推理，部署周期从天级缩短到小时级\n- 预训练模型配合自动超参优化，在相同数据集上准确率提升12%，且保持稳定输出\n\nktrain通过标准化流程和开箱即用的模型库，使团队在3天内完成从数据准备到服务上线的全流程，将NLP项目落地成本降低60%以上。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Famaiya_ktrain_8fd47f36.png","amaiya","Arun S. Maiya","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Famaiya_80488726.png","computer scientist",null,"http:\u002F\u002Farun.maiya.net","https:\u002F\u002Fgithub.com\u002Famaiya",[84,88],{"name":85,"color":86,"percentage":87},"Jupyter Notebook","#DA5B0B",81.3,{"name":89,"color":90,"percentage":91},"Python","#3572A5",18.7,1264,260,"2026-03-16T16:21:41","Apache-2.0",1,"未说明",{"notes":99,"python":100,"dependencies":101},"首次运行需下载预训练模型文件，建议使用conda管理环境","3.8+",[102,103,104,105,106,107,108,109,110,111],"tensorflow","transformers>=4.30","accelerate","torch","scikit-learn","numpy","pandas","matplotlib","jupyter","pillow",[51,14,13,26],[114,115,102,116,117,118,119,120,121],"deep-learning","machine-learning","keras","python","nlp","computer-vision","graph-neural-networks","tabular-data","2026-03-27T02:49:30.150509","2026-04-06T08:18:33.081213",[125,130,134,138,143,148],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},5017,"如何从本地路径加载HuggingFace Transformers模型？","需确保模型路径正确，并通过`transformers`库验证模型是否可加载。具体代码：\n```python\nfrom transformers import AutoTokenizer, 
TFAutoModelForSequenceClassification\npath_to_your_model = r\"D:\\programming\\models\\tf_rbtl\"  # 使用原始字符串避免反斜杠被转义\ntokenizer = AutoTokenizer.from_pretrained(path_to_your_model)\nmodel = TFAutoModelForSequenceClassification.from_pretrained(path_to_your_model, from_pt=False)\n```\n同时检查`transformers`库版本是否兼容。","https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fissues\u002F62",{"id":131,"question_zh":132,"answer_zh":133,"source_url":129},5018,"使用GPU时模型加载失败怎么办？","尝试禁用GPU后加载模型。例如在TensorFlow中关闭GPU支持，再调用`get_classifier()`方法创建模型。",{"id":135,"question_zh":136,"answer_zh":137,"source_url":129},5019,"模型加载错误是否与版本有关？","可能是包版本问题。升级`ktrain`和相关依赖库（如`transformers`）后问题可能解决。",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},5020,"如何加载已训练的ktrain模型？","使用`ktrain.load_predictor`并指定模型路径。例如：\n```python\npredictor = ktrain.load_predictor('\u002Fcontent\u002Fdrive\u002FMyDrive\u002Fage_predictor')\n```\n需确保路径正确且模型文件完整。","https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fissues\u002F354",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},5021,"如何使用RoBERTa-like模型（如CodeBERT）进行NER任务？","需修改tokenization逻辑以保证token与嵌入向量对齐。关键代码：\n```python\ndef fix_tokenization(self, X, Y):\n    encode = self.te.tokenizer\n    new_X, new_Y = [], []\n    for x in X:\n        encoded = encode(x, add_special_tokens=False, return_offsets_mapping=True)\n        # 根据offsets映射生成与输入一致的token向量\n```\n已通过`nerroberta`分支支持该功能。","https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fissues\u002F437",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},5022,"如何加速SimpleQA的索引过程？","使用`index_from_folder`或`index_from_list`时添加参数优化性能。例如：\n```python\ntext.SimpleQA.index_from_folder('.\u002FPhilosophy', INDEXDIR, procs=4, 
multisegment=True)\n```\n其中`procs`控制并行进程数，`multisegment=True`提升长文本处理效率。","https:\u002F\u002Fgithub.com\u002Famaiya\u002Fktrain\u002Fissues\u002F157",[154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249],{"id":155,"version":156,"summary_zh":157,"released_at":158},114249,"v0.41.4","## 0.41.4 (2024-06-18)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Remove references to `paper-qa` (#530)\r\n- Reduce memory footprint of `TopicModel.filter` (#531)","2024-06-19T01:14:13",{"id":160,"version":161,"summary_zh":162,"released_at":163},114250,"v0.41.3","\r\n## 0.41.3 (2024-04-05)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Removed `tf_keras` as a dependency due to issues in various dependencies\r\n  related to TF 2.16 and allow TF to prompt user for it (#528)\r\n- Removed auto-setting `TF_USE_LEGACY_KERAS`, as it causes problems in `tensorflow\u003C2.16` (#528)\r\n- Unpin `transformers` due to incompatibilities with different versions of TensorFlow.","2024-04-05T19:20:07",{"id":165,"version":166,"summary_zh":167,"released_at":168},114251,"0.41.2","\r\n## 0.41.2 (2024-03-11)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Added `tf_keras` to dependencies and set `TF_USE_LEGACY_KERAS` (#525)","2024-04-02T21:08:28",{"id":170,"version":171,"summary_zh":172,"released_at":173},114252,"v0.41.1","## 0.41.1 (2024-03-02)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- temporarily pinning to `transformers==4.37.2` due to issue (#523) on Google Colab","2024-03-02T14:45:16",{"id":175,"version":176,"summary_zh":177,"released_at":178},114253,"v0.41.0","## 0.41.0 (2024-02-20)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- **Breaking Change**: Removed the `ktrain.text.qa.generative_qa` module. 
Users should use our [OnPrem.LLM](https:\u002F\u002Famaiya.github.io\u002Fonprem\u002Fexamples_rag.html) for generative question-answering (#522)\r\n\r\n### fixed:\r\n- use arrays in `TextPredictor` due to possible issues with `tf.Dataset` (#521)","2024-02-20T16:04:16",{"id":180,"version":181,"summary_zh":182,"released_at":183},114254,"v0.40.0","\r\n## 0.40.0 (2024-01-27)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- Changed `shallownlp.classifier` API with respect to hyperparameters and defaults\r\n\r\n### fixed:\r\n- Ensure weight files in checkpoint folder have `val_loss` in file name (#519)","2024-01-28T02:04:42",{"id":185,"version":186,"summary_zh":187,"released_at":188},114255,"v0.39.0","## 0.39.0 (2023-11-18)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- Changes to custom `eli5` and `stellargraph` to support Python 3.11 (#515)\r\n\r\n### fixed:\r\n- Switch from unmaintained `cchardet` to `charset-normalizer` (#512)\r\n- Use `textract-py3` instead of `textract` (#511)","2023-11-18T20:03:36",{"id":190,"version":191,"summary_zh":192,"released_at":193},114256,"v0.38.0","## 0.38.0 (2023-09-05)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- **Breaking Change**: The `generative_ai.LLM` class replaces `generative_ai.GenerativeAI` and is now powered by our [OnPrem.LLM](https:\u002F\u002Fgithub.com\u002Famaiya\u002Fonprem) package (see [example notebook](https:\u002F\u002Fnbviewer.org\u002Fgithub\u002Famaiya\u002Fktrain\u002Fblob\u002Fdevelop\u002Fexamples\u002Ftext\u002Fgenerative_ai_example.ipynb)).\r\n- `GenerativeQA` now recommends `langchain==0.0.240`\r\n\r\n### fixed:\r\n- N\u002FA","2023-09-05T19:18:15",{"id":195,"version":196,"summary_zh":197,"released_at":198},114257,"v0.37.6","## 0.37.6 (2023-07-23)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Removed pin to `paper-qa==2.1.1` due to issue in latest `langchain` release. 
Added notification to install `langchain==0.0.180`","2023-07-23T14:16:32",{"id":200,"version":201,"summary_zh":202,"released_at":203},114258,"v0.37.3","## 0.37.3 (2023-07-22)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- fix `eda.py` topic visualization to work with `bokeh>=3.0.0` (#504)","2023-07-22T14:30:53",{"id":205,"version":206,"summary_zh":207,"released_at":208},114259,"v0.37.5","## 0.37.5 (2023-07-22)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Removed pin on `scikit-learn`, as `eli5-tf` repo was updated to support `scikit-learn>=1.3` (#505)\r\n- pin to `paper-qa==2.1.1` due to breaking changes (#506)","2023-07-22T23:36:41",{"id":210,"version":211,"summary_zh":212,"released_at":213},114260,"v0.37.4","## 0.37.4 (2023-07-22)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Temporarily pin to `scikit-learn\u003C1.3` to avoid `eli5` import error (#505)\r\n- Temporarily changed `generative_qa` imports to avoid `OPENAI_API_KEY` error (#506)","2023-07-22T19:27:44",{"id":215,"version":216,"summary_zh":217,"released_at":218},114261,"v0.37.2","## 0.37.2 (2023-06-14)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- `text.models`, `vision.models`, and `tabular.models` now all automatically set metrics to use `binary_accuracy` for multilabel problems (#500)\r\n\r\n### fixed:\r\n- fix `validate` to support multilabel classification problems (#498)\r\n- add a warning to `TransformerPreprocessor.get_classifier` to use `binary_accuracy` for multilabel problems (#498)","2023-06-14T21:41:26",{"id":220,"version":221,"summary_zh":222,"released_at":223},114262,"v0.37.1","## 0.37.1 (2023-06-05)\r\n\r\n### new:\r\n- Supply arguments to `generate` in `TransformerSummarizer.summarize`\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- N\u002FA","2023-06-05T18:21:40",{"id":225,"version":226,"summary_zh":227,"released_at":228},114263,"v0.37.0","## 0.37.0 
(2023-05-11)\r\n\r\n### new:\r\n- Support for **Generative Question-Answering** powered by OpenAI models, LangChain, and Paper-QA.  Ask questions to any set of documents and get back answers with citations to where the answer was found in your documents.\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- N\u002FA","2023-05-11T01:08:23",{"id":230,"version":231,"summary_zh":232,"released_at":233},114264,"v0.36.1","## 0.36.1 (2023-05-09)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- resolved issue with using DeBERTa embedding models with NER (#492)","2023-05-09T21:50:34",{"id":235,"version":236,"summary_zh":237,"released_at":238},114265,"v0.36.0","## 0.36.0 (2023-04-21)\r\n\r\n### new:\r\n- easy-to-use-wrapper for sentiment analysis\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- N\u002FA","2023-04-21T21:00:14",{"id":240,"version":241,"summary_zh":242,"released_at":243},114266,"v0.35.1","## 0.35.1 (2023-04-02)\r\n\r\n### new:\r\n- N\u002FA\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Ensure `do_sample=True` for `GenerativeAI`","2023-04-02T17:51:26",{"id":245,"version":246,"summary_zh":247,"released_at":248},114267,"v0.35.0","## 0.35.0 (2023-04-01)\r\n\r\n### new:\r\n- Support for generative AI with few-shot and zero-shot prompting using a GPT-based model that can run on your own machine.\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- N\u002FA","2023-04-01T23:52:14",{"id":250,"version":251,"summary_zh":252,"released_at":253},114268,"v0.34.0","## 0.34.0 (2023-03-30)\r\n\r\n### new:\r\n- Support for LexRank summarization\r\n\r\n### changed\r\n- N\u002FA\r\n\r\n### fixed:\r\n- Bug fix in `dataset` module (#486)","2023-03-31T01:43:02"]