[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-cognitivetech--ollama-ebook-summary":3,"tool-cognitivetech--ollama-ebook-summary":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":79,"owner_url":82,"languages":83,"stars":100,"forks":101,"last_commit_at":102,"license":79,"difficulty_score":10,"env_os":103,"env_gpu":104,"env_ram":104,"env_deps":105,"category_tags":112,"github_topics":113,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":154},1239,"cognitivetech\u002Follama-ebook-summary","ollama-ebook-summary","LLM for Long Text Summary (Comprehensive Bulleted Notes)","ollama-ebook-summary 是一个基于大语言模型的工具，用于自动生成书籍或其他长文本的要点式摘要，尤其适用于 EPUB 和 PDF 格式的电子书。它能够自动提取书籍目录，将内容分割为适合模型处理的片段（约 2000 个 token），并生成结构清晰、易于理解的笔记式总结。\n\n这个工具解决了传统阅读和摘要方式效率低、难以抓住重点的问题，特别适合需要快速掌握书籍核心内容的用户。通过将文档拆分成多个部分，并对每个部分进行提问，它能更全面地提取信息，避免遗漏关键点。\n\n它适合研究人员、学生、知识工作者以及任何需要高效处理大量文本的用户使用。对于开发者来说，它也提供了灵活的配置选项和模型支持，如 Ollama 和 HuggingFace 模型。\n\n其独特之处在于采用了一种不同于 RAG 的方法，通过对文档所有部分提出相同问题，从而更充分地利用大语言模型的能力，无需依赖多个第三方应用。","# Bulleted Notes Book Summaries\n\n_Built With: Python 3.11.9_\n\n## Introduction\nThis project creates bulleted notes summaries of books and other long texts, particularly epub and pdf which have ToC metadata available.\n\nWhen the ebooks contain appropriate 
metadata, we are able to easily automate the extraction of chapters from most books, and split them into ~2000 token chunks, with fallbacks in case we are unable to access a document outline.\n\n### Why 2000 tokens?\n[*Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models*](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2402.14848) (2024-02-19; Mosh Levy, Alon Jacoby, Yoav Goldberg) suggests that reasoning capacity drops off pretty sharply from 250 to 1000 tokens, starting to flatten out between 2000-3000 tokens.\n\n![](https:\u002F\u002Fi.imgur.com\u002FnyDkAzP.png)\n\nThis corresponds to my own experience while summarizing many long documents using local LLMs.\n\nYou can check the [depreciated walkthroughs and rankings](notes\u002Fdepreciated\u002F) for more background on how I got here.\n\n### Comparison with RAG\n\nSimilar to Retrieval Augmented Generation (RAG), we split the document into many parts, so they fit into the context. The difference is that RAG systems try to determine which chunk is best suited to answer their question. 
Instead, we ask the same questions to *every part of the document*.\n\nThis is key to unlocking the full capabilities of LLMs without relying on a multitude of 3rd-party apps.\n\n## Contents\n- [Setup](#setup)\n  - [Python Environment](#python-environment)\n  - [Install Dependencies](#install-dependencies)\n  - [Download Models](#download-models)\n  - [Update Config File](#update-config-file-_configyaml)\n- [Usage](#usage)\n  - [Convert E-book to chunked CSV or TXT](#convert-e-book-to-chunked-csv-or-txt)\n  - [Generate Summary](#generate-summary)\n- [Semi-Manual with Prototypes](#semi-manual-with-prototypes)\n- [Models](#models)\n  - [Ollama](#ollama)\n  - [HuggingFace](#huggingface)\n- [Check your Document Outline](#check-your-ebook-for-document-outline)\n  - [Firefox](#firefox)\n  - [Brave](#brave)\n- [Disclaimer](#disclaimer)\n- [Inspiration](#inspiration)\n- [Resources](#resources)\n\n## Setup\n### Python Environment\n\nBefore starting, ensure you have Python 3.11.9 installed. If not, you can use conda or pyenv to manage Python versions:\n\n**Using conda:**\n1. Install Anaconda from: https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002Fsuccess\n2. Create a new environment: `conda create -n book_summary python=3.11.9`\n3. Activate the environment: `conda activate book_summary`\n\n**Using pyenv:**\n1. Install pyenv: https:\u002F\u002Fgithub.com\u002Fpyenv\u002Fpyenv#installation\n2. Install Python 3.11.9: `pyenv install 3.11.9`\n3. Set local version: `pyenv local 3.11.9`\n\n### Install Dependencies\n```\npip install -r requirements.txt\n```\n- [Install Ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama?tab=readme-ov-file#ollama)\n\n### Download Models\n\n#### 1. **Download a copy of Mistral Instruct v0.2 Bulleted Notes Fine-Tune**\n\n`ollama pull cognitivetech\u002Fobook_summary:q6_k`\n\n#### 2. 
**Download a title model**\n\n##### a) *Download a preconfigured model*\n\n`ollama pull cognitivetech\u002Fobook_title:q4_k_m`\n\nFor your convenience, Mistral 7B 0.3 is packaged with the necessary message history for title creation.\n\n***or***\n\n##### b) *Append this* [message history](Modelfile) *to the Modelfile of your choice*\n\n#### 3. **Download a general-purpose model**\n`ollama pull gemma2`\n\n### Update Config File `_config.yaml`\n\nEnsure the defaults are set accordingly!\n\n> This is an area subject to change and may differ from the documentation. **Make sure you have the models on your system as noted in `summary`, `general`, and `title` in the current [_config.yaml](.\u002F_config.yaml).** I have to clean up this aspect of the code, but I'm still working on that.\n\n```yaml\ndefaults:\n  prompt: bnotes\n  summary: cognitivetech\u002Fobook_summary:q6_k # default model for summaries\n  general: gemma2                           # default model for basic summary\n  title: cognitivetech\u002Fobook_title:q4_k_m   # default model for title generation\nprompts:\n  bnotes: # Default Prompt\n    prompt: Write comprehensive bulleted notes summarizing the provided text, with\n      headings and terms in bold.\n  research: # Also for use with summary model\n    prompt: Does this text make any arguments? If so list them here.\n  clean:  # The following prompts should be used with a general purpose model.\n    prompt: Repeat back this text exactly, remove only garbage characters that do\n      not contribute to the flow of text. Output only the main text content, condensed\n      onto a single line. 
If you encounter any chapter boundaries or subheadings,\n      start a new line beginning with its title.\n  concise:\n    prompt: Repeat the provided passage, with Concision.\n  md:\n    prompt: 'Print these notes in proper markdown format, with headings marked as\n      bold with double asterisks and terms in bold also, and bullet points as `-`.\n      Print the notes exactly, word-for-word, do not elaborate, do not add headings\n      with #'\n  sum: # basic\n    prompt: Comprehensive bulleted notes with headings and terms in bold.\n  teacher:\n    prompt: 'Write a list of questions that can be answered by 3rd graders who are\n      reading the provided text. Topics we like to focus on include: Main idea, supporting\n      details, Point of view, Theme, Sequence, Elements of fiction (setting, characters,\n      BME)'\n  quotes:\n    prompt: 'write a few dozen quotes inspired by the provided text'\ntitle_generation:\n  prompt: Write a title with fewer than 11 words to concisely describe this selection.\n```\n\n## Usage \n### Convert E-book to chunked CSV or TXT\n\n#### 1. Use automated script to split your `pdf` or `epub`.\n```bash\npython3 book2text.py ebook-name.epub # or ebook-name.pdf (Epub is preferred)\n```\n\n**This step produces two outputs**:\n- `out\u002Febook-name.csv` (split by chapter or section)\n- `out\u002Febook-name_processed.csv` (chunked)\n\n***or***\n\n#### 2. 
Remove or escape all newlines within each chunk, so they may be placed line by line [in a text file](notes\u002Fdepreciated\u002Fsummarize.txt), with each line surrounded by double quotes.\n\u003Ca href=\"notes\u002Fdepreciated\u002Fsummarize.txt\">\u003Cimg width=\"1163\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_686dfc5e1b01.png\">\u003C\u002Fa>\n\n\\*_Note: be cautious to properly escape or replace double quotes within each chunk._\n\n### Generate Summary\n\n`$ python3 sum.py --help`\n\n```bash\nUsage: python sum.py [OPTIONS] input_file\n\nOptions:\n-c, --csv        Process a CSV file. Expected columns: Title, Text\n-t, --txt        Process a text file. Each line should be a separate text chunk.\n-m, --model      Model name to use for generation (default from config)\n-p, --prompt     Alias of the prompt to use from config (default from config)\n-v, --verbose    Print markdown output additionally to terminal\n--help           Show this help message and exit.\n\nFor CSV input:\n- Ensure your CSV has 'Title' and 'Text' columns.\n\nFor Text input:\n- Each line should be a chunk of text surrounded by double quotes.\n\nThe output CSV will include:\n- Title: Final title chosen or generated\n- Gen: Boolean indicating if the title was generated\n- Text: Original input text\n- model_name: Generated output\n- Time: Processing time in seconds\n- Len: Length of the output\n```\n\nIf you have your defaults set, all you need to do is specify which type of input: manual `text` or automated `csv`. 
\n```\npython3 sum.py -c ebook-name_processed.csv\n```\n\n## Semi-Manual with Prototypes\n\nIn this example, I've used a prototype [split_pdf.py](tools-prototype\u002Fsplit_pdf.py) to split the pdf not only by chapter but also by subsection (producing `ebook-name_extracted.csv`), then manually processed that output (using [vscode](https:\u002F\u002Fcode.visualstudio.com\u002F)) to place each chunk [on a single line](notes\u002Fdepreciated\u002Fsummarize.txt) surrounded by double quotes.\n\nEventually that will be automated, but it presents challenges, which you will notice, that have prevented me from finishing that tool.\n\n**Split**:\n```\ntools-prototype\u002Fsplit_pdf.py ebook-name.pdf # produces ebook-name_extracted.csv\n```\n\n**Process**:\n```\npython3 sum.py -t ebook-name_extracted.csv\n```\n\n**This step generates two outputs**:\n- `ebook-name_extracted_processed_sum.md` (rendered markdown)\n- `ebook-name_extracted_processed_sum.csv` (csv with: input text, flattened md output, generation time, output length)\n\n## Models\nDownload from one of two sources:\n\n### Ollama\nYou can get any of them right from Ollama; the template is included in all.\nExample: `ollama pull obook_summary:q5_k_m`\n\n- [obook_summary](https:\u002F\u002Follama.com\u002Fcognitivetech\u002Fobook_summary) - On Ollama.com\n  - `latest` • 7.7GB • Q_8\n  - `q3_k_m` • 3.5GB\n  - `q4_k_m` • 4.4GB\n  - `q5_k_m` • 5.1GB\n  - `q6_k` • 5.9GB (preferred)\n- [obook_title](https:\u002F\u002Follama.com\u002Fcognitivetech\u002Fobook_title) - On Ollama.com\n  - `latest` • 7.7GB • Q_8\n  - `q3_k_m` • 3.5GB\n  - `q4_k_m` • 4.4GB\n  - `q5_k_m` • 5.1GB\n  - `q6_k`   • 5.9GB (preferred)\n\n### HuggingFace\nThere are also complete weights, LoRA, and GGUF on HuggingFace.\n- [Mistral Instruct Bulleted Notes](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fcognitivetech\u002Fmistral-instruct-bulleted-notes-v02-66b6e2c16196e24d674b1940) - Collection on HuggingFace\n  - 
[cognitivetech\u002FMistral-7B-Inst-0.2-Bulleted-Notes](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002FMistral-7B-Inst-0.2-Bulleted-Notes)\n  - [cognitivetech\u002FMistral-7b-Inst-0.2-Bulleted-Notes_GGUF](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002Fcognitivetech\u002FMistral-7b-Inst-0.2-Bulleted-Notes_GGUF)\n  - [cognitivetech\u002FMistral-7B-Inst-0.2_Bulleted-Notes_LoRA](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002Fcognitivetech\u002FMistral-7B-Inst-0.2_Bulleted-Notes_LoRA)\n\n## Check your eBook for Document Outline\n\nHere you can see how to check whether your eBook has the proper formatting or not. **With ePub it should fail gracefully**.\n\n\\* In some rare occasions, even with a clickable ToC, the script will not find it.\n\n### Firefox\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_34ec70c87242.png)\n\n### Brave\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_ba5472f63bef.png)\n\n## Disclaimer\n\nYou are responsible for verifying that the summary tool creates an accurate summary. There are a variety of issues which can interfere with a quality summary, and which may slip your notice if you aren't paying attention.\n\n**1. References:**\n\nPersonally, I don't trust references from my fine-tuned model without verifying them manually. Maybe this is solved in newer models, but during my testing phase I noticed some bad references with the 7b models I was using. I never fully tested the app's handling of references; my personal preference is to remove any long references sections before summarizing, and deal with those separately. I don't think this is a permanent blocker, just an area that I haven't fully dealt with or understood, yet.\n\n**2. Other:**\n\nThere are a few other things to watch out for. 
\n\nOne of the reasons I keep the length of the input and output in the CSV is that it makes it easy to check when a summary is longer than the input; that's a red flag.\n\nWhen the structure of the summary greatly deviates from the others, this can indicate issues with the summary. Some of these can be related to special characters, or to the input being too long so the AI just doesn't grasp it all.\n\n## Inspiration\n\nThe inspiration for this app was my intention to manually summarize a dozen books so I could tie together the psychological theory and practice they discuss and make a cohesive argument based on that information.\n\nI've already read the books a few times, but now I need easy access to the information within so that I can relate it to others in a cohesive fashion.\n\nOriginally, after working at this project manually for a week, I was only a few chapters into my first book, and I could see this was going to take a long time.\n\nOver the next 6 months I began learning how to use LLMs, discovering which were best for my task, with fine-tuning to deliver production-quality consistency in the results.\n\nNow with this tool, I'm able to review a lot more material more quickly. 
This is a content curation tool that empowers me to not only learn things but more readily share that knowledge, without having to spend the ages it takes to create quality content.\n\nMoreover, it can be used to create custom datasets based on whatever source materials you throw at it.\n\n## Resources\n* [Summarizing Books](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fsummarizing-books) OpenAI\n\n### Leaderboards\n* [Small LLM Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fw601sxs\u002FSLM-Leaderboard) HuggingFace\n* [HuggingFace - Open LLM Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FHuggingFaceH4\u002Fopen_llm_leaderboard)\n* [Chatbot Arena Leaderboard](https:\u002F\u002Fchat.lmsys.org\u002F)\n* [Hallucination leaderboard](https:\u002F\u002Fgithub.com\u002Fvectara\u002Fhallucination-leaderboard\u002F) Vectara\n","# 项目简介：书本摘要的要点笔记\n\n_使用 Python 3.11.9 构建_\n\n## 引言\n本项目旨在为书籍及其他长篇文本生成要点式摘要，尤其适用于包含目录元数据的 EPUB 和 PDF 文件。\n\n当电子书具备合适的元数据时，我们能够轻松实现对大多数书籍章节的自动化提取，并将其拆分为约 2000 个 token 的小块；同时，若无法获取文档大纲，则提供备用方案。\n\n### 为何选择 2000 个 token？\n[*同一任务、更多 token：输入长度对大型语言模型推理性能的影响*](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2402.14848)（2024 年 2 月 19 日；Mosh Levy、Alon Jacoby、Yoav Goldberg）指出，从 250 个 token 到 1000 个 token，模型的推理能力会急剧下降；而当 token 数量达到 2000–3000 个时，这种下降趋势则趋于平缓。\n\n![](https:\u002F\u002Fi.imgur.com\u002FnyDkAzP.png)\n\n这一结论也与我在使用本地大语言模型总结大量长文档时的个人经验相符。\n\n您可以在 [已弃用的使用指南与排名](notes\u002Fdepreciated\u002F) 中查看我如何得出这些结论的更多背景信息。\n\n### 与 RAG 的比较\n\n与检索增强生成（RAG）类似，我们也将文档拆分成多个部分，以便其内容能够适配上下文。不同之处在于，RAG 系统会尝试判断哪一部分最适合回答用户的问题；而我们的方法则是将相同的问题向 *文档的每一部分* 提出。\n\n这一点对于在不依赖众多第三方应用的情况下充分发挥大语言模型的能力至关重要。\n\n## 目录\n- [设置](#setup)\n  - [Python 环境](#python-environment)\n  - [安装依赖](#install-dependencies)\n  - [下载模型](#download-models)\n  - [更新配置文件](#update-config-file-_configyaml)\n- [使用](#usage)\n  - [将电子书转换为分块的 CSV 或 TXT](#convert-e-book-to-chunked-csv-or-txt)\n  - [生成摘要](#generate-summary)\n- [半手动原型方法](#semi-manual-with-prototypes)\n- 
[模型](#models)\n  - [Ollama](#ollama)\n  - [HuggingFace](#huggingface)\n- [检查您的电子书是否有文档大纲](#check-your-ebook-for-document-outline)\n  - [Firefox](#firefox)\n  - [Brave](#brave)\n- [免责声明](#disclaimer)\n- [灵感来源](#inspiration)\n- [资源](#resources)\n\n## 设置\n### Python 环境\n\n在开始之前，请确保已安装 Python 3.11.9。若未安装，您可以使用 Conda 或 Pyenv 来管理 Python 版本：\n\n**使用 Conda：**\n1. 从 https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002Fsuccess 下载并安装 Anaconda。\n2. 创建新环境：`conda create -n book_summary python=3.11.9`\n3. 激活环境：`conda activate book_summary`\n\n**使用 Pyenv：**\n1. 安装 Pyenv：https:\u002F\u002Fgithub.com\u002Fpyenv\u002Fpyenv#installation\n2. 安装 Python 3.11.9：`pyenv install 3.11.9`\n3. 设置本地版本：`pyenv local 3.11.9`\n\n### 安装依赖\n```\npip install -r requirements.txt\n```\n- [安装 Ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama?tab=readme-ov-file#ollama)\n\n### 下载模型\n\n#### 1. **下载 Mistral Instruct v0.2 要点笔记微调版**\n\n`ollama pull cognitivetech\u002Fobook_summary:q6_k`\n\n#### 2. **下载标题生成模型**\n\n##### a) *下载预配置模型*\n\n`ollama pull cognitivetech\u002Fobook_title:q4_k_m`\n\n为方便起见，Mistral 7b 0.3 已打包了用于标题生成的必要消息历史。\n\n***或***\n\n##### b) *将此*[消息历史](Modelfile) *附加到您选择的模型文件中*\n\n#### 3. 
**下载通用模型**\n`ollama pull gemma2`\n\n### 更新配置文件 `_config.yaml`\n\n请确保默认设置已正确配置！\n\n> 此处可能存在变动，与文档描述有所不同。**请务必按照当前 [_config.yaml](.\u002F_config.yaml) 中 `summary`、`general` 和 `title` 部分的说明，在您的系统上安装相应模型。** 我正在清理代码中的这一部分，但仍在进行中。\n\n```yaml\ndefaults:\n  prompt: bnotes\n  summary: cognitivetech\u002Fobook_summary:q6_k # 默认摘要模型\n  general: gemma2                           # 默认基础摘要模型\n  title: cognitivetech\u002Fobook_title:q4_k_m   # 默认标题生成模型\nprompts:\n  bnotes: # 默认提示\n    prompt: 编写全面的要点笔记，概括所提供的文本，使用加粗的标题和术语。\n  research: # 也可用于摘要模型\n    prompt: 这段文字是否提出了任何论点？如果是，请在此列出。\n  clean:  # 以下提示应与通用模型配合使用。\n    prompt: 原封不动地复述这段文字，仅删除那些不影响文本流畅性的无用字符。只输出主要文本内容，浓缩成一行。如果遇到章节边界或子标题，就在其标题前另起一行。\n  concise:\n    prompt: 复述所提供的段落，要求简洁明了。\n  md:\n    prompt: “以规范的 Markdown 格式打印这些笔记，用双星号标记加粗的标题和术语，用短横线表示项目符号。严格按照原文逐字打印，不得扩展，也不得添加以 # 开头的标题。”\n  sum: # 基础\n    prompt: 包含加粗标题和术语的全面要点笔记。\n  teacher:\n    prompt: “编写一份问题清单，供三年级学生阅读所提供的文本时作答。我们关注的主题包括：主旨、支持细节、观点、主题、顺序以及小说要素（背景、人物、BME）。”\n  quotes:\n    prompt: “根据所提供的文本，撰写几十条受启发的引语。”\ntitle_generation:\n  prompt: 写一个不超过 11 个单词的标题，简明扼要地描述这一选段。\n```\n\n## 使用\n### 将电子书转换为分块的 CSV 或 TXT\n\n#### 1. 使用自动化脚本拆分您的 `pdf` 或 `epub` 文件。\n```bash\npython3 book2text.py ebook-name.epub # 或 ebook-name.pdf（优先推荐 EPUB）\n```\n\n**这一步会产生两个输出**：\n- `out\u002Febook-name.csv`（按章节或部分拆分）\n- `out\u002Febook-name_processed.csv`（分块处理后的文件）\n\n***或***\n\n#### 2. 
删除或转义每个分块内的所有换行符，以便它们可以逐行放置在[文本文件](notes\u002Fdepreciated\u002Fsummarize.txt)中，每行都用双引号括起来。\n\u003Ca href=\"notes\u002Fdepreciated\u002Fsummarize.txt\">\u003Cimg width=\"1163\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_686dfc5e1b01.png\">\u003C\u002Fa>\n\n\\*_请注意，务必正确转义或替换每个分块中的双引号。_\n\n### 生成摘要\n\n```\n$ python3 sum.py --help\n```\n\n```bash\n用法：python sum.py [选项] 输入文件\n\n选项：\n-c, --csv        处理 CSV 文件。预期列：标题、文本\n-t, --txt        处理文本文件。每行应为一个独立的文本片段。\n-m, --model      用于生成的模型名称（默认从配置中获取）\n-p, --prompt     配置中指定的提示别名（默认从配置中获取）\n-v, --verbose    在终端输出基础上额外打印 Markdown 格式的结果\n--help           显示此帮助信息并退出。\n\n对于 CSV 输入：\n- 确保您的 CSV 文件包含“标题”和“文本”两列。\n\n对于文本输入：\n- 每行应为一段用双引号括起的文本。\n\n输出的 CSV 将包含：\n- 标题：最终选定或生成的标题\n- Gen：布尔值，表示该标题是否为生成结果\n- 文本：原始输入文本\n- model_name：生成的输出\n- Time：处理时间，单位为秒\n- Len：输出长度\n```    \n\n如果您已设置好默认参数，则只需指定输入类型——手动的 `text` 还是自动化的 `csv` 即可。\n```\npython3 sum.py -c ebook-name_processed.csv\n```\n\n## 半手动与原型方法\n\n在本示例中，我使用了一个原型脚本 [split_pdf.py](tools-prototype\u002Fsplit_pdf.py) 来不仅按章节，还按小节拆分 PDF 文件（生成 `ebook-name_extracted.csv`），随后手动处理该输出（使用 [vscode](https:\u002F\u002Fcode.visualstudio.com\u002F)），将每个文本片段单独放在一行，并用双引号括起来（见 [notes\u002Fdepreciated\u002Fsummarize.txt](notes\u002Fdepreciated\u002Fsummarize.txt)）。\n\n尽管最终这一过程会实现自动化，但目前仍存在一些挑战，这些挑战也导致我尚未完成该工具的开发。\n\n**拆分**：\n```\ntools-prototype\u002Fsplit_pdf.py ebook-name.pdf # 生成 ebook-name_extracted.csv\n```\n\n**处理**：\n```\npython3 sum.py -t ebook-name_extracted.csv\n```\n\n**这一步会生成两个输出**：\n- `ebook-name_extracted_processed_sum.md`（渲染后的 Markdown 文件）\n- `ebook-name_extracted_processed_sum.csv`（包含：输入文本、展平后的 Markdown 输出、生成时间、输出长度的 CSV 文件）\n\n## 模型\n可从以下两个来源下载：\n\n### Ollama\n您可以直接从 Ollama 获取任意模型，模板齐全。\n例如：`ollama pull obook_summary:q5_k_m`\n\n- [obook_summary](https:\u002F\u002Follama.com\u002Fcognitivetech\u002Fobook_summary) - 在 Ollama.com 上\n  - `latest` • 7.7GB • Q_8\n  - `q3_k_m` • 3.5GB\n  - `q4_k_m` • 4.4GB\n  - `q5_k_m` • 
5.1GB\n  - `q6_k` • 5.9GB（推荐）\n- [obook_title](https:\u002F\u002Follama.com\u002Fcognitivetech\u002Fobook_title) - 在 Ollama.com 上\n  - `latest` • 7.7GB • Q_8\n  - `q3_k_m` • 3.5GB\n  - `q4_k_m` • 4.4GB\n  - `q5_k_m` • 5.1GB\n  - `q6_k`   • 5.9GB（推荐）\n\n### HuggingFace\nHuggingFace 上也有完整的权重、LoRA 和 GGGUF 模型。\n- [Mistral Instruct Bulleted Notes](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fcognitivetech\u002Fmistral-instruct-bulleted-notes-v02-66b6e2c16196e24d674b1940) - HuggingFace 上的集合\n  - [cognitivetech\u002FMistral-7B-Inst-0.2-Bulleted-Notes](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002FMistral-7B-Inst-0.2-Bulleted-Notes)\n  - [cognitivetech\u002FMistral-7b-Inst-0.2-Bulleted-Notes_GGUF](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002Fcognitivetech\u002FMistral-7b-Inst-0.2-Bulleted-Notes_GGUF)\n  - [cognitivetech\u002FMistral-7B-Inst-0.2_Bulleted-Notes_LoRA](https:\u002F\u002Fhuggingface.co\u002Fcognitivetech\u002Fcognitivetech\u002FMistral-7B-Inst-0.2_Bulleted-Notes_LoRA)\n\n## 检查您的电子书是否有文档大纲\n\n此处展示了如何检查您的电子书是否具有正确的格式化。**对于 ePub 格式，应能优雅地失败**。\n\n\\* 在极少数情况下，即使有可点击的目录，脚本也可能无法找到它。\n\n### Firefox\n![图片](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_34ec70c87242.png)\n\n### Brave \n![图片](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_readme_ba5472f63bef.png)\n\n## 免责声明\n\n您有责任确保摘要工具生成的摘要准确无误。多种因素都可能影响摘要的质量，若您不够留意，这些因素很可能被忽略。\n\n**1. 引用：**\n\n就我个人而言，未经人工核实，我不信任由微调模型生成的引用。也许在较新的模型中这一问题已得到解决，但在我的测试阶段，我发现所使用的 7B 模型有时会出现错误引用。我从未专门测试过该应用在引用方面的表现，因此个人建议在进行摘要之前先移除所有较长的引用部分，再单独处理这些内容。我认为这并非不可逾越的障碍，只是目前我尚未完全掌握或理解的领域而已。\n\n**2. 
其他：**\n\n还有几点需要注意。\n\n我之所以在 CSV 中保留输入与输出的长度信息，是因为这样可以方便地检查摘要是否比输入更长，而这种情况通常是一个警示信号。\n\n此外，如果摘要的结构与其他摘要差异过大，也可能表明摘要存在问题。这些问题可能与特殊字符有关，或者是因为输入内容过长，导致 AI 无法完全理解。\n\n## 灵感\n\n开发这款应用的灵感源于我打算手动总结十几本书的内容，以便将书中讨论的心理学理论与实践结合起来，并基于这些信息形成一个连贯的论点。\n\n我已经把这几本书读了好几遍，但现在需要更便捷地获取其中的信息，以便能够以一种连贯的方式与他人分享。\n\n起初，我曾尝试手动完成这个项目，持续了一周，却只完成了第一本书的几章内容，这才意识到这项工作耗时太长。\n\n随后的六个月里，我开始学习如何使用大语言模型，探索哪些模型最适合我的任务，并通过微调来确保生成结果具备生产级的一致性。\n\n如今借助这款工具，我可以更快地审阅更多材料。这是一款内容策展工具，不仅帮助我更好地学习知识，还能更轻松地分享这些知识，而无需花费大量时间去创作高质量的内容。\n\n此外，它还可以根据您提供的任何源材料创建自定义数据集。\n\n## 资源\n* [Summarizing Books](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fsummarizing-books) OpenAI\n\n### 排行榜\n* [Small LLM Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fw601sxs\u002FSLM-Leaderboard) HuggingFace\n* [HuggingFace - Open LLM Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FHuggingFaceH4\u002Fopen_llm_leaderboard)\n* [Chatbox Arena Leaderboard](https:\u002F\u002Fchat.lmsys.org\u002F)\n* [Hallucination leaderboard](https:\u002F\u002Fgithub.com\u002Fvectara\u002Fhallucination-leaderboard\u002F) Vectara","# ollama-ebook-summary 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Python 3.11.9（推荐使用 conda 或 pyenv 管理版本）\n\n### 前置依赖\n- 安装 Python 环境\n- 安装 Ollama：[Ollama 官网](https:\u002F\u002Fgithub.com\u002Follama\u002Follama?tab=readme-ov-file#ollama)\n\n---\n\n## 安装步骤\n\n### 1. 创建并激活 Python 环境（可选）\n**使用 conda:**\n```bash\nconda create -n book_summary python=3.11.9\nconda activate book_summary\n```\n\n**使用 pyenv:**\n```bash\npyenv install 3.11.9\npyenv local 3.11.9\n```\n\n### 2. 安装项目依赖\n```bash\npip install -r requirements.txt\n```\n\n### 3. 下载模型\n```bash\nollama pull cognitivetech\u002Fobook_summary:q6_k\nollama pull cognitivetech\u002Fobook_title:q4_k_m\nollama pull gemma2\n```\n\n### 4. 
配置文件 `_config.yaml`\n确保配置文件中 `summary`、`general` 和 `title` 字段指向你已下载的模型。\n\n示例配置片段：\n```yaml\ndefaults:\n  summary: cognitivetech\u002Fobook_summary:q6_k\n  general: gemma2\n  title: cognitivetech\u002Fobook_title:q4_k_m\n```\n\n---\n\n## 基本使用\n\n### 步骤一：将电子书转换为分块文本\n\n运行以下命令，将 `.epub` 或 `.pdf` 文件拆分为章节或段落，并生成 CSV 文件：\n\n```bash\npython3 book2text.py ebook-name.epub\n```\n\n此操作会生成两个输出文件：\n- `out\u002Febook-name.csv`（按章节或部分拆分）\n- `out\u002Febook-name_processed.csv`（按约 2000 token 分块）\n\n### 步骤二：生成摘要\n\n使用以下命令生成摘要：\n\n```bash\npython3 sum.py -c ebook-name_processed.csv\n```\n\n该命令将处理 `ebook-name_processed.csv` 文件，并生成包含标题、原始文本、生成内容、耗时和长度等信息的 CSV 输出文件。\n\n---\n\n> 📌 提示：如果你已经正确配置了默认模型，只需指定输入类型（CSV 或 TXT）即可快速生成摘要。","某高校图书馆管理员正在处理一批来自不同学科的电子书，需要为这些书籍生成简洁、结构化的摘要，以便学生和研究人员快速获取核心内容。由于书籍数量庞大且格式多样，手动整理效率低下。\n\n### 没有 ollama-ebook-summary 时\n\n- 需要手动逐章阅读并提炼内容，耗时费力，难以保证一致性  \n- 处理长文档时，模型输入长度受限，无法完整理解上下文，导致摘要不准确  \n- 缺乏自动化流程，每次生成摘要都需要重新配置模型和参数，操作复杂  \n- 不同书籍的章节划分不统一，需额外处理元数据，增加工作量  \n- 无法批量处理多个文件，效率极低  \n\n### 使用 ollama-ebook-summary 后\n\n- 自动识别书籍目录并按章节拆分文本，支持 epub 和 pdf 格式，无需人工干预  \n- 将文档分割为约 2000 token 的块，确保模型能充分理解上下文，提升摘要质量  \n- 提供预配置模型和清晰的配置文件，简化部署与使用流程  \n- 支持批量处理多本书籍，显著提高工作效率  \n- 可自定义摘要风格（如项目符号形式），满足不同用户需求  \n\nollama-ebook-summary 让大规模电子书摘要生成变得高效、精准且易于管理。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcognitivetech_ollama-ebook-summary_34ec70c8.png","cognitivetech","CognitiveTech","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcognitivetech_84ad7145.jpg","Curating top modalities for mental and emotional wellbeing, personal growth, and prosocial dynamics.\r\n\r\nWorking with Local LLM",null,"cognitivetechniq@gmail.com","cognitivetech_","https:\u002F\u002Fgithub.com\u002Fcognitivetech",[84,88,92,96],{"name":85,"color":86,"percentage":87},"Python","#3572A5",88.9,{"name":89,"color":90,"percentage":91},"Jupyter 
Notebook","#DA5B0B",6.9,{"name":93,"color":94,"percentage":95},"Shell","#89e051",4,{"name":97,"color":98,"percentage":99},"Awk","#c30e9b",0.2,618,50,"2026-04-05T12:04:09","Linux, macOS, Windows","未说明",{"notes":106,"python":107,"dependencies":108},"需要安装 Ollama 并下载特定模型（如 cognitivetech\u002Fobook_summary:q6_k、cognitivetech\u002Fobook_title:q4_k_m 和 gemma2）。建议使用 conda 或 pyenv 管理 Python 环境。","3.11.9",[109,110,111],"pip","ollama","requirements.txt 中的依赖项",[13,26,15],[114,115,116,117,118,119,120,121,110,122],"privategpt","gpt","llm","summarization","privategpt4linux","localai","localgpt","generative-ai","ollama-app","2026-03-27T02:49:30.150509","2026-04-06T08:52:36.803378",[126,131,135,140,145,150],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},5634,"如何解决 API 请求错误：404 Not Found？","请检查 Ollama 服务是否正在运行，并确保本地地址和端口（如 http:\u002F\u002Flocalhost:11434）正确。此外，确认你使用的模型名称与 Ollama 中实际安装的模型名称一致。如果模型名包含斜杠（\u002F），可能会导致路径问题，可以尝试修改配置文件中的模型名称以避免此问题。","https:\u002F\u002Fgithub.com\u002Fcognitivetech\u002Follama-ebook-summary\u002Fissues\u002F10",{"id":132,"question_zh":133,"answer_zh":134,"source_url":130},5635,"如何解决 'model not found' 错误？","请确认你使用的模型名称是否与 Ollama 中实际安装的模型名称完全匹配。例如，如果教程中使用的是 'obook_title:q3_k_m'，但你尝试加载的是 'obook_title:q4_k_m'，则会报错。建议检查 Ollama 的模型列表，并在配置文件中使用正确的模型名称。",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},5636,"如何处理 'OverflowError: Python int too large to convert to C long' 错误？","该错误通常发生在设置 CSV 字段大小限制时。可以通过修改代码逻辑来修复，具体方法是将原来的代码替换为以下内容：\n\n```python\ntry:\n    max_int = sys.maxsize\n    while True:\n        try:\n            csv.field_size_limit(max_int)\n            break\n        except OverflowError:\n            max_int = int(max_int \u002F 10)\nexcept Exception as e:\n    print(f\"Unexpected error while setting field size limit: {e}\")\n```\n\n这样可以逐步降低 `max_int` 
值，直到成功设置字段大小限制。","https:\u002F\u002Fgithub.com\u002Fcognitivetech\u002Follama-ebook-summary\u002Fissues\u002F12",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},5637,"为什么生成的输出文件是空的？","这可能是由于输入的 EPUB 文件格式不兼容或未正确解析导致的。请确保你的 EPUB 文件包含有效的目录结构，并且已通过 Calibre 或其他工具验证其内容。另外，请检查程序的日志输出，确认是否有任何错误信息被忽略。如果问题仍然存在，可以尝试使用其他 EPUB 转换工具生成 CSV 文件后再进行处理。","https:\u002F\u002Fgithub.com\u002Fcognitivetech\u002Follama-ebook-summary\u002Fissues\u002F3",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},5638,"如何提高书籍摘要的质量？","为了提高摘要质量，建议选择支持复杂指令的模型，如 Mistral、Llama 等，并根据需要对模型进行微调。同时，可以调整分块策略，按章节或主题划分文本，而不是简单地按固定 token 数量分割。此外，优化提示词（prompt engineering）也是提升摘要准确性和连贯性的有效方式。","https:\u002F\u002Fgithub.com\u002Fcognitivetech\u002Follama-ebook-summary\u002Fissues\u002F5",{"id":151,"question_zh":152,"answer_zh":153,"source_url":139},5639,"如何处理大文件时出现的 CSV 字段大小限制问题？","CSV 模块默认的字段大小限制可能不足以处理非常大的文本数据。你可以通过以下代码动态增加字段大小限制：\n\n```python\nimport sys\nimport csv\n\ntry:\n    max_int = sys.maxsize\n    while True:\n        try:\n            csv.field_size_limit(max_int)\n            break\n        except OverflowError:\n            max_int = int(max_int \u002F 10)\nexcept Exception as e:\n    print(f\"Unexpected error while setting field size limit: {e}\")\n```\n\n这段代码会不断尝试减少 `max_int` 值，直到成功设置字段大小限制。",[]]