[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hyperfield--ai-file-sorter":3,"tool-hyperfield--ai-file-sorter":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":76,"owner_company":78,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":79,"owner_url":80,"languages":81,"stars":105,"forks":106,"last_commit_at":107,"license":108,"difficulty_score":10,"env_os":109,"env_gpu":110,"env_ram":110,"env_deps":111,"category_tags":114,"github_topics":115,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":121,"updated_at":122,"faqs":123,"releases":152},1195,"hyperfield\u002Fai-file-sorter","ai-file-sorter","Cross-platform desktop application for content-aware file organization and renaming. 
Supports local and remote LLMs, preview-based workflows, and fully user-controlled changes.","AI File Sorter 是一款跨平台的桌面应用，利用人工智能帮助用户整理文件并生成更清晰、一致的文件名。它能分析图片、文档和音视频文件，自动分类并建议命名，让文件更易于查找和管理。对于下载文件夹、外接硬盘或网络存储中的杂乱文件，它能根据名称、扩展名和用户习惯进行智能归类。\n\n这款工具解决了文件命名混乱、分类不规范的问题，尤其适合需要处理大量文件的用户。无论是开发者、设计师还是普通用户，都能通过它提升工作效率。AI File Sorter 支持本地和远程大模型，注重隐私保护，所有操作都在本地完成，无需上传数据。其基于内容的智能分析和可自定义的命名规则是主要技术亮点。","\u003C!-- markdownlint-disable MD046 -->\n# AI File Sorter\n\n[![Code Version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-1.7.3-blue)](#)\n[![Release Version](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fhyperfield\u002Fai-file-sorter?label=Release)](#)\n![filesorter.app Downloads](https:\u002F\u002Ffilesorter.app\u002Fdownload-stats\u002Fbadge.svg)\n[![SourceForge Downloads](https:\u002F\u002Fimg.shields.io\u002Fsourceforge\u002Fdt\u002Fai-file-sorter.svg?label=SourceForge%20downloads)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\n[![Codacy Badge](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_adaaa9495e23.png)](https:\u002F\u002Fapp.codacy.com\u002Fgh\u002Fhyperfield\u002Fai-file-sorter\u002Fdashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)\n[![Donate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSupport%20AI%20File%20Sorter-orange)](https:\u002F\u002Ffilesorter.app\u002Fdonate\u002F)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_3818b3e8766f.png\" alt=\"AI File Sorter logo\" width=\"128\" height=\"128\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_424e9b9d25ef.png\" alt=\"Vulkan\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_929247be44e7.png\" 
alt=\"CUDA\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_12624773c3da.png\" alt=\"Apple Metal\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_dbab6464bfa9.png\" alt=\"Windows\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_6930cae846cb.png\" alt=\"macOS\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_4691abb422a8.png\" alt=\"Linux\" width=\"160\">\n\u003C\u002Fp>\n\nAI File Sorter is a cross-platform desktop application that uses AI to organize files and suggest cleaner, more consistent names for images, documents, and supported audio\u002Fvideo files. It is designed to reduce clutter, improve consistency, and make files easier to find later, whether for review, archiving, or long-term storage.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_805049b9c55a.png\" alt=\"AI File Sorter before and after organization example\" width=\"600\">\n\u003C\u002Fp>\n\nThe app can analyze picture files locally and suggest meaningful, human-readable names. For example, a generic file like IMG_2048.jpg can be renamed to something descriptive such as clouds_over_lake.jpg. It can also analyze supported document files and propose clearer names based on their text content. AI File Sorter can also clean up messy audio and video filenames by using the metadata already stored inside supported media files. 
If tags such as year, artist, album, or title are available, the app can turn them into a clear suggestion like `2024_artist_album_title.mp3`, which you can review, edit, or ignore before any change is applied.\n\nAI File Sorter helps tidy up cluttered folders such as Downloads, external drives, or NAS storage by automatically grouping files based on their names, extensions, folder context, and learned organization patterns.\n\nInstead of relying on fixed rules, the app gradually builds an internal understanding of how your files are typically organized and named. This allows it to make more consistent categorization and naming suggestions over time, while still letting you review and adjust everything before anything is applied.\n\nCategories (and optional subcategories) are suggested for each file, and for supported file types, rename suggestions are provided as well. Once you confirm, the required folders are created automatically and files are sorted accordingly.\n\nPrivacy-first by design:\nAI File Sorter can run entirely on your device, using local AI models such as Llama 3B (Q4) and Mistral 7B. No files, filenames, images, or metadata are uploaded anywhere, and no telemetry is sent. An internet connection is only needed if you explicitly choose to enable a remote model.\n\n---\n\n#### How It Works\n\n1. Point the app at a folder or drive  \n2. Files (and image content, when applicable) are analyzed using the selected local or remote model  \n3. Category and rename suggestions are generated  \n4. 
You review and adjust if needed - done  \n\n---\n\n[![Download ai-file-sorter](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_0442d87feb83.png)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\n\n[![Get it from Microsoft](https:\u002F\u002Fget.microsoft.com\u002Fimages\u002Fen-us%20dark.svg)](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s)\n\n![AI File Sorter Screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_547f5867237a.gif) ![AI File Sorter Screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_371779ac09bd.png) ![AI File Sorter Screenshot](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_bebde171bff2.png)\n\n---\n\n- [AI File Sorter](#ai-file-sorter)\n  - [Changelog](#changelog)\n  - [Features](#features)\n  - [Categorization](#categorization)\n    - [Categorization modes](#categorization-modes)\n    - [Category whitelists](#category-whitelists)\n  - [Image analysis (Visual LLM)](#image-analysis-visual-llm)\n    - [Required visual LLM files](#required-visual-llm-files)\n    - [Main window options](#main-window-options)\n  - [Document analysis (Text LLM)](#document-analysis-text-llm)\n    - [Supported document formats](#supported-document-formats)\n    - [Main window options (documents)](#main-window-options-documents)\n  - [Audio\u002Fvideo metadata filename suggestions](#audiovideo-metadata-filename-suggestions)\n    - [Supported audio\u002Fvideo formats](#supported-audiovideo-formats)\n  - [System compatibility check](#system-compatibility-check)\n  - [Requirements](#requirements)\n  - [Installation](#installation)\n    - [Linux](#linux)\n    - [macOS](#macos)\n    - [Windows](#windows)\n  - [Categorization cache database](#categorization-cache-database)\n  - [Uninstallation](#uninstallation)\n  - [Using your OpenAI 
API key](#using-your-openai-api-key)\n  - [Using your Gemini API key](#using-your-gemini-api-key)\n  - [Testing](#testing)\n  - [Diagnostics](#diagnostics)\n  - [How to Use](#how-to-use)\n  - [Sorting a Remote Directory (e.g., NAS)](#sorting-a-remote-directory-eg-nas)\n  - [Contributing](#contributing)\n  - [Credits](#credits)\n  - [License](#license)\n  - [Donation](#donation)\n\n---\n\n## Changelog\n\n## [1.7.3] - 2026-03-22\n\n- Non-English categorization is now more reliable: files are categorized canonically in English first, then translated into the selected category language. This change is due to LLM language limitations.\n- App updates now support separate update streams for Windows, macOS, and Linux, while still accepting the legacy single-stream manifest format for newer clients.\n- Windows feeds can now provide a direct installer URL plus SHA-256 checksum so the app can download the installer, show download progress, verify its integrity, and launch it after confirmation.\n- The UI translation system was migrated fully to Qt `.ts` \u002F `.qm` catalogs.\n- Local categorization with local LLMs is now more robust.\n- Cached category labels are sanitized more aggressively to avoid malformed UTF-8 data breaking later categorization or display.\n- Misc improvements.\n- Misc bug fixes.\n\nSee [CHANGELOG.md](CHANGELOG.md) for the full history.\n\n---\n\n## Features\n\n- **AI-Powered Categorization**: Classify files intelligently using either a **local LLM** (Llama, Mistral) or a remote model (ChatGPT with your own OpenAI API key, or Gemini with your own Gemini API key).\n- **Offline-Friendly**: Use a local LLM to categorize files entirely - no internet or API key required.\n- **Robust categorization**: Taxonomy and heuristics help keep labels more consistent across runs.\n- **Customizable sorting rules**: Automatically assign categories and subcategories for granular organization.\n- **Two categorization modes**: Pick **More Refined** for detailed labels or 
**More Consistent** to bias toward uniform categories within a folder.\n- **Category whitelists**: Define named whitelists of allowed categories\u002Fsubcategories, manage them under **Settings → Manage category whitelists…**, and toggle\u002Fselect them in the main window when you want to constrain model output for a session.\n- **Multilingual categorization**: Have the LLM assign categories in Dutch, French, German, Italian, Polish, Portuguese, Spanish, or Turkish (model dependent).\n- **Custom local LLMs**: Register your own local GGUF models directly from the **Select LLM** dialog.\n- **Image content analysis (Visual LLM)**: Analyze supported picture files with LLaVA to produce descriptions and optional filename suggestions (rename-only mode supported).\n- **Image date-to-category suffix (optional)**: Append image creation date metadata to image category names when available.\n- **Document content analysis (Text LLM)**: Analyze supported document files to summarize content and suggest filenames; uses the same selected LLM (local or remote).\n- **Audio\u002Fvideo metadata filename suggestions**: Turn embedded media tags into clean, library-style filenames for supported audio and video files, with full review before anything is renamed.\n- **Sortable review**: Sort the Categorization Review table by file name, category, or subcategory to triage faster.\n- **Qt6 Interface**: Lightweight and responsive UI with refreshed menus and icons.\n- **Interface languages**: English, Dutch, French, German, Italian, Korean, Spanish, and Turkish.\n- **Cross-Platform Compatibility**: Works on Windows, macOS, and Linux.\n- **Local Database Caching**: Speeds up repeated categorization and minimizes remote LLM usage costs.\n- **Sorting Preview**: See how files will be organized before confirming changes.\n- **Dry run** \u002F preview-only mode to inspect planned moves without touching files.\n- **Persistent Undo** (\"Undo last run\") even after closing the sort dialog.\n- **Bring 
your own key**: Paste your OpenAI or Gemini API key once; it's stored locally and reused for remote runs.\n- **Update Notifications**: Get notified about updates - with optional or required update flows.\n\n---\n\n## Categorization\n\n### Categorization modes\n\n- **More refined**: The flexible, detail-oriented mode. Consistency hints are disabled so the model can pick the most specific category\u002Fsubcategory it deems appropriate, which is useful for long-tail or mixed folders.\n- **More consistent**: The uniform mode. The model receives consistency hints from prior assignments in the current session so files with similar names\u002Fextensions trend toward the same categories. This is helpful when you want strict uniformity across a batch.\n- Switch between the two via the **Categorization type** radio buttons on the main window; your choice is saved for the next run.\n\n### Category whitelists\n\n- Enable **Use a whitelist** to inject the selected whitelist into the LLM prompt; disable it to let the model choose freely.\n- Manage lists (add, edit, remove) under **Settings → Manage category whitelists…**. A default list is auto-created only when no lists exist, and multiple named lists can be kept for different projects.\n- Keep each whitelist to roughly **15–20 categories\u002Fsubcategories** to avoid overlong prompts on smaller local models. Use several narrower lists instead of a single very long one.\n- Whitelists apply in either categorization mode; pair them with **More consistent** when you want the strongest adherence to a constrained vocabulary.\n\n---\n\n## Image analysis (Visual LLM)\n\nImage analysis uses a local LLaVA-based visual LLM to describe image contents and (optionally) suggest a better filename. 
This runs locally and does not require an API key.\n\n### Required visual LLM files\n\nThe **Select LLM** dialog now includes an \"Image analysis models (LLaVA)\" section with two downloads:\n\n- **LLaVA text model (GGUF)**: The main language model that produces the description and the filename suggestion.\n- **LLaVA mmproj (vision encoder projection, GGUF)**: The adapter that maps vision embeddings into the LLM token space so the model can accept images.\n\nBoth files are required. If either one is missing, image analysis is disabled and the app will prompt to open the **Select LLM** dialog to download them. The download URLs can be overridden with `LLAVA_MODEL_URL` and `LLAVA_MMPROJ_URL` (see [Environment variables](#environment-variables)).\n\n### Main window options\n\nImage analysis adds six related checkboxes to the main window:\n\n- **Analyze picture files by content (can be slow)**: Runs the visual LLM on supported picture files and reports progress in the analysis dialog.\n- **Process picture files only (ignore any other files)**: Restricts the run to supported picture files and disables the categorization controls while active.\n- **Add image creation date (if available) to category name**: Appends `YYYY-MM-DD` from image metadata to the category label when available. Disabled when rename-only is enabled.\n- **Add photo date and place to filename (if available)**: Adds metadata-based date\u002Fplace prefixes to suggested image filenames when available.\n- **Offer to rename picture files**: Shows a **Suggested filename** column in the Review dialog with the visual LLM proposal. You can edit it before confirming.\n- **Do not categorize picture files (only rename)**: Skips text categorization for images and keeps them in place while applying (optional) renames.\n\nThe separate top-level checkbox **Add audio\u002Fvideo metadata to file name (if available)** controls metadata-based rename suggestions for supported audio\u002Fvideo files. 
See [Audio\u002Fvideo metadata filename suggestions](#audiovideo-metadata-filename-suggestions).\n\n---\n\n## Document analysis (Text LLM)\n\nDocument analysis uses the same selected LLM (local or remote) to extract text from supported document files, summarize content, and optionally suggest a better filename. No extra model downloads are required.\n\n### Supported document formats\n\n- Plain text: `.txt`, `.md`, `.rtf`, `.csv`, `.tsv`, `.json`, `.xml`, `.yml`\u002F`.yaml`, `.ini`\u002F`.cfg`\u002F`.conf`, `.log`, `.html`\u002F`.htm`, `.tex`, `.rst`\n- PDF: `.pdf` (embedded PDFium by default; CLI fallback via `pdftotext` is available only if you explicitly configure `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF`)\n- Office\u002FOpenOffice: `.docx`, `.xlsx`, `.pptx`, `.odt`, `.ods`, `.odp` (embedded libzip+pugixml in bundled builds; CLI fallback uses `unzip` if you build without vendored libs)\n- Legacy binary formats like `.doc`, `.xls`, `.ppt` are not currently supported.\n\nSource builds: embedded extractors are used by default. If the vendored PDFium artifacts are missing for your target platform, CMake now fails loudly instead of silently disabling PDF content extraction. You can opt back into the old CLI fallback with `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF`.\n\n### Main window options (documents)\n\n- **Analyze document files by content**: Extracts document text and feeds it into the LLM for summary + rename suggestion.\n- **Process document files only (ignore any other files)**: Restricts the run to supported document files and disables the categorization controls while active.\n- **Offer to rename document files**: Shows a **Suggested filename** column in the Review dialog with the LLM proposal. 
You can edit it before confirming.\n- **Do not categorize document files (only rename)**: Skips text categorization for documents and keeps them in place while applying (optional) renames.\n- **Add document creation date (if available) to category name**: Appends `YYYY-MM` from metadata when available. Disabled when rename-only is enabled.\n\n---\n\n## Audio\u002Fvideo metadata filename suggestions\n\nLet AI File Sorter turn embedded media tags into clean, consistent filenames for your music and video library. When enabled, the app reads supported metadata fields and builds a polished suggested name in the format `year_artist_album_title.ext`. As with all rename suggestions, nothing is changed until you review and confirm it.\n\n### Supported audio\u002Fvideo formats\n\n- Audio extensions: `.aac`, `.aif`, `.aiff`, `.alac`, `.ape`, `.flac`, `.m4a`, `.mp3`, `.ogg`, `.oga`, `.opus`, `.wav`, `.wma`\n- Video extensions: `.3gp`, `.avi`, `.flv`, `.m4v`, `.mkv`, `.mov`, `.mp4`, `.mpeg`, `.mpg`, `.mts`, `.m2ts`, `.ts`, `.webm`, `.wmv`\n- Built-in tag readers currently cover MP3 (`ID3v1`\u002F`ID3v2`), FLAC (Vorbis comments), OGG\u002FOGA\u002FOpus (Vorbis comments), and MP4-family containers such as `.m4a`, `.mp4`, `.m4v`, `.mov`, and `.3gp` (MP4\u002FMOV metadata atoms).\n- When compiled with package-managed `MediaInfoLib`, the same rename flow can also use metadata exposed by MediaInfo for additional supported containers when available.\n\n---\n\n## System compatibility check\n\nThe **System compatibility check** runs a quick benchmark that estimates how well your system can handle:\n\n- **Categorization** with the selected local LLMs\n- **Document analysis** by content\n- **Image analysis** (visual LLM)\n\nYou can launch it from the menu (**File → System compatibility check…**). 
It only runs if at least one local or visual LLM is downloaded, and it won’t auto-rerun if it's already been run.\n\nWhat it does:\n\n- Detects available CPU threads and GPU backends (e.g., Vulkan\u002FCUDA)\n- Times a small categorization and document-analysis workload per default model\n- Times a single image-analysis pass if visual LLM files are present\n- Reports speed tiers (optimal \u002F acceptable \u002F a bit long) and suggests a recommended local LLM\n\nTip: quit CPU\u002FGPU‑intensive apps before running the check for more accurate results.\n\n---\n\n## Requirements\n\n- **Operating System**: Linux, macOS, or Windows. Linux\u002FmacOS source builds use the Makefile flow below; Windows source builds use the native Qt\u002FMSVC + CMake flow in the Windows section.\n- **Compiler**: A C++20-capable compiler (`g++` or `clang++` on Linux\u002FmacOS, MSVC 2022 on Windows).\n- **Qt 6**: Core, Gui, Widgets modules and the Qt resource compiler (`qt6-base-dev` \u002F `qt6-tools` on Linux, `brew install qt` on macOS, or a Qt 6 MSVC kit \u002F `qtbase` via vcpkg on Windows).\n- **Libraries**: `curl`, `sqlite3`, `fmt`, `spdlog`, `libmediainfo` (required for full source builds), and the prebuilt `llama` libraries shipped under `app\u002Flib\u002Fprecompiled` on Linux\u002FWindows or `app\u002Flib\u002Fprecompiled-*` for macOS variant builds. On Windows, these non-Qt libraries are supplied through the `app\u002Fvcpkg.json` manifest.\n- **MediaInfo policy**: MediaInfo must be installed through a package manager (`apt`\u002F`dnf`\u002F`pacman`\u002F`brew`\u002F`vcpkg`). The build rejects vendored MediaInfo submodules and checked-in binaries.\n- **Document analysis libraries** (vendored): PDFium, libzip, and pugixml. 
PDFium is required by default so packaged\u002Fsource builds keep PDF extraction embedded on Windows, macOS, and Linux; set `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF` only if you intentionally want the `pdftotext` fallback.\n- **Optional GPU backends**: A Vulkan 1.2+ runtime (preferred) or CUDA 12.x for NVIDIA cards. `StartAiFileSorter.exe`\u002F`run_aifilesorter.sh` auto-detect the best available backend and fall back to CPU\u002FOpenBLAS automatically, so CUDA is never required to run the app.\n- **Git** (optional): For cloning this repository. Archives can also be downloaded.\n- **OpenAI or Gemini API key** (optional): Required only when using the remote ChatGPT or Gemini workflow.\n\n---\n\n## Installation\n\nFile categorization with local LLMs is completely free of charge. If you prefer to use a remote workflow (ChatGPT or Gemini) you will need your own API key with a small balance or within the free tier (see [Using your OpenAI API key](#using-your-openai-api-key) or [Using your Gemini API key](#using-your-gemini-api-key)).\n\n### Linux\n\n#### Prebuilt Debian\u002FUbuntu package\n\n1. 
**Install runtime prerequisites** (Qt6, networking, database, math libraries):\n   - Ubuntu 24.04 \u002F Debian 12:\n     ```bash\n     sudo apt update && sudo apt install -y \\\n       libqt6widgets6 libcurl4 libjsoncpp25 libfmt9 libopenblas0-pthread \\\n       libvulkan1 mesa-vulkan-drivers patchelf\n     ```\n   - Debian 13 (trixie):\n     ```bash\n     sudo apt update && sudo apt install -y \\\n       libqt6widgets6 libcurl4t64 libjsoncpp26 libfmt10 libopenblas0-pthread \\\n       libvulkan1 mesa-vulkan-drivers patchelf\n     ```\n   If you build the Vulkan backend from source, install `glslc` (Debian\u002FUbuntu package: `glslc`; on some distros: `shaderc` or `shaderc-tools`).\n   On Debian 13, use `libjsoncpp26`, `libfmt10`, and `libcurl4t64` (APT may auto-select `libcurl4t64` if `libcurl4` is not available).\n   Ensure that the Qt platform plugins are installed (on Ubuntu 22.04 this is provided by `qt6-wayland`).\n   GPU acceleration additionally requires either a working Vulkan 1.2+ stack (Mesa, AMD\u002FIntel\u002FNVIDIA drivers) or, for NVIDIA users, the matching CUDA runtime (`nvidia-cuda-toolkit` or vendor packages). The launcher automatically prefers Vulkan when both are present and falls back to CPU if neither is available.\n2. **Install the package**\n   ```bash\n   sudo apt install .\u002Faifilesorter_*.deb\n   ```\n   Using `apt install` (rather than `dpkg -i`) ensures any missing dependencies listed above are installed automatically.\n\n#### Build from source\n\n1. 
**Install dependencies**\n   - Debian \u002F Ubuntu:\n    ```bash\n    sudo apt update && sudo apt install -y \\\n      build-essential cmake git qt6-base-dev qt6-base-dev-tools qt6-l10n-tools qt6-tools-dev-tools \\\n      libcurl4-openssl-dev libjsoncpp-dev libsqlite3-dev libssl-dev libfmt-dev libspdlog-dev libmediainfo-dev \\\n      zlib1g-dev\n    ```\n   - Fedora \u002F RHEL:\n\n    ```bash\n    export PATH=\"\u002Fusr\u002Flib64\u002Fqt6\u002Flibexec:$PATH\"\n    sudo dnf install -y gcc-c++ cmake git qt6-qtbase-devel qt6-qttools-devel \\\n      libcurl-devel jsoncpp-devel sqlite-devel openssl-devel fmt-devel spdlog-devel mediainfo-devel\n    ```\n\n   - Arch \u002F Manjaro:\n\n    ```bash\n     sudo pacman -S --needed base-devel git cmake qt6-base qt6-tools curl jsoncpp sqlite openssl fmt spdlog mediainfo\n    ```\n\n     Optional GPU acceleration also requires either the distro Vulkan 1.2+ driver\u002Fruntime (Mesa, AMD, Intel, NVIDIA) or CUDA packages for NVIDIA cards. Install whichever stack you plan to use; the app will fall back to CPU automatically if none are detected.\n     MediaInfo is enforced as package-managed only; vendored `MediaInfoLib` folders or repo-local binaries are rejected by the build.\n\n2. **Clone the repository**\n\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter.git\n   cd ai-file-sorter\n   git submodule update --init --recursive\n   ```\n\n   > **Submodule tip:** If you previously downloaded `llama.cpp` or Catch2 manually, remove or rename `app\u002Finclude\u002Fexternal\u002Fllama.cpp` and `external\u002FCatch2` before running the `git submodule` command. Git needs those directories to be empty so it can populate them with the tracked submodules.\n\n3. 
**Build vendored libzip** (generates `zipconf.h` and `libzip.a`)\n\n   ```bash\n   cmake -S external\u002Flibzip -B external\u002Flibzip\u002Fbuild \\\n    -DBUILD_SHARED_LIBS=OFF \\\n    -DBUILD_DOC=OFF \\\n    -DENABLE_BZIP2=OFF \\\n    -DENABLE_LZMA=OFF \\\n    -DENABLE_ZSTD=OFF \\\n    -DENABLE_OPENSSL=OFF \\\n    -DENABLE_GNUTLS=OFF \\\n    -DENABLE_MBEDTLS=OFF \\\n    -DENABLE_COMMONCRYPTO=OFF \\\n    -DENABLE_WINDOWS_CRYPTO=OFF\n\n   cmake --build external\u002Flibzip\u002Fbuild\n   ```\n\n   On Ubuntu\u002FDebian you will also need the Zlib development headers (`zlib1g-dev`) or\n   the libzip configure step will fail.\n\n   If you prefer system headers instead, install `libzip-dev` and ensure `zipconf.h` is on your include path.\n\n4. **Build the llama runtime variants** (run once per backend you plan to ship\u002Ftest)\n\n   ```bash\n   # CPU \u002F OpenBLAS\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=off\n   # CUDA (optional; requires NVIDIA driver + CUDA toolkit)\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=on vulkan=off\n   # Vulkan (optional; requires a working Vulkan 1.2+ stack and glslc, e.g. mesa-vulkan-drivers + vulkan-tools + glslc)\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=on\n   ```\n\n   Each invocation stages the corresponding `llama`\u002F`ggml` libraries under `app\u002Flib\u002Fprecompiled\u002F\u003Cvariant>` and the runtime DLL\u002FSO copies under `app\u002Flib\u002Fggml\u002Fw\u003Cvariant>`. The script refuses to enable CUDA and Vulkan simultaneously, so run it separately for each backend. Shipping both directories lets the launcher pick Vulkan when available, then CUDA, and otherwise stay on CPU—no CUDA-only dependency remains.\n\n5. 
**Compile the application**\n\n   ```bash\n   cd app\n   make -j4\n   ```\n\n   The binary is produced at `app\u002Fbin\u002Faifilesorter`.\n   The Makefile requires `pkg-config` + package-managed `libmediainfo`; it intentionally rejects vendored MediaInfo copies.\n\n6. **Install system-wide (optional)**\n\n   ```bash\n   sudo make install\n   ```\n\n7. **Build a Debian package (optional)**\n\n   ```bash\n   .\u002Fapp\u002Fscripts\u002Fpackage_deb.sh\n   ```\n\n   The packaging script always bundles the CPU runtime and auto-includes any staged GPU\n   variants already present under `app\u002Flib\u002Fprecompiled` (for example `vulkan` after\n   `.\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=on`). Use\n   `.\u002Fapp\u002Fscripts\u002Fpackage_deb.sh --cpu-only` for a smaller CPU-only package, or\n   `--include-vulkan` \u002F `--include-cuda` if you want the script to fail when a specific\n   staged variant is missing.\n\n### macOS\n\n1. **Install Xcode command-line tools** (`xcode-select --install`).\n2. **Install Homebrew** (if required).\n3. **Install dependencies**\n\n   ```bash\n   brew install qt curl jsoncpp sqlite openssl fmt spdlog mediainfo cmake git pkgconfig libffi\n   ```\n\n   Add Qt to your environment if it is not already present:\n\n   ```bash\n   export PATH=\"$(brew --prefix)\u002Fopt\u002Fqt\u002Fbin:$PATH\"\n   export PKG_CONFIG_PATH=\"$(brew --prefix)\u002Flib\u002Fpkgconfig:$(brew --prefix)\u002Fshare\u002Fpkgconfig:$PKG_CONFIG_PATH\"\n   ```\n\n4. **Clone the repository and submodules** (same commands as Linux).\n   > The macOS build pins `MACOSX_DEPLOYMENT_TARGET=11.0` so the Mach-O `LC_BUILD_VERSION` covers Apple Silicon and newer releases (including Sequoia). Raise or lower it (e.g., `export MACOSX_DEPLOYMENT_TARGET=15.0`) if you need a different floor.\n\n5. 
**Build vendored libzip** (generates `zipconf.h` and `libzip.a`)\n\n   ```bash\n   cmake -S external\u002Flibzip -B external\u002Flibzip\u002Fbuild \\\n     -DBUILD_SHARED_LIBS=OFF \\\n     -DBUILD_DOC=OFF \\\n     -DENABLE_BZIP2=OFF \\\n     -DENABLE_LZMA=OFF \\\n     -DENABLE_ZSTD=OFF \\\n     -DENABLE_OPENSSL=OFF \\\n     -DENABLE_GNUTLS=OFF \\\n     -DENABLE_MBEDTLS=OFF \\\n     -DENABLE_COMMONCRYPTO=OFF \\\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\u002Flibzip\u002Fbuild\n   ```\n\n6. **Build the llama runtime (Metal-enabled on Apple Silicon)**\n\n   ```bash\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_macos.sh\n   ```\n   The macOS app and `.app` bundles use the runtime staged under `app\u002Flib\u002Fprecompiled*`; they do not need Homebrew `ggml` or `llama.cpp` libraries.\n   If you have older `ggml` \u002F `llama.cpp` copies installed in generic library locations, prefer unlinking or removing them instead of relying on them implicitly.\n7. **Compile the application**\n\n   ```bash\n   cd app\n   make -j8                 # use -jN to control parallelism\n   sudo make install   # optional\n   ```\n\n   The default build places the binary at `app\u002Fbin\u002Faifilesorter`.\n\n   **Variant targets:**\n\n   ```bash\n   make -j8 MACOS_LLAMA_M1    # outputs app\u002Fbin\u002Fm1\u002Faifilesorter\n   make -j8 MACOS_LLAMA_M2    # outputs app\u002Fbin\u002Fm2\u002Faifilesorter\n   make -j8 MACOS_LLAMA_INTEL # outputs app\u002Fbin\u002Fintel\u002Faifilesorter\n   ```\n\n   These targets rebuild the llama.cpp runtime before compiling the app.\n   When cross-compiling Intel on Apple Silicon, use x86_64 Homebrew (under `\u002Fusr\u002Flocal`) or set `BREW_PREFIX=\u002Fusr\u002Flocal` so Qt\u002Fpkg-config resolve correctly.\n   `sudo make install` places the macOS runtime libraries under `\u002Fusr\u002Flocal\u002Flib\u002Faifilesorter` to avoid collisions with unrelated system or Homebrew ggml libraries.\n   Each variant uses distinct build 
directories to avoid cross-arch collisions:\n   - llama.cpp libs: `app\u002Flib\u002Fprecompiled-m1`, `app\u002Flib\u002Fprecompiled-m2`, `app\u002Flib\u002Fprecompiled-intel`\n   - object files: `app\u002Fobj\u002Farm64` or `app\u002Fobj\u002Fx86_64`\n\n### Windows\n\nThe application itself now builds with native MSVC + Qt6 and does not require MSYS2 (MSYS2 is only used below to provide OpenBLAS for the CPU llama.cpp runtime). Two options are supported; the vcpkg route is the simplest.\n\nOption A - CMake + vcpkg (recommended)\n\n1. Install prerequisites:\n   - Visual Studio 2022 with Desktop C++ workload\n   - CMake 3.21+ (Visual Studio ships a recent version)\n   - vcpkg: \u003Chttps:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fvcpkg> (clone and bootstrap)\n   - package-managed `libmediainfo` via vcpkg manifest (no vendored MediaInfo submodule\u002Fbinaries)\n   - **MSYS2 MinGW64 + OpenBLAS**: install MSYS2 from \u003Chttps:\u002F\u002Fwww.msys2.org>, open an *MSYS2 MINGW64* shell, and run `pacman -S --needed mingw-w64-x86_64-openblas`. The `build_llama_windows.ps1` script uses this OpenBLAS copy for CPU-only builds (the vcpkg variant is not suitable), defaulting to `C:\\msys64\\mingw64` unless you pass `openblasroot=\u003Cpath>` or set `OPENBLAS_ROOT`.\n2. Clone repo and submodules:\n\n   ```powershell\n   git clone https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter.git\n   cd ai-file-sorter\n   git submodule update --init --recursive\n   ```\n\n3. **Build vendored libzip** (generates `zipconf.h` and `libzip.lib`)\n\n   Run from the same x64 Native Tools \u002F VS Developer PowerShell you will use to build the app:\n\n   ```powershell\n   cmake -S external\\libzip -B external\\libzip\\build -A x64 `\n     -DBUILD_SHARED_LIBS=OFF `\n     -DBUILD_DOC=OFF `\n     -DENABLE_BZIP2=OFF `\n     -DENABLE_LZMA=OFF `\n     -DENABLE_ZSTD=OFF `\n     -DENABLE_OPENSSL=OFF `\n     -DENABLE_GNUTLS=OFF `\n     -DENABLE_MBEDTLS=OFF `\n     -DENABLE_COMMONCRYPTO=OFF `\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\\libzip\\build --config Release\n   ```\n\n4. 
Determine your vcpkg root. It is the folder that contains `vcpkg.exe` (for example `C:\\dev\\vcpkg`).\n    - If `vcpkg` is on your `PATH`, run this command to print the location:\n\n      ```powershell\n      Split-Path -Parent (Get-Command vcpkg).Source\n      ```\n\n    - Otherwise use the directory where you cloned vcpkg.\n\n   MediaInfo note: you do **not** manually add `MediaInfoLib` include\u002Flib paths on Windows. The project already declares `libmediainfo` in `app\u002Fvcpkg.json`, and `app\\build_windows.ps1` configures CMake with the vcpkg toolchain + manifest so `find_package(MediaInfoLib ...)` resolves it automatically. If you want to preinstall or verify it explicitly, run `vcpkg install libmediainfo:x64-windows`.\n5. Build the bundled `llama.cpp` runtime variants (run from the same **x64 Native Tools** \u002F **VS 2022 Developer PowerShell** shell). Invoke the script once per backend you need. Make sure the MSYS2 OpenBLAS install from step 1 is present before running the CPU-only variant (or pass `openblasroot=\u003Cpath>` explicitly):\n\n   ```powershell\n   # CPU \u002F OpenBLAS only\n   app\\scripts\\build_llama_windows.ps1 cuda=off vulkan=off vcpkgroot=C:\\dev\\vcpkg\n   # CUDA (requires matching NVIDIA toolkit\u002Fdriver)\n   app\\scripts\\build_llama_windows.ps1 cuda=on vulkan=off vcpkgroot=C:\\dev\\vcpkg\n   # Vulkan (requires LunarG Vulkan SDK or vendor Vulkan 1.2+ runtime)\n   app\\scripts\\build_llama_windows.ps1 cuda=off vulkan=on vcpkgroot=C:\\dev\\vcpkg\n   ```\n\n   Each run emits the appropriate `llama.dll` \u002F `ggml*.dll` pair under `app\\lib\\precompiled\\\u003Ccpu|cuda|vulkan>` and copies the runtime DLLs into `app\\lib\\ggml\\w\u003Cvariant>`. For Vulkan builds, install the latest LunarG Vulkan SDK (or the vendor's runtime), ensure `vulkaninfo` succeeds in the same shell, and then run the script. 
Supplying both Vulkan and (optionally) CUDA artifacts lets `StartAiFileSorter.exe` detect the best backend at launch—Vulkan is preferred, CUDA is used when Vulkan is missing, and CPU remains the fallback, so CUDA is not required.\n\n6. Build the Qt6 application using the helper script (still in the VS shell). The helper stages runtime DLLs via `windeployqt`, shares one dependency install tree across variants, and by default produces three Windows builds in one run:\n\n   ```powershell\n   # One-time per shell if script execution is blocked:\n   Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass\n\n   app\\build_windows.ps1 -Configuration Release -VcpkgRoot C:\\dev\\vcpkg\n   ```\n\n   - Replace `C:\\dev\\vcpkg` with the path where you cloned vcpkg; it must contain `scripts\\buildsystems\\vcpkg.cmake`.\n   - The helper produces these output directories by default:\n     - Standard installer build with Windows auto-update enabled: `app\\build-windows\\Release`\n     - Microsoft Store build with update checks disabled: `app\\build-windows-store\\Release`\n     - Standalone Windows build with notification-only\u002Fmanual updates: `app\\build-windows-standalone\\Release`\n   - Use `-Variants Standard`, `-Variants MsStore`, or `-Variants Standalone` to build only a subset.\n   - `aifilesorter.exe` is the primary Windows GUI entry point. `StartAiFileSorter.exe` is still built beside it as the legacy bootstrapper and carries the same updater mode.\n   - `-VcpkgRoot` is optional if `VCPKG_ROOT`\u002F`VPKG_ROOT` is set or `vcpkg`\u002F`vpkg` is on `PATH`.\n   - Each variant directory receives its own executable and staged Qt\u002Fthird-party DLLs. Pass `-SkipDeploy` if you only want the binaries without bundling runtime DLLs.\n   - Pass `-Parallel \u003CN>` to override the default “all cores” parallel build behaviour (for example, `-Parallel 8`). By default the script invokes `cmake --build ... 
--parallel \u003Ccore-count>` and `ctest -j \u003Ccore-count>` to keep both MSBuild and Ninja fully utilized.\n\nOption B - CMake + Qt online installer\n\n1. Install prerequisites:\n   - Visual Studio 2022 with Desktop C++ workload\n   - Qt 6.x MSVC kit via Qt Online Installer (e.g., Qt 6.6+ with MSVC 2019\u002F2022)\n   - CMake 3.21+\n   - vcpkg (for non-Qt libs): curl, jsoncpp, sqlite3, openssl, fmt, spdlog, gettext, libmediainfo\n2. **Build vendored libzip** (generates `zipconf.h` and `libzip.lib`)\n\n   Run from the same x64 Native Tools \u002F VS Developer PowerShell you will use to build the app:\n\n   ```powershell\n   cmake -S external\\libzip -B external\\libzip\\build -A x64 `\n     -DBUILD_SHARED_LIBS=OFF `\n     -DBUILD_DOC=OFF `\n     -DENABLE_BZIP2=OFF `\n     -DENABLE_LZMA=OFF `\n     -DENABLE_ZSTD=OFF `\n     -DENABLE_OPENSSL=OFF `\n     -DENABLE_GNUTLS=OFF `\n     -DENABLE_MBEDTLS=OFF `\n     -DENABLE_COMMONCRYPTO=OFF `\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\\libzip\\build --config Release\n   ```\n\n3. Build the bundled `llama.cpp` runtime (same VS shell). Any missing OpenBLAS\u002FcURL packages are installed automatically via vcpkg:\n\n   ```powershell\n   pwsh .\\app\\scripts\\build_llama_windows.ps1 [cuda=on|off] [vulkan=on|off] [vcpkgroot=C:\\dev\\vcpkg]\n   ```\n\n   This is required before configuring the GUI because the build links against the produced `llama` static libraries\u002FDLLs.\n4. Configure CMake from the repo root so CMake sees both the Qt install and the app's vcpkg manifest (adapt `CMAKE_PREFIX_PATH` to your Qt install):\n\n    ```powershell\n    $env:VCPKG_ROOT = \"C:\\path\\to\\vcpkg\"  # e.g. 
C:\\dev\\vcpkg\n    $qt = \"C:\\Qt\\6.6.3\\msvc2019_64\"  # example\n    cmake -S app -B build -G \"Ninja\" `\n      -DCMAKE_PREFIX_PATH=$qt `\n      -DCMAKE_TOOLCHAIN_FILE=$env:VCPKG_ROOT\\scripts\\buildsystems\\vcpkg.cmake `\n      -DVCPKG_MANIFEST_DIR=app `\n      -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON `\n      -DVCPKG_TARGET_TRIPLET=x64-windows\n    cmake --build build --config Release\n    ```\n\n   This configure step enables vcpkg manifest mode, so `libmediainfo` is installed\u002Fresolved from `app\\vcpkg.json` automatically. No manual linker or include-path edits are needed for MediaInfo on Windows.\n\nNotes\n\n- To rebuild from scratch, run `.\\app\\build_windows.ps1 -Clean`. The script removes the selected variant build directories and the shared `app\\build-windows-vcpkg_installed` dependency tree before configuring.\n- Runtime DLLs are copied automatically via `windeployqt` after each successful build; skip this step with `-SkipDeploy` if you manage deployment yourself.\n- If Visual Studio sets `VCPKG_ROOT` to its bundled copy under `Program Files`, clone vcpkg to a writable directory (for example `C:\\dev\\vcpkg`) and pass `vcpkgroot=\u003Cpath>` when running `build_llama_windows.ps1`.\n- If you plan to ship CUDA or Vulkan acceleration, run the `build_llama_*` helper for each backend you intend to include before configuring CMake so the libraries exist. The runtime can carry both and auto-select at launch, so CUDA remains optional.\n- `-BuildTests` and `-RunTests` currently build and execute tests only in the `Standard` variant, which is the primary Windows development\u002FCI configuration.\n\n### Running tests\n\nCatch2-based unit tests are optional. 
Enable them via CMake:\n\n```bash\ncmake -S app -B build-tests -DAI_FILE_SORTER_BUILD_TESTS=ON -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON\ncmake --build build-tests --target ai_file_sorter_tests --parallel $(nproc)\nctest --test-dir build-tests --output-on-failure -j $(nproc)\n```\n\nOn macOS, replace `$(nproc)` with `$(sysctl -n hw.ncpu)`.\n\nOn Windows (PowerShell), use:\n\n```powershell\ncmake -S app -B build-tests -DAI_FILE_SORTER_BUILD_TESTS=ON -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON\ncmake --build build-tests --target ai_file_sorter_tests --parallel $env:NUMBER_OF_PROCESSORS\nctest --test-dir build-tests --output-on-failure -j $env:NUMBER_OF_PROCESSORS\n```\n\nNotes\n\n- List individual Catch2 cases: `.\u002Fbuild-tests\u002Fai_file_sorter_tests --list-tests`\n- Print each case name (including successes): `.\u002Fbuild-tests\u002Fai_file_sorter_tests --verbosity high --success`\n\nOn Windows you can pass `-BuildTests` (and `-RunTests` to execute `ctest`) to `app\\build_windows.ps1`:\n\n```powershell\napp\\build_windows.ps1 -Configuration Release -Variants Standard -BuildTests -RunTests\n```\n\nThe current suite (under `tests\u002Funit`) focuses on core utilities; expand it as new functionality gains coverage.\n\n### Selecting a backend at runtime\n\nBoth the Linux launcher (`app\u002Fbin\u002Frun_aifilesorter.sh` \u002F `aifilesorter-bin`) and the Windows starter accept the following optional flags:\n\n- `--cuda={on|off}` – force-enable or disable the CUDA backend.\n- `--vulkan={on|off}` – force-enable or disable the Vulkan backend.\n\nWhen no flags are provided the app auto-detects available runtimes in priority order (Vulkan → CUDA → CPU). Use the flags to skip a backend (`--cuda=off` forces Vulkan\u002FCPU even if CUDA is installed, `--vulkan=off` tests CUDA explicitly) or to validate a newly installed stack (`--vulkan=on`). 
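The priority order above can be sketched as a small shell function (illustrative only; the arguments stand for detected runtime availability, and the real launcher's implementation is more involved):

```bash
#!/usr/bin/env bash
# Sketch of the backend priority: Vulkan -> CUDA -> CPU.
# Arguments: "on"/"off" for whether the Vulkan / CUDA runtimes were
# detected (or left enabled by the --vulkan= / --cuda= flags).
pick_backend() {
  local vulkan="$1" cuda="$2"
  if [ "$vulkan" = on ]; then
    echo vulkan          # Vulkan wins whenever it is available
  elif [ "$cuda" = on ]; then
    echo cuda            # CUDA is the second choice
  else
    echo cpu             # no GPU backend detected: stay on CPU
  fi
}

pick_backend off on      # prints "cuda", as with --vulkan=off on a CUDA machine
```

The real launcher also honors `AI_FILE_SORTER_GPU_BACKEND` (see the Environment variables section), which this sketch ignores.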
Passing `on` to both flags is rejected, and if neither GPU backend is detected the app automatically stays on CPU.\n\n#### Vulkan and VRAM notes\n\n- Vulkan is preferred when available; CUDA is used only if Vulkan is missing or explicitly requested.\n- The app auto-estimates `n_gpu_layers` based on available VRAM. Integrated GPUs are capped to 4 GiB for safety, which can limit offloading.\n- If VRAM is tight, the app may fall back to CPU or reduce offload. As a rule of thumb, 8 GB+ VRAM provides a smoother experience for Vulkan offload and image analysis; 4 GB often results in partial offload or CPU fallback.\n- Override auto-estimation with `AI_FILE_SORTER_N_GPU_LAYERS` (`-1` auto, `0` force CPU) or `AI_FILE_SORTER_GPU_BACKEND=cpu`.\n- For image analysis, `AI_FILE_SORTER_VISUAL_USE_GPU=0` forces the visual encoder to run on CPU to avoid VRAM allocation errors.\n\n### Environment variables\n\nRuntime and GPU:\n\n- `AI_FILE_SORTER_GPU_BACKEND` - select GPU backend: `auto` (default), `vulkan`, `cuda`, or `cpu`.\n- `AI_FILE_SORTER_N_GPU_LAYERS` - override `n_gpu_layers` for llama.cpp; `-1` = auto, `0` = force CPU.\n- `AI_FILE_SORTER_CTX_TOKENS` - override local LLM context length (default 2048; clamped 512-8192).\n- `AI_FILE_SORTER_GGML_DIR` - directory to load ggml backend shared libraries from. On macOS this is only auto-discovered from bundled or sibling app runtime directories; use this variable explicitly if you want a custom ggml runtime.\n\nVisual LLM:\n\n- `LLAVA_MODEL_URL` - download URL for the visual LLM GGUF model (required to enable image analysis).\n- `LLAVA_MMPROJ_URL` - download URL for the visual LLM mmproj GGUF file (required to enable image analysis).\n- `AI_FILE_SORTER_VISUAL_USE_GPU` - force visual encoder GPU usage (`1`) or CPU (`0`). 
Defaults to auto; Vulkan may fall back to CPU if VRAM is low.\n\nTimeouts and logging:\n\n- `AI_FILE_SORTER_LOCAL_LLM_TIMEOUT` - seconds to wait for local LLM responses (default 60).\n- `AI_FILE_SORTER_REMOTE_LLM_TIMEOUT` - seconds to wait for OpenAI\u002FGemini responses (default 10).\n- `AI_FILE_SORTER_CUSTOM_LLM_TIMEOUT` - seconds to wait for custom OpenAI-compatible API responses (default 60).\n- `AI_FILE_SORTER_LLAMA_LOGS` - enable verbose llama.cpp logs (`1`\u002F`true`); also honors `LLAMA_CPP_DEBUG_LOGS`.\n\nStorage and updates:\n\n- `AI_FILE_SORTER_CONFIG_DIR` - override the base config directory (where `config.ini` lives).\n- `CATEGORIZATION_CACHE_FILE` - override the SQLite cache filename inside the config dir.\n- `UPDATE_SPEC_FILE_URL` - override the update feed spec URL (dev\u002Ftesting). The updater now reads per-platform streams from `update.windows`, `update.macos`, and `update.linux`, with legacy single-stream feeds still accepted.\n- `AI_FILE_SORTER_UPDATER_TEST_MODE` - enable Windows updater live-test mode (`1`\u002F`true`). When enabled, the app skips the update feed fetch and synthesizes a newer version from the values below.\n- `AI_FILE_SORTER_UPDATER_TEST_URL` - direct URL for the Windows updater live-test package. This can point to an `.exe`, `.msi`, or a `.zip` containing exactly one `.exe` or `.msi`.\n- `AI_FILE_SORTER_UPDATER_TEST_SHA256` - SHA-256 checksum for the downloaded live-test package. If the URL points to a ZIP, this checksum must be for the ZIP archive itself.\n- `AI_FILE_SORTER_UPDATER_TEST_VERSION` - optional synthetic version shown by live-test mode. Defaults to the current app version with an extra trailing segment, for example `1.7.2.1`.\n- `AI_FILE_SORTER_UPDATER_TEST_MIN_VERSION` - optional synthetic minimum version for live-test mode. 
Defaults to `0.0.0` so the test behaves like an optional update.\n\nExample update feed:\n\n```json\n{\n  \"update\": {\n    \"current_version\": \"1.7.1\",\n    \"min_version\": \"1.6.0\",\n    \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\",\n    \"windows\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\",\n      \"installer_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownloads\u002FAIFileSorterSetup-1.7.1.exe\",\n      \"installer_sha256\": \"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef\"\n    },\n    \"macos\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\"\n    },\n    \"linux\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\"\n    }\n  }\n}\n```\n\nCompatibility note:\n\n- Older app versions only read the flat top-level fields under `update`, so keep `current_version`, `min_version`, and `download_url` there as a legacy compatibility stream if you still need to support them.\n- Newer app versions prefer the platform-specific streams and will use `update.windows`, `update.macos`, or `update.linux` when present.\n- The legacy compatibility stream can only represent one generic stream, not separate per-platform versions or installers.\n\nWindows-only direct installer updates:\n\n- `installer_url` - direct URL to the Windows installer package.\n- `installer_sha256` - SHA-256 checksum used to verify the downloaded installer before launch.\n- `installer_url` can now also point to a ZIP archive, as long as the archive contains exactly one installer payload (`.exe` or `.msi`).\n- When both fields are present on Windows, the app can download the installer, verify it, and then prompt: `Quit the app and launch the 
installer to update`.\n\nWindows updater live-test mode:\n\n- `aifilesorter.exe` accepts the following flags directly on Windows:\n  - `--updater-live-test`\n  - `--updater-live-test-url=\u003Chttps:\u002F\u002F...\u002FAIFileSorterSetup.zip>`\n  - `--updater-live-test-sha256=\u003Csha256-of-the-downloaded-package>`\n  - `--updater-live-test-version=\u003Coptional-version>`\n  - `--updater-live-test-min-version=\u003Coptional-min-version>`\n- `StartAiFileSorter.exe` accepts and forwards the same flag family if you still use the bootstrapper path.\n- Live-test mode is Windows-only and intentionally bypasses the normal update JSON feed.\n- If the ZIP contains more than one `.exe` or `.msi`, the updater stops instead of guessing which installer to launch.\n- If `--updater-live-test` is present and the URL \u002F SHA flags are omitted, `aifilesorter.exe` also looks for a `live-test.ini` file next to the executable and fills in the missing values from there.\n- Command-line flags still win over `live-test.ini`, so you can keep a default file and override just one field when needed.\n\nExample `live-test.ini`:\n\n```ini\n[LiveTest]\ndownload_url = https:\u002F\u002Ffiles.example.com\u002FAIFileSorterSetup-1.7.3.zip\nsha256 = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef\ncurrent_version = 1.7.3\nmin_version = 0.0.0\n```\n\nExample PowerShell launch:\n\n```powershell\n.\\aifilesorter.exe `\n  --development `\n  --updater-live-test\n```\n\n---\n\n## Categorization cache database\n\nAI File Sorter stores categorization results in a local SQLite database next to `config.ini` (the base directory can be overridden via `AI_FILE_SORTER_CONFIG_DIR`). 
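For experiments that should leave your real cache and `config.ini` untouched, the override variables documented under Environment variables can point the app at a scratch directory first (a sketch; the launch command is commented out and assumes a Linux source install):

```bash
#!/usr/bin/env bash
# Point the app at a throwaway config dir so the usual config.ini and
# categorization cache stay untouched (variable names from this README).
scratch="$(mktemp -d)"
export AI_FILE_SORTER_CONFIG_DIR="$scratch"
export CATEGORIZATION_CACHE_FILE="scratch-cache.db"   # optional separate cache filename
echo "Scratch config dir: $AI_FILE_SORTER_CONFIG_DIR"
# ./app/bin/run_aifilesorter.sh   # launch as usual; rm -rf "$scratch" when done
```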
This cache allows the app to skip already-processed files and preserve rename suggestions between runs.\n\nWhat is stored:\n\n- Directory path, file name, and file type (used as a unique key).\n- Category\u002Fsubcategory, taxonomy id, categorization style, and timestamp.\n- Suggested filename (for picture and document rename suggestions).\n- Rename-only flag (used when picture\u002Fdocument rename-only modes are enabled).\n- Rename-applied flag (marks when a rename was executed so it is not offered again).\n\nIf you rename or move a file from the Review dialog, the cache entry is updated to the new name. Already-renamed picture files are skipped for visual analysis and rename suggestions on later runs. In the Review dialog, those already-renamed rows are hidden when rename-only is enabled, but they stay visible when categorization is enabled so you can still move them into category folders. To reset a folder's cache, accept the recategorization prompt or delete the cache file (or point `CATEGORIZATION_CACHE_FILE` to a new filename).\n\n---\n\n## Uninstallation\n\n- **Debian\u002FUbuntu package installs**: `sudo apt remove aifilesorter`\n- **Linux source installs**: `cd app && sudo make uninstall`\n- **macOS source installs**: `cd app && sudo make uninstall`\n\nFor source installs, `make uninstall` removes the executable and the staged precompiled libraries. You can also delete cached local LLM models in `~\u002F.local\u002Fshare\u002Faifilesorter\u002Fllms` (Linux) or `~\u002FLibrary\u002FApplication Support\u002Faifilesorter\u002Fllms` (macOS) if you no longer need them.\n\n---\n\n## Using your OpenAI API key\n\nWant to use ChatGPT instead of the bundled local models? Bring your own OpenAI API key:\n\n1. Open **Settings -> Select LLM** in the app.\n2. Choose **ChatGPT (OpenAI API key)**, paste your key, and enter the ChatGPT model you want to use (for example `gpt-4o-mini`, `gpt-4.1`, or `o3-mini`).\n3. Click **OK**. 
The key is stored locally in your AI File Sorter config (`config.ini` in the app data folder) and reused for future runs. Clear the field to remove it.\n4. An internet connection is only required while this option is selected.\n\n> The app no longer embeds a bundled key; you always provide your own OpenAI key.\n\n---\n\n## Using your Gemini API key\n\nPrefer Google's models? Use your own Gemini API key:\n\n1. Visit **https:\u002F\u002Faistudio.google.com** and sign in with your Google account.\n2. In the left navigation, open **API keys** (or **Get API key**) and click **Create API key**. Choose *Create API key in new project* (or select an existing project) and copy the generated key.\n3. In the app, open **Settings -> Select LLM**, choose **Gemini (Google AI Studio API key)**, paste your key, and enter the Gemini model you want (for example `gemini-2.5-flash-lite`, `gemini-2.5-flash`, or `gemini-2.5-pro`).\n4. Click **OK**. The key is stored locally in your AI File Sorter config and reused for future runs. Clear the field to remove it.\n\n> AI Studio keys can be used on the free tier until you hit Google’s limits; higher quotas or enterprise use require billing via Google Cloud.\n> The app calls the Gemini `v1` `generateContent` endpoint; use model IDs from `https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1\u002Fmodels?key=YOUR_KEY`. 
You can enter them with or without the leading `models\u002F` prefix.\n\n---\n\n## Testing\n\n- From the repo root, clean any old cache and run the CTest wrapper:\n\n  ```bash\n  cd app\n  rm -rf ..\u002Fbuild-tests      # clear a stale cache from another checkout\n  .\u002Fscripts\u002Frebuild_and_test.sh\n  ```\n\n- The script configures to `..\u002Fbuild-tests`, builds, then runs `ctest`.\n- If you have multiple copies of the repo (e.g., `ai-file-sorter` and `ai-file-sorter-mac-dist`), each needs its own `build-tests` folder; reusing one from a different path will make CMake complain about mismatched source\u002Fbuild directories.\n\n---\n\n## Diagnostics\n\nIf you need to report a bug or collect troubleshooting data, use the bundled diagnostics scripts:\n\n- **macOS:** `.\u002Fapp\u002Fscripts\u002Fcollect_macos_diagnostics.sh`\n- **Linux:** `.\u002Fapp\u002Fscripts\u002Fcollect_linux_diagnostics.sh`\n- **Windows (PowerShell):** `.\\app\\scripts\\collect_windows_diagnostics.ps1`\n\nEach script collects relevant logs, redacts common sensitive paths, and packages the result into a zip archive for sharing. See [app\u002Fscripts\u002FREADME.md](app\u002Fscripts\u002FREADME.md) for options such as time filtering and opening the output folder automatically.\n\n---\n\n## How to Use\n\n1. Launch the application (see the final step in [Installation](#installation) for your OS).\n2. Select a directory to analyze.\n\n### Using dry run and undo\n\n- In the results dialog, you can enable **\"Dry run (preview only, do not move files)\"** to preview planned moves. A preview dialog shows From\u002FTo without moving any files.\n- After a real sort, the app saves a persistent undo plan. You can revert later via **Edit → \"Undo last run\"** (best-effort; skips conflicts\u002Fchanges).\n\n3. Tick off the checkboxes on the main window according to your preferences.\n4. Click the **\"Analyze\"** button. 
The app will scan each file and\u002For directory based on your selected options.\n5. A review dialog will appear. Verify the assigned categories (and subcategories, if enabled in step 3).\n6. Click **\"Confirm & Sort!\"** to move the files, or **\"Continue Later\"** to postpone. You can always resume where you left off since categorization results are saved.\n\n---\n\n## Sorting a Remote Directory (e.g., NAS)\n\nFollow the steps in [How to Use](#how-to-use), but modify **step 2** as follows:  \n\n- **Windows:** Assign a drive letter (e.g., `Z:` or `X:`) to your network share ([instructions here](https:\u002F\u002Fsupport.microsoft.com\u002Fen-us\u002Fwindows\u002Fmap-a-network-drive-in-windows-29ce55d1-34e3-a7e2-4801-131475f9557d)).  \n- **Linux & macOS:** Mount the network share to a local folder using a command like:  \n\n  ```sh\n  sudo mount -t cifs \u002F\u002F192.168.1.100\u002Fshared_folder \u002Fmnt\u002Fnas -o username=myuser,password=mypass,uid=$(id -u),gid=$(id -g)\n  ```\n\n(Replace 192.168.1.100\u002Fshared_folder with your actual network location path and adjust options as needed.)\n\n---\n\n## Contributing\n\n- Fork the repository and submit pull requests.\n- Report issues or suggest features on the GitHub issue tracker.\n- Follow the existing code style and documentation format.\n\n---\n\n## Credits\n\n- Curl: \u003Chttps:\u002F\u002Fgithub.com\u002Fcurl\u002Fcurl>\n- Dotenv: \u003Chttps:\u002F\u002Fgithub.com\u002Fmotdotla\u002Fdotenv>\n- git-scm: \u003Chttps:\u002F\u002Fgit-scm.com>\n- Hugging Face: \u003Chttps:\u002F\u002Fhuggingface.co>\n- JSONCPP: \u003Chttps:\u002F\u002Fgithub.com\u002Fopen-source-parsers\u002Fjsoncpp>\n- Llama: \u003Chttps:\u002F\u002Fwww.llama.com>\n- libzip: \u003Chttps:\u002F\u002Flibzip.org>\n- Local File Organizer \u003Chttps:\u002F\u002Fgithub.com\u002FQiuYannnn\u002FLocal-File-Organizer>\n- llama.cpp \u003Chttps:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp>\n- MediaInfoLib: 
\u003Chttps:\u002F\u002Fmediaarea.net\u002Fen\u002FMediaInfo>\n- Mistral AI: \u003Chttps:\u002F\u002Fmistral.ai>\n- OpenAI: \u003Chttps:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Foverview>\n- OpenSSL: \u003Chttps:\u002F\u002Fgithub.com\u002Fopenssl\u002Fopenssl>\n- PDFium: \u003Chttps:\u002F\u002Fpdfium.googlesource.com\u002Fpdfium\u002F>\n- Poppler (pdftotext): \u003Chttps:\u002F\u002Fpoppler.freedesktop.org\u002F>\n- pugixml: \u003Chttps:\u002F\u002Fpugixml.org>\n- Qt: \u003Chttps:\u002F\u002Fwww.qt.io\u002F>\n- spdlog: \u003Chttps:\u002F\u002Fgithub.com\u002Fgabime\u002Fspdlog>\n- unzip (Info-ZIP): \u003Chttps:\u002F\u002Finfozip.sourceforge.net\u002F>\n\n## License\n\nThis project is licensed under the GNU AFFERO GENERAL PUBLIC LICENSE (GNU AGPL). See the [LICENSE](LICENSE) file for details, or https:\u002F\u002Fwww.gnu.org\u002Flicenses\u002Fagpl-3.0.html.\n\n---\n\n## Donation\n\nSupport the development of **AI File Sorter** and its future features. Every contribution counts!\n\n- **[Donate](https:\u002F\u002Ffilesorter.app\u002Fdonate\u002F)**\n\n---\n","\u003C!-- markdownlint-disable MD046 -->\n# AI 文件整理器\n\n[![代码版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode-1.7.3-blue)](#)\n[![发布版本](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fhyperfield\u002Fai-file-sorter?label=Release)](#)\n![filesorter.app 下载量](https:\u002F\u002Ffilesorter.app\u002Fdownload-stats\u002Fbadge.svg)\n[![SourceForge 下载量](https:\u002F\u002Fimg.shields.io\u002Fsourceforge\u002Fdt\u002Fai-file-sorter.svg?label=SourceForge%20downloads)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\n[![Codacy 
评分](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_adaaa9495e23.png)](https:\u002F\u002Fapp.codacy.com\u002Fgh\u002Fhyperfield\u002Fai-file-sorter\u002Fdashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)\n[![捐赠](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSupport%20AI%20File%20Sorter-orange)](https:\u002F\u002Ffilesorter.app\u002Fdonate\u002F)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_3818b3e8766f.png\" alt=\"AI 文件整理器 logo\" width=\"128\" height=\"128\">\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_424e9b9d25ef.png\" alt=\"Vulkan\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_929247be44e7.png\" alt=\"CUDA\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_12624773c3da.png\" alt=\"Apple Metal\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_dbab6464bfa9.png\" alt=\"Windows\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_6930cae846cb.png\" alt=\"macOS\" width=\"160\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_4691abb422a8.png\" alt=\"Linux\" width=\"160\">\n\u003C\u002Fp>\n\nAI 文件整理器是一款跨平台桌面应用，利用人工智能对文件进行整理，并为图片、文档以及支持的音视频文件建议更简洁、更一致的命名。它旨在减少杂乱、提升一致性，使文件日后更容易查找，无论是用于审阅、归档还是长期存储。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_805049b9c55a.png\" alt=\"AI 文件整理器整理前后示例\" width=\"600\">\n\u003C\u002Fp>\n\n该应用可以在本地分析图片文件，并建议有意义、易于理解的名称。例如，一个通用文件如 IMG_2048.jpg 
可以被重命名为更具描述性的 clouds_over_lake.jpg。它还可以分析支持的文档文件，并根据其文本内容提出更清晰的名称建议。AI 文件整理器还能通过使用已存储在支持媒体文件中的元数据来清理混乱的音频和视频文件名。如果存在年份、艺术家、专辑或标题等标签，该应用可以将其转化为清晰的建议，如 `2024_artist_album_title.mp3`，您可以在应用任何更改之前对其进行审查、编辑或忽略。\n\nAI 文件整理器通过自动根据文件名、扩展名、所属文件夹上下文以及学习到的整理模式对文件进行分组，帮助整理下载文件夹、外部硬盘或 NAS 存储等杂乱的文件夹。\n\n与依赖固定规则不同，该应用会逐步建立对您文件通常组织方式和命名习惯的内部理解。这使其能够随着时间推移提出更加一致的分类和命名建议，同时仍允许您在应用任何更改之前进行审查和调整。\n\n系统会为每个文件建议类别（以及可选的子类别），并对支持的文件类型提供重命名建议。一旦您确认，所需的文件夹将自动创建，文件也将相应地被分类整理。\n\n隐私优先的设计：\nAI 文件整理器完全可以在您的设备上运行，使用本地人工智能模型，如 Llama 3B (Q4) 和 Mistral 7B。不会上传任何文件、文件名、图像或元数据，也不会发送任何遥测信息。只有在您明确选择启用远程模型时才需要互联网连接。\n\n---\n\n#### 工作原理\n\n1. 将应用指向一个文件夹或驱动器  \n2. 使用选定的本地或远程模型分析文件（以及图像内容，如适用）  \n3. 生成分类和重命名建议  \n4. 您进行审查并根据需要调整 - 完成  \n\n---\n\n[![下载 ai-file-sorter](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_0442d87feb83.png)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\n\n[![从 Microsoft 获取](https:\u002F\u002Fget.microsoft.com\u002Fimages\u002Fen-us%20dark.svg)](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s)\n\n![AI 文件整理器截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_547f5867237a.gif) ![AI 文件整理器截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_371779ac09bd.png) ![AI 文件整理器截图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_readme_bebde171bff2.png)\n\n---\n\n- [AI 文件整理器](#ai-file-sorter)\n  - [更新日志](#changelog)\n  - [功能](#features)\n  - [分类](#categorization)\n    - [分类模式](#categorization-modes)\n    - [类别白名单](#category-whitelists)\n  - [图像分析（视觉 LLM）](#image-analysis-visual-llm)\n    - [所需视觉 LLM 文件](#required-visual-llm-files)\n    - [主窗口选项](#main-window-options)\n  - [文档分析（文本 LLM）](#document-analysis-text-llm)\n    - [支持的文档格式](#supported-document-formats)\n    - [主窗口选项（文档）](#main-window-options-documents)\n  - 
[音频\u002F视频元数据文件名建议](#audiovideo-metadata-filename-suggestions)\n    - [支持的音频\u002F视频格式](#supported-audiovideo-formats)\n  - [系统兼容性检查](#system-compatibility-check)\n  - [要求](#requirements)\n  - [安装](#installation)\n    - [Linux](#linux)\n    - [macOS](#macos)\n    - [Windows](#windows)\n  - [分类缓存数据库](#categorization-cache-database)\n  - [卸载](#uninstallation)\n  - [使用您的 OpenAI API 密钥](#using-your-openai-api-key)\n  - [使用您的 Gemini API 密钥](#using-your-gemini-api-key)\n  - [测试](#testing)\n  - [诊断](#diagnostics)\n  - [使用方法](#how-to-use)\n  - [整理远程目录（如 NAS）](#sorting-a-remote-directory-eg-nas)\n  - [贡献](#contributing)\n  - [致谢](#credits)\n  - [许可证](#license)\n  - [捐赠](#donation)\n\n---\n\n## 更新日志\n\n## [1.7.3] - 2026-03-22\n\n- 非英语分类现在更加可靠：文件首先以英语进行规范分类，然后再翻译成所选的类别语言。这一变化是由于 LLM 的语言限制所致。\n- 应用更新现在支持 Windows、macOS 和 Linux 的独立更新流，同时仍然接受较新客户端使用的旧版单流清单格式。\n- Windows 更新源现在可以提供直接的安装程序 URL 以及 SHA-256 校验和，以便应用可以下载安装程序、显示下载进度、验证其完整性，并在确认后启动安装。\n- UI 翻译系统已完全迁移到 Qt `.ts` \u002F `.qm` 目录。\n- 使用本地 LLM 进行本地分类现在更加稳健。\n- 缓存的类别标签经过更严格的清理，以避免因 UTF-8 数据格式错误而导致后续分类或显示出现问题。\n- 其他改进。\n- 其他错误修复。\n\n完整历史请参阅 [CHANGELOG.md](CHANGELOG.md)。\n\n---\n\n## 功能\n\n- **AI 驱动的分类**：使用本地 LLM（Llama、Mistral）或远程模型（使用您自己的 OpenAI API 密钥的 ChatGPT，或使用 Gemini API 密钥的 Gemini）智能地对文件进行分类。\n- **离线友好**：使用本地 LLM 完全在本地对文件进行分类——无需互联网或 API 密钥。\n- **强大的分类能力**：通过分类体系和启发式方法，确保每次运行时标签更加一致。\n- **可自定义的排序规则**：自动分配类别和子类别，实现精细化组织。\n- **两种分类模式**：选择“更精细”以获得详细标签，或选择“更一致”以偏向于文件夹内统一的类别。\n- **类别白名单**：定义允许的类别\u002F子类别的命名白名单，在“设置 → 管理类别白名单…”中进行管理，并在主窗口中根据需要启用或选择，以限制会话期间模型的输出。\n- **多语言分类**：让 LLM 以荷兰语、法语、德语、意大利语、波兰语、葡萄牙语、西班牙语或土耳其语为文件分配类别（取决于所使用的模型）。\n- **自定义本地 LLM**：直接从“选择 LLM”对话框注册您自己的本地 GGUF 模型。\n- **图像内容分析（视觉 LLM）**：使用 LLaVA 分析支持的图片文件，生成描述并提供可选的文件名建议（支持仅重命名模式）。\n- **图像日期到类别后缀（可选）**：如果可用，将图像创建日期元数据附加到图像类别名称上。\n- **文档内容分析（文本 LLM）**：分析支持的文档文件以总结内容并建议文件名；使用相同的选定 LLM（本地或远程）。\n- **音视频元数据文件名建议**：将嵌入式媒体标签转换为干净的、图书馆风格的文件名，适用于支持的音频和视频文件，并在任何文件被重命名之前进行全面审核。\n- **可排序的审查**：按文件名、类别或子类别对分类审查表进行排序，以便更快地进行分类整理。\n- **Qt6 界面**：轻量且响应迅速的用户界面，配有更新的菜单和图标。\n- 
**界面语言**：英语、荷兰语、法语、德语、意大利语、韩语、西班牙语和土耳其语。\n- **跨平台兼容性**：可在 Windows、macOS 和 Linux 上运行。\n- **本地数据库缓存**：加快重复分类速度，并最大限度减少远程 LLM 的使用成本。\n- **排序预览**：在确认更改之前，查看文件将如何被组织。\n- **试运行 \u002F 仅预览模式**：用于检查计划中的移动而不会实际更改文件。\n- **持久撤销功能**（“撤销上次运行”），即使关闭排序对话框后仍可使用。\n- **自带密钥**：只需粘贴一次您的 OpenAI 或 Gemini API 密钥；该密钥将本地存储并在后续远程运行中重复使用。\n- **更新通知**：接收有关更新的通知——可以选择是否强制更新。\n\n---\n\n## 分类\n\n### 分类模式\n\n- **更精细**：灵活、注重细节的模式。禁用一致性提示，使模型能够选择其认为最具体的类别或子类别，这对于长尾或混合文件夹非常有用。\n- **更一致**：统一模式。模型会收到当前会话中先前分配的一致性提示，因此具有相似名称或扩展名的文件倾向于归入相同的类别。当您希望一批文件保持严格统一时，此模式非常有帮助。\n- 在主窗口上的“分类类型”单选按钮之间切换；您的选择将保存下来，供下次运行时使用。\n\n### 类别白名单\n\n- 启用“使用白名单”以将选定的白名单注入 LLM 提示中；禁用则让模型自由选择。\n- 在“设置 → 管理类别白名单…”中管理列表（添加、编辑、删除）。只有在没有现有列表时才会自动创建默认列表，并且可以为不同项目保留多个命名列表。\n- 将每个白名单控制在大约 15–20 个类别\u002F子类别以内，以避免在较小的本地模型上出现过长的提示。与其使用一个非常长的列表，不如使用几个较窄的列表。\n- 白名单适用于任一分类模式；当您希望最大程度地遵守受限词汇表时，可将其与“更一致”模式搭配使用。\n\n---\n\n## 图像分析（视觉 LLM）\n\n图像分析使用基于 LLaVA 的本地视觉 LLM 来描述图像内容，并（可选）建议更好的文件名。此过程完全在本地运行，无需 API 密钥。\n\n### 必需的视觉 LLM 文件\n\n“选择 LLM”对话框现在包含一个“图像分析模型（LLaVA）”部分，其中有两个下载选项：\n\n- **LLaVA 文本模型（GGUF）**：主要的语言模型，用于生成描述和文件名建议。\n- **LLaVA mmproj（视觉编码器投影，GGUF）**：将视觉嵌入映射到 LLM 令牌空间的适配器，使模型能够接受图像输入。\n\n这两个文件都是必需的。如果缺少任何一个，图像分析功能将被禁用，应用程序会提示您打开“选择 LLM”对话框以下载它们。下载 URL 可以通过 `LLAVA_MODEL_URL` 和 `LLAVA_MMPROJ_URL` 覆盖（参见“环境变量”部分）。\n\n### 主窗口选项\n\n图像分析在主窗口中增加了六个相关复选框：\n\n- **按内容分析图片文件（可能较慢）**：对支持的图片文件运行视觉 LLM，并在分析对话框中显示进度。\n- **仅处理图片文件（忽略其他文件）**：将运行限制在支持的图片文件上，并在启用时禁用分类控件。\n- **将图像创建日期（如可用）添加到类别名称**：如果可用，将图像元数据中的 `YYYY-MM-DD` 附加到类别标签上。启用仅重命名模式时禁用此选项。\n- **将照片日期和地点添加到文件名（如可用）**：如果可用，将基于元数据的日期\u002F地点前缀添加到建议的图像文件名中。\n- **提供重命名图片文件的选项**：在审查对话框中显示“建议文件名”列，列出视觉 LLM 的建议。您可以在确认之前对其进行编辑。\n- **不分类图片文件（仅重命名）**：跳过对图像的文本分类，仅对其应用（可选）重命名操作。\n\n位于顶部的独立复选框“将音视频元数据添加到文件名（如可用）”控制支持的音视频文件的基于元数据的重命名建议。请参阅“音视频元数据文件名建议”。\n\n---\n\n## 文档分析（文本 LLM）\n\n文档分析使用相同的选定 LLM（本地或远程）从支持的文档文件中提取文本，总结内容，并可选地建议更好的文件名。无需额外下载模型。\n\n### 支持的文档格式\n\n- 纯文本：`.txt`、`.md`、`.rtf`、`.csv`、`.tsv`、`.json`、`.xml`、`.yml`\u002F`.yaml`、`.ini`\u002F`.cfg`\u002F`.conf`、`.log`、`.html`\u002F`.htm`、`.tex`、`.rst`\n- 
PDF：`.pdf`（默认嵌入PDFium；仅在您显式配置 `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF` 时，才可使用 `pdftotext` 作为 CLI 备用方案）\n- Office\u002FOpenOffice：`.docx`、`.xlsx`、`.pptx`、`.odt`、`.ods`、`.odp`（捆绑构建中嵌入了 libzip+pugixml；若不使用 vendored 库构建，则 CLI 备用方案会使用 `unzip`）\n- 当前不支持 `.doc`、`.xls`、`.ppt` 等旧版二进制格式。\n\n源码构建：默认使用内置提取器。如果您的目标平台上缺少 vendored 的 PDFium 工件，CMake 现在会明确报错，而不再静默禁用 PDF 内容提取。您可以通过设置 `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF` 恢复旧有的 CLI 备用方案。\n\n### 主窗口选项（文档）\n\n- **按内容分析文档文件**：提取文档文本并将其输入 LLM 以生成摘要及重命名建议。\n- **仅处理文档文件（忽略其他文件）**：运行时仅处理支持的文档文件，并在启用此选项期间禁用分类控件。\n- **提供文档重命名建议**：在审查对话框中显示包含 LLM 建议的“建议文件名”列，您可在确认前进行编辑。\n- **不分类文档文件（仅重命名）**：跳过文档的文本分类步骤，在应用（可选）重命名的同时保持文件原位。\n- **将文档创建日期（如可用）添加到类别名称中**：在元数据中存在年月信息时，将其追加为 `YYYY-MM` 格式。启用仅重命名模式时，此功能将被禁用。\n\n---\n\n## 音频\u002F视频元数据的文件名建议\n\n让 AI 文件整理工具将嵌入式媒体标签转化为整洁、一致的文件名，用于您的音乐和视频库。启用后，应用程序会读取支持的元数据字段，并按照 `年份_艺术家_专辑_标题.扩展名` 的格式构建一个精美的建议文件名。与所有重命名建议一样，除非您查看并确认，否则不会对文件进行任何更改。\n\n### 支持的音频\u002F视频格式\n\n- 音频扩展名：`.aac`、`.aif`、`.aiff`、`.alac`、`.ape`、`.flac`、`.m4a`、`.mp3`、`.ogg`、`.oga`、`.opus`、`.wav`、`.wma`\n- 视频扩展名：`.3gp`、`.avi`、`.flv`、`.m4v`、`.mkv`、`.mov`、`.mp4`、`.mpeg`、`.mpg`、`.mts`、`.m2ts`、`.ts`、`.webm`、`.wmv`\n- 内置标签读取器目前支持 MP3（ID3v1\u002FID3v2）、FLAC（Vorbis 注释）、OGG\u002FOGA\u002FOpus（Vorbis 注释）以及 MP4 系列容器，例如 `.m4a`、`.mp4`、`.m4v`、`.mov` 和 `.3gp`（MP4\u002FMOV 元数据原子）。\n- 在使用包管理的 `MediaInfoLib` 编译时，相同的重命名流程还可以利用 MediaInfo 提供的元数据，为更多支持的容器生成文件名建议。\n\n---\n\n## 系统兼容性检查\n\n**系统兼容性检查**会运行一个快速基准测试，评估您的系统对以下任务的处理能力：\n\n- 使用选定的本地 LLM 进行**分类**\n- 按内容进行**文档分析**\n- 使用视觉 LLM 进行**图像分析**\n\n您可以通过菜单启动该检查（**文件 → 系统兼容性检查…**）。只有在至少下载了一个本地或视觉 LLM 时才会运行，并且一旦运行过就不会自动再次执行。\n\n检查内容包括：\n\n- 检测可用的 CPU 线程和 GPU 后端（例如 Vulkan\u002FCUDA）\n- 对每个默认模型的小规模分类和文档分析工作负载进行计时\n- 如果存在视觉 LLM 文件，则对单次图像分析进行计时\n- 报告速度等级（最佳\u002F可接受\u002F稍慢），并推荐合适的本地 LLM\n\n提示：为获得更准确的结果，请在运行检查前关闭占用大量 CPU\u002FGPU 资源的应用程序。\n\n---\n\n## 系统要求\n\n- **操作系统**：Linux、macOS 或 Windows。Linux\u002FmacOS 的源码构建采用下方的 Makefile 流程；Windows 的源码构建则使用 Windows 部分中的原生 Qt\u002FMSVC + CMake 流程。\n- **编译器**：支持 C++20 
的编译器（Linux\u002FmacOS 上为 `g++` 或 `clang++`，Windows 上为 MSVC 2022）。\n- **Qt 6**：Core、Gui、Widgets 模块以及 Qt 资源编译器（Linux 上为 `qt6-base-dev` \u002F `qt6-tools`，macOS 上为 `brew install qt`，Windows 上则需使用 Qt 6 MSVC 工具包或通过 vcpkg 安装 `qtbase`）。\n- **库**：`curl`、`sqlite3`、`fmt`、`spdlog`、`libmediainfo`（完整源码构建必需），以及预编译的 `llama` 库，分别位于 Linux\u002FWindows 的 `app\u002Flib\u002Fprecompiled` 目录下，或 macOS 变体构建的 `app\u002Flib\u002Fprecompiled-*` 目录中。在 Windows 上，这些非 Qt 库通过 `app\u002Fvcpkg.json` 清单文件提供。\n- **MediaInfo 政策**：必须通过包管理器安装 MediaInfo（如 `apt`、`dnf`、`pacman`、`brew` 或 `vcpkg`）。构建过程拒绝使用 vendored 的 MediaInfo 子模块和已检入的二进制文件。\n- **文档分析相关库**（vendored）：PDFium、libzip 和 pugixml。PDFium 默认为必选，因此打包和源码构建在 Windows、macOS 和 Linux 上均会保留内置的 PDF 提取功能；仅当您有意使用 `pdftotext` 备用方案时，才需设置 `-DAI_FILE_SORTER_REQUIRE_EMBEDDED_PDF_BACKEND=OFF`。\n- **可选 GPU 后端**：推荐使用 Vulkan 1.2+ 运行时，或适用于 NVIDIA 显卡的 CUDA 12.x。`StartAiFileSorter.exe`\u002F`run_aifilesorter.sh` 会自动检测最佳可用后端，并在必要时回退到 CPU\u002FOpenBLAS，因此运行本应用并不需要 CUDA。\n- **Git**（可选）：用于克隆本仓库。也可直接下载压缩包。\n- **OpenAI 或 Gemini API 密钥**（可选）：仅在使用远程 ChatGPT 或 Gemini 工作流时才需提供。\n\n---\n\n## 安装说明\n\n使用本地 LLM 进行文件分类完全免费。如果您希望使用远程工作流（ChatGPT 或 Gemini），则需要拥有自己的 API 密钥，并确保账户中有少量余额或处于免费层级内（请参阅 [使用您的 OpenAI API 密钥](#using-your-openai-api-key) 或 [使用您的 Gemini API 密钥](#using-your-gemini-api-key)）。\n\n### Linux\n\n#### 预编译的 Debian\u002FUbuntu 软件包\n\n1. 
**安装运行时依赖**（Qt6、网络、数据库、数学库）：\n   - Ubuntu 24.04 \u002F Debian 12：\n     ```bash\n     sudo apt update && sudo apt install -y \\\n       libqt6widgets6 libcurl4 libjsoncpp25 libfmt9 libopenblas0-pthread \\\n       libvulkan1 mesa-vulkan-drivers patchelf\n     ```\n   - Debian 13（trixie）：\n     ```bash\n     sudo apt update && sudo apt install -y \\\n       libqt6widgets6 libcurl4t64 libjsoncpp26 libfmt10 libopenblas0-pthread \\\n       libvulkan1 mesa-vulkan-drivers patchelf\n     ```\n   如果您从源码构建 Vulkan 后端，请安装 `glslc`（Debian\u002FUbuntu 包：`glslc`；在某些发行版中为 `shaderc` 或 `shaderc-tools`）。\n   在 Debian 13 上，应使用 `libjsoncpp26`、`libfmt10` 和 `libcurl4t64`（如果 `libcurl4` 不可用，APT 可能会自动选择 `libcurl4t64`）。\n   确保已安装 Qt 平台插件（在 Ubuntu 22.04 中，这由 `qt6-wayland` 提供）。\n   GPU 加速还需要一个可用的 Vulkan 1.2+ 堆栈（Mesa、AMD\u002FIntel\u002FNVIDIA 驱动程序），或者对于 NVIDIA 用户，需要匹配的 CUDA 运行时环境（`nvidia-cuda-toolkit` 或厂商提供的软件包）。启动器会在两者都存在时优先使用 Vulkan，若两者均不可用则回退到 CPU。\n2. **安装软件包**\n   ```bash\n   sudo apt install .\u002Faifilesorter_*.deb\n   ```\n   使用 `apt install`（而非 `dpkg -i`）可确保自动安装上述所有缺失的依赖项。\n\n#### 从源码构建\n\n1. 
**安装依赖**\n   - Debian \u002F Ubuntu：\n    ```bash\n    sudo apt update && sudo apt install -y \\\n      build-essential cmake git qt6-base-dev qt6-base-dev-tools qt6-l10n-tools qt6-tools-dev-tools \\\n      libcurl4-openssl-dev libjsoncpp-dev libsqlite3-dev libssl-dev libfmt-dev libspdlog-dev libmediainfo-dev \\\n      zlib1g-dev\n    ```\n   - Fedora \u002F RHEL：\n\n    ```bash\n    export PATH=\"\u002Fusr\u002Flib64\u002Fqt6\u002Flibexec:$PATH\"\n    sudo dnf install -y gcc-c++ cmake git qt6-qtbase-devel qt6-qttools-devel \\\n      libcurl-devel jsoncpp-devel sqlite-devel openssl-devel fmt-devel spdlog-devel mediainfo-devel\n    ```\n\n   - Arch \u002F Manjaro：\n\n    ```bash\n     sudo pacman -S --needed base-devel git cmake qt6-base qt6-tools curl jsoncpp sqlite openssl fmt spdlog mediainfo\n    ```\n\n     可选的 GPU 加速还需要发行版提供的 Vulkan 1.2+ 驱动程序或运行时环境（Mesa、AMD、Intel、NVIDIA），或者用于 NVIDIA 显卡的 CUDA 软件包。请根据计划使用的堆栈进行安装；如果未检测到任何相关组件，应用程序将自动回退到 CPU。\n     MediaInfo 必须通过软件包管理方式提供；构建过程会拒绝使用自包含的 `MediaInfoLib` 文件夹或仓库本地的二进制文件。\n     \n2. **克隆仓库**\n\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter.git\n   cd ai-file-sorter\n   git submodule update --init --recursive\n   ```\n\n   > **子模块提示**：如果您之前手动下载了 `llama.cpp` 或 Catch2，请在运行 `git submodule` 命令前移除或重命名 `app\u002Finclude\u002Fexternal\u002Fllama.cpp` 和 `external\u002FCatch2`。Git 需要这些目录为空，以便填充跟踪的子模块。\n\n3. 
**构建 vendored libzip**（生成 `zipconf.h` 和 `libzip.a`）\n\n   ```bash\n   cmake -S external\u002Flibzip -B external\u002Flibzip\u002Fbuild \\\n    -DBUILD_SHARED_LIBS=OFF \\\n    -DBUILD_DOC=OFF \\\n    -DENABLE_BZIP2=OFF \\\n    -DENABLE_LZMA=OFF \\\n    -DENABLE_ZSTD=OFF \\\n    -DENABLE_OPENSSL=OFF \\\n    -DENABLE_GNUTLS=OFF \\\n    -DENABLE_MBEDTLS=OFF \\\n    -DENABLE_COMMONCRYPTO=OFF \\\n    -DENABLE_WINDOWS_CRYPTO=OFF\n\n   cmake --build external\u002Flibzip\u002Fbuild\n   ```\n\n   在 Ubuntu\u002FDebian 上，您还需要 Zlib 开发头文件（`zlib1g-dev`），否则 libzip 的配置步骤将会失败。\n   如果您更倾向于使用系统提供的头文件，请安装 `libzip-dev`，并确保 `zipconf.h` 位于您的包含路径中。\n\n4. **构建 llama 运行时变体**（针对您计划打包或测试的每个后端执行一次）\n\n   ```bash\n   # CPU \u002F OpenBLAS\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=off\n   # CUDA（可选；需要 NVIDIA 驱动程序 + CUDA 工具包）\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=on vulkan=off\n   # Vulkan（可选；需要可用的 Vulkan 1.2+ 堆栈及 glslc，例如 mesa-vulkan-drivers + vulkan-tools + glslc）\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=on\n   ```\n\n   每次调用都会将相应的 `llama`\u002F`ggml` 库放置在 `app\u002Flib\u002Fprecompiled\u002F\u003Cvariant>` 目录下，并将运行时 DLL\u002FSO 文件复制到 `app\u002Flib\u002Fggml\u002Fw\u003Cvariant>`。该脚本不允许同时启用 CUDA 和 Vulkan，因此需分别为每个后端单独运行。打包这两个目录可以让启动器在可用时优先选择 Vulkan，其次选择 CUDA，否则将继续使用 CPU——这样就不会留下仅依赖 CUDA 的情况。\n\n5. **编译应用程序**\n\n   ```bash\n   cd app\n   make -j4\n   ```\n\n   生成的二进制文件位于 `app\u002Fbin\u002Faifilesorter`。Makefile 要求使用 `pkg-config` 并依赖于通过软件包管理的 `libmediainfo`；它会明确拒绝使用自包含的 MediaInfo 复制件。\n\n6. **系统级安装（可选）**\n\n   ```bash\n   sudo make install\n   ```\n\n7. 
**构建 Debian 软件包（可选）**\n\n   ```bash\n   .\u002Fapp\u002Fscripts\u002Fpackage_deb.sh\n   ```\n\n   打包脚本始终会捆绑 CPU 运行时，并自动包含已在 `app\u002Flib\u002Fprecompiled` 下准备好的任何 GPU 变体（例如，在执行 `.\u002Fapp\u002Fscripts\u002Fbuild_llama_linux.sh cuda=off vulkan=on` 后的 `vulkan` 变体）。若希望生成更小的仅包含 CPU 的软件包，可使用 `--cpu-only` 参数；若希望脚本在缺少特定预编译变体时报错，则可使用 `--include-vulkan` 或 `--include-cuda` 参数。\n\n### macOS\n\n1. **安装 Xcode 命令行工具**（运行 `xcode-select --install`）。\n2. **安装 Homebrew**（如果需要）。\n3. **安装依赖项**\n\n   ```bash\n   brew install qt curl jsoncpp sqlite openssl fmt spdlog mediainfo cmake git pkgconfig libffi\n   ```\n\n   如果 Qt 尚未添加到环境变量中，请执行以下命令：\n\n   ```bash\n   export PATH=\"$(brew --prefix)\u002Fopt\u002Fqt\u002Fbin:$PATH\"\n   export PKG_CONFIG_PATH=\"$(brew --prefix)\u002Flib\u002Fpkgconfig:$(brew --prefix)\u002Fshare\u002Fpkgconfig:$PKG_CONFIG_PATH\"\n   ```\n\n4. **克隆仓库及子模块**（与 Linux 相同的命令）。\n   > macOS 构建会固定 `MACOSX_DEPLOYMENT_TARGET=11.0`，以确保 Mach-O 的 `LC_BUILD_VERSION` 能够覆盖 Apple Silicon 及更高版本（包括 Sequoia）。如果您需要不同的最低目标版本，请相应地调整该值，例如 `export MACOSX_DEPLOYMENT_TARGET=15.0`。\n\n5. **构建 vendored libzip**（生成 `zipconf.h` 和 `libzip.a`）\n\n   ```bash\n   cmake -S external\u002Flibzip -B external\u002Flibzip\u002Fbuild \\\n     -DBUILD_SHARED_LIBS=OFF \\\n     -DBUILD_DOC=OFF \\\n     -DENABLE_BZIP2=OFF \\\n     -DENABLE_LZMA=OFF \\\n     -DENABLE_ZSTD=OFF \\\n     -DENABLE_OPENSSL=OFF \\\n     -DENABLE_GNUTLS=OFF \\\n     -DENABLE_MBEDTLS=OFF \\\n     -DENABLE_COMMONCRYPTO=OFF \\\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\u002Flibzip\u002Fbuild\n   ```\n\n6. **构建 llama 运行时（在 Apple Silicon 上启用 Metal）**\n\n   ```bash\n   .\u002Fapp\u002Fscripts\u002Fbuild_llama_macos.sh\n   ```\n   macOS 应用程序及其 `.app` 包使用位于 `app\u002Flib\u002Fprecompiled*` 下的预编译运行时；它们不需要 Homebrew 安装的 `ggml` 或 `llama.cpp` 库。如果您在通用库路径中安装了旧版本的 `ggml` 或 `llama.cpp`，建议将其卸载或移除，而不是依赖这些库。\n\n7. 
**编译应用程序**\n\n   ```bash\n   cd app\n   make -j8                 # 使用 -jN 控制并行度\n   sudo make install   # 可选\n   ```\n\n   默认情况下，二进制文件会被放置在 `app\u002Fbin\u002Faifilesorter`。\n\n   **变体目标：**\n\n   ```bash\n   make -j8 MACOS_LLAMA_M1    # 输出 app\u002Fbin\u002Fm1\u002Faifilesorter\n   make -j8 MACOS_LLAMA_M2    # 输出 app\u002Fbin\u002Fm2\u002Faifilesorter\n   make -j8 MACOS_LLAMA_INTEL # 输出 app\u002Fbin\u002Fintel\u002Faifilesorter\n   ```\n\n   这些目标会在编译应用程序之前重新构建 llama.cpp 运行时。在 Apple Silicon 上进行 Intel 交叉编译时，应使用 x86_64 版本的 Homebrew（位于 `\u002Fusr\u002Flocal`），或者设置 `BREW_PREFIX=\u002Fusr\u002Flocal`，以便 Qt 和 pkg-config 正确解析路径。运行 `sudo make install` 会将 macOS 运行时库放置在 `\u002Fusr\u002Flocal\u002Flib\u002Faifilesorter` 中，以避免与系统或其他 Homebrew 安装的 ggml 库发生冲突。每个变体使用不同的构建目录，以避免跨架构的文件冲突：\n   - llama.cpp 库：`app\u002Flib\u002Fprecompiled-m1`、`app\u002Flib\u002Fprecompiled-m2`、`app\u002Flib\u002Fprecompiled-intel`\n   - 对象文件：`app\u002Fobj\u002Farm64` 或 `app\u002Fobj\u002Fx86_64`\n\n### Windows\n\n现在构建支持原生 MSVC + Qt6，无需 MSYS2。有两种方式可供选择，其中 vcpkg 方法最为简单。\n\n选项 A - CMake + vcpkg（推荐）\n\n1. 安装先决条件：\n   - Visual Studio 2022，并安装“桌面 C++”工作负载\n   - CMake 3.21 或更高版本（Visual Studio 自带最新版本）\n   - vcpkg：[https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fvcpkg](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fvcpkg)（克隆并初始化）\n   - 通过 vcpkg 清单管理的 `libmediainfo` 包（无需使用 vendored MediaInfo 子模块或二进制文件）\n   - **MSYS2 MinGW64 + OpenBLAS**：从 [https:\u002F\u002Fwww.msys2.org](https:\u002F\u002Fwww.msys2.org) 安装 MSYS2，打开 *MSYS2 MINGW64* 终端，并运行 `pacman -S --needed mingw-w64-x86_64-openblas`。`build_llama_windows.ps1` 脚本会使用此 OpenBLAS 版本来进行仅 CPU 构建（vcpkg 方案不适用），默认路径为 `C:\\msys64\\mingw64`，除非您指定 `openblasroot=\u003Cpath>` 或设置 `OPENBLAS_ROOT`。\n2. 克隆仓库及子模块：\n\n   ```powershell\n   git clone https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter.git\n   cd ai-file-sorter\n   git submodule update --init --recursive\n   ```\n\n3. 
**构建 vendored libzip**（生成 `zipconf.h` 和 `libzip.lib`）\n\n   在您将用于构建应用程序的相同 x64 Native Tools \u002F VS Developer PowerShell 中运行以下命令：\n\n   ```powershell\n   cmake -S external\\libzip -B external\\libzip\\build -A x64 `\n     -DBUILD_SHARED_LIBS=OFF `\n     -DBUILD_DOC=OFF `\n     -DENABLE_BZIP2=OFF `\n     -DENABLE_LZMA=OFF `\n     -DENABLE_ZSTD=OFF `\n     -DENABLE_OPENSSL=OFF `\n     -DENABLE_GNUTLS=OFF `\n     -DENABLE_MBEDTLS=OFF `\n     -DENABLE_COMMONCRYPTO=OFF `\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\\libzip\\build --config Release\n   ```\n\n4. 确定您的 vcpkg 根目录。它就是包含 `vcpkg.exe` 的文件夹（例如 `C:\\dev\\vcpkg`）。\n   - 如果 `vcpkg` 已经在您的 `PATH` 中，可以运行以下命令来打印其位置：\n\n     ```powershell\n     Split-Path -Parent (Get-Command vcpkg).Source\n     ```\n\n   - 否则，请使用您克隆 vcpkg 的目录。\n\n   关于 MediaInfo 的说明：在 Windows 上，您无需手动添加 `MediaInfoLib` 的头文件和库路径。项目已经在 `app\u002Fvcpkg.json` 中声明了 `libmediainfo`，而 `app\\build_windows.ps1` 会使用 vcpkg 工具链和清单配置 CMake，从而让 `find_package(MediaInfoLib ...)` 自动解析该包。如果您想预先安装或明确验证，可以运行 `vcpkg install libmediainfo:x64-windows`。\n\n5. 
构建捆绑的 `llama.cpp` 运行时变体（请在同一台 **x64 Native Tools** \u002F **VS 2022 Developer PowerShell** 终端中运行）。针对您需要的每个后端分别调用一次脚本。在运行仅 CPU 变体之前，请确保第 1 步中安装的 MSYS2 OpenBLAS 已就位（或显式指定 `openblasroot=\u003Cpath>`）：\n\n   ```powershell\n   # 仅 CPU \u002F OpenBLAS\n   app\\scripts\\build_llama_windows.ps1 cuda=off vulkan=off vcpkgroot=C:\\dev\\vcpkg\n   # CUDA（需要匹配的 NVIDIA 工具包\u002F驱动程序）\n   app\\scripts\\build_llama_windows.ps1 cuda=on vulkan=off vcpkgroot=C:\\dev\\vcpkg\n   # Vulkan（需要 LunarG Vulkan SDK 或厂商提供的 Vulkan 1.2+ 运行时）\n   app\\scripts\\build_llama_windows.ps1 cuda=off vulkan=on vcpkgroot=C:\\dev\\vcpkg\n   ```\n\n   每次运行都会在 `app\\lib\\precompiled\\\u003Ccpu|cuda|vulkan>` 中生成相应的 `llama.dll` \u002F `ggml*.dll` 对，并将运行时 DLL 复制到 `app\\lib\\ggml\\w\u003Cvariant>` 中。对于 Vulkan 构建，请安装最新的 LunarG Vulkan SDK（或厂商提供的运行时），确保在同一终端中运行 `vulkaninfo` 成功，然后再运行脚本。同时提供 Vulkan 和（可选）CUDA 构件，可以让 `StartAiFileSorter.exe` 在启动时自动检测最佳后端——优先使用 Vulkan，若 Vulkan 不可用则使用 CUDA，最后才回退到 CPU，因此 CUDA 并非必需。\n\n6. 使用辅助脚本构建 Qt6 应用程序（仍在 VS 终端中运行）。该辅助脚本会通过 `windeployqt` 阶段化运行时 DLL，并在不同变体之间共享同一套依赖项安装树，默认情况下可在一次运行中生成三个 Windows 版本：\n\n   ```powershell\n   # 如果脚本执行被阻止，需在每次运行前执行一次：\n   Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass\n\n   app\\build_windows.ps1 -Configuration Release -VcpkgRoot C:\\dev\\vcpkg\n   ```\n\n- 将 `C:\\dev\\vcpkg` 替换为你克隆 vcpkg 的路径；该路径必须包含 `scripts\\buildsystems\\vcpkg.cmake`。\n   - 默认情况下，辅助脚本会生成以下输出目录：\n     - 启用 Windows 自动更新的标准安装程序构建：`app\\build-windows\\Release`\n     - 禁用更新检查的 Microsoft Store 构建：`app\\build-windows-store\\Release`\n     - 仅通知\u002F手动更新的独立 Windows 构建：`app\\build-windows-standalone\\Release`\n   - 使用 `-Variants Standard`、`-Variants MsStore` 或 `-Variants Standalone` 只构建其中一部分。\n   - `aifilesorter.exe` 是主要的 Windows GUI 入口。`StartAiFileSorter.exe` 仍作为旧版引导程序与其并存，并采用相同的更新模式。\n   - 如果已设置 `VCPKG_ROOT` 或 `VPKG_ROOT` 环境变量，或者 `vcpkg`\u002F`vpkg` 已在 `PATH` 中，则 `-VcpkgRoot` 为可选。\n   - 每个变体目录都会生成各自的可执行文件以及暂存的 Qt\u002F第三方 DLL 文件。如果只需二进制文件而不打包运行时 DLL，请使用 `-SkipDeploy` 参数。\n   - 使用 
`-Parallel \u003CN>` 可以覆盖默认的“所有核心”并行构建行为（例如，`-Parallel 8`）。默认情况下，脚本会调用 `cmake --build ... --parallel \u003Ccore-count>` 和 `ctest -j \u003Ccore-count>`，以确保 MSBuild 和 Ninja 均得到充分利用。\n\n选项 B - CMake + Qt 在线安装程序\n\n1. 安装先决条件：\n   - Visual Studio 2022，配备桌面 C++ 工作负载\n   - 通过 Qt 在线安装程序安装 Qt 6.x MSVC 工具包（例如 Qt 6.6+ 配合 MSVC 2019\u002F2022）\n   - CMake 3.21+\n   - vcpkg（用于非 Qt 库）：curl、jsoncpp、sqlite3、openssl、fmt、spdlog、gettext、libmediainfo\n2. **构建自包含 libzip**（生成 `zipconf.h` 和 `libzip.lib`）\n\n   请在同一 x64 本机工具 \u002F VS 开发者 PowerShell 中运行以下命令：\n\n   ```powershell\n   cmake -S external\\libzip -B external\\libzip\\build -A x64 `\n     -DBUILD_SHARED_LIBS=OFF `\n     -DBUILD_DOC=OFF `\n     -DENABLE_BZIP2=OFF `\n     -DENABLE_LZMA=OFF `\n     -DENABLE_ZSTD=OFF `\n     -DENABLE_OPENSSL=OFF `\n     -DENABLE_GNUTLS=OFF `\n     -DENABLE_MBEDTLS=OFF `\n     -DENABLE_COMMONCRYPTO=OFF `\n     -DENABLE_WINDOWS_CRYPTO=OFF\n   cmake --build external\\libzip\\build --config Release\n   ```\n\n3. 构建捆绑的 `llama.cpp` 运行时（使用相同的 VS Shell）。任何缺失的 OpenBLAS\u002FcURL 包都将通过 vcpkg 自动安装：\n\n   ```powershell\n   pwsh .\\app\\scripts\\build_llama_windows.ps1 [cuda=on|off] [vulkan=on|off] [vcpkgroot=C:\\dev\\vcpkg]\n   ```\n\n   在配置 GUI 之前需要完成此步骤，因为构建过程会链接到生成的 `llama` 静态库\u002FDLL。\n\n4. 
从仓库根目录配置 CMake，使 CMake 能够同时识别 Qt 安装和应用的 vcpkg 清单文件（根据你的 Qt 安装调整 `CMAKE_PREFIX_PATH`）：\n\n    ```powershell\n    $env:VCPKG_ROOT = \"C:\\path\\to\\vcpkg\"  # 例如 C:\\dev\\vcpkg\n    $qt = \"C:\\Qt\\6.6.3\\msvc2019_64\"  # 示例\n    cmake -S app -B build -G \"Ninja\" `\n      -DCMAKE_PREFIX_PATH=$qt `\n     -DCMAKE_TOOLCHAIN_FILE=$env:VCPKG_ROOT\\scripts\\buildsystems\\vcpkg.cmake `\n     -DVCPKG_MANIFEST_DIR=app `\n     -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON `\n     -DVCPKG_TARGET_TRIPLET=x64-windows\n   cmake --build build --config Release\n   ```\n\n   此配置步骤启用了 vcpkg 清单模式，因此 `libmediainfo` 会自动从 `app\\vcpkg.json` 安装并解析。在 Windows 上无需手动编辑链接器或包含路径即可使用 MediaInfo。\n\n注意事项\n\n- 若要从头开始重新构建，运行 `.\\app\\build_windows.ps1 -Clean`。该脚本会在配置前删除选定变体的构建目录以及共享的 `app\\build-windows-vcpkg_installed` 依赖树。\n- 每次成功构建后，运行时 DLL 都会通过 `windeployqt` 自动复制；如果你自行管理部署，可以使用 `-SkipDeploy` 跳过此步骤。\n- 如果 Visual Studio 将 `VCPKG_ROOT` 设置为其安装目录下的内置副本，请将 vcpkg 克隆到一个可写目录（例如 `C:\\dev\\vcpkg`），并在运行 `build_llama_windows.ps1` 时指定 `vcpkgroot=\u003Cpath>`。\n- 如果你计划支持 CUDA 或 Vulkan 加速，请在配置 CMake 之前为每个目标后端运行 `build_llama_*` 辅助脚本，以确保相关库已存在。运行时可以同时支持两者，并在启动时自动选择，因此 CUDA 并非必需。\n- `-BuildTests` 和 `-RunTests` 目前仅在 `Standard` 变体中构建并执行测试，这是主要的 Windows 开发和 CI 配置。\n\n\n\n### 运行测试\n\n基于 Catch2 的单元测试是可选的。可通过 CMake 启用：\n\n```bash\ncmake -S app -B build-tests -DAI_FILE_SORTER_BUILD_TESTS=ON -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON\ncmake --build build-tests --target ai_file_sorter_tests --parallel $(nproc)\nctest --test-dir build-tests --output-on-failure -j $(nproc)\n```\n\n在 macOS 上，将 `$(nproc)` 替换为 `$(sysctl -n hw.ncpu)`。\n\n在 Windows（PowerShell）上，使用：\n\n```powershell\ncmake -S app -B build-tests -DAI_FILE_SORTER_BUILD_TESTS=ON -DAI_FILE_SORTER_REQUIRE_MEDIAINFOLIB=ON\ncmake --build build-tests --target ai_file_sorter_tests --parallel $env:NUMBER_OF_PROCESSORS\nctest --test-dir build-tests --output-on-failure -j $env:NUMBER_OF_PROCESSORS\n```\n\n注意事项\n\n- 列出单个 Catch2 测试用例：`.\u002Fbuild-tests\u002Fai_file_sorter_tests 
--list-tests`\n- 打印每个测试用例名称（包括通过的用例）：`.\u002Fbuild-tests\u002Fai_file_sorter_tests --verbosity high --success`\n\n在 Windows 上，你可以将 `-BuildTests`（以及 `-RunTests` 以执行 `ctest`）传递给 `app\\build_windows.ps1`：\n\n```powershell\napp\\build_windows.ps1 -Configuration Release -Variants Standard -BuildTests -RunTests\n```\n\n当前的测试套件（位于 `tests\u002Funit`）专注于核心实用工具；随着新功能的覆盖，可逐步扩展。\n\n### 运行时选择后端\n\nLinux 启动脚本（`app\u002Fbin\u002Frun_aifilesorter.sh` 或 `aifilesorter-bin`）以及 Windows 启动程序都接受以下可选标志：\n\n- `--cuda={on|off}` – 强制启用或禁用 CUDA 后端。\n- `--vulkan={on|off}` – 强制启用或禁用 Vulkan 后端。\n\n如果不提供任何标志，应用程序会按优先级顺序自动检测可用的运行时环境（Vulkan → CUDA → CPU）。可以使用这些标志来跳过某个后端（例如，`--cuda=off` 会强制使用 Vulkan 或 CPU，即使已安装 CUDA；`--vulkan=off` 则会显式尝试使用 CUDA），或者用于验证新安装的软件栈（例如，`--vulkan=on`）。同时为两个标志传递 `on` 将被拒绝；如果未检测到任何 GPU 后端，应用程序将自动回退到 CPU 模式。\n\n#### Vulkan 与显存注意事项\n\n- 在可用的情况下优先使用 Vulkan；只有在缺少 Vulkan 或明确请求时才会使用 CUDA。\n- 应用程序会根据可用显存自动估算 `n_gpu_layers` 值。为确保安全，集成显卡的上限被限制为 4 GiB，这可能会限制模型的卸载程度。\n- 如果显存紧张，应用程序可能会回退到 CPU 或减少卸载比例。一般来说，8 GB 及以上的显存能够提供更流畅的 Vulkan 卸载和图像分析体验；而 4 GB 显存通常会导致部分卸载或直接回退到 CPU。\n- 可以通过设置环境变量 `AI_FILE_SORTER_N_GPU_LAYERS` 来覆盖自动估算值（`-1` 表示自动，`0` 表示强制使用 CPU），或通过 `AI_FILE_SORTER_GPU_BACKEND=cpu` 强制使用 CPU 后端。\n- 对于图像分析任务，可以将 `AI_FILE_SORTER_VISUAL_USE_GPU=0` 设置为 0，以强制视觉编码器在 CPU 上运行，从而避免显存分配错误。\n\n### 环境变量\n\n运行时与 GPU：\n\n- `AI_FILE_SORTER_GPU_BACKEND` - 选择 GPU 后端：`auto`（默认）、`vulkan`、`cuda` 或 `cpu`。\n- `AI_FILE_SORTER_N_GPU_LAYERS` - 覆盖 llama.cpp 的 `n_gpu_layers`；`-1` 表示自动，`0` 表示强制使用 CPU。\n- `AI_FILE_SORTER_CTX_TOKENS` - 覆盖本地 LLM 上下文长度（默认 2048；限制在 512–8192）。\n- `AI_FILE_SORTER_GGML_DIR` - 指定加载 ggml 后端共享库的目录。在 macOS 上，此路径仅会从捆绑包或同级应用运行时目录中自动发现；若需自定义 ggml 运行时，请显式设置该变量。\n\n视觉 LLM：\n\n- `LLAVA_MODEL_URL` - 视觉 LLM GGUF 模型的下载 URL（启用图像分析所必需）。\n- `LLAVA_MMPROJ_URL` - 视觉 LLM mmproj GGUF 文件的下载 URL（启用图像分析所必需）。\n- `AI_FILE_SORTER_VISUAL_USE_GPU` - 强制视觉编码器使用 GPU（`1`）或 CPU（`0`）。默认为自动；若 VRAM 较低，Vulkan 可能会回退到 CPU。\n\n超时与日志：\n\n- `AI_FILE_SORTER_LOCAL_LLM_TIMEOUT` - 等待本地 LLM 响应的秒数（默认 60）。\n- 
`AI_FILE_SORTER_REMOTE_LLM_TIMEOUT` - 等待 OpenAI\u002FGemini 响应的秒数（默认 10）。\n- `AI_FILE_SORTER_CUSTOM_LLM_TIMEOUT` - 等待自定义 OpenAI 兼容 API 响应的秒数（默认 60）。\n- `AI_FILE_SORTER_LLAMA_LOGS` - 启用 verbose llama.cpp 日志（`1`\u002F`true`）；同时也会尊重 `LLAMA_CPP_DEBUG_LOGS` 设置。\n\n存储与更新：\n\n- `AI_FILE_SORTER_CONFIG_DIR` - 覆盖基础配置目录（`config.ini` 所在位置）。\n- `CATEGORIZATION_CACHE_FILE` - 覆盖配置目录内的 SQLite 缓存文件名。\n- `UPDATE_SPEC_FILE_URL` - 覆盖更新源规范 URL（开发\u002F测试专用）。更新程序现会分别读取 `update.windows`、`update.macos` 和 `update.linux` 中的平台特定流，同时仍兼容旧版单一流格式。\n- `AI_FILE_SORTER_UPDATER_TEST_MODE` - 启用 Windows 更新程序的实时测试模式（`1`\u002F`true`）。启用后，应用将跳过获取更新源，并根据以下值合成一个较新版本。\n- `AI_FILE_SORTER_UPDATER_TEST_URL` - Windows 更新程序实时测试包的直接 URL。该 URL 可指向 `.exe`、`.msi`，或包含且仅包含一个 `.exe` 或 `.msi` 的 `.zip` 文件。\n- `AI_FILE_SORTER_UPDATER_TEST_SHA256` - 下载的实时测试包的 SHA-256 校验和。若 URL 指向 ZIP 文件，则此校验和必须针对 ZIP 归档本身。\n- `AI_FILE_SORTER_UPDATER_TEST_VERSION` - 实时测试模式显示的可选合成版本。默认为当前应用版本加一个额外的尾段，例如 `1.7.2.1`。\n- `AI_FILE_SORTER_UPDATER_TEST_MIN_VERSION` - 实时测试模式的可选最小版本。默认为 `0.0.0`，使测试表现为可选更新。\n\n更新源示例：\n\n```json\n{\n  \"update\": {\n    \"current_version\": \"1.7.1\",\n    \"min_version\": \"1.6.0\",\n    \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\",\n    \"windows\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\",\n      \"installer_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownloads\u002FAIFileSorterSetup-1.7.1.exe\",\n      \"installer_sha256\": \"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef\"\n    },\n    \"macos\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\"\n    },\n    \"linux\": {\n      \"current_version\": \"1.7.1\",\n      \"min_version\": \"1.6.0\",\n      \"download_url\": \"https:\u002F\u002Ffilesorter.app\u002Fdownload\"\n    }\n  }\n}\n```\n\n兼容性说明：\n\n- 
较旧版本的应用仅读取 `update` 下的扁平顶层字段，因此若仍需支持这些版本，请在其中保留 `current_version`、`min_version` 和 `download_url` 作为旧版兼容流。\n- 新版本的应用更倾向于使用平台特定的流，当存在 `update.windows`、`update.macos` 或 `update.linux` 时，将优先使用这些流。\n- 旧版兼容流只能表示一个通用流，无法区分不同平台的版本或安装程序。\n\n仅限 Windows 的直接安装程序更新：\n\n- `installer_url` - Windows 安装程序包的直接 URL。\n- `installer_sha256` - 用于在启动前验证下载的安装程序的 SHA-256 校验和。\n- `installer_url` 现在也可以指向 ZIP 归档，只要归档内恰好包含一个安装程序有效载荷（`.exe` 或 `.msi`）即可。\n- 当这两个字段同时存在于 Windows 平台上时，应用可以下载安装程序、验证其完整性，然后提示用户：“退出应用并启动安装程序以完成更新。”\n\nWindows 更新程序实时测试模式：\n\n- `aifilesorter.exe` 在 Windows 上直接接受以下标志：\n  `--updater-live-test`\n  `--updater-live-test-url=\u003Chttps:\u002F\u002F...\u002FAIFileSorterSetup.zip>`\n  `--updater-live-test-sha256=\u003C下载包的 SHA-256>`\n  `--updater-live-test-version=\u003C可选版本>`\n  `--updater-live-test-min-version=\u003C可选最小版本>`\n- 若您仍在使用引导程序路径，`StartAiFileSorter.exe` 也会接受并转发相同的标志系列。\n- 实时测试模式仅适用于 Windows，且会绕过正常的更新 JSON 流。\n- 如果 ZIP 包含多个 `.exe` 或 `.msi`，更新程序将停止操作，而不会猜测应启动哪个安装程序。\n- 若已设置 `--updater-live-test` 但未提供 URL 或 SHA 校验和标志，`aifilesorter.exe` 还会查找可执行文件旁边的 `live-test.ini` 文件，并从中填充缺失的值。\n- 命令行标志优先于 `live-test.ini`，因此您可以保留一个默认文件，并在需要时仅覆盖某一字段。\n\n`live-test.ini` 示例：\n\n```ini\n[LiveTest]\ndownload_url = https:\u002F\u002Ffiles.example.com\u002FAIFileSorterSetup-1.7.3.zip\nsha256 = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef\ncurrent_version = 1.7.3\nmin_version = 0.0.0\n```\n\nPowerShell 启动示例：\n\n```powershell\n.\\aifilesorter.exe `\n  --development `\n  --updater-live-test\n```\n\n## 分类缓存数据库\n\nAI 文件分类器会将分类结果存储在 `config.ini` 旁边的本地 SQLite 数据库中（可通过 `AI_FILE_SORTER_CONFIG_DIR` 覆盖基础目录）。此缓存允许应用程序跳过已处理的文件，并在多次运行之间保留重命名建议。\n\n存储的内容包括：\n\n- 目录路径、文件名和文件类型（用作唯一键）。\n- 类别\u002F子类别、分类体系 ID、分类风格和时间戳。\n- 建议的文件名（用于图片和文档的重命名建议）。\n- 仅重命名标志（在启用图片\u002F文档仅重命名模式时使用）。\n- 已应用重命名标志（标记已执行重命名操作，以避免再次提供该建议）。\n\n如果您从“审核”对话框中重命名或移动文件，缓存条目将更新为新名称。后续运行时，已重命名的图片文件将被跳过，不再进行视觉分析和重命名建议。在“审核”对话框中，当启用仅重命名模式时，这些已重命名的行会被隐藏；但若启用分类功能，则仍会显示，以便您将其移动到相应类别文件夹中。要重置某个文件夹的缓存，您可以接受重新分类提示，或删除缓存文件（或将 
`CATEGORIZATION_CACHE_FILE` 指向一个新的文件名）。\n\n---\n\n## 卸载\n\n- **Debian\u002FUbuntu 包安装**：`sudo apt remove aifilesorter`\n- **Linux 源码安装**：`cd app && sudo make uninstall`\n- **macOS 源码安装**：`cd app && sudo make uninstall`\n\n对于源码安装，`make uninstall` 会移除可执行文件以及暂存的预编译库。如果您不再需要它们，也可以删除位于 `~\u002F.local\u002Fshare\u002Faifilesorter\u002Fllms`（Linux）或 `~\u002FLibrary\u002FApplication Support\u002Faifilesorter\u002Fllms`（macOS）中的本地 LLM 模型缓存。\n\n---\n\n## 使用您的 OpenAI API 密钥\n\n想使用 ChatGPT 而不是内置的本地模型吗？请提供您自己的 OpenAI API 密钥：\n\n1. 在应用程序中打开“设置 -> 选择 LLM”。\n2. 选择“ChatGPT（OpenAI API 密钥）”，粘贴您的密钥，并输入您想要使用的 ChatGPT 模型（例如 `gpt-4o-mini`、`gpt-4.1` 或 `o3-mini`）。\n3. 点击“确定”。密钥将本地存储在您的 AI 文件分类器配置文件中（应用程序数据文件夹中的 `config.ini`），并在后续运行中重复使用。清空该字段即可移除密钥。\n4. 只有在此选项被选中时才需要互联网连接。\n\n> 应用程序不再嵌入捆绑密钥；您始终需要提供自己的 OpenAI 密钥。\n\n---\n\n## 使用您的 Gemini API 密钥\n\n更喜欢 Google 的模型吗？请使用您自己的 Gemini API 密钥：\n\n1. 访问 **https:\u002F\u002Faistudio.google.com** 并使用您的 Google 帐户登录。\n2. 在左侧导航栏中，打开“API 密钥”（或“获取 API 密钥”），然后点击“创建 API 密钥”。选择“在新项目中创建 API 密钥”（或选择现有项目），并复制生成的密钥。\n3. 在应用程序中，打开“设置 -> 选择 LLM”，选择“Gemini（Google AI Studio API 密钥）”，粘贴您的密钥，并输入您想要的 Gemini 模型（例如 `gemini-2.5-flash-lite`、`gemini-2.5-flash` 或 `gemini-2.5-pro`）。\n4. 
点击“确定”。密钥将本地存储在您的 AI 文件分类器配置文件中，并在后续运行中重复使用。清空该字段即可移除密钥。\n\n> AI Studio 密钥可在免费层级使用，直到达到 Google 的使用限制；更高的配额或企业级使用则需要通过 Google Cloud 进行付费。\n> 应用程序调用 Gemini 的 `v1` `generateContent` 端点；请使用来自 `https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1\u002Fmodels?key=YOUR_KEY` 的模型 ID。您可以带或不带前缀 `models\u002F` 输入这些 ID。\n\n---\n\n## 测试\n\n- 从仓库根目录清理任何旧缓存并运行 CTest 包装脚本：\n\n  ```bash\n  cd app\n  rm -rf ..\u002Fbuild-tests      # 清除另一个检出中的缓存\n  .\u002Fscripts\u002Frebuild_and_test.sh\n  ```\n\n- 该脚本会配置到 `..\u002Fbuild-tests`，构建后运行 `ctest`。\n- 如果您有多个仓库副本（例如 `ai-file-sorter` 和 `ai-file-sorter-mac-dist`），每个都需要自己的 `build-tests` 文件夹；如果复用其他路径下的文件夹，CMake 会报错，提示源代码和构建目录不匹配。\n\n---\n\n## 诊断工具\n\n如果您需要报告错误或收集故障排除数据，可以使用内置的诊断脚本：\n\n- **macOS：** `.\u002Fapp\u002Fscripts\u002Fcollect_macos_diagnostics.sh`\n- **Linux：** `.\u002Fapp\u002Fscripts\u002Fcollect_linux_diagnostics.sh`\n- **Windows（PowerShell）：** `.\\app\\scripts\\collect_windows_diagnostics.ps1`\n\n每个脚本会收集相关日志，屏蔽常见的敏感路径，并将结果打包成 ZIP 存档以便分享。有关时间过滤和自动打开输出文件夹等选项，请参阅 [app\u002Fscripts\u002FREADME.md](app\u002Fscripts\u002FREADME.md)。\n\n---\n\n## 使用方法\n\n1. 启动应用程序（根据您的操作系统，参见“安装”部分的最后一步）。\n2. 选择一个要分析的目录。\n\n### 使用试运行和撤销功能\n\n- 在结果对话框中，您可以启用“试运行（仅预览，不移动文件）”以预览计划中的移动操作。预览对话框会显示源路径和目标路径，但不会实际移动任何文件。\n- 实际排序完成后，应用程序会保存持久化的撤销计划。您可以通过“编辑 → ‘撤销上次运行’”来恢复原状（尽力而为；会跳过冲突或已更改的情况）。\n3. 根据您的偏好，在主窗口上勾选相应的复选框。\n4. 点击“分析”按钮。应用程序将根据您选择的选项扫描每个文件和\u002F或目录。\n5. 将出现一个审核对话框。请确认分配的类别（以及子类别，如果步骤 3 中已启用）。\n6. 
点击“确认并排序！”以移动文件，或点击“稍后再继续”以推迟操作。由于分类结果已被保存，您可以随时从中断处继续。\n\n---\n\n## 整理远程目录（如 NAS）\n\n按照“使用方法”中的步骤操作，但修改第 2 步如下：\n\n- **Windows：** 为您的网络共享分配一个驱动器号（例如 `Z:` 或 `X:`）[参考此处的说明](https:\u002F\u002Fsupport.microsoft.com\u002Fen-us\u002Fwindows\u002Fmap-a-network-drive-in-windows-29ce55d1-34e3-a7e2-4801-131475f9557d)。\n- **Linux 和 macOS：** 使用类似以下命令将网络共享挂载到本地文件夹：\n\n  ```sh\n  sudo mount -t cifs \u002F\u002F192.168.1.100\u002Fshared_folder \u002Fmnt\u002Fnas -o username=myuser,password=mypass,uid=$(id -u),gid=$(id -g)\n  ```\n\n（请将 `192.168.1.100\u002Fshared_folder` 替换为您实际的网络位置路径，并根据需要调整选项。）\n\n---\n\n## 贡献\n\n- Fork 本仓库并提交拉取请求。\n- 在 GitHub 问题跟踪器中报告问题或提出功能建议。\n- 遵循现有的代码风格和文档格式。\n\n---\n\n## 致谢\n\n- Curl：\u003Chttps:\u002F\u002Fgithub.com\u002Fcurl\u002Fcurl>\n- Dotenv：\u003Chttps:\u002F\u002Fgithub.com\u002Fmotdotla\u002Fdotenv>\n- git-scm：\u003Chttps:\u002F\u002Fgit-scm.com>\n- Hugging Face：\u003Chttps:\u002F\u002Fhuggingface.co>\n- JSONCPP：\u003Chttps:\u002F\u002Fgithub.com\u002Fopen-source-parsers\u002Fjsoncpp>\n- Llama：\u003Chttps:\u002F\u002Fwww.llama.com>\n- libzip：\u003Chttps:\u002F\u002Flibzip.org>\n- Local-File-Organizer：\u003Chttps:\u002F\u002Fgithub.com\u002FQiuYannnn\u002FLocal-File-Organizer>\n- llama.cpp：\u003Chttps:\u002F\u002Fgithub.com\u002Fggml-org\u002Fllama.cpp>\n- MediaInfoLib：\u003Chttps:\u002F\u002Fmediaarea.net\u002Fen\u002FMediaInfo>\n- Mistral AI：\u003Chttps:\u002F\u002Fmistral.ai>\n- OpenAI：\u003Chttps:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Foverview>\n- OpenSSL：\u003Chttps:\u002F\u002Fgithub.com\u002Fopenssl\u002Fopenssl>\n- PDFium：\u003Chttps:\u002F\u002Fpdfium.googlesource.com\u002Fpdfium\u002F>\n- Poppler (pdftotext)：\u003Chttps:\u002F\u002Fpoppler.freedesktop.org\u002F>\n- pugixml：\u003Chttps:\u002F\u002Fpugixml.org>\n- Qt：\u003Chttps:\u002F\u002Fwww.qt.io\u002F>\n- spdlog：\u003Chttps:\u002F\u002Fgithub.com\u002Fgabime\u002Fspdlog>\n- unzip (Info-ZIP)：\u003Chttps:\u002F\u002Finfozip.sourceforge.net\u002F>\n\n## 许可证\n\n本项目采用 GNU Affero 通用公共许可证（GNU 
AGPL）进行授权。详情请参阅 [LICENSE](LICENSE) 文件，或访问 \u003Chttps:\u002F\u002Fwww.gnu.org\u002Flicenses\u002Fagpl-3.0.html>。\n\n---\n\n## 捐赠\n\n支持 **AI 文件排序器** 的开发及其未来功能。您的每一份贡献都至关重要！\n\n- **[捐赠](https:\u002F\u002Ffilesorter.app\u002Fdonate\u002F)**\n\n---","# AI File Sorter 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- Windows 10\u002F11 (64-bit)\n- macOS 10.15 或更高版本\n- Linux (Ubuntu 20.04 或更高版本，支持 DEB\u002FRPM 包)\n\n### 前置依赖\n- Python 3.8 或以上（用于部分脚本）\n- Qt6 运行时库（已包含在安装包中）\n\n## 安装步骤\n\n### Windows\n1. 访问 [SourceForge 下载页面](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload) 下载安装包，或通过 [Microsoft Store](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s) 安装\n2. 双击下载的 `.exe` 文件并按照提示完成安装\n\n### macOS\n1. 访问 [SourceForge 下载页面](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload) 获取 `.dmg` 文件\n2. 打开 `.dmg` 文件并按照提示完成安装\n\n### Linux\n```bash\n# 使用 APT 安装（适用于 Ubuntu\u002FDebian）\nsudo apt install .\u002Fai-file-sorter_1.7.3_amd64.deb\n\n# 使用 YUM 安装（适用于 CentOS\u002FRHEL）\nsudo yum install .\u002Fai-file-sorter_1.7.3_x86_64.rpm\n```\n\n## 基本使用\n\n1. 启动 AI File Sorter 应用程序\n2. 在主界面点击 **\"Add Folder\"** 按钮，选择需要整理的文件夹\n3. 选择分类模式：\n   - **More Refined**：更细致的分类\n   - **More Consistent**：更统一的分类\n4. 如果需要，启用 **Category Whitelists** 来限制分类范围\n5. 点击 **\"Start Sorting\"** 开始分析和整理\n6. 在弹出的预览窗口中检查建议的分类和重命名\n7. 
确认后，文件将被自动移动到新创建的文件夹中\n\n> 注意：AI File Sorter 默认使用本地模型（如 Llama 3B、Mistral 7B），无需网络连接。若需使用远程模型（如 OpenAI 或 Gemini），请在设置中配置 API 密钥。","一位自由摄影师在完成一次为期两周的旅行后，需要整理超过2000张照片和相关素材文件，包括视频、音频和文档。这些文件分散在多个设备和存储介质中，命名混乱，缺乏统一标准。\n\n### 没有 ai-file-sorter 时  \n- 文件名杂乱无章，如“DSC_001.jpg”、“IMG_20240501_123456.jpg”，难以快速识别内容  \n- 需要手动分类和重命名，耗时且容易出错  \n- 不同设备上的文件结构不一致，整理效率低下  \n- 缺乏统一命名规则，后期查找和归档困难  \n\n### 使用 ai-file-sorter 后  \n- 自动识别图片内容并生成描述性文件名，如“sunset_over_mountain.jpg”  \n- 根据文件类型和内容自动分类，如“风景照”、“人物照”、“视频素材”等  \n- 支持跨设备同步整理结果，保持文件结构一致性  \n- 提供清晰命名建议，提升后期检索和管理效率  \n\n通过 AI File Sorter，摄影师在不到两小时内完成了原本需要数天的工作，大幅提升了文件管理效率和组织质量。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhyperfield_ai-file-sorter_547f5867.gif","hyperfield",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhyperfield_b82a686c.png","Quicknode","quicknode.net","https:\u002F\u002Fgithub.com\u002Fhyperfield",[82,86,90,93,97,101],{"name":83,"color":84,"percentage":85},"C++","#f34b7d",89,{"name":87,"color":88,"percentage":89},"Shell","#89e051",3.8,{"name":91,"color":92,"percentage":10},"PowerShell","#012456",{"name":94,"color":95,"percentage":96},"CMake","#DA3434",2.7,{"name":98,"color":99,"percentage":100},"Makefile","#427819",1.5,{"name":102,"color":103,"percentage":104},"Python","#3572A5",0.1,643,76,"2026-04-05T03:41:02","AGPL-3.0","Linux, macOS, Windows","未说明",{"notes":112,"python":110,"dependencies":113},"支持本地运行，使用 Llama 3B (Q4) 和 Mistral 7B 模型。需要下载约 5GB 的模型文件，建议使用 conda 管理环境。",[],[14,13,26,15],[116,117,118,119,120],"ai","file-management","file-manager","llm","organizer","2026-03-27T02:49:30.150509","2026-04-06T07:11:51.666130",[124,129,134,139,144,148],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},5449,"AI File Sorter 
在分析文件夹时会突然关闭，如何解决？","此问题可能由文件数量过多或文件名中包含特殊字符引起。尝试更新到最新版本（https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002F），并确保使用的是最新构建。如果问题仍然存在，可以尝试重新安装或检查日志文件（C:\\Users\\your_user_name\\AppData\\Roaming\\AIFileSorter\\logs）。","https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fissues\u002F3",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},5450,"安装 AI File Sorter 后出现 DLL 错误，如何解决？","该问题通常由缺少 Microsoft Visual C++ Redistributable 引起。请安装 [MS Visual C++ redistributable](https:\u002F\u002Faka.ms\u002Fvs\u002F17\u002Frelease\u002Fvc_redist.x64.exe)。此外，确保从官方渠道下载安装包，并尝试重新安装。","https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fissues\u002F5",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},5451,"AI File Sorter 启动后直接崩溃，提示 libcairo-2.dll 错误，如何解决？","此问题可能是由于编译时的 CPU 架构不兼容导致。尝试使用实验性构建（https:\u002F\u002Ffilesorter.app\u002Fmedia\u002Fdownloads\u002FAI_File_Sorter_experimental.zip），该构建使用 Visual Studio 工具链编译，以提高 Windows 兼容性。","https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fissues\u002F10",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},5452,"点击分析后 AI File Sorter 立即崩溃，没有错误信息，如何解决？","尝试使用实验性构建（https:\u002F\u002Ffilesorter.app\u002Fmedia\u002Fdownloads\u002FAI_File_Sorter_experimental.zip）。该版本针对 Windows 系统进行了优化，可能解决了兼容性问题。无需手动编译，直接运行即可。","https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fissues\u002F9",{"id":145,"question_zh":146,"answer_zh":147,"source_url":128},5453,"AI File Sorter 在分析过程中崩溃，如何排查原因？","首先确认你的 CPU 是 Intel 还是 AMD，并在 CMD 或 PowerShell 中运行 `nvidia-smi.exe -l 1` 检查 GPU 是否被正确识别。同时，检查日志文件（C:\\Users\\your_user_name\\AppData\\Roaming\\AIFileSorter\\logs）是否有异常信息。",{"id":149,"question_zh":150,"answer_zh":151,"source_url":133},5454,"AI File Sorter 安装后无法启动，提示缺少 cudart64_12.dll 和 cublas64_12.dll，如何解决？","这些 DLL 文件通常是 CUDA 库的一部分。请确保已安装 NVIDIA CUDA 工具包，或尝试重新安装 AI File 
Sorter。如果问题依旧，建议联系开发者获取支持。",[153,158,163,168,173,178,183,188,193,198,202,206,211,216,221],{"id":154,"version":155,"summary_zh":156,"released_at":157},104939,"v1.7.3","AI File Sorter 1.7.3 is now released for Windows, macOS, and Linux. Available from [filesorter.app](https:\u002F\u002Ffilesorter.app) and SourceForge. Will be updated on the Microsoft Store soon.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nAI File Sorter 1.7.3 improves reliability, localization, and update handling across all platforms. Categorization is now more consistent in non-English languages by using a canonical English taxonomy internally and translating results for display. Local LLM-based categorization has been hardened to handle longer or irregular outputs more safely, and scans are now more resilient to filesystem errors. The update system has been expanded to support platform-specific feeds, including improved Windows installer downloads with integrity verification. UI translations have been completed using Qt catalogs, and various improvements were made to packaging, runtime handling (especially on macOS and Linux), and diagnostics. This release also includes multiple stability improvements and bug fixes.\r\n\r\nSee the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2026-03-22T23:39:58",{"id":159,"version":160,"summary_zh":161,"released_at":162},104940,"v1.7.0","AI File Sorter 1.7.0 is now released for Windows, macOS, and Linux. 
Available from [filesorter.app](https:\u002F\u002Ffilesorter.app), SourceForge, and the Microsoft Store.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\n[![Get it from Microsoft](https:\u002F\u002Fget.microsoft.com\u002Fimages\u002Fen-us%20dark.svg)](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s)\r\n\r\nThis version introduces a redesigned stage-based progress view, improved image categorization options, and metadata-based rename suggestions for audio and video files using embedded media tags. It also includes various bug fixes and stability improvements.\r\n\r\nSee the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2026-03-09T22:55:56",{"id":164,"version":165,"summary_zh":166,"released_at":167},104941,"v1.6.0","AI File Sorter 1.6 is now released for Windows, macOS, and Linux. 
Available from [filesorter.app](https:\u002F\u002Ffilesorter.app), SourceForge, and the Microsoft Store.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\n[![Get it from Microsoft](https:\u002F\u002Fget.microsoft.com\u002Fimages\u002Fen-us%20dark.svg)](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s)\r\n\r\nThis version includes major feature additions such as renaming and arranging documents by their content, integration with a faster local LLM, custom API endpoints, improved stability and robustness, UI and usability enhancements, and the addition of Korean to the UI.\r\n\r\nSee the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2026-02-05T14:04:43",{"id":169,"version":170,"summary_zh":171,"released_at":172},104942,"v1.5.0","AI File Sorter 1.5 is now released for Windows, macOS, and Linux. 
Available from [filesorter.app](https:\u002F\u002Ffilesorter.app), SourceForge, and Microsoft Store.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\n[![Get it from Microsoft](https:\u002F\u002Fget.microsoft.com\u002Fimages\u002Fen-us%20dark.svg)](https:\u002F\u002Fapps.microsoft.com\u002Fdetail\u002F9npk4dzd6r6s)\r\n\r\nThis version includes major improvements such as image content analysis and renaming (with related options, using a locally running LLaVA model), UI and usability enhancements, Gemini API support, and the addition of Dutch to the UI.\r\n\r\nSee the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2026-01-11T14:38:11",{"id":174,"version":175,"summary_zh":176,"released_at":177},104943,"v1.4.0","AI File Sorter 1.4 is now released for Windows, macOS, and Linux. Available from [filesorter.app](https:\u002F\u002Ffilesorter.app) and SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nThis version includes major improvements like persistent Undo, Dry Run, UI tweaks, and Remote LLM support using your own API key.\r\nSee the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2025-11-30T18:16:26",{"id":179,"version":180,"summary_zh":181,"released_at":182},104944,"v1.3.0","AI File Sorter 1.3.0 is now released for Windows, macOS, and Linux. 
Available from [filesorter.app](https:\u002F\u002Ffilesorter.app) and SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nThis version includes significant improvements, such as categorization options, categorization and interface languages, and optional categorization whitelists! See the full changelog [here](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FCHANGELOG.md) for more details.","2025-11-22T19:54:21",{"id":184,"version":185,"summary_zh":186,"released_at":187},104945,"v1.1.0","AI File Sorter 1.1.0 is now released for Windows, macOS, and Linux. Available from [filesorter.app](https:\u002F\u002Ffilesorter.app) and SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nThis version includes many improvements and fixes, including a sorting-algorithm improvement for much more consistent categorization, as well as support for Vulkan! See the [changelog](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter?tab=readme-ov-file#110---2025-11-08) for more details.","2025-11-09T01:57:28",{"id":189,"version":190,"summary_zh":191,"released_at":192},104946,"v1.0.0","The latest AI File Sorter installer for Windows and macOS. Also available from SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nThis version includes major interface and functionality improvements, including a switch to Qt6, internationalization, and improved stability and reliability. 
See the [changelog](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter?tab=readme-ov-file#100---2025-10-30) for more details.\r\n\r\nBinary releases for Linux are coming up soon. You can, of course, build Linux versions of the app from source by following simple instructions in the [README](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FREADME.md).","2025-10-30T14:19:16",{"id":194,"version":195,"summary_zh":196,"released_at":197},104947,"v0.9.7","The latest AI File Sorter installer for Windows. Also available from SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nThis version includes bug fixes and expanded logging coverage to assist with debugging potential issues.\r\n\r\nBinary releases for Linux and macOS are coming up soon. You can, of course, build macOS and Linux versions of the app by following detailed instructions in the [README](https:\u002F\u002Fgithub.com\u002Fhyperfield\u002Fai-file-sorter\u002Fblob\u002Fmain\u002FREADME.md).","2025-10-19T16:01:27",{"id":199,"version":200,"summary_zh":196,"released_at":201},104948,"v0.9.3","2025-09-22T00:32:13",{"id":203,"version":204,"summary_zh":196,"released_at":205},104949,"v0.9.2","2025-08-06T21:43:28",{"id":207,"version":208,"summary_zh":209,"released_at":210},104950,"v0.9.1","The latest AI File Sorter installer for Windows. Also available from SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nReleases for Linux and macOS are coming up soon.\r\n\r\nThis version includes bug fixes and minor improvements for improved stability. 
OpenCL support has been removed from the llama.cpp build, but Vulkan integration is planned for a future update, coming very soon. Vulkan will enable support for non-Nvidia GPUs. However, you can still enable OpenCL or Vulkan in llama.cpp if you compile it from source.","2025-08-01T18:09:21",{"id":212,"version":213,"summary_zh":214,"released_at":215},104951,"v0.9.0","The latest AI File Sorter installer for Windows. Also available from SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)\r\n\r\nReleases for Linux and macOS are coming up soon.","2025-07-17T22:21:21",{"id":217,"version":218,"summary_zh":219,"released_at":220},104952,"v0.8.3","The latest AI File Sorter installer for Windows. Also available from SourceForge.\r\n\r\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Flatest\u002Fdownload)","2025-02-06T18:20:26",{"id":222,"version":223,"summary_zh":224,"released_at":225},104953,"v0.8.0","The latest AI File Sorter installer for Windows. Also available from SourceForge.\n\n[![Download ai-file-sorter](https:\u002F\u002Fa.fsdn.com\u002Fcon\u002Fapp\u002Fsf-download-button)](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fai-file-sorter\u002Ffiles\u002Fv0.8.0\u002FAIFileSorter_Installer.zip\u002Fdownload)","2025-01-30T22:54:20"]