[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jbexta--AgentPilot":3,"tool-jbexta--AgentPilot":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":78,"languages":79,"stars":84,"forks":85,"last_commit_at":86,"license":87,"difficulty_score":23,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":96,"github_topics":97,"view_count":10,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":116,"updated_at":117,"faqs":118,"releases":147},255,"jbexta\u002FAgentPilot","AgentPilot","A versatile workflow automation platform to create, organize, and execute AI workflows, from a single LLM to complex AI-driven workflows.","AgentPilot 是一个开源的 AI 工作流自动化平台，让你轻松创建、组织和运行从简单到复杂的 AI 任务流程。无论是与单个大语言模型对话，还是协调多个 AI 智能体协同完成任务，它都能提供直观流畅的操作体验。  \n\n它解决了传统 AI 工具难以灵活编排多步骤、多人协作式任务的问题，特别适合需要反复调试、迭代优化工作流的场景。通过支持分支对话、消息重跑和可视化图结构编排，用户可以像搭积木一样构建并实时交互式调整 AI 流程。  \n\nAgentPilot 还具备可定制的界面和基于自然语言的时间调度功能（例如“每小时”或“每年2月29日”），让自动化任务更贴近实际需求。  \n\n这款工具主要面向开发者、AI 研究人员和技术爱好者，也适合有一定技术背景的产品经理或设计师，用于快速原型验证或构建个性化的 AI 助手系统。其桌面应用形式降低了部署门槛，开箱即用，同时保留了高度的扩展性与灵活性。","\u003Ch1 align=\"center\">💬 Agent Pilot\u003C\u002Fh1>\n\n\u003Cp align=\"center\">️\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_567fafe4c403.png\" width=\"600px\" 
alt=\"AgentPilot desktop demo\" \u002F>\n\u003Cbr>\u003Cbr>\nA versatile workflow automation system. Create, organize, and execute complex AI-driven tasks.\nAgent Pilot provides a seamless experience, whether you want to chat with a single LLM or a complex multi-member workflow.\n\u003Cbr>\u003Cbr>\nWith an intuitive and feature-rich interface, you can effortlessly design AI workflows and chat with them in real-time.\nBranching chats are supported, allowing flexible interactions and iterative refinement.\n\u003Cbr>\u003Cbr>\nAgent Pilot offers generative and customizable UI, allowing creation of custom pages and hierarchical configs.\nThis flexibility gives you the freedom to design an interface that aligns with your specific needs and effortlessly integrate into your workflows.\n\u003Cbr>\u003Cbr>\nThe system supports scheduled and recurring workflows that can be set to run based on natural language expressions of time, enabling automation that ranges from every second to every leap year.\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1169291612816420896?style=flat)](https:\u002F\u002Fdiscord.gg\u002Fge2ZzDGu9e)\n[![X (formerly Twitter) Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002FAgentPilotAI)](https:\u002F\u002Ftwitter.com\u002FAgentPilotAI)\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_b5789229336a.gif\" align=\"center\" height=\"255px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_c4511147cddc.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_1bdabcd9b033.png\" align=\"center\" height=\"250px\" 
alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_40cf391fee57.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_6c13d5aca5bb.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n\u003C\u002Fp>\n\n## Quickstart\n\n### Binaries\n\u003Ctable>\n  \u003Ctr>\n\t\u003Cth>Platform\u003C\u002Fth>\n\t\u003Cth>Downloads\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Linux\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Linux.tar.gz\u002Fdownload\" target=\"_blank\">AgentPilot_0.5.1_Linux.tar.gz\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb>  e74e736e3efbd459b411ecffc45e936e\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> 93b12bd208095f8d8b34395446de23d233a1baed\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Windows\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Windows.zip\u002Fdownload\" target=\"_blank\">AgentPilot_0.5.1_Windows.zip\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb> 17079a8f2faf9683c59d11d0b67a8092\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> c5a30c02f17782ead98c24098e874c9ba2edc950\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Mac Intel\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Mac_Intel.tar.gz\u002Fdownload\" 
target=\"_blank\">AgentPilot_0.5.1_Mac_Intel.tar.gz\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb> 2e1e03e5305ea279df1b76d1a8074cb7\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> 9369152f1b69ff2a4ca476ecf1b377b5ce0e072b\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\nBuilding from source: [How to build from source](docs\u002Fguides\u002Fhow_to_build.md) \u003Cbr>\n\n> [!TIP]\n> You can migrate your old database to the new version by replacing your executable with the new one before starting the application.\n\n## Features\n\n###  👤 Create Agents\nCreate new agents, edit their configuration and organise them into folders.\u003Cbr>\nMulti-member workflows can be saved as a single agent and nested infinitely.\n\n### 📝 Manage Chats\nView, continue and delete previous workflow chats and organise them into folders.\u003Cbr>\n\n### 🌱 Branching Workflows\nMessages, tools and code can be edited and re-run, allowing a more practical way to chat with your workflow.\u003Cbr>\nBranching works with all plugins and multi-member chats.\u003Cbr>\n\n### 👥 Graph Workflows\nSeamlessly add other members or blocks to a workflow and configure how they interact with each other.\u003Cbr>\nMembers aligned vertically are executed in parallel.\n\nAvailable members:\n- **User** - This is you and will await your input.\n- **Agent** - Gets an LLM response with integrated tools and messages.\n- **Text** - A simple text block that can nest other blocks.\n- **Code** - Gets the output of any given code.\n- **Prompt** - Gets an LLM response from a single prompt.\n- **Module** - Runs or retrieves a method or variable from any module.\n- **Workflow** - Any combination of the above types.\n\n### 📦 Blocks\nManage a collection of nestable blocks available to use in any workflow or text field, \nallowing reusability and consistency.\u003Cbr>\nBy default a block is a simple text block, but it can be any of the above member types, even a multi-member workflow.\u003Cbr>\nThese 
can be quickly dropped into any workflow, or used in text fields (such as the system message) by using the block name in curly braces, e.g. `{block-name}`.\n\n### 🔨 Tools\nCreate and manage tools which can be assigned to agents.\u003Cbr>\nTools share the same functionality as blocks, except that by default they are a single Code member.\u003Cbr> \nThey can also be an entire workflow; this allows your agents to run not just code but an entire workflow if you wish.\u003Cbr>\nConfigure their parameters, which can be accessed from all workflow member types.\nThese parameters can be modified at runtime and re-executed; this creates a branch point which you can cycle through.\n\n### 💻 Modules\nModules are Python files which are imported at runtime.\u003Cbr>\nThese are useful for things like toolkits, daemons, memory, custom pages, or anything that needs persistence.\n\n### 📐 Customizable UI\nIncludes a flexible and powerful set of base classes for building complex hierarchical configuration interfaces. 
\nThe entire app is built on this framework.\nDevelopers can modify or create configuration pages easily, even while the app is running.\n\n### 🕒 Scheduler (Premium)\nSchedule workflows to run at specific times or intervals.\u003Cbr>\nNatural language expressions are supported, allowing for flexible scheduling.\u003Cbr>\nFor example, you can schedule a workflow to run every 5 minutes, every day at 3pm, or every 2nd Tuesday of the month.\n\n### 📄 Structured Outputs\nMembers can be configured to output structured data, thanks to [Instructor](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor).\u003Cbr>\n\n### 📦 Addons\nCreate and import custom addons to extend the functionality of Agent Pilot.\u003Cbr>\n\n### 💻 Code Interpreter\nOpen Interpreter is integrated into Agent Pilot, and can either be used standalone as a plugin \nor used to execute code in 9 languages (Python, Shell, AppleScript, HTML, JavaScript, PowerShell, R, React, Ruby).\n\nCode can be executed in multiple ways:\n- From any 'Code' member in any workflow (Chat, Block, Tool).\n- From a message with the role 'Code'.\n\nYou should always understand the code being run; any code you execute is your own responsibility.\n\nFor code messages, auto-run can be enabled in the settings.\nTo see code messages in action, talk to the pre-configured Open Interpreter agent.\n\n### 🪄 AI Generation\nBlocks under the 'System Blocks' folder are used for generating or enhancing fields.\nClaude's prompt generator is included by default; you can tweak it or create your own.\n- **Prompt** - AI enhanced user input\n- **Agent** - AI generated agent (Coming soon)\n  - **System message** - AI generated system message (Coming soon)\n- **Page** - AI generated page (Coming soon)\n\n### 🔌 Plugins\nAgent Pilot supports the following plugins:\n- **Agent** - Create custom agent behaviour.\n  - [Open Interpreter](https:\u002F\u002Fgithub.com\u002FKillianLucas\u002Fopen-interpreter)\n  - [OpenAI Assistant](\u002F)\n  - 
[CrewAI Agent](\u002F) (Currently disabled)\n- **Workflow** - Create workflow behaviour.\n  - [CrewAI Workflow](\u002F) (Currently disabled)\n- **Provider** - Add support for a model provider.\n  - [Litellm (100+ models)](\u002F)\n\n- [Create a plugin](\u002F)\n\n### 👄 Voice\n**Coming back soon**\u003Cbr>\n~~Agents can be linked to a text-to-speech service; combine with a personality context block and make your agent come to life!~~\u003Cbr>\n\n### 🔠 Models\nLiteLLM is integrated and supports the following providers:\u003Cbr>\n\n- AI21\n- AWS Bedrock\n- AWS Sagemaker\n- Aleph Alpha\n- Anthropic\n- Anyscale\n- Azure OpenAI\n- Baseten\n- Cloudflare\n- Cohere\n- Custom API Servers\n- DeepInfra\n- DeepSeek\n- Gemini\n- Github\n- Groq\n- Huggingface\n- Mistral\n- NLP Cloud\n- Nvidia NIM\n- Ollama\n- OpenAI\n- OpenRouter\n- PaLM API Google\n- Perplexity AI\n- Petals\n- Replicate\n- Together AI\n- VLLM\n- VertexAI Google\n- Voyage\n\n## Contributions\nContributions to Agent Pilot are welcome and appreciated. 
Please feel free to submit a pull request.\n\n## Known Issues\n- Be careful using auto-run code with Open Interpreter: in any chat you open, if code is the last message it will start auto-running. I'll add a flag to remember whether the countdown has been stopped.\n- The Windows exe must have the console visible due to a strange bug.\n- On Linux, creating a venv does not install pip.\n- Changing the config of an OpenAI Assistant won't reload the assistant; for now, close and reopen the chat.\n\nIf you find this project useful, please consider showing support by giving a star or leaving a tip :)\n\u003Cbr>\u003Cbr>\nBTC:\u003Cbr> \nETH: \u003Cbr>\n","\u003Ch1 align=\"center\">💬 Agent Pilot\u003C\u002Fh1>\n\n\u003Cp align=\"center\">️\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_567fafe4c403.png\" width=\"600px\" alt=\"AgentPilot desktop demo\" \u002F>\n\u003Cbr>\u003Cbr>\n一个多功能的工作流自动化系统。创建、组织并执行复杂的 AI 驱动任务。\u003Cbr>\n无论你是想与单个大语言模型（LLM, Large Language Model）聊天，还是与复杂的多成员工作流交互，Agent Pilot 都能提供无缝体验。\n\u003Cbr>\u003Cbr>\n凭借直观且功能丰富的界面，你可以轻松设计 AI 工作流，并实时与其进行对话。\u003Cbr>\n支持分支对话（Branching chats），实现灵活的交互和迭代优化。\n\u003Cbr>\u003Cbr>\nAgent Pilot 提供可生成且高度可定制的用户界面（UI），允许你创建自定义页面和层级化配置。\u003Cbr>\n这种灵活性让你能够根据自身需求自由设计界面，并轻松集成到你的工作流中。\n\u003Cbr>\u003Cbr>\n系统支持定时和周期性工作流，可通过自然语言表达的时间设定运行计划，自动化范围从每秒一次到每闰年一次均可实现。\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1169291612816420896?style=flat)](https:\u002F\u002Fdiscord.gg\u002Fge2ZzDGu9e)\n[![X (formerly Twitter) Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002FAgentPilotAI)](https:\u002F\u002Ftwitter.com\u002FAgentPilotAI)\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_b5789229336a.gif\" align=\"center\" height=\"255px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_c4511147cddc.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_1bdabcd9b033.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_40cf391fee57.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_readme_6c13d5aca5bb.png\" align=\"center\" height=\"250px\" alt=\"AgentPilot gif demo\" style=\"margin-right: 20px;\" \u002F>\n\u003C\u002Fp>\n\n## 快速开始\n\n### 二进制文件（Binaries）\n\u003Ctable>\n  \u003Ctr>\n\t\u003Cth>平台\u003C\u002Fth>\n\t\u003Cth>下载\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Linux\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Linux.tar.gz\u002Fdownload\" target=\"_blank\">AgentPilot_0.5.1_Linux.tar.gz\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb>  e74e736e3efbd459b411ecffc45e936e\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> 93b12bd208095f8d8b34395446de23d233a1baed\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Windows\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Windows.zip\u002Fdownload\" target=\"_blank\">AgentPilot_0.5.1_Windows.zip\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb> 17079a8f2faf9683c59d11d0b67a8092\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> 
c5a30c02f17782ead98c24098e874c9ba2edc950\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n\t\u003Ctd>\u003Cb>Mac Intel\u003C\u002Fb>\u003C\u002Ftd>\n\t\u003Ctd>\n\u003Cb>\u003Ca href=\"https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Mac_Intel.tar.gz\u002Fdownload\" target=\"_blank\">AgentPilot_0.5.1_Mac_Intel.tar.gz\u003C\u002Fa>\u003C\u002Fb>\u003Cbr>\n\u003Cb>MD5:\u003C\u002Fb> 2e1e03e5305ea279df1b76d1a8074cb7\u003Cbr>\n\u003Cb>SHA1:\u003C\u002Fb> 9369152f1b69ff2a4ca476ecf1b377b5ce0e072b\u003Cbr>\n\t\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n从源码构建：[如何从源码构建](docs\u002Fguides\u002Fhow_to_build.md) \u003Cbr>\n\n> [!TIP]\n> 在启动应用前，用新版本可执行文件替换旧版本，即可将旧数据库迁移到新版本。\n\n## 功能特性\n\n### 👤 创建智能体（Agents）\n创建新智能体，编辑其配置，并将其整理到文件夹中。\u003Cbr>\n多成员工作流可保存为单个智能体，并支持无限嵌套。\n\n### 📝 管理对话（Chats）\n查看、继续或删除之前的工作流对话，并将其整理到文件夹中。\n\n### 🌱 分支工作流（Branching Workflows）\n可编辑消息、工具和代码并重新运行，从而以更实用的方式与工作流交互。\u003Cbr>\n分支功能适用于所有插件和多成员对话。\n\n### 👥 图形化工作流（Graph Workflows）\n无缝地向工作流中添加其他成员或模块，并配置它们之间的交互方式。\u003Cbr>\n垂直对齐的成员将并行执行。\n\n可用成员类型：\n- **User（用户）** - 即你自己，将等待你的输入。\n- **Agent（智能体）** - 调用大语言模型（LLM）并集成工具和消息生成响应。\n- **Text（文本）** - 简单的文本块，可嵌套其他模块。\n- **Code（代码）** - 执行任意代码并返回输出结果。\n- **Prompt（提示）** - 根据单条提示获取大语言模型（LLM）的响应。\n- **Module（模块）** - 运行或获取任意模块中的方法或变量。\n- **Workflow（工作流）** - 上述任意类型的组合。\n\n### 📦 模块块（Blocks）\n管理一组可在任意工作流或文本字段中使用的可嵌套模块块，提升复用性和一致性。\u003Cbr>\n默认情况下，模块块是一个简单的文本块，但它也可以是上述任意成员类型，甚至是一个多成员工作流。\u003Cbr>\n这些模块块可快速拖入任意工作流，或在文本字段（如系统消息）中通过花括号引用模块名使用，例如 `{block-name}`。\n\n### 🔨 工具（Tools）\n创建和管理可分配给智能体的工具。\u003Cbr>\n工具与模块块功能相同，但默认为单个 Code 成员。\u003Cbr>\n工具也可以是完整的工作流，这意味着你的智能体不仅能运行代码，还能执行整个工作流。\u003Cbr>\n可配置工具参数，这些参数可被所有工作流成员类型访问。\u003Cbr>\n参数可在运行时修改并重新执行，从而创建可循环遍历的分支点。\n\n### 💻 模块（Modules）\n模块是运行时导入的 Python 文件。\u003Cbr>\n适用于工具包、守护进程、记忆存储、自定义页面等需要持久化的场景。\n\n### 📐 可定制 UI\n包含一套灵活而强大的基础类，用于构建复杂的层级化配置界面。\u003Cbr>\n整个应用程序均基于此框架构建。\u003Cbr>\n开发者可轻松修改或创建配置页面，即使在应用运行时也可进行。\n\n### 🕒 
调度器（Scheduler，高级功能）\n可安排工作流在特定时间或间隔自动运行。\u003Cbr>\n支持自然语言表达式，实现灵活调度。\u003Cbr>\n例如，可设置工作流每 5 分钟运行一次、每天下午 3 点运行，或每月第二个星期二运行。\n\n### 📄 结构化输出（Structured Outputs）\n得益于 [Instructor](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor)，成员可配置为输出结构化数据。\n\n### 📦 插件（Addons）\n创建并导入自定义插件，以扩展 Agent Pilot 的功能。\n\n### 💻 代码解释器（Code Interpreter）\nOpen Interpreter 已集成到 Agent Pilot 中，既可作为独立插件使用，\u003Cbr>\n也可用于执行 9 种语言的代码（Python、Shell、AppleScript、HTML、JavaScript、PowerShell、R、React、Ruby）。\n\n代码可通过以下方式执行：\n- 任意工作流（对话、模块块、工具）中的 'Code' 成员。\n- 角色为 'Code' 的消息。\n\n你应始终理解正在运行的代码，任何执行的代码均由你自己负责。\n\n对于代码消息，可在设置中启用自动运行。\u003Cbr>\n要查看代码消息的实际效果，请与预配置的 Open Interpreter 智能体对话。\n\n### 🪄 AI 生成（AI Generation）\n'System Blocks' 文件夹下的模块用于生成或增强字段。  \n默认已包含 Claude 的提示词（prompt）生成器，你可以对其进行调整或创建自己的版本。\n\n- **Prompt** - 经 AI 增强的用户输入  \n- **Agent** - AI 生成的智能体（即将推出）  \n  - **System message** - AI 生成的系统消息（即将推出）  \n- **Page** - AI 生成的页面（即将推出）  \n\n### 🔌 插件（Plugins）\nAgent Pilot 支持以下插件：\n\n- **Agent** - 创建自定义智能体行为  \n  - [Open Interpreter](https:\u002F\u002Fgithub.com\u002FKillianLucas\u002Fopen-interpreter)  \n  - [OpenAI Assistant](\u002F)  \n  - [CrewAI Agent](\u002F)（当前已禁用）  \n- **Workflow** - 创建工作流行为  \n  - [CrewAI Workflow](\u002F)（当前已禁用）  \n- **Provider** - 添加对模型提供商的支持  \n  - [Litellm（支持 100+ 模型）](\u002F)  \n\n- [创建一个插件](\u002F)  \n\n### 👄 语音（Voice）\n**即将回归**\u003Cbr>\n~~智能体可连接文本转语音（text-to-speech）服务，结合个性上下文模块（personality context block），让你的智能体栩栩如生！~~\u003Cbr>\n\n### 🔠 模型（Models）\nLiteLLM 已集成，并支持以下提供商：\u003Cbr>\n\n- AI21  \n- AWS Bedrock  \n- AWS Sagemaker  \n- Aleph Alpha  \n- Anthropic  \n- Anyscale  \n- Azure OpenAI  \n- Baseten  \n- Cloudflare  \n- Cohere  \n- Custom API Servers（自定义 API 服务器）  \n- DeepInfra  \n- DeepSeek  \n- Gemini  \n- Github  \n- Groq  \n- Huggingface  \n- Mistral  \n- NLP Cloud  \n- Nvidia NIM  \n- Ollama  \n- OpenAI  \n- OpenRouter  \n- PaLM API Google  \n- Perplexity AI  \n- Petals  \n- Replicate  \n- Together AI  \n- VLLM  \n- VertexAI Google  \n- Voyage  \n\n## 
贡献（Contributions）\n欢迎并感谢您为 Agent Pilot 项目做出贡献。请随时提交 Pull Request。\n\n## 已知问题（Known Issues）\n- 使用自动运行代码（auto run code）和 Open Interpreter 时需谨慎：任何聊天窗口中，如果最后一条消息是代码，它将自动开始执行。我将添加一个标记，用于记录倒计时是否已被停止。  \n- Windows 的 exe 版本由于一个奇怪的 bug，必须显示控制台窗口。  \n- Linux 上存在一个问题：创建虚拟环境（venv）时不会自动安装 pip。  \n- 修改 OpenAI Assistant 的配置不会重新加载该 Assistant，目前需要关闭并重新打开聊天窗口。\n\n如果你觉得这个项目对你有帮助，请考虑通过点个 Star 或打赏来表示支持 :)  \n\u003Cbr>\u003Cbr>\nBTC:\u003Cbr>  \nETH: \u003Cbr>","# AgentPilot 快速上手指南\n\n## 环境准备\n\n- **操作系统**：支持 Linux、Windows 和 macOS（Intel 架构）\n- **依赖项**：无需额外安装 Python 或其他运行时（官方提供预编译二进制包）\n- **网络要求**：首次启动需联网以加载模型配置；若使用本地模型（如 Ollama），请确保对应服务已运行\n\n> 💡 国内用户建议配置代理或使用支持国内访问的模型提供商（如 OpenRouter、Groq 等）\n\n## 安装步骤\n\n### 方法一：下载预编译二进制包（推荐）\n\n根据你的操作系统，从以下链接下载对应版本：\n\n| 平台        | 下载链接                                                                 |\n|-------------|--------------------------------------------------------------------------|\n| **Linux**   | [AgentPilot_0.5.1_Linux.tar.gz](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Linux.tar.gz\u002Fdownload) |\n| **Windows** | [AgentPilot_0.5.1_Windows.zip](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Windows.zip\u002Fdownload) |\n| **Mac Intel** | [AgentPilot_0.5.1_Mac_Intel.tar.gz](https:\u002F\u002Fsourceforge.net\u002Fprojects\u002Fagentpilot\u002Ffiles\u002Fv0.5.1\u002FAgentPilot_0.5.1_Mac_Intel.tar.gz\u002Fdownload) |\n\n解压后直接运行可执行文件即可：\n\n```bash\n# Linux 示例\ntar -xzf AgentPilot_0.5.1_Linux.tar.gz\ncd AgentPilot_0.5.1_Linux\n.\u002FAgentPilot\n```\n\n> ⚠️ Windows 用户需保持控制台窗口可见（已知问题）\n\n### 方法二：从源码构建（高级用户）\n\n参考官方文档：[How to build from source](docs\u002Fguides\u002Fhow_to_build.md)\n\n## 基本使用\n\n### 1. 启动应用\n运行 `AgentPilot` 可执行文件，首次启动会自动打开图形界面。\n\n### 2. 配置模型提供商\n- 进入 **Settings > Providers**\n- 选择一个支持的提供商（如 OpenAI、Ollama、Groq 等）\n- 填写 API Key 或本地地址（例如 Ollama 默认为 `http:\u002F\u002Flocalhost:11434`）\n\n### 3. 
创建简单聊天代理\n- 点击左侧 **Agents** → **+ New Agent**\n- 选择 **Agent** 类型\n- 在配置中指定使用的模型（如 `gpt-4o` 或 `llama3`）\n- 保存后点击该代理即可开始对话\n\n### 4. 使用代码解释器（可选）\n- 与内置的 **Open Interpreter** 代理对话\n- 输入如 `画一个正弦波` 或 `列出当前目录文件` 等指令\n- 启用自动运行代码：**Settings > Code Interpreter > Auto-run code**\n\n> 🔒 注意：所有执行的代码由你本人负责，请勿运行不可信内容\n\n现在你已可以：\n- 与单个 AI 聊天\n- 创建多成员工作流\n- 使用自然语言调度任务（Premium 功能）\n- 通过 `{block-name}` 在任意文本字段复用代码块或提示词","一家跨境电商团队的运营专员小李，每周需要从多个平台（Shopify、Amazon、Google Analytics）提取销售和流量数据，生成中英文双语周报，并发送给不同地区的负责人。\n\n### 没有 AgentPilot 时\n- 需手动登录三个平台分别导出数据，再复制粘贴到 Excel 中清洗整合，耗时约2小时。\n- 撰写周报需先用 ChatGPT 分析数据、生成中文摘要，再切换另一个会话翻译成英文，过程割裂且容易遗漏上下文。\n- 若发现某处数据有误，需从头重跑整个流程，无法局部修改或回溯特定步骤。\n- 每次发送邮件前还需人工核对收件人列表和语言版本，容易发错对象。\n- 自动化尝试依赖复杂脚本，非技术人员难以维护或调整。\n\n### 使用 AgentPilot 后\n- 小李创建了一个多智能体工作流：一个代理调用各平台 API 获取数据，一个负责分析并生成中文报告，另一个自动翻译并格式化为英文版本。\n- 整个工作流可在图形界面中一键执行，全程无需切换窗口或手动中转信息。\n- 若某环节出错（如翻译不准确），可直接在聊天分支中修改提示词并重新运行该节点，不影响其他部分。\n- 设置“每周一上午9点”自动触发该流程，并根据预设规则自动分发邮件给对应区域负责人。\n- 界面支持自定义输入表单，未来只需调整配置即可适配新增的数据源或报告模板。\n\nAgentPilot 将原本碎片化、易出错的手动操作，转变为可复用、可调试、可调度的端到端 AI 工作流。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjbexta_AgentPilot_12d74911.png","jbexta",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjbexta_e990f6da.jpg","https:\u002F\u002Fgithub.com\u002Fjbexta",[80],{"name":81,"color":82,"percentage":83},"Python","#3572A5",100,538,77,"2026-04-01T17:17:25","AGPL-3.0","Linux, macOS, Windows","未说明",{"notes":91,"python":89,"dependencies":92},"支持通过插件集成多种大模型提供商；代码解释器默认支持9种语言；部分功能（如调度器）标记为Premium；从源码构建的详细依赖和环境要求需参考文档 docs\u002Fguides\u002Fhow_to_build.md；Windows版本运行时需保持控制台窗口可见；Linux下创建虚拟环境可能不会自动安装pip。",[93,94,95],"LiteLLM","Instructor","Open 
Interpreter",[26,13,14,15,53],[98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115],"agent","agi","ai","artificial-intelligence","copilot","copilot-chat","desktop-assistant","gui","python","windows-copilot","claude","gemini","openai","realtime-api","structured-output","tool-calling","workflow-automation","workflow-engine","2026-03-27T02:49:30.150509","2026-04-06T05:16:50.242056",[119,124,129,134,139,143],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},800,"是否支持自定义 API 端点 URL？","从版本 0.1.0 开始，AgentPilot 已支持添加自定义 API 端点，但目前仅在 OpenAI 和 Perplexity 上经过测试。如果遇到信息无法保存的问题，请确保在添加新模型时已选择了一个 API。","https:\u002F\u002Fgithub.com\u002Fjbexta\u002FAgentPilot\u002Fissues\u002F9",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},801,"在 Linux（如 Debian 或 Ubuntu）上启动 AgentPilot AppImage 时出现 libGL 驱动错误怎么办？","该问题通常由 AppImage 捆绑的 libstdc++.so.6 与系统 OpenGL 驱动不兼容导致。开发者已在后续版本中修复此问题。建议升级到 0.1.6 或更高版本。若仍需临时解决，可尝试在启动前设置环境变量以使用系统库：`export LD_LIBRARY_PATH=\u002Fusr\u002Flib\u002Fx86_64-linux-gnu:$LD_LIBRARY_PATH`，再运行 AppImage。","https:\u002F\u002Fgithub.com\u002Fjbexta\u002FAgentPilot\u002Fissues\u002F18",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},802,"AgentPilot 在 Linux 启动时因找不到 configuration.yaml 而崩溃，如何解决？","早期版本未遵循 XDG Base Directory 规范，导致配置文件路径错误。正确做法是将配置文件放在 `$HOME\u002F.config\u002FAgentPilot\u002Fconfiguration.yaml`。开发者已在后续版本修复此问题，建议升级到 0.1.6 或更高版本。临时解决方案是手动创建该目录并将示例配置文件放入其中。","https:\u002F\u002Fgithub.com\u002Fjbexta\u002FAgentPilot\u002Fissues\u002F16",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},803,"在 Arch Linux 上运行 AgentPilot 时出现 PySide6 按钮初始化错误（'icon' is not a Qt property）怎么办？","该错误源于 PySide6 版本兼容性问题，已在新版 AppImage 中修复。建议不要使用旧版便携式 Linux 包（如 0.0.9），而应下载最新的 AppImage 版本，该版本不再显示终端调试信息且修复了 GUI 初始化错误。","https:\u002F\u002Fgithub.com\u002Fjbexta\u002FAgentPilot\u002Fissues\u002F8",{"id":140,"question_zh":141,"answer_zh":142,"source_url":133},804,"AgentPilot AppImage 在 KDE 桌面环境下启动后立即静默退出，如何排查？","此问题可能由 OpenGL 
驱动加载失败引起（如 iris_dri.so 无法打开）。请确认已安装 `libgl1-mesa-dri` 并确保用户属于 `video` 和 `render` 组。开发者已在 0.1.6 版本中改用 Debian 构建环境以解决 OpenGL 兼容性问题，建议升级到该版本。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":133},805,"如何正确放置 AgentPilot AppImage 和配置文件以符合 Linux 文件系统规范？","根据 XDG Base Directory 规范，AppImage 可放在 `$HOME\u002F.local\u002Fbin`（用户级）或 `\u002Fusr\u002Flocal\u002Fbin`（系统级）；配置文件应位于 `$XDG_CONFIG_HOME\u002FAgentPilot\u002Fconfiguration.yaml`（默认为 `$HOME\u002F.config\u002FAgentPilot\u002Fconfiguration.yaml`）；应用数据（如插件）应存于 `$XDG_DATA_HOME\u002FAgentPilot`（默认为 `$HOME\u002F.local\u002Fshare\u002FAgentPilot`）。",[148,153,158,163,168,173,178,183,188,193,198,203,208,213,217,221,226,231,236,241],{"id":149,"version":150,"summary_zh":151,"released_at":152},100342,"v0.5.1","Added support for elevenlabs api\r\nNew member:\r\n  Voice model\r\nAdded maximize button\r\nFixed edit bubble bug which erased xml\u002Fhtml tags\r\nFixed bug which only allowed resizing to the left and above\r\nFixed bug where workflow param input widget doesn't hide when empty\r\nProbably more","2025-05-15T19:27:31",{"id":154,"version":155,"summary_zh":156,"released_at":157},100343,"v0.5.0","New page: Addons (create and import addons)\r\nInitial GUI builder\u002Fmodifier (not finished, try it by clicking the invisible button above the settings icon)\r\nNew workflow component: \r\n  Notification (show a notification bubble)\r\n\r\nNew add-on: Tasks (Premium)\r\n  Scheduled or recurring workflows, you can say things like:\r\n    \"Every weekday at 9am\"\r\n    \"Every 2 seconds\"\r\n    \"On the last friday of every month that starts with a J\"\r\n\r\nFixed bug changing config in system settings\r\nFixed themes\r\nFixed '\u003C>' bracket bug in LLM response\r\nSome other bug fixes","2025-02-21T01:05:13",{"id":159,"version":160,"summary_zh":161,"released_at":162},100344,"v0.4.2","Removed unnecessary libraries\r\nFixed sqlupgrade bug on some machines\r\nFaster load 
times","2025-01-23T23:41:25",{"id":164,"version":165,"summary_zh":166,"released_at":167},100345,"v0.4.1","## New issues:\r\nExecutables take a while to start, need to investigate\r\nBinaries increased in size, most is not necessary, need to figure out why and clean\r\nFile size for Mac is wild (600mb), could be reduced by a lot\r\nWhen first executing code or talking to OpenInterpreter, it takes upto 5 seconds for ipykernel to start (Not new but should be noted)\r\nLinux appimage must be launched from terminal and missing lib on some systems, fix coming soon\r\n\r\n# What's new?\r\nNew binaries\r\nInitial Mac binary (Intel only)\r\n\r\n**Everything from 0.4.0:**\r\nWindow and sections are now resizeable.\r\nBlocks, tools and agents can now be an entire workflow.\r\nItems can be pinned to the top.\r\n\r\nNested workflows work much better.\r\nWorkflow can filter output to a specific role.\r\nYou can save workflows as different types (Agent, Block or Tool)\r\nWorkflow now has a selection box\r\nWorkflow can be panned with CTRL + drag\r\n\r\nAdded new workflow components:\r\n~ Text\r\n~ Code\r\n~ Prompt\r\n~ Module\r\n~~ Node (for organising, combining or splitting inputs) ~~ (Coming soon)\r\n\r\nMembers can change their default output role. 
Blocks or User can emulate assistant messages and vice versa.\r\nAgents and Prompts:\r\n  Can set a structured output through model options\r\n  Can map XML tags to roles\r\n\r\nText fields highlight XML tags in blue and are collapsible\u002Fexpandable.\r\nRe-added claude prompt enhancer as a block workflow.\r\nFixed prompt enhancer so it only returns the relevant XML block.\r\n\r\nBlock type Metaprompts have been removed and instead are just Prompt blocks stored in system blocks.\r\n\r\nNew System folders:\r\nBlocks:\r\n  Enhance prompt (used by the magic wand button on the user input box)\r\n  Enhance system msg (used by the magic wand button on the system message field)\r\nModules:\r\n  Pages (these are custom pages that can be modified at runtime)\r\n  Settings (edit default settings for members)\r\n  Toolkits (currently only recognises Anthropic tools)\r\n\r\nWhen a tool is called, a chat bubble is shown allowing you to edit the parameters and run the tool.\r\nBlocks and Tools can now take custom parameters, for tools these are filled automatically by the LLM.\r\nThe parameters can be accessed within the workflow using `{curly-braces}` from a Text\u002FPrompt block, or `snake_case` from a Code block.\r\n\r\nNew page: Modules\r\nThese are python files which are imported at runtime. 
Useful for daemons, toolkits, custom pages, memory or anything that needs persistence.\r\nPage modules can be used to create custom pages.\r\n\r\nChat with, and test Blocks and Tools like you do with Agents.\r\nConversation list can be changed to Agent, Block or Tool\r\nWorkflow parameters integrated into chat page\r\n\r\nNew Input settings supporting structured outputs\r\nAdded GUI for looper inputs, but not implemented yet\r\n\r\nFixed flicker when response generating\r\nFull support for Mac\r\nNotification bubbles instead of message popups","2025-01-21T18:39:35",{"id":169,"version":170,"summary_zh":171,"released_at":172},100346,"v0.4.0","# What's new?\r\nWindow and sections are now resizeable.\r\nBlocks, tools and agents can now be an entire workflow.\r\nItems can be pinned to the top.\r\n\r\nNested workflows work much better.\r\nWorkflow can filter output to a specific role.\r\nYou can save workflows as different types (Agent, Block or Tool)\r\nWorkflow now has a selection box\r\nWorkflow can be panned with CTRL + drag\r\n\r\nAdded new workflow components:\r\n~ Text\r\n~ Code\r\n~ Prompt\r\n~ Module\r\n~~ Node (for organising, combining or splitting inputs) ~~ (Coming soon)\r\n\r\nMembers can change their default output role. 
Blocks or User can emulate assistant messages and vice versa.\r\nAgents and Prompts:\r\n  Can set a structured output through model options\r\n  Can map XML tags to roles\r\n\r\nText fields highlight XML tags in blue and are collapsible\u002Fexpandable.\r\nRe-added claude prompt enhancer as a block workflow.\r\nFixed prompt enhancer so it only returns the relevant XML block.\r\n\r\nBlock type Metaprompts have been removed and instead are just Prompt blocks stored in system blocks.\r\n\r\nNew System folders:\r\nBlocks:\r\n  Enhance prompt (used by the magic wand button on the user input box)\r\n  Enhance system msg (used by the magic wand button on the system message field)\r\nModules:\r\n  Pages (these are custom pages that can be modified at runtime)\r\n  Settings (edit default settings for members)\r\n  Toolkits (currently only recognises Anthropic tools)\r\n\r\nWhen a tool is called, a chat bubble is shown allowing you to edit the parameters and run the tool.\r\nBlocks and Tools can now take custom parameters, for tools these are filled automatically by the LLM.\r\nThe parameters can be accessed within the workflow using `{curly-braces}` from a Text\u002FPrompt block, or `snake_case` from a Code block.\r\n\r\nNew page: Modules\r\nThese are python files which are imported at runtime. 
Useful for daemons, toolkits, custom pages, memory or anything that needs persistence.\r\nPage modules can be used to create custom pages.\r\n\r\nChat with, and test Blocks and Tools like you do with Agents.\r\nConversation list can be changed to Agent, Block or Tool\r\nWorkflow parameters integrated into chat page\r\n\r\nNew Input settings supporting structured outputs\r\nAdded GUI for looper inputs, but not implemented yet\r\n\r\nFixed flicker when response generating\r\nFull support for Mac\r\nNotification bubbles instead of message popups","2025-01-16T02:40:45",{"id":174,"version":175,"summary_zh":176,"released_at":177},100347,"v0.3.2.1","## What's new?\r\nModel drop-down fields can now tweak the parameters of that field.\r\nNew provider architecture, still only 'litellm' provider is implemented.\r\nNew button to sync models to latest version (For now I only update popular providers so some may still be missing).\r\n\r\nTools finally integrated (new issue with numeric parameters)\r\nTool message bubble can edit its parameters\r\nRe-running tools creates a new branch\r\nNew message role 'result'\r\n\r\nBlocks can now be nested\r\nAdded circular reference error for blocks (on execution)\r\nCode blocks do not check for nested blocks, but other blocks can nest code blocks\r\nButton to test a block from the blocks page\r\n\r\nNew page Envs\r\nCustom python virtual-envs can be created and deleted, freeing tools from the limited provided packages.\r\nSync PyPi packages, ~~install and remove them from venvs.~~\r\nEnvironment variables can be set in Envs, where secrets can be stored for tools.\r\n\r\nFixed prompt block bug\r\nFixed allow editing messages with markdown bug\r\nFixed auto completion tab bug\r\n\r\nNew block type 'Metaprompt'\r\nFirst metaprompt added is Claude prompt generator (while any model can be used, it works best with anthropic)\r\nNew \"Magic wand\" button added to message box input, this uses a metaprompt to enhance your prompt.\r\nThe 
enhancement should only be the text inside the \u003Cinstructions> tag, but it includes other tags, fix coming soon\r\n\r\nBlocks and Tools pages can be pinned to the main sidebar (right click to Pin\u002FUnpin)\r\nDefault chat model works now\r\nOther fixes\r\n\r\n## New bugs\r\nIssue on linux, creating venv does not install pip\r\nNumeric tool parameters get stuck at -99999\r\nWhen editing a previous message with markdown, to resend you have to press the resend button twice (because the first click makes the bubble lose focus, which blocks the button click event)\r\n\r\n## Notes\r\nSame version as below but appimage built with some libs causing issue on my machine\r\nEnvironment variables set can be accessed by all other Envs (for now)\r\nLocal env is your own machine, this is not sandboxed at all, so it's important to trust and understand any code you run","2024-09-14T01:35:35",{"id":179,"version":180,"summary_zh":181,"released_at":182},100348,"v0.3.2","## What's new?\r\nModel drop-down fields can now tweak the parameters of that field.\r\nNew provider architecture, still only 'litellm' provider is implemented.\r\nNew button to sync models to latest version (For now I only update popular providers so some may still be missing).\r\n\r\nTools finally integrated (new issue with numeric parameters)\r\nTool message bubble can edit its parameters\r\nRe-running tools creates a new branch\r\nNew message role 'result'\r\n\r\nBlocks can now be nested\r\nAdded circular reference error for blocks (on execution)\r\nCode blocks do not check for nested blocks, but other blocks can nest code blocks\r\nButton to test a block from the blocks page\r\n\r\nNew page Envs\r\nCustom python virtual-envs can be created and deleted, freeing tools from the limited provided packages.\r\nSync PyPi packages, ~~install and remove them from venvs.~~\r\nEnvironment variables can be set in Envs, where secrets can be stored for tools.\r\n\r\nFixed prompt block bug\r\nFixed allow editing 
messages with markdown bug\r\nFixed auto completion tab bug\r\n\r\nNew block type 'Metaprompt'\r\nFirst metaprompt added is Claude prompt generator (while any model can be used, it works best with anthropic)\r\nNew \"Magic wand\" button added to message box input, this uses a metaprompt to enhance your prompt.\r\nThe enhancement should only be the text inside the \u003Cinstructions> tag, but it includes other tags, fix coming soon\r\n\r\nBlocks and Tools pages can be pinned to the main sidebar (right click to Pin\u002FUnpin)\r\nDefault chat model works now\r\nOther fixes\r\n\r\n## New bugs\r\nIssue on linux, creating venv does not install pip\r\nNumeric tool parameters get stuck at -99999\r\nWhen editing a previous message with markdown, to resend you have to press the resend button twice (because the first click makes the bubble lose focus, which blocks the button click event)\r\n\r\n## Notes\r\nEnvironment variables set can be accessed by all other Envs (for now)\r\nLocal env is your own machine, this is not sandboxed at all, so it's important to trust and understand any code you run","2024-09-13T23:33:08",{"id":184,"version":185,"summary_zh":186,"released_at":187},100349,"v0.3.1","DO NOT USE\r\nEvery Prompt block will be executed each time an OI agent is loaded, wasting tokens.\r\n\r\n## What's new?\r\n- Modified open interpreter with a dirty workaround to allow the python kernel to be used from an executable.\r\n- All code execution from executable works now using open interpreter.\r\n- Fixed orphaned model bug\r\n\r\nNEW KNOWN ISSUES:\r\n- The linux release may not work on your machine because of a new dependency issue, you might need to build it yourself using the `build.py` script.\r\n- App becomes unresponsive for the first few seconds when the kernel launches. 
The kernel is launched whenever OpenInterpreter plugin is used or code is executed","2024-07-12T18:56:49",{"id":189,"version":190,"summary_zh":191,"released_at":192},100350,"v0.3.0","DO NOT USE\r\nEvery Prompt block will be executed each time an OI agent is loaded, wasting tokens.\r\n\r\nIt's been a while since the last release, this update might be a bit disappointing as it's not as complete as I wanted to get it. Most of my time has been spent on the architecture, allowing nested workflows, and dynamic GUI settings depending on the plugin.\r\nAs well as the below, I wanted to get vector stores, memory and files\u002Fimages supported with this release, but they just aren't ready yet.\r\n\r\nThere are a few new bugs, but I'll get those fixed. I'd recommend not trying to make nested workflows yet, as it's not finished, but saving a multi-agent workflow as a single entity is fine. A nested workflow would be a multi-member workflow where any of the members is another multi-member workflow\r\n\r\nI've had to strip out crewai for now because of a langchain dependency issue.\r\n\r\n## What's new?\r\n\r\nAdded anonymous telemetry, enabled by default, the only thing sent right now is an event when the app is started, to get an idea of user count.\r\n\r\nAdded member list to workflow\r\nButton to disable autorun for granular execution\r\nAdded circular reference error\r\nButton to show\u002Fhide hidden messages\r\nNew workflow components\r\n User - Add your input mid workflow\r\n ~~Tool - Get the output of a tool~~\r\nMembers aligned vertically are run asynchronously\r\nAdded 'Waiting for ..' 
bar to groups\r\n\r\nNew Open Interpreter fully integrated\r\nAdded auto-run code secs field\r\nPlugins can override the GUI settings (Agent & Workflow)\r\nOpenAI Assistants are streamable and support branching chats\r\nAdded Plugins settings pages\r\n\r\n~~Nested workflows~~ (unfinished, be careful)\r\nAdded Sandboxes page (initially only local)\r\nTools can use any of the 9 languages that OI supports\r\n\r\nDisplay theme presets\r\nProviders & models update\r\nBlocks can now be Text, Prompt or Code\r\n\r\nMade fields optional ('max messages', 'max turns')\r\nMax turns work with branching chats\r\n\r\nNEW KNOWN ISSUES:\r\nChanging the config of an OpenAI Assistant won't reload the assistant, for now close and reopen the chat.\r\nSome others\r\nBe careful using auto run code and open interpreter, any chat you open, if code is the last message it will start auto running, I'll add a flag to remember if the countdown has been stopped.\r\nLogs are broken and need reimplementing.\r\nFlickering when response is generating and scrolled up the page.\r\nSometimes the scroll position of the chat page jumps after response has finished.\r\nWindows exe must have console visible or it affects the streams","2024-07-04T00:58:16",{"id":194,"version":195,"summary_zh":196,"released_at":197},100351,"v0.2.0","This is an early release of version 0.2, it isn't fully featured yet but fairly stable, new experimental features will be coming in the next few weeks.\r\n\r\n## What's new?\r\n\r\n- All pages and fields are now created procedurally from a schema. 
This is a new framework for the GUI for easier maintenance, readability and extensibility, and lays a foundation for a more complex GUI to be built.\r\n- Folders are enabled for agents, chats, tools and blocks, allowing for better organization.\r\n- New Tools and Files pages (Files not yet used by the agents; tools partially implemented)\r\n- Added preloaded messages as a way to teach your agents how to respond.\r\n- Full support for light themed displays\r\n- OpenAI assistant plugin (docs coming soon)\r\n- CrewAI plugin (docs coming soon)\r\n- API and model list update (New providers: Mistral, Groq and others, New models: Claude 3 and others)\r\n- Cosmetic changes\r\n- Other features \u002F changes\r\n- Faster load times\r\n\r\n## Issues\r\n\r\n- Files aren't used by the agents yet, in the coming weeks will be integrated into native and oai assistants.\r\n- Only imported tools work right now, guides and documentation coming in the next few weeks.\r\n- Voice has been temporarily disabled\r\n- OpenAI assistants lose their 'instance_config' when config modified causing them to lose memory of the chat.\r\n- Tools use the controversial 'exec' temporarily until sandboxes are implemented, assume anyone with access to your database can execute code on your machine.\r\n- OpenInterpreter has been temporarily disabled because of a dependency issue, coming back soon\r\n\r\n\r\n### How to migrate your data to 0.2.0\r\nCopy your old database (data.db) to the new application folder before you start the app.\r\nAgents, chats and API keys are migrated, but anything else is not.\r\n\r\n## What next?\r\n\r\n- Fully implement files and tools\r\n- Define special behaviour for file types (images can be passed into any vision model)\r\n- Support sandboxed environments cloud & local.\r\n- Deeper integration of context plugins like CrewAI & autogen\r\n- Reimplement voice TTS and STT\r\n- Custom config pages for plugins\r\n- New workflow components\r\n- Rewrite workflow 
logic","2024-03-14T16:46:57",{"id":199,"version":200,"summary_zh":201,"released_at":202},100352,"v0.1.7","What's new?\r\n(Same as 0.1.6, amended release to temporarily disable a new feature)\r\n\r\nNew plug-in architecture\r\nNew Open-Interpreter (not fully working yet)\r\nFix auto title blocking main thread\r\nFaster loading\r\nAdded Assistant API (no retrieval yet)\r\nFix for OpenGL issue (thanks to @mruderman)\r\nAdded API headers to organise the list of LLM models (thanks to @chymian)\r\nAdded button \"Set member config to default\" (In chat member settings)\r\nAdded button \"Set all member configs to default\" (In agent settings)\r\n\r\nNew issues \u002F bugs:\r\nWhen plugin settings are changed the context needs to be reloaded to take effect\r\nSet member default buttons don't work with OAI Assistants API, or any custom plugin with instance settings","2024-01-12T17:07:35",{"id":204,"version":205,"summary_zh":206,"released_at":207},100353,"v0.1.6","## What's new?\r\nNew plug-in architecture\r\nNew Open-Interpreter (not fully working yet)\r\nFix auto title blocking main thread\r\nFaster loading\r\nAdded Assistant API (no retrieval yet)\r\nFix for OpenGL issue (thanks to @mruderman)\r\nAdded API headers to organise the list of LLM models (thanks to @chymian)\r\nAdded button \"Set member config to default\" (In chat member settings)\r\nAdded button \"Set all member configs to default\" (In agent settings)\r\n\r\nNew issues \u002F bugs:\r\nWhen plugin settings are changed the context needs to be reloaded to take effect","2024-01-10T12:06:06",{"id":209,"version":210,"summary_zh":211,"released_at":212},100354,"v0.1.5","- Stable\r\n- Better style sheets for windows\r\n- Windows fix, window opens in bottom corner\r\n- Open Interpreter uses agent System Message\r\n- Fix auto title\r\n- Added auto title prompt + model settings\r\n- Added context title to chat\r\n- Added dev mode\r\n- Added button to fix all empty titles (in dev mode)\r\n- Re-enable branching (not 
stable yet but usable)\r\n- Fix stop generation button\r\n\r\nFollowing versions will only be bugfixes if any found, until next major version 0.2.0 which will introduce RAG & Functions","2023-12-13T04:44:50",{"id":214,"version":215,"summary_zh":76,"released_at":216},100355,"v0.1.4","2023-12-09T16:14:11",{"id":218,"version":219,"summary_zh":76,"released_at":220},100356,"v0.1.3","2023-12-08T13:17:24",{"id":222,"version":223,"summary_zh":224,"released_at":225},100357,"v0.1.2","Fix auto run code\r\nAdded setting \"Set members to user role\" with default true which improves group chat\r\nFixed critical bug when chat button clicked it creates a dozen contexts\r\nFixed linux opengl bug\r\nContext inputs work now by using the placeholder set in agent group settings\r\nFixed api page, api base and custom provider","2023-12-03T02:34:20",{"id":227,"version":228,"summary_zh":229,"released_at":230},100358,"v0.1.1","VERSION BROKEN\r\nFix auto run code\r\nAdded setting \"Set members to user role\" with default true which improves group chat\r\nFix critical bug when chat button clicked it creates a dozen contexts\r\nFix linux opengl bug\r\n-- Amended --\r\nFixed api page","2023-12-01T01:17:19",{"id":232,"version":233,"summary_zh":234,"released_at":235},100359,"v0.1.0","## 👥 Introducing multi-agent chats with branching history! 🌱\r\n\r\nThis release brings branching contexts, multi-agent chats and an addition of many more providers through LiteLLM. 
Combine models from different providers under one context, and configure their interaction with each other in a no-code environment.\r\n\r\n### How to migrate your data to 0.1.0\r\nCopy your old database (data.db) to the new application folder before you start the app.\r\n\r\n### Release notes:\r\n\r\n- Branching chats\r\n- Multi-agent chats\r\n- Cosmetic changes\r\n- New agent settings\r\n- Chat member settings\r\n- Fix open interpreter print bug\r\n- LiteLLM support with 100+ models\r\n- Broken MemGPT, probably for a few weeks\r\n- Selected text remains selected while generating\r\n- Other fixes and optimisations\r\n- Some groupchat features aren't working yet\r\n\r\nIssues with release:\r\n- Issue for some linux, libGL error: MESA-LOADER: failed to open iris: \u002Fusr\u002Flib\u002Fdri\u002Firis_dri.so \r\n- Windows issue: Window opens slightly off screen instead of bottom corner","2023-11-29T05:12:18",{"id":237,"version":238,"summary_zh":239,"released_at":240},100360,"v0.0.9","## Release of 0.0.9\r\nA fairly stable version before a major rewrite.\r\nSome minor bugs still exist and some features aren't implemented yet.\r\nThis version works for chatting to agents, modifying agents, openinterpreter & memgpt plugins, settings.\r\nActions do not work yet, or may work behind the scenes but there is no interface to them.\r\nNewer versions will reduce filesize by compiling agent plugins into standalone files, and cleaning up dependencies","2023-10-26T14:11:35",{"id":242,"version":243,"summary_zh":244,"released_at":245},100361,"v0.0.2","First release","2023-10-21T01:42:55"]