[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kreneskyp--ix":3,"tool-kreneskyp--ix":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":75,"owner_website":81,"owner_url":82,"languages":83,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":10,"env_os":118,"env_gpu":119,"env_ram":120,"env_deps":121,"category_tags":133,"github_topics":134,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":139,"updated_at":140,"faqs":141,"releases":171},1401,"kreneskyp\u002Fix","ix","Autonomous GPT-4 agent platform","ix 是一个专为设计和部署自主及半自主大语言模型（LLM）智能体而打造的开源平台。它旨在解决复杂任务自动化中的协作难题，让用户能够轻松构建由多个 AI 智能体组成的团队，这些智能体不仅可以并行处理工作，还能相互沟通协作，从而高效完成代码生成、数据分析、研究辅助等各类流程。\n\nix 特别适合开发者、技术研究人员以及希望探索 AI 工作流自动化的创新者使用。其核心亮点在于提供了直观的“无代码智能体编辑器”，用户只需通过拖拽节点即可构建代表智能体认知逻辑的流程图，并直接在编辑器内嵌入聊天窗口进行快速测试与调试。此外，ix 支持多智能体聊天界面，允许用户在单一对话框中指挥整个智能体团队，甚至通过\"@提及”功能指定特定智能体执行任务。\n\n在技术架构上，ix 采用 Docker 容器化部署并结合 Celery 消息队列驱动后端，实现了出色的水平扩展能力，可支撑大规模智能体集群并行运行。平台不仅兼容 OpenAI 模型，还实验性支持 Google PaLM、Anthropic 及 Llama 等多种主流大模型，配合灵活的组件配置层，为构建高度定制化的 AI 应用提供了坚实基础。","# iX - Autonomous GPT-4 Agent Platform\n\n[![Unit 
Tests](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkreneskyp\u002Fix\u002Ftest.yml)](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![Discord Server](https:\u002F\u002Fdcbadge.vercel.app\u002Fapi\u002Fserver\u002FjtrMKxzZZQ)](https:\u002F\u002Fdiscord.gg\u002FjtrMKxzZZQ)\n[![Twitter Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fkreneskyp?style=social)](https:\u002F\u002Ftwitter.com\u002Fkreneskyp)\n\n\u003Cdiv>\n\u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkreneskyp_ix_readme_4fe48b150c40.png\" alt=\"midjourney prompt: The ninth planet around the sun\">\n\u003Cp>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\nAmidst the swirling sands of the cosmos, Ix stands as an enigmatic jewel, \nwhere the brilliance of human ingenuity dances on the edge of forbidden \nknowledge, casting a shadow of intrigue over the galaxy.\n\n\\- Atreides Scribe, The Chronicles of Ixian Innovation\n\u003Cp>\n\u003C\u002Fdiv>\n\u003Cdiv>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003C\u002Fdiv>\n\n\n## 🌌 About\n\u003Cdiv>\nIX is a platform for designing and deploying autonomous and [semi]-autonomous LLM powered agents and workflows. IX\nprovides a flexible and scalable solution for delegating tasks to AI powered agents. Agents created with the platform\ncan automate a wide variety of tasks, while running in parallel and communicating with each other.\n\u003Cbr>\n\u003C\u002Fdiv>\n\n##### Build AI powered workflows:\n- QA chat bots\n- Code generation\n- Data extraction\n- Data analysis\n- Data augmentation\n- Research assistants\n\n## Key Features\n\n### 🧠 Models\n  - OpenAI\n  - Google PaLM (Experimental)\n  - Anthropic (Experimental)\n  - Llama (Experimental)\n\n### ⚒️ No-code Agent Editor\nNo-code editor for creating and testing agents. 
The editor provides an interface to drop and connect nodes into a graph\nrepresenting the cognitive logic of an agent. Chat is embedded in the editor to allow for rapid testing and debugging.\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Ff43923b9-7bce-4b64-b30e-3204eb1673e4\n\n### 💬 Multi-Agent Chat interface\nCreate your own teams of agents and interact with them through a single interface. Chat rooms support multiple agents.\nBy default it includes the IX moderator agent, which delegates tasks to other agents. You can `@mention` specific \nagents to complete the tasks.\n\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fd1418c23-afb5-4aed-91c7-bf99b1c165d5\n\n\n### 💡 Smart Input \nThe smart input bar auto-completes agent `@mentions` and file & data `{artifacts}` created by tasks.\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F27cf7085-7349-4641-9327-d31a3041a94c\n\n\n### ⚡ Message Queue Driven Agent Workers\nThe agent runner backend is dockerized and is triggered with a Celery message queue. This allows the backend to scale\nhorizontally to support a fleet of agents running in parallel.\n\n![WorkerScalingTest_V3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkreneskyp_ix_readme_aae5b798684a.png)\n\n\n### ⚙️ Component Config Layer\n\nIX implements a component config layer that maps LangChain components to the configuration graph. The config layer\npowers a number of other systems and features. For example, component field and connector definitions are used to\nrender nodes and forms dynamically in the no-code editor. 
\n\n\n## 🛠️ Getting Started\n\n##### Prerequisites\n\u003Cdetails> \n  \u003Csummary>Windows Linux Subsystem (windows only)\u003C\u002Fsummary> \n  \u003Col>\n      \u003Cli>Open powershell\u003C\u002Fli>\n      \u003Cli>run `wsl --install` to install and\u002For activate WSL\u003C\u002Fli>\n  \u003C\u002Fol>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Docker\u003C\u002Fsummary>\n  Install Docker Desktop for your OS:\u003Cbr\u002F>\n  \u003CA href=\"https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\">https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\u003C\u002FA>\n\n  Detailed install instructions:\n  \u003Cul>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fmac-install\u002F\">Mac\u003C\u002Fa>\u003C\u002Fli>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fwindows-install\u002F\">Windows\u003C\u002Fa>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Python\u003C\u002Fsummary> \n  Python 3.8 or higher is required for the CLI. The app python version is managed by the image.\n\u003C\u002Fdetails>\n\n\n### Agent-IX CLI\n\nThe quickest way to start IX is with the agent-ix CLI. The CLI starts a preconfigured docker cluster with \ndocker-compose. It downloads the required images automatically and starts the app cluster.\n\n```bash\npip install agent-ix\nix up\n```\n\nScale agent workers with the `scale` command. Each worker will run agent processes in parallel. The limit to the number\nof workers is based on available memory and CPU capacity.\n\n```bash\nix scale 5\n```\n\nThe client may start a specific version, including the unstable `dev` image built on `master` branch.\n```bash\nix up --version dev\n```\n\n\n## How does it work\n\n\n### Basic Usage\nYou chat with an agent that uses that direction to investigate, plan, and complete tasks. 
The agents are\ncapable of searching the web, writing code, creating images, and interacting with other APIs and services. If it can be \ncoded, it's within the realm of possibility that an agent can be built to assist you.\n\n1. Set up the server and visit [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000); a new chat will be created automatically with the default agents.\n\n2. Enter a request and the IX moderator will delegate the task to the agent best suited for the response. Or `@mention`\nan agent to request a specific agent to complete the task.\n\n3. Customized agents may be added or removed from the chat as needed to process your tasks.\n\n### Creating Custom Agents and Chains\n\nIX provides the moderator agent IX, a coder agent, and other example agents. Custom agents \nmay be built using the chain editor or the Python API. \n\n#### Chain Editor\n\n1. Navigate to the [chain editor](http:\u002F\u002Flocalhost:8000\u002Fchains\u002Fnew)\n2. Click on the root connector to open the component search\n3. Drag agents, chains, tools, and other components into the editor\n4. Connect the components to create a chain\n5. Open the test chat to try it out!\n\n#### Python API\nChains [Python API docs](docs\u002Fchains\u002Fchains.rst)\n\n\n\n## 🧙 Development setup\n\n### 1. 
Prerequisites\n\nBefore getting started, ensure you have the following software installed on your system:\n\n\u003Cdetails> \n  \u003Csummary>Windows Linux Subsystem (windows only)\u003C\u002Fsummary> \n  \u003Col>\n      \u003Cli>Open powershell\u003C\u002Fli>\n      \u003Cli>run `wsl --install` to install and\u002For activate WSL\u003C\u002Fli>\n  \u003C\u002Fol>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Docker\u003C\u002Fsummary>\n  Install Docker Desktop for your OS:\u003Cbr\u002F>\n  \u003CA href=\"https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\">https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\u003C\u002FA>\n\n  Detailed install instructions:\n\n  \u003Cul>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fmac-install\u002F\">Mac\u003C\u002Fa>\u003C\u002Fli>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fwindows-install\u002F\">Windows\u003C\u002Fa>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Git & Make\u003C\u002Fsummary> \n  \u003Cul>\n    \u003Cli>\u003Cb>Mac:\u003C\u002Fb> \u003Ccode>brew install git make\u003C\u002Fcode>\u003C\u002Fli>\n    \u003Cli>\u003Cb>Linux:\u003C\u002Fb> \u003Ccode>apt install git make\u003C\u002Fcode>\u003C\u002Fli>\n    \u003Cli>\u003Cb>Windows (WSL):\u003C\u002Fb> \u003Ccode>apt install git make\u003C\u002Fcode>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\n\n### 2. Clone the repository\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix.git\ncd ix\n```\n\n### 3. Setup env\n\nSetup config in `.env`\n\n```bash\ncp .env.template .env\n```\n\n```\nOPENAI_API_KEY=YOUR_KEY_HERE\n```\n\n### 4. Build & Initialize the IX cluster.\nThe image will build automatically when needed in most cases. Set `NO_IMAGE_BUILD=1` to skip rebuilding the image.\n\nUse the `image` target to build and start the IX images. 
The `dev_setup` target will build the frontend and \ninitialize the database. See the developer tool section for more commands to manage the dev environment.\n\n```\nmake dev_setup\n```\n\n### 5. Run the IX cluster\n\nThe IX cluster runs using docker-compose. It will start containers for the web server, app server, agent workers, database,\nredis, and other supporting services.\n\n```bash\nmake cluster\n```\n\n### 6. View logs\n\nWeb and app container logs\n```bash\nmake server\n```\n\nAgent worker container logs\n```bash\nmake worker\n```\n\n\n### 7. Open User Interface\n\nVisit [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000) to access the user interface. From there you may create and edit\nagents and chains. \nThe platform will automatically spawn agent processes to complete tasks as needed.\n\n\n### Scaling workers\nAdjust the number of active agent workers with the `scale` target. The default is 1 agent worker to handle tasks. There\nis no hard limit on agents, but the number of workers is limited by available memory and CPU capacity.\n\n```bash\nmake scale N=5\n```\n\n\n## Developer Tools\n\nHere are some helpful commands for developers to set up and manage the development environment:\n\n### Running:\n- `make up` \u002F `make cluster`: Start the application in development mode at [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000).\n- `make server`: watch logs for web and app containers.\n- `make worker`: watch logs for agent worker containers.\n\n### Building:\n- `make image`: Build the Docker image.\n- `make frontend`: Rebuild the front end (GraphQL, relay, webpack).\n- `make webpack`: Rebuild JavaScript only.\n- `make webpack-watch`: Rebuild JavaScript on file changes.\n- `make dev_setup`: Builds frontend and generates database.\n- `make node_types_fixture`: Builds database fixture for component type definitions.\n\n### Database\n- `make migrate`: Run Django database migrations.\n- `make migrations`: Generate new Django database 
migration files.\n\n### Utility\n- `make bash`: Open a bash shell in the Docker container.\n- `make shell`: Open a Django shell_plus session.\n\n### Agent Fixtures\n\nDump fixtures with the `dump_agent` django command. This command will gather and dump the agent and chain, including\nthe component graph.\n\n1.\n    ```\n    make bash\n    ```\n2.\n    ```bash\n    .\u002Fmanage.py dump_agent -a alias\n    ```\n","# iX - 自主 GPT-4 代理平台\n\n[![单元测试](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Factions\u002Fworkflow\u002Fstatus\u002Fkreneskyp\u002Fix\u002Ftest.yml)](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Factions\u002Fworkflows\u002Ftest.yml)\n[![Discord 服务器](https:\u002F\u002Fdcbadge.vercel.app\u002Fapi\u002Fserver\u002FjtrMKxzZZQ)](https:\u002F\u002Fdiscord.gg\u002FjtrMKxzZZQ)\n[![Twitter 关注](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fkreneskyp?style=social)](https:\u002F\u002Ftwitter.com\u002Fkreneskyp)\n\n\u003Cdiv>\n\u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkreneskyp_ix_readme_4fe48b150c40.png\" alt=\"MidJourney 提示词：围绕太阳运行的第九颗行星\">\n\u003Cp>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n在浩瀚宇宙中翻滚的沙尘间，Ix 如一颗神秘而璀璨的宝石，人类智慧的光辉在禁忌之知识的边缘翩然起舞，为整个银河系投下一层充满谜团的阴影。\n\n－ 阿特雷德书记官，《伊克尼亚创新编年史》\n\u003Cp>\n\u003C\u002Fdiv>\n\u003Cdiv>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003Cbr>\n\u003C\u002Fdiv>\n\n\n## 🌌 关于\n\u003Cdiv>\nIX 是一个用于设计和部署自主及半自主 LLM 驱动的代理与工作流的平台。IX 提供了一套灵活且可扩展的解决方案，可将任务委托给 AI 驱动的代理。通过该平台创建的代理能够自动化执行多种多样的任务，并且可以并行运行、彼此之间进行通信。\n\u003Cbr>\n\u003C\u002Fdiv>\n\n##### 构建 AI 驱动的工作流：\n- 质疑问答聊天机器人\n- 代码生成\n- 数据提取\n- 数据分析\n- 数据增强\n- 研究助理\n\n## 核心功能\n\n### 🧠 模型\n- OpenAI\n- Google PaLM（实验性）\n- Anthropic（实验性）\n- Llama（实验性）\n\n### ⚒️ 无代码代理编辑器\n无代码编辑器，用于创建并测试代理。该编辑器提供了一个界面，可将节点拖拽并连接成图，从而构建代理的认知逻辑。聊天功能内嵌于编辑器中，方便快速测试与调试。\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Ff43923b9-7bce-4b64-b30e-3204eb1673e4\n\n### 💬 
多代理聊天接口\n创建属于您的代理团队，并通过单一界面与它们互动。聊天室支持多代理同时参与对话。默认情况下，会包含 IX 主持人代理，负责将任务委派给其他代理。您还可以通过“@提及”特定代理来完成任务。\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fd1418c23-afb5-4aed-91c7-bf99b1c165d5\n\n\n### 💡 智能输入\n智能输入栏可自动补全代理的“@提及”，并自动填充由任务生成的文件与数据“{artifacts}”。\n\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F27cf7085-7349-4641-9327-d31a3041a94c\n\n\n### ⚡ 消息队列驱动的代理工作者\n代理运行器后端采用 Docker 方式部署，并通过 Celery 消息队列触发。这一机制使后端能够横向扩展，以支持并行运行的代理集群。\n\n![WorkerScalingTest_V3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkreneskyp_ix_readme_aae5b798684a.png)\n\n\n### ⚙️ 组件配置层\n\nIX 实现了组件配置层，将 LangChain 组件映射至配置图。该配置层为众多其他系统与功能提供了强大支持。例如，组件字段与连接器定义可用于在无代码编辑器中动态渲染节点与表单。\n\n## 🛠️ 入门指南\n\n### 前置条件\n\u003Cdetails> \n  \u003Csummary>Windows Linux 子系统（仅限 Windows）\u003C\u002Fsummary> \n  \u003Col>\n      \u003Cli>打开 PowerShell\u003C\u002Fli>\n      \u003Cli>运行 `wsl --install`，以安装并激活 WSL\u003C\u002Fli>\n  \u003C\u002Fol>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Docker\u003C\u002Fsummary>\n  为您的操作系统安装 Docker Desktop：\u003Cbr\u002F>\n  \u003CA href=\"https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\">https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\u003C\u002FA>\n\n  详细安装说明：\n  \u003Cul>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fmac-install\u002F\">Mac\u003C\u002Fa>\u003C\u002Fli>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fwindows-install\u002F\">Windows\u003C\u002Fa>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Python\u003C\u002Fsummary> \n  需要 Python 3.8 或更高版本才能使用 CLI。应用的 Python 版本由镜像统一管理。\n\u003C\u002Fdetails>\n\n\n### Agent-IX CLI\n\n启动 IX 的最快方式是使用 Agent-IX CLI。CLI 会启动一个预先配置好的 Docker 集群，并通过 Docker-compose 进行管理。它会自动下载所需镜像，并启动应用集群。\n\n```bash\npip install agent-ix\nix up\n```\n\n通过 `scale` 
命令对代理工作者进行扩展。每个工作者将并行运行代理进程。工作者数量的上限取决于可用内存和 CPU 能力。\n\n```bash\nix scale 5\n```\n\n客户端可以启动特定版本，包括基于 `master` 分支构建的不稳定版 `dev` 镜像。\n\n```bash\nix up --version dev\n```\n\n\n## 它是如何工作的\n\n\n### 基本用法\n您与代理进行对话，代理将根据您的指令展开调查、制定计划并完成各项任务。这些代理能够搜索网络、编写代码、生成图像、与各类 API 和服务交互。只要可以编程实现，就有机会打造代理来协助您完成任务。\n\n1. 设置好服务器，并访问 [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000)，系统会自动创建一个新的聊天室，并配备默认代理。\n\n2. 输入请求后，IX 主持人会将任务委派给最适合完成该响应的代理。或者，您可以“@提及”某个代理，请求指定的代理来完成任务。\n\n3. 根据需要，您可以添加或移除自定义代理，以更好地处理您的任务。\n\n### 创建自定义代理与链\n\nIX 提供了主持人代理 IX、编码器代理以及其他示例代理。您可以使用链编辑器或 Python API 来构建自定义代理。\n\n#### 链编辑器\n\n1. 导航至 [链编辑器](http:\u002F\u002Flocalhost:8000\u002Fchains\u002Fnew)\n2. 点击根连接器，打开组件搜索界面\n3. 将代理、链、工具及其他组件拖拽至编辑器中\n4. 将各组件连接起来，构建一条链\n5. 打开测试聊天，体验其功能！\n\n#### Python API\n链 [Python API 文档](docs\u002Fchains\u002Fchains.rst)\n\n\n\n## 🧙 开发环境搭建\n\n### 1. 前置条件\n\n在开始之前，请确保您的系统已安装以下软件：\n\n\u003Cdetails> \n  \u003Csummary>Windows Linux 子系统（仅限 Windows）\u003C\u002Fsummary> \n  \u003Col>\n      \u003Cli>打开 PowerShell\u003C\u002Fli>\n      \u003Cli>运行 `wsl --install`，以安装并激活 WSL\u003C\u002Fli>\n  \u003C\u002Fol>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Docker\u003C\u002Fsummary>\n  为您的操作系统安装 Docker Desktop：\u003Cbr\u002F>\n  \u003CA href=\"https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\">https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F\u003C\u002FA>\n\n  详细安装说明：\n\n  \u003Cul>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fmac-install\u002F\">Mac\u003C\u002Fa>\u003C\u002Fli>\n    \u003Cli>\u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fdesktop\u002Finstall\u002Fwindows-install\u002F\">Windows\u003C\u002Fa>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\u003Cdetails> \n  \u003Csummary>Git & Make\u003C\u002Fsummary> \n  \u003Cul>\n    \u003Cli>\u003Cb>Mac：\u003C\u002Fb> \u003Ccode>brew install git make\u003C\u002Fcode>\u003C\u002Fli>\n    
\u003Cli>\u003Cb>Linux：\u003C\u002Fb> \u003Ccode>apt install git make\u003C\u002Fcode>\u003C\u002Fli>\n    \u003Cli>\u003Cb>Windows（WSL）：\u003C\u002Fb> \u003Ccode>apt install git make\u003C\u002Fcode>\u003C\u002Fli>\n  \u003C\u002Ful>\n\u003C\u002Fdetails>\n\n\n### 2. 克隆仓库\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix.git\ncd ix\n```\n\n### 3. 设置环境变量\n\n在 `.env` 文件中配置环境变量。\n\n```bash\ncp .env.template .env\n```\n\n```bash\nOPENAI_API_KEY=YOUR_KEY_HERE\n```\n\n### 4. 构建并初始化 IX 集群\n在大多数情况下，镜像会自动构建。若需跳过镜像的重新构建，请将 `NO_IMAGE_BUILD=1` 设置为真。\n\n使用 `image` 目标来构建并启动 IX 镜像。`dev_setup` 目标将构建前端并初始化数据库。更多用于管理开发环境的命令，请参阅“开发者工具”部分。\n\n```bash\nmake dev_setup\n```\n\n### 5. 运行 IX 集群\n\nIX 集群通过 Docker Compose 运行。它会启动 Web 服务器、应用服务器、代理工作者、数据库、Redis 以及其他辅助服务的容器。\n\n```bash\nmake cluster\n```\n\n### 6. 查看日志\n\nWeb 和应用容器的日志：\n```bash\nmake server\n```\n\n代理工作者容器的日志：\n```bash\nmake worker\n```\n\n### 7. 打开用户界面\n\n访问 [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000) 即可进入用户界面。在此界面中，您可以创建并编辑代理和链。\n平台会根据需要自动启动代理进程，以完成各项任务。\n\n### 扩展工作者数量\n使用 `scale` 目标调整活跃代理工作者的数量。默认情况下，系统会启动 1 个代理工作者来处理任务。代理的数量没有硬性限制，但工作者的数量受可用内存和 CPU 资源的限制。\n\n```bash\nmake scale N=5\n```\n\n## 开发者工具\n\n以下是一些对开发者非常有帮助的命令，可用于设置和管理开发环境：\n\n### 运行：\n- `make up` \u002F `make cluster`：以开发模式启动应用程序，地址为 [http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000)。\n- `make server`：实时监控 Web 和应用容器的日志。\n- `make worker`：实时监控代理工作者容器的日志。\n\n### 构建：\n- `make image`：构建 Docker 镜像。\n- `make frontend`：重新构建前端（GraphQL、Relay、Webpack）。\n- `make webpack`：仅重新构建 JavaScript。\n- `make webpack-watch`：在文件变更时自动重新构建 JavaScript。\n- `make dev_setup`：构建前端并生成数据库。\n- `make node_types_fixture`：为组件类型定义生成数据库 fixture。\n\n### 数据库\n- `make migrate`：执行 Django 数据库迁移。\n- `make migrations`：生成新的 Django 数据库迁移文件。\n\n### 实用工具\n- `make bash`：在 Docker 容器中打开 Bash shell。\n- `make shell`：打开 Django shell_plus 会话。\n\n### 代理 fixture\n\n使用 `dump_agent` Django 命令导出 fixture。该命令会收集并导出代理和链，包括组件图。\n\n1.\n    ```\n    make bash\n    ```\n2.\n    
```bash\n    .\u002Fmanage.py dump_agent -a alias\n    ```","# iX 自主 AI 智能体平台快速上手指南\n\niX 是一个用于设计和部署由大语言模型（LLM）驱动的自主及半自主智能体与工作流的平台。它支持无代码编辑、多智能体协作聊天以及横向扩展的任务处理能力。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：\n    *   **Linux \u002F macOS**：原生支持。\n    *   **Windows**：必须安装并启用 **WSL (Windows Subsystem for Linux)**。\n        *   在 PowerShell 中运行：`wsl --install`\n*   **Docker**：需安装 Docker Desktop 以运行容器化服务。\n    *   下载地址：[Docker Desktop](https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop\u002F)\n*   **Python**：CLI 工具需要 Python 3.8 或更高版本（应用本身的 Python 环境由 Docker 镜像管理）。\n*   **开发工具（仅源码开发模式需要）**：`git` 和 `make`。\n    *   Mac: `brew install git make`\n    *   Linux\u002FWSL: `apt install git make`\n\n## 安装步骤\n\n您可以根据需求选择 **快速启动（推荐）** 或 **源码开发** 两种方式进行安装。\n\n### 方式一：使用 CLI 快速启动（推荐）\n\n这是最简单的启动方式，会自动下载所需的 Docker 镜像并配置集群。\n\n1.  **安装 CLI 工具**\n    ```bash\n    pip install agent-ix\n    ```\n\n2.  **启动服务**\n    运行以下命令启动预配置的 Docker 集群：\n    ```bash\n    ix up\n    ```\n\n3.  **（可选）扩展工作节点**\n    如果需要并行处理更多任务，可以扩展 Agent 工作节点数量（受限于内存和 CPU）：\n    ```bash\n    ix scale 5\n    ```\n\n4.  **（可选）指定版本**\n    如需体验最新开发版（基于 master 分支）：\n    ```bash\n    ix up --version dev\n    ```\n\n### 方式二：源码开发模式\n\n如果您需要修改代码或贡献项目，请按以下步骤操作：\n\n1.  **克隆仓库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix.git\n    cd ix\n    ```\n\n2.  **配置环境变量**\n    复制模板文件并填入您的 API Key（主要是 OpenAI）：\n    ```bash\n    cp .env.template .env\n    ```\n    编辑 `.env` 文件，设置：\n    ```text\n    OPENAI_API_KEY=YOUR_KEY_HERE\n    ```\n\n3.  **构建与初始化**\n    构建前端并初始化数据库：\n    ```bash\n    make dev_setup\n    ```\n\n4.  **启动集群**\n    启动 Web 服务器、应用服务器、Agent 工作节点、数据库及 Redis 等服务：\n    ```bash\n    make cluster\n    ```\n\n5.  **查看日志（可选）**\n    *   查看 Web\u002FApp 日志：`make server`\n    *   查看 Agent 工作节点日志：`make worker`\n\n6.  **扩展工作节点（可选）**\n    ```bash\n    make scale N=5\n    ```\n\n## 基本使用\n\n安装完成后，即可通过浏览器访问平台进行交互。\n\n1.  
**访问界面**\n    打开浏览器访问：[http:\u002F\u002F0.0.0.0:8000](http:\u002F\u002F0.0.0.0:8000)\n    系统将自动创建一个包含默认智能体（如 IX 协调员、代码编写员等）的新聊天室。\n\n2.  **发起任务**\n    *   **自动分配**：直接在输入框输入自然语言指令（例如：“帮我分析这份数据”或“写一个 Python 脚本抓取网页”）。IX 协调员（Moderator）会自动将任务分配给最适合的智能体。\n    *   **指定智能体**：使用 `@` 符号提及特定智能体来指派任务（例如：`@coder 请帮我修复这个 bug`）。\n\n3.  **智能输入辅助**\n    输入栏支持自动补全：\n    *   输入 `@` 可自动联想可用的智能体。\n    *   输入 `{` 可自动联想任务生成的文件或数据产物（Artifacts）。\n\n4.  **创建自定义智能体（无代码）**\n    *   导航至 [Chain Editor](http:\u002F\u002Flocalhost:8000\u002Fchains\u002Fnew)。\n    *   点击根连接器打开组件搜索。\n    *   拖拽智能体、链、工具等组件到画布中。\n    *   连接组件以构建逻辑链。\n    *   打开内置的测试聊天窗口进行调试。\n\n> **提示**：iX 支持多种模型后端（OpenAI, Google PaLM, Anthropic, Llama 等），部分实验性模型需在配置中启用。","某电商数据团队需要每日从多个竞品网站抓取价格、提取关键参数并生成分析报告，同时需将异常数据同步给开发团队修复爬虫。\n\n### 没有 ix 时\n- 工程师需手动编写串联脚本，依次调用爬虫、LLM 提取和分析模型，任一环节失败导致整个流程中断。\n- 遇到复杂数据格式时，单个 AI 模型容易出错，缺乏多模型协作校验机制，数据准确率难以保证。\n- 任务排队串行执行，处理大量商品数据时耗时极长，无法在晨会前产出最新报表。\n- 调试困难，修改逻辑需重新部署代码，无法实时观察各步骤的中间状态和思维链。\n\n### 使用 ix 后\n- 通过无代码编辑器拖拽节点，快速构建包含“爬虫代理”、“数据清洗代理”和“报告撰写代理”的自动化工作流。\n- 利用多智能体聊天室，让不同专长的 Agent 并行协作，自动互相校验数据，显著提升了信息提取的准确性。\n- 基于消息队列的后端支持水平扩展，数十个代理同时运行，将原本数小时的数据处理时间缩短至分钟级。\n- 内置聊天界面允许开发者实时 @特定代理 干预任务或直接查看生成的中间产物（Artifacts），调试效率大幅提升。\n\nix 将繁琐的串行脚本转化为可平行协作、可视化的智能体网络，让复杂数据流水线具备自我演进与高效执行的能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkreneskyp_ix_4fe48b15.png","kreneskyp","Peter Krenesky","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkreneskyp_586fbb34.png","Full stack engineer.\r\nFluent in python, javascript, and postgres.\r\nMaker of things.","Myriad 
Genetics","Oregon",null,"https:\u002F\u002Fgithub.com\u002Fkreneskyp",[84,88,92,96,100,104,108,111],{"name":85,"color":86,"percentage":87},"Python","#3572A5",73.1,{"name":89,"color":90,"percentage":91},"JavaScript","#f1e05a",25.8,{"name":93,"color":94,"percentage":95},"Makefile","#427819",0.7,{"name":97,"color":98,"percentage":99},"Dockerfile","#384d54",0.2,{"name":101,"color":102,"percentage":103},"HTML","#e34c26",0.1,{"name":105,"color":106,"percentage":107},"CSS","#663399",0,{"name":109,"color":110,"percentage":107},"Shell","#89e051",{"name":112,"color":113,"percentage":107},"HCL","#844FBA",1042,129,"2026-04-02T07:19:23","MIT","Linux, macOS, Windows (需通过 WSL)","未说明","未说明（文档提及 worker 数量受限于可用内存和 CPU 容量）",{"notes":122,"python":123,"dependencies":124},"该工具主要基于 Docker 容器化部署，推荐使用 Docker Desktop。Windows 用户必须安装并启用 Windows Linux 子系统 (WSL)。核心运行依赖包括 Redis 消息队列和 Celery 用于代理任务调度。无需手动安装 Python 依赖库，环境由 Docker 镜像管理。","3.8+",[125,126,127,128,129,130,131,132],"Docker","Docker Compose","Celery","Redis","LangChain","OpenAI API","Git","Make",[14,15,13,26],[135,136,137,138],"ai","gpt-4","openai","python","2026-03-27T02:49:30.150509","2026-04-06T07:13:09.673451",[142,147,151,156,161,166],{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},6429,"安装后遇到数据库未初始化错误或 500 错误，如何解决？","最可能的原因是数据库尚未初始化。您可以手动运行以下命令进行初始化：\n```\nix up\nix setup\n```\n此问题已在 v0.11.0 版本中修复，建议升级或手动执行上述步骤。","https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fissues\u002F249",{"id":148,"question_zh":149,"answer_zh":150,"source_url":146},6430,"运行 Docker 容器时出现 'exec ... no such file or directory' 或静态资源加载失败（空白屏幕）怎么办？","这通常是因为数据库未正确初始化导致后端服务异常，进而引发前端静态资源加载失败（如 main.js 404 或连接拒绝）。请确保先执行数据库初始化命令：\n```\nix up\nix setup\n```\n如果问题依旧，请检查 Docker 容器日志确认 worker 是否正常启动。",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},6431,"运行 make dev_setup 时遇到 'docker: not found' 或凭证错误怎么办？","该错误表示在当前运行上下文中无法找到 Docker。如果您在 WSL (Windows Subsystem for Linux) 或特定容器中运行，请确保：\n1. Docker 已正确安装并在主机可用。\n2. 
构建命令应在主机终端运行，而不是在缺乏 Docker 环境的内部容器中运行。\n3. 检查 Docker 凭证助手配置是否正确。","https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fissues\u002F30",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},6432,"界面显示 'MissingCommandMarkers' 错误是什么原因？","这通常是因为 AI 模型直接输出了回答，而没有按照要求的 JSON 格式（包含 agent 和 question 字段）进行格式化。这种情况在 gpt-3.5-turbo 模型上比 gpt-4 更常见。\n解决方案：该问题已在 v0.3 版本中通过重构 `@code` 和 `@ix` 代理使用 OpenAI Functions 得到解决，消除了大多数 JSON 解析失败。建议升级到 v0.3 或更高版本。","https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fissues\u002F54",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},6433,"当 AI 没有返回命令导致 Celery Worker 报错或任务中断时如何处理？","如果第一个任务因提示词过于复杂等原因未返回有效命令，会导致 Worker 出错。此问题已在相关代码提交中修复（见 PR #15），主要改进了响应解析逻辑，使其能更好地处理非命令式的对话回复。建议更新到包含该修复的最新版本。","https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fissues\u002F8",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},6434,"项目是否支持 Langflow 集成？","目前官方暂无计划直接支持 Langflow。不过，项目已初步支持检索组件（Retrieval QA、Doc Loaders 和 Text Splitters），可用于构建文件导入和问答工作流。维护者鼓励用户针对特定的 loader\u002Fretriever 组件需求提交 Issue，或自行基于项目架构开发自定义组件。","https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fissues\u002F147",[172,177,182,187,192,197,202,207,211,216,221,225,230,235,240,245,250,255,260,265],{"id":173,"version":174,"summary_zh":175,"released_at":176},105993,"v0.19.1","This release focuses on improving OpenAPI components and introducing two test agents to that end.\r\n\r\n## SchemaForge & SkillsForge #439 #436 #437 \r\n\r\n`SchemaForge` & `SkillsForge` are agents configured to use IX's internal API to manage schemas (#400) and skills (#420 ).  These agents both use `RunOpenAPIRequest` and provide create, read, update, and delete functionality. \r\n\r\nThese agents are both an easy way to create new skills and a good test bed for OpenAPI and agent functionality.\r\n\r\n## OpenAPI & Function Calling Improvements\r\n\r\n\r\n- `RunOpenAPIRequest` now includes descriptions from schema description and fields in tool schema passed to OpenAI. 
#431\r\n- `Schema` description now included in function schema when used as a tool. #432\r\n- `RunOpenAPIRequest` now has a separate `instructions` field for extra instructions to pass in the tool function schema #433\r\n- `RunOpenAPIRequest` now properly handles paths with variables in them.\r\n- `RunOpenAPIRequest` now returns `response.json` for HTTP errors rather than raising errors. Allows agents to reflect on the errors. #438\r\n- Agents may now be configured to reflect on parsing errors and correct them.\r\n #440\r\n\r\n\r\n\r\n## Misc\r\n- LangChain `0.1.5` #430\r\n- Node config forms were spamming save notifications. This should be improved. #441 \r\n- Prop and link edges are now styled differently #429\r\n- Hide node config sections if they do not contain any properties #434\r\n- updated IX API fixtures for minor changes to support `SchemaForge` and `SkillsForge` #442","2024-02-08T17:24:17",{"id":178,"version":179,"summary_zh":180,"released_at":181},105994,"v0.18.2","Misc fixes:\r\n- Added a better dark theme for the CodeEditor","2024-02-03T19:01:12",{"id":183,"version":184,"summary_zh":185,"released_at":186},105995,"v0.18.1","Bugfix release:\r\n- The first node dropped on a new flow couldn't be edited #427 ","2024-02-03T17:55:19",{"id":188,"version":189,"summary_zh":190,"released_at":191},105996,"v0.18.0","## Skills\r\nSkills are components that may be created through the UI and API. Skills may be called as tools or used as a node in a flow. \r\n\r\nSkills are defined as typed python functions so they are easy to build. 
Input\u002FOutput types and schemas are generated from the method's type hints.\r\n\r\n##### Skill Editor\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fa64d1ce1-24ba-4b80-8afb-7840a8cf38eb)\r\n\r\n\r\n##### Prototype SkillForge agent\r\nAn unreleased SkillForge prototype agent that generates skills that may be used in other flows.\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F0f551f94-af2c-4426-a584-186809c0bfae\r\n\r\n## Rich code editor\r\nA rich code editor was added to support the skill editor. It has also been applied to the JSON Schema editor. It will eventually support other places where code is rendered or edited.\r\n\r\n## Misc\r\n- fixed a number of dependabot vulnerability warnings","2024-02-03T15:56:06",{"id":193,"version":194,"summary_zh":195,"released_at":196},105997,"v0.17.0","A small release that includes a new JSONTransform component to simplify flows and a number of bug fixes for the editor.\r\n\r\n### JSON Transform #411 \r\n\r\nIntroducing `JSONTransform`, an advanced version of `JSONPath` that can build lists and objects in addition to single values. It's a flexible way to extract multiple values and build more complex structures. \r\n\r\n`JSONTransform` can replace many instances where `JSONPath` and `Map` were chained to pack a value in a dict.  
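A stdlib-only sketch of the kind of transform this replaces — extracting values from a JSON document and packing them into a new object (illustrative only, not the component's real API):

```python
# Illustrative sketch, not JSONTransform's actual syntax: extract several
# values by path and pack them into a new dict in one step, where
# JSONPath + Map previously had to be chained.
import json

doc = json.loads('{"data": [{"url": "https://example.com/a.png"}]}')

def transform(document, paths):
    """paths maps output keys to simple dotted/indexed lookups."""
    out = {}
    for key, path in paths.items():
        value = document
        for part in path.split("."):
            # numeric path segments index into lists, others into dicts
            value = value[int(part)] if part.isdigit() else value[part]
        out[key] = value
    return out

packed = transform(doc, {"image_url": "data.0.url"})
```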
Shown here in the `dall-e` agent.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F869162e9-fed1-4ac4-95be-e8a4b1585424)\r\n\r\n#### Previous Dall-e agent\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fb54635b5-7a10-4c76-91da-cd1451a16b34)\r\n\r\n\r\n#### Updated Dall-e agent\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fc9734be3-7b0d-4fb0-b90e-768b2e03b47c)\r\n\r\n\r\n### Misc\r\n- Run log viewer now renders input\u002Foutput with json highlighting #412\r\n- Run log viewer now expands to fill the modal #412\r\n- ChainSelect now shows initial values loaded from API and renders a custom option #413\r\n- Nodes now have a context menu containing action features like `delete` and `open-in-tab`  #415\r\n- Referenced chains may now be opened in a new tab via the context menu. #415\r\n\r\n### Bug Fixes\r\n\r\n##### tab state syncing\r\n- Edge create\u002Fupdate\u002Fdelete was not synced to tab state #416\r\n- Node delete was not synced to tab state #416\r\n- Edge updates weren't synced if they only moved between keys on the source or target #417\r\n\r\n##### misc fixes\r\n- JSON Schema didn't appear when dropped on the graph. #414\r\n","2024-01-27T23:12:59",{"id":198,"version":199,"summary_zh":200,"released_at":201},105998,"v0.16.1","### Bug fixes\r\n\r\n- Nodes that were edited were reverting to their original values when clicking back and forth between them. #410 ","2024-01-23T21:14:38",{"id":203,"version":204,"summary_zh":205,"released_at":206},105999,"v0.16.0","This release focuses on a new data module that includes an API and storage for JSON Schemas & OpenAPI Specs. 
Both include components to utilize them in flows.\r\n\r\n## JSON Schemas #387 #299 #391 #393 #398 #399\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fa3aefe0c-f522-49fa-b869-9c0d69aec21c\r\n\r\nJSON Schemas may now be created through the UI. `Schema` objects are usable within the flow as an input or as a `function` with LLMs that support function calling.\r\n\r\n##### Components:\r\n- `SaveSchema` - generate new schemas with the `SaveSchema` component\r\n- `LoadSchema` - loads schemas from the registry.\r\n\r\n##### JSON Form inputs:\r\n- `JSONSchemaSelect`\r\n\r\n\r\n## OpenAPI Specs #387 #391 #394 #398 #408\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F2fbdf113-ea2f-459e-9601-993ed306a63d\r\n\r\nOpenAPI specs may now be imported using a URL. Use them in flows as tools or as regular chains to interact with remote APIs. \r\n\r\nThe editor provides a viewer for the spec's endpoints and schemas. Action specs may be dragged onto a flow to create a preconfigured `OpenAPI Request`.\r\n\r\n##### Components:\r\n- `OpenAPI Request` - load a schema and run a request. \r\n\r\n##### JSON Form inputs:\r\n- `OpenAPISchemaSelect`\r\n- `SchemaActionSelect`\r\n- `SchemaServer`\r\n\r\n## Drag-n-drop objects #400 #405 #409\r\n\r\nMany objects may now be dropped into the graph as preconfigured nodes. This is built on a new `DraggableNode` wrapper that simplified mapping types to nodes. \r\n\r\n- JSON schemas\r\n- OpenAPI specs & actions \r\n- Chains  \r\n\r\n### Fixes for tabs and new chains #391 #395 #397\r\nThere were still quite a few gremlins in the new state management for tabs. This caused various issues with nodes not appearing when saved, values reverting after saving, crashes, etc. 
Many bugs were fixed during the testing of this version.\r\n\r\n### Misc\r\n\r\n\r\n- Most edits within the editor now show a toast message on success #396\r\n- Generalized `react-select` components and hooks for easier re-use #388\r\n- LangChain 0.1.0 #390\r\n- Add `JSONSchemaDisplay` for rendering specs #401\r\n- Nodes now appear mostly centered on the header when dropped #403 \r\n- API endpoints have simplified `operation_id` for better integration with `OpenAPI Request` nodes #406\r\n- menu styles are now better configured #407","2024-01-22T16:54:33",{"id":208,"version":209,"summary_zh":205,"released_at":210},106000,"v0.16.0.rc1","2024-01-22T03:57:10",{"id":212,"version":213,"summary_zh":214,"released_at":215},106001,"v0.15.1","Bugfixes:\r\n- Knowledge agent is no longer broken. #389 ","2024-01-15T19:35:38",{"id":217,"version":218,"summary_zh":219,"released_at":220},106002,"v0.15.0","## Editor Tabs\r\n\r\nThe flow editor now has tabs to open multiple chains\u002Fagents. This should help users manage context while working on nested chains simultaneously.\r\n\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F8927b653-011e-45da-b9ac-49af7718124d\r\n\r\n\r\n## Misc:\r\n- Component search popup now supports `SVGIcon` and other minor styling fixes. #385 \r\n- Run log now renders component type info for components newly added to the flow #384\r\n\r\n","2024-01-07T18:44:53",{"id":222,"version":223,"summary_zh":219,"released_at":224},106003,"v0.15.0.rc1","2024-01-07T17:35:31",{"id":226,"version":227,"summary_zh":228,"released_at":229},106004,"v0.14.0","## v0.14.0\r\n\r\nhttps:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Ilzs0D3WefI\r\n\r\n### Ingestion Flows\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F78d91416-4164-4e1e-beb4-15223460b6e3)\r\n\r\n#### Loaders and transforms. 
(#362)\r\nComponents needed for a vectorstore ingestion flow are now runnable-based:\r\n- Document loaders \r\n- Text splitters and transforms\r\n\r\n#### Vector store Tools (#380)\r\nAdded runnable components for adding texts, documents, and images to vectorstores.\r\n\r\n### Chains as Tools (#377, #380)\r\nChain references may now be used as agent tools. This enables entire flows to be called by agents as a step in a plan.\r\n\r\nWhen used with an `OpenAIFunctionAgent`, function call args are generated using the chain's input schema. This enables the chain to specify a custom schema for its args.\r\n\r\n### Agent Updates\r\n- `IngestURL` chain added to demonstrate ingestion components\r\n- `Ingest` agent updated to use `IngestURL` as a tool\r\n\r\n\r\n### Unstructured IO (#374)\r\nAdding all the document loaders LangChain provides for Unstructured IO. This adds a few more document types.\r\n\r\n### Misc\r\n\r\n- `import_langchain` now accepts `NodeType` instances so fixtures may be defined with pydantic type instances instead of JSON only. #375\r\n- config is now validated with pydantic models for certain types to coerce types. Needed for components that are not pydantic models and therefore don't do this on their own. #372 \r\n- `NodeType` schemas now type fields and array fields as `Any` when they don't know the type. 
#367\r\n- missing celery.sh should no longer be an issue #134, #366\r\n- shebang fix in manage.py #363\r\n","2024-01-02T17:27:17",{"id":231,"version":232,"summary_zh":233,"released_at":234},106005,"v0.14.0.rc1","## v0.14.0 (rc1)\r\n\r\nhttps:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Ilzs0D3WefI\r\n\r\n### Ingestion Flows\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F78d91416-4164-4e1e-beb4-15223460b6e3)\r\n\r\n#### Loaders and transforms.\r\nComponents needed for a vectorstore ingestion flow are now runnable-based:\r\n- Document loaders (`BaseLoader`) \r\n- Text splitters and transforms (`BaseTransform`)\r\n\r\n#### Vector store Tools\r\nAdded runnable components for adding texts, documents, and images to vectorstores.\r\n\r\n### Chains as Tools (#380)\r\nChain references may now be used as agent tools. This enables entire flows to be called by agents as a step in a plan.\r\n\r\nWhen used with an `OpenAIFunctionAgent`, function call args are generated using the chain's input schema. This enables the chain to specify a custom schema for its args.\r\n\r\n### Agent Updates\r\n- `IngestURL` chain added to demonstrate ingestion components\r\n- `Ingest` agent updated to use `IngestURL` as a tool\r\n\r\n\r\n### Unstructured IO (#374)\r\nAdding all the document loaders LangChain provides for Unstructured IO. This adds a few more document types.\r\n\r\n### Misc\r\n","2024-01-02T16:55:57",{"id":236,"version":237,"summary_zh":238,"released_at":239},106006,"v0.13.0","## Flow Redesign #322 #323 #337 #353\r\n\r\nAgent flow has been redesigned to be more data-oriented. The arrows in flow now generally indicate the flow of data from one component to the next. This change makes it easier to follow what each component is doing. 
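The data-oriented flow described here can be sketched generically — each component simply feeds its output to the next, so the arrows follow the data (illustrative only, not LangChain's actual `Runnable` API):

```python
# Generic illustration of a data-oriented flow: components compose left to
# right and each receives the previous component's output. Names are
# invented for the example.
from functools import reduce

def pipeline(*steps):
    """Compose components left to right, like edges in the flow graph."""
    return lambda value: reduce(lambda acc, step: step(acc), steps, value)

double = lambda x: x * 2
describe = lambda x: f"value={x}"

flow = pipeline(double, describe)
result = flow(3)
```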
\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Ffb1377b0-7a9f-431b-9ff2-ff48e6e2f6d0\r\n\r\nIncluded in this update are `Map`, `Branch`, and `Each` components for advanced flow control.\r\n\r\n#### LangChain Runnables\r\nThe redesign utilizes LangChain `Runnable` interfaces. These are the new-style version of components that power LangChain Expression Language. `Runnable` interfaces provide a standard way to describe input\u002Foutput and a standard interface for invocation, streaming, and batch execution.\r\n\r\nFlows are compiled to runnable chains and automatically take advantage of parallel execution and streaming.\r\n\r\nMoving to the `Runnable` interface enables downstream features such as the Run Log, live status updates, and eventually AI-generated components.\r\n\r\n#### Migration status\r\nSome existing agents and most features have been updated to support the new runnable flow. Document loaders, splitters, transformers, and ingestion features have not been converted but are in progress for a future release.\r\n\r\n## Run Log #344 \r\n\r\nThe editor now provides both real-time status and a log of output from each node in the chain. Status icons next to component nodes indicate the state of the last run and give a quick way to jump into the log to view input and output.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F19ff8af2-6c5e-4df0-b321-5ad250899cc0)\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F7f9f3dc7-e2d1-468e-a233-1cfa3532648f)\r\n\r\n\r\n## Nested Chains #308 #348\r\nChains may now be nested in other chains and agents. This enables re-use of chains as custom components. 
Great for reusable flows and breaking apart large complicated flows.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F0e20f5ba-c89d-4b8f-bad6-fe6e06090331)\r\n\r\n\r\n## Multi-modal support #354 #355 \r\n\r\nMulti-modal prompts are now supported generically for GPT, Gemini, and other models. This can be integrated with artifact references to pass images from the Chat UI to an agent. The `vision` agent implements this using gpt-4V.\r\n\r\nVision support for local models such as Llava is available via ollama and llama-cpp.\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F609640f0-9eba-4970-adb8-41de3997a03d\r\n\r\n\r\n## Dall-e Image Generation #333\r\nA `Dalle` flow component is now available for generating images. Responses are in the form of a URL and metadata.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F66c6b369-1f33-4927-8849-7c34c5048805)\r\n\r\n\r\n## Gemini #350\r\nGoogle Gemini is now supported through the `Google Generative AI` LLM component.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fd2d73b2c-3b42-4806-9566-c7f74afb2c07)\r\n\r\n## Secure Secrets #252 \r\nIX can now store user-based secrets that are reusable in components. The secrets system pulls secret schemas from components automatically. Just configure your secret and go.\r\n\r\n![image](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F7634ae1a-04a6-4cca-ba41-7d575e277b2d)\r\n\r\n\r\n## Misc\r\n\r\n### New Components:\r\n- StringLoader: loads a string into a document to process arbitrary input and insert it into a VectorStore. 
#312\r\n- Dalle: generate images #333 \r\n- SaveArtifact, LoadArtifacts: save and load artifacts\r\n- LoadFile: load a file into memory\r\n- EncodeImage: base64 encode images for use in prompts\r\n- LoadArtifactImage: shortcut for `LoadArtifact | LoadFile | EncodeImage`\r\n- MultiModalChatPrompt: includes images\r\n- OpenAI w\u002F functions: flow-compatible component with function support\r\n- Schema: inserts a schema into a flow\r\n- JSON: inserts JSON data into a flow\r\n- JSONPath: parses a variable from input using a JSON Path expression\r\n- Hugging Face embeddings #335 \r\n- Postgres chat backend #336\r\n- Fireworks LLM #288 \r\n- branch, map, each, sequential each components #322\r\n\r\n### Component updates:\r\n- LLMs now have `metadata`, `tags`, and `verbose` options. #300\r\n- System prompt is now optional for ChatPromptTemplate #351\r\n\r\n### JSONSchemaForm\r\nThe user interface now uses JSONSchemaForm to render forms. This is primarily used for dynamically rendering component config forms but is also used for secrets forms.\r\n\r\n- fixes to component schemas #307\r\n- JSONSchemaForm now renders component config forms from JSON Schema #299 #293\r\n- JSONSchemaForm can now group fields #306 #310\r\n- Adding form widgets for `list` and `dict` fields #296 #301 #298 #326\r\n- Component schema fields now have description, input_type, and style #319\r\n- fields may now be hidden and not displayed #325\r\n\r\n\r\n### Editor UI:\r\n- chain graph now stores source_key and edge_key to simplify logic around creating, editing, and displaying nodes and edges. 
#323\r\n- right side bar has been re-implemented to provide better control and to not break scrolling in other components #324\r\n- Added `ComponentTypeSelect` #313\r\n","2023-12-20T02:36:32",{"id":241,"version":242,"summary_zh":243,"released_at":244},106007,"v0.12.1","Ingest and Knowledge agents are now available when starting IX client","2023-10-19T16:38:20",{"id":246,"version":247,"summary_zh":248,"released_at":249},106008,"v0.12.0","Fixing two usability issues with artifacts.\r\n\r\n- Artifacts may now be downloaded via the UI (#290)\r\n- `workdir` where artifacts are saved is now located in the working directory where IX is started with the client. (#291)","2023-10-19T14:19:56",{"id":251,"version":252,"summary_zh":253,"released_at":254},106009,"v0.11.0","## Artifacts Uploads #283 \r\n\r\nFiles may now be dropped into the chat bar or artifact pane to upload them into the workspace as artifacts. Reference `{artifacts}` in the chat to use them with agents that support `ArtifactMemory`\r\n\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fa501def3-0a28-4b33-b155-1c09275f6d37\r\n\r\n\r\n\r\n## Misc\r\n\r\n#### Generic forms for dict fields #282 \r\nChain editor can now render a generic form for `dict` fields.  This enables `metadata` and other `dict` fields to be added to component configs.\r\n\r\n\r\n#### Setup Fixes #284 \r\nsetup was broken in v0.10.0 through v0.10.2 due to a fixture \r\n\r\n","2023-10-13T01:53:39",{"id":256,"version":257,"summary_zh":258,"released_at":259},106010,"v0.10.2","### Bug fix\r\n- New agents will now save properly when setting name, alias, or description first.","2023-10-08T20:15:49",{"id":261,"version":262,"summary_zh":263,"released_at":264},106011,"v0.10.1","## Bugfixes\r\n\r\n- Chroma container wasn't starting up.  
#274 \r\n   Thanks to new contributor @sebastienfi ","2023-10-07T00:30:33",{"id":266,"version":267,"summary_zh":268,"released_at":269},106012,"v0.10.0","## 🎉 Highlights\r\n\r\nThis release focuses on generic ingestion workflows with `IngestionTool`. \r\n\r\n- ⛏️ **IngestionTool**: A flexible new component for document ingestion.\r\n- 🤖 **Ingest and Knowledge Agents**: Demonstrating search, ingestion, and retrieval capabilities.\r\n- 💤 **Lazy Loading Node Templates**: Enable agents to initialize components within their chain graph\r\n\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F349466de-47f7-40e2-9467-28f891f08956\r\n\r\n\r\n\r\n## Details\r\n\r\n### ⛏️ IngestionTool #268  \r\n\r\nIntroducing the new `IngestionTool` component, designed for versatile document ingestion. This tool seamlessly integrates with any vector store and any document source. It's a flexible way to build a customized document ingestion agent and RAG system.\r\n\r\n`IngestionTool` uses lazy loading `NodeTemplates` to parameterize properties in both the loader and vectorstore. This enables agents that can combine search, transform, and ingestion in a single request.\r\n\r\n##### Prompt Example:\r\n`Search for TOPIC and add the urls to collection COLLECTION`\r\n\r\n\r\n### 🤖 Ingest and Knowledge Agents (#267, #269, #272) \r\n\r\nWith these agents, users can search, ingest, and retrieve content. 
Specifically:\r\n\r\n- The ingest agent can search Wikipedia, capturing URLs and adding them to the knowledge collection.\r\n- The knowledge agent can provide answers based on the stored content.\r\n\r\n##### Agent: Ingest\r\n![IngestionTool](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F26ab3468-f170-4951-a17d-5ede2640066a)\r\n\r\n##### Search & Ingest\r\n![AI_search_and_save_chat](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F4b56a8b0-8e7e-4567-bc9f-25a6319f2fd9)\r\n\r\n\r\n#### `@Ingest` is parameterized: \r\n- `URL` and `COLLECTION` are both variables.\r\n- `@ingest` can fetch a URL and store the content in a specified COLLECTION.\r\n\r\n##### Web Loader Template Config\r\n\r\n![web_loader_template](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002Fa2c61689-04db-4843-939c-b4e5b5bf6dd7)\r\n\r\n##### Chroma Template Config\r\n\r\n![chroma_template](https:\u002F\u002Fgithub.com\u002Fkreneskyp\u002Fix\u002Fassets\u002F68635\u002F409f2bb4-d0fa-449e-a189-85ef047ada16)\r\n\r\n\r\n### 💤 Lazy Loading Node Templates (#267)\r\n\r\nLazy loading templates enable agents to initialize components in the graph. The default loading method uses statically defined configuration. Node templates delay loading until runtime and then initialize with input variables. Combine that into a custom tool and an agent can initialize the component using an LLM function call.\r\n\r\n#### Templates & Variables\r\n\r\nCustom tools may define any connector as a template. The branch of nodes off the connector is wrapped and lazy loaded when the chain graph loads. Any node within the branch may define any field value as a `{variable}`. The variables are formatted with input variables or tool args when it runs.\r\n\r\n#### Tool Args\r\n\r\nLazy loading templates are compatible with LLM function calls. `NodeTemplate` builds Pydantic models for the templates automatically from variables within the branch. 
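A hypothetical stdlib-only sketch of that idea (not IX's actual code): collect the `{variable}` placeholders from a template's field values, treat them as the tool's args, and format the fields at runtime:

```python
# Hypothetical sketch, not NodeTemplate's real implementation: discover
# {variable} placeholders in template field values and render them with
# runtime tool args. Field values are invented for the example.
import string

def template_variables(field_values):
    """Return the set of {variable} names referenced by the fields."""
    names = set()
    for value in field_values:
        for _, field, _, _ in string.Formatter().parse(value):
            if field:
                names.add(field)
    return names

def render(field_values, **variables):
    """Format every field with the supplied tool args."""
    return [value.format(**variables) for value in field_values]

fields = ["{url}", "collection={collection}"]
assert template_variables(fields) == {"url", "collection"}
rendered = render(fields, url="https://example.com", collection="docs")
```

The discovered variable names are what a Pydantic args model would be built from, so the LLM function call schema stays in sync with the template automatically.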
Custom tools can set `args_schema` from the template's schema. The schema is used to generate LLM function calls dynamically from the agent config.\r\n\r\n#### Example\r\n`IngestionTool` is the first example of `NodeTemplate` and demonstrates how to configure and format components using template variables.\r\n\r\nSee #267 for more details on how to use templates.","2023-10-06T14:49:34"]