[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-algorithmicsuperintelligence--openevolve":3,"tool-algorithmicsuperintelligence--openevolve":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":105,"forks":106,"last_commit_at":107,"license":108,"difficulty_score":32,"env_os":109,"env_gpu":109,"env_ram":109,"env_deps":110,"category_tags":114,"github_topics":115,"view_count":134,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":135,"updated_at":136,"faqs":137,"releases":168},618,"algorithmicsuperintelligence\u002Fopenevolve","openevolve","Open-source implementation of AlphaEvolve","OpenEvolve 是一款基于大语言模型的开源进化式代码代理，旨在将 LLM 转化为自主的代码优化引擎。它能够自动迭代代码，甚至发现人类未曾设想的全新算法，从而解决传统手动优化效率低、探索范围受限的难题。相比人工调试，OpenEvolve 能将优化周期从数周缩短至数小时，并在真实硬件上实现显著的性能提升。\n\n这款工具非常适合追求极致性能的开发者、算法研究人员以及需要进行科学计算优化的工程师。它支持 Python、Rust、Metal 等多种语言，具备科研级的可复现性和确定性评估流程。独特的进化机制允许它在无人工干预的情况下，通过并行演化探索无限的可能性，已在 GPU 内核优化、数学问题求解等领域取得突破性成果。无论是快速原型验证还是深度算法挖掘，OpenEvolve 都能提供强大的自动化支持，让代码进化变得简单高效。","# OpenEvolve\n\n\u003Cdiv 
align=\"center\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_4a12647d6927.png\" alt=\"OpenEvolve Logo\" width=\"400\">\n\n**🧬 The most advanced open-source evolutionary coding agent**\n\n*Turn your LLMs into autonomous code optimizers that discover breakthrough algorithms*\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fstargazers\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Falgorithmicsuperintelligence\u002Fopenevolve?style=social\" alt=\"GitHub stars\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenevolve\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopenevolve\" alt=\"PyPI version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenevolve\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fopenevolve\" alt=\"PyPI downloads\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Falgorithmicsuperintelligence\u002Fopenevolve\" alt=\"License\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n[🚀 **Quick Start**](#quick-start) • [**Examples**](#examples-gallery) • [**System Messages**](#crafting-effective-system-messages) • [**Discussions**](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fdiscussions)\n\n*From random search to state-of-the-art: Watch your code evolve in real-time*\n\n\u003C\u002Fdiv>\n\n---\n\n## Why OpenEvolve?\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd width=\"33%\">\n\n### **Autonomous Discovery**\nLLMs don't just optimize—they **discover** entirely new algorithms. 
No human guidance needed.\n\n\u003C\u002Ftd>\n\u003Ctd width=\"33%\">\n\n### **Proven Results**\n**2-3x speedups** on real hardware. **State-of-the-art** circle packing. **Breakthrough** optimizations.\n\n\u003C\u002Ftd>\n\u003Ctd width=\"33%\">\n\n### **Research Grade**\nFull reproducibility, extensive evaluation pipelines, and scientific rigor built-in.\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n**OpenEvolve vs Manual Optimization:**\n\n| Aspect | Manual Optimization | OpenEvolve |\n|--------|-------------------|------------|\n| **Time to Solution** | Days to weeks | Hours |\n| **Exploration Breadth** | Limited by human creativity | Unlimited LLM creativity |\n| **Reproducibility** | Hard to replicate | Fully deterministic |\n| **Multi-objective** | Complex tradeoffs | Automatic Pareto optimization |\n| **Scaling** | Doesn't scale | Parallel evolution across islands |\n\n## Proven Achievements\n\n\u003Cdiv align=\"center\">\n\n| **Domain** | **Achievement** | **Example** |\n|---------------|-------------------|----------------|\n| **GPU Optimization** | Hardware-optimized kernel discovery | [MLX Metal Kernels](examples\u002Fmlx_metal_kernel_opt\u002F) |\n| **Mathematical** | State-of-the-art circle packing (n=26) | [Circle Packing](examples\u002Fcircle_packing\u002F) |\n| **Algorithm Design** | Adaptive sorting algorithms | [Rust Adaptive Sort](examples\u002Frust_adaptive_sort\u002F) |\n| **Scientific Computing** | Automated filter design | [Signal Processing](examples\u002Fsignal_processing\u002F) |\n| **Multi-Language** | Python, Rust, R, Metal shaders | [All Examples](examples\u002F) |\n\n\u003C\u002Fdiv>\n\n## 🚀 Quick Start\n\nGet from zero to evolving code in **30 seconds**:\n\n```bash\n# Install OpenEvolve\npip install openevolve\n\n# The example uses Google Gemini by default (free tier available)\n# Get your API key from: https:\u002F\u002Faistudio.google.com\u002Fapikey\nexport OPENAI_API_KEY=\"your-gemini-api-key\"  # Yes, use 
OPENAI_API_KEY env var\n\n# Run your first evolution!\npython openevolve-run.py examples\u002Ffunction_minimization\u002Finitial_program.py \\\n  examples\u002Ffunction_minimization\u002Fevaluator.py \\\n  --config examples\u002Ffunction_minimization\u002Fconfig.yaml \\\n  --iterations 50\n```\n\n**Note:** The example config uses Gemini by default, but you can use any OpenAI-compatible provider by modifying the `config.yaml`. See the [configs](configs\u002F) for full configuration options.\n\n### **Library Usage**\n\nOpenEvolve can be used as a library without any external files:\n\n```python\nfrom openevolve import run_evolution, evolve_function\n\n# Evolution with inline code (no files needed!)\nresult = run_evolution(\n    initial_program='''\ndef fibonacci(n):\n    if n \u003C= 1: return n\n    return fibonacci(n-1) + fibonacci(n-2)\n''',\n    evaluator=lambda path: {\"score\": benchmark_fib(path)},\n    iterations=100\n)\n\n# Evolve Python functions directly\ndef bubble_sort(arr):\n    for i in range(len(arr)):\n        for j in range(len(arr)-1):\n            if arr[j] > arr[j+1]:\n                arr[j], arr[j+1] = arr[j+1], arr[j]\n    return arr\n\nresult = evolve_function(\n    bubble_sort,\n    test_cases=[([3,1,2], [1,2,3]), ([5,2,8], [2,5,8])],\n    iterations=50\n)\nprint(f\"Evolved sorting algorithm: {result.best_code}\")\n```\n\n**Prefer Docker?** See the [Installation & Setup](#installation--setup) section for Docker options.\n\n## See It In Action\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Circle Packing: From Random to State-of-the-Art\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Watch OpenEvolve discover optimal circle packing in real-time:**\n\n| Generation 1 | Generation 190 | Generation 460 (Final) |\n|--------------|----------------|----------------------|\n| ![Initial](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_005e43178ad0.png) | 
![Progress](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_c1931dd1f1c8.png) | ![Final](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_428861e6221c.png) |\n| Random placement | Learning structure | **State-of-the-art result** |\n\n**Result**: Matches published benchmarks for n=26 circle packing problem.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>GPU Kernel Evolution\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Before (Baseline)**:\n```metal\n\u002F\u002F Standard attention implementation\nkernel void attention_baseline(\u002F* ... *\u002F) {\n    \u002F\u002F Generic matrix multiplication\n    float sum = 0.0;\n    for (int i = 0; i \u003C seq_len; i++) {\n        sum += query[tid] * key[i];\n    }\n}\n```\n\n**After Evolution (2.8x faster)**:\n```metal\n\u002F\u002F OpenEvolve discovered optimization\nkernel void attention_evolved(\u002F* ... *\u002F) {\n    \u002F\u002F Hardware-aware tiling + unified memory optimization\n    threadgroup float shared_mem[256];\n    \u002F\u002F ... 
evolved algorithm exploiting Apple Silicon architecture\n}\n```\n\n**Performance Impact**: 2.8x speedup on Apple M1 Pro, maintaining numerical accuracy.\n\n\u003C\u002Fdetails>\n\n## How OpenEvolve Works\n\nOpenEvolve implements a sophisticated **evolutionary coding pipeline** that goes far beyond simple optimization:\n\n![OpenEvolve Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_e6b912937afa.png)\n\n### **Core Innovation**: MAP-Elites + LLMs\n\n- **Quality-Diversity Evolution**: Maintains diverse populations across feature dimensions\n- **Island-Based Architecture**: Multiple populations prevent premature convergence\n- **LLM Ensemble**: Multiple models with intelligent fallback strategies\n- **Artifact Side-Channel**: Error feedback improves subsequent generations\n\n### **Advanced Features**\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Scientific Reproducibility\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **Comprehensive Seeding**: Every component (LLM, database, evaluation) is seeded\n- **Default Seed=42**: Immediate reproducible results out of the box\n- **Deterministic Evolution**: Exact reproduction of runs across machines\n- **Component Isolation**: Hash-based isolation prevents cross-contamination\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Advanced LLM Integration\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **Universal API**: Works with OpenAI, Google, local models, and proxies\n- **Intelligent Ensembles**: Weighted combinations with sophisticated fallback\n- **Test-Time Compute**: Enhanced reasoning through proxy systems (see [OptiLLM setup](#llm-provider-setup))\n- **Plugin Ecosystem**: Support for advanced reasoning plugins\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Evolution Algorithm Innovations\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **Double Selection**: Different programs for performance vs inspiration\n- **Adaptive Feature Dimensions**: Custom 
quality-diversity metrics\n- **Migration Patterns**: Ring topology with controlled gene flow\n- **Multi-Strategy Sampling**: Elite, diverse, and exploratory selection\n\n\u003C\u002Fdetails>\n\n## Perfect For\n\n| **Use Case** | **Why OpenEvolve Excels** |\n|--------------|---------------------------|\n| **Performance Optimization** | Discovers hardware-specific optimizations humans miss |\n| **Algorithm Discovery** | Finds novel approaches to classic problems |\n| **Scientific Computing** | Automates tedious manual tuning processes |\n| **Competitive Programming** | Generates multiple solution strategies |\n| **Multi-Objective Problems** | Pareto-optimal solutions across dimensions |\n\n## 🛠 Installation & Setup\n\n### Requirements\n- **Python**: 3.10+ \n- **LLM Access**: Any OpenAI-compatible API\n- **Optional**: Docker for containerized runs\n\n### Installation Options\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>📦 PyPI (Recommended)\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\npip install openevolve\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔧 Development Install\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve.git\ncd openevolve\npip install -e \".[dev]\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🐳 Docker\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\n# Pull the image\ndocker pull ghcr.io\u002Falgorithmicsuperintelligence\u002Fopenevolve:latest\n\n# Run an example\ndocker run --rm -v $(pwd):\u002Fapp ghcr.io\u002Falgorithmicsuperintelligence\u002Fopenevolve:latest \\\n  examples\u002Ffunction_minimization\u002Finitial_program.py \\\n  examples\u002Ffunction_minimization\u002Fevaluator.py --iterations 100\n```\n\n\u003C\u002Fdetails>\n\n### Cost Estimation\n\n**Cost depends on your LLM provider and iterations:**\n\n- **o3**: ~$0.15-0.60 per iteration (depending on code size)\n- **o3-mini**: ~$0.03-0.12 per iteration (more 
cost-effective)\n- **Gemini-2.5-Pro**: ~$0.08-0.30 per iteration\n- **Gemini-2.5-Flash**: ~$0.01-0.05 per iteration (fastest and cheapest)\n- **Local models**: Nearly free after setup\n- **OptiLLM**: Use cheaper models with test-time compute for better results\n\n**Cost-saving tips:**\n- Start with fewer iterations (100-200)\n- Use o3-mini, Gemini-2.5-Flash or local models for exploration\n- Use cascade evaluation to filter bad programs early\n- Configure smaller population sizes initially\n\n### LLM Provider Setup\n\nOpenEvolve works with **any OpenAI-compatible API**:\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔥 OpenAI (Direct)\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\nexport OPENAI_API_KEY=\"sk-...\"\n# Uses OpenAI endpoints by default\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🤖 Google Gemini\u003C\u002Fb>\u003C\u002Fsummary>\n\n```yaml\n# config.yaml\nllm:\n  api_base: \"https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1beta\u002Fopenai\u002F\"\n  model: \"gemini-2.5-pro\"\n```\n\n```bash\nexport OPENAI_API_KEY=\"your-gemini-api-key\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🏠 Local Models (Ollama\u002FvLLM)\u003C\u002Fb>\u003C\u002Fsummary>\n\n```yaml\n# config.yaml\nllm:\n  api_base: \"http:\u002F\u002Flocalhost:11434\u002Fv1\"  # Ollama\n  model: \"codellama:7b\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>⚡ OptiLLM (Advanced)\u003C\u002Fb>\u003C\u002Fsummary>\n\nFor maximum flexibility with rate limiting, model routing, and test-time compute:\n\n```bash\n# Install OptiLLM\npip install optillm\n\n# Start OptiLLM proxy\noptillm --port 8000\n\n# Point OpenEvolve to OptiLLM\nexport OPENAI_API_KEY=\"your-actual-key\"\n```\n\n```yaml\nllm:\n  api_base: \"http:\u002F\u002Flocalhost:8000\u002Fv1\"\n  model: \"moa&readurls-o3\"  # Test-time compute + web access\n```\n\n\u003C\u002Fdetails>\n\n## Examples Gallery\n\n\u003Cdiv align=\"center\">\n\n### **Showcase 
Projects**\n\n| Project | Domain | Achievement | Demo |\n|---------|--------|-------------|------|\n| [**Function Minimization**](examples\u002Ffunction_minimization\u002F) | Optimization | Random → Simulated Annealing | [View Results](examples\u002Ffunction_minimization\u002Fopenevolve_output\u002F) |\n| [**MLX GPU Kernels**](examples\u002Fmlx_metal_kernel_opt\u002F) | Hardware | Apple Silicon optimization | [Benchmarks](examples\u002Fmlx_metal_kernel_opt\u002FREADME.md) |\n| [**Rust Adaptive Sort**](examples\u002Frust_adaptive_sort\u002F) | Algorithms | Data-aware sorting | [Code Evolution](examples\u002Frust_adaptive_sort\u002F) |\n| [**Symbolic Regression**](examples\u002Fsymbolic_regression\u002F) | Science | Automated equation discovery | [LLM-SRBench](examples\u002Fsymbolic_regression\u002F) |\n| [**Web Scraper + OptiLLM**](examples\u002Fweb_scraper_optillm\u002F) | AI Integration | Test-time compute optimization | [Smart Scraping](examples\u002Fweb_scraper_optillm\u002F) |\n\n\u003C\u002Fdiv>\n\n### **Quick Example**: Function Minimization\n\n**Watch OpenEvolve evolve from random search to sophisticated optimization:**\n\n```python\n# Initial Program (Random Search)\ndef minimize_function(func, bounds, max_evals=1000):\n    best_x, best_val = None, float('inf')\n    for _ in range(max_evals):\n        x = random_point_in_bounds(bounds)\n        val = func(x)\n        if val \u003C best_val:\n            best_x, best_val = x, val\n    return best_x, best_val\n```\n\n**Evolution Process**\n\n```python\n# Evolved Program (Simulated Annealing + Adaptive Cooling)\ndef minimize_function(func, bounds, max_evals=1000):\n    x = random_point_in_bounds(bounds)\n    temp = adaptive_initial_temperature(func, bounds)\n    \n    for i in range(max_evals):\n        neighbor = generate_neighbor(x, temp, bounds)\n        delta = func(neighbor) - func(x)\n        \n        if delta \u003C 0 or random.random() \u003C exp(-delta\u002Ftemp):\n            x = neighbor\n          
  \n        temp *= adaptive_cooling_rate(i, max_evals)  # Dynamic cooling\n    \n    return x, func(x)\n```\n\n**Performance**: 100x improvement in convergence speed!\n\n### **Advanced Examples**\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>Prompt Evolution\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Evolve prompts instead of code** for better LLM performance. See the [LLM Prompt Optimization example](examples\u002Fllm_prompt_optimization\u002F) for a complete case study with HotpotQA achieving +23% accuracy improvement.\n\n[Full Example](examples\u002Fllm_prompt_optimization\u002F)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🏁 Competitive Programming\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Automatic solution generation** for programming contests:\n\n```python\n# Problem: Find maximum subarray sum\n# OpenEvolve discovers multiple approaches:\n\n# Evolution Path 1: Brute Force → Kadane's Algorithm\n# Evolution Path 2: Divide & Conquer → Optimized Kadane's\n# Evolution Path 3: Dynamic Programming → Space-Optimized DP\n```\n\n[Online Judge Integration](examples\u002Fonline_judge_programming\u002F)\n\n\u003C\u002Fdetails>\n\n## Configuration\n\nOpenEvolve offers extensive configuration for advanced users:\n\n```yaml\n# Advanced Configuration Example\nmax_iterations: 1000\nrandom_seed: 42  # Full reproducibility\n\nllm:\n  # Ensemble configuration\n  models:\n    - name: \"gemini-2.5-pro\"\n      weight: 0.6\n    - name: \"gemini-2.5-flash\"\n      weight: 0.4\n  temperature: 0.7\n\ndatabase:\n  # MAP-Elites quality-diversity\n  population_size: 500\n  num_islands: 5  # Parallel evolution\n  migration_interval: 20\n  feature_dimensions: [\"complexity\", \"diversity\", \"performance\"]\n\nevaluator:\n  enable_artifacts: true      # Error feedback to LLM\n  cascade_evaluation: true    # Multi-stage testing\n  use_llm_feedback: true      # AI code quality assessment\n\nprompt:\n  # Sophisticated inspiration system\n  num_top_programs: 3         # Best 
performers\n  num_diverse_programs: 2     # Creative exploration\n  include_artifacts: true     # Execution feedback\n  \n  # Custom templates\n  template_dir: \"custom_prompts\u002F\"\n  use_template_stochasticity: true  # Randomized prompts\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎯 Feature Engineering\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Control how programs are organized in the quality-diversity grid:**\n\n```yaml\ndatabase:\n  feature_dimensions: \n    - \"complexity\"      # Built-in: code length\n    - \"diversity\"       # Built-in: structural diversity\n    - \"performance\"     # Custom: from your evaluator\n    - \"memory_usage\"    # Custom: from your evaluator\n    \n  feature_bins:\n    complexity: 10      # 10 complexity levels\n    performance: 20     # 20 performance buckets\n    memory_usage: 15    # 15 memory usage categories\n```\n\n**Important**: Return raw values from your evaluator; OpenEvolve handles binning automatically.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎨 Custom Prompt Templates\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Advanced prompt engineering** with custom templates:\n\n```yaml\nprompt:\n  template_dir: \"custom_templates\u002F\"\n  use_template_stochasticity: true\n  template_variations:\n    greeting:\n      - \"Let's enhance this code:\"\n      - \"Time to optimize:\"\n      - \"Improving the algorithm:\"\n    improvement_suggestion:\n      - \"Here's how we could improve this code:\"\n      - \"I suggest the following improvements:\"\n      - \"We can enhance this code by:\"\n```\n\n**How it works:** Place `{greeting}` or `{improvement_suggestion}` placeholders in your templates, and OpenEvolve will randomly choose from the variations for each generation, adding diversity to prompts.\n\nSee [prompt examples](examples\u002Fllm_prompt_optimization\u002Ftemplates\u002F) for complete template customization.\n\n\u003C\u002Fdetails>\n\n## Crafting Effective System Messages\n\n**System messages are the 
secret to successful evolution.** They guide the LLM's understanding of your domain, constraints, and optimization goals. A well-crafted system message can be the difference between random mutations and targeted improvements.\n\n### Why System Messages Matter\n\nThe system message in your config.yaml is arguably the most important component for evolution success:\n\n- **Domain Expertise**: Provides LLM with specific knowledge about your problem space\n- **Constraint Awareness**: Defines what can and cannot be changed during evolution\n- **Optimization Focus**: Guides the LLM toward meaningful improvements\n- **Error Prevention**: Helps avoid common pitfalls and compilation errors\n\n### The Iterative Creation Process\n\nBased on successful OpenEvolve implementations, system messages are best created through iteration:\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔄 Step-by-Step Process\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Phase 1: Initial Draft**\n\n1. Start with a basic system message describing your goal\n2. Run 20-50 evolution iterations to observe behavior\n3. Note where the system gets \"stuck\" or makes poor choices\n\n**Phase 2: Refinement**\n\n4. Add specific guidance based on observed issues\n5. Include domain-specific terminology and concepts\n6. Define clear constraints and optimization targets\n7. Run another batch of iterations\n\n**Phase 3: Specialization**\n\n8. Add detailed examples of good vs bad approaches\n9. Include specific library\u002Fframework guidance\n10. Add error avoidance patterns you've observed\n11. Fine-tune based on artifact feedback\n\n**Phase 4: Optimization**\n\n12. Consider using OpenEvolve itself to optimize your prompt\n13. 
Measure improvements using combined score metrics\n\n\u003C\u002Fdetails>\n\n### Examples by Complexity\n\n#### **Simple: General Optimization**\n```yaml\nprompt:\n  system_message: |\n    You are an expert programmer specializing in optimization algorithms.\n    Your task is to improve a function minimization algorithm to find the\n    global minimum reliably, escaping local minima that might trap simple algorithms.\n```\n\n#### **Intermediate: Domain-Specific Guidance**\n```yaml\nprompt:\n  system_message: |\n    You are an expert prompt engineer. Your task is to revise prompts for LLMs.\n\n    Your improvements should:\n    * Clarify vague instructions and eliminate ambiguity\n    * Strengthen alignment between prompt and desired task outcome\n    * Improve robustness against edge cases\n    * Include formatting instructions and examples where helpful\n    * Avoid unnecessary verbosity\n\n    Return only the improved prompt text without explanations.\n```\n\n#### ⚡ **Advanced: Hardware-Specific Optimization**\n```yaml\nprompt:\n  system_message: |\n    You are an expert Metal GPU programmer specializing in custom attention\n    kernels for Apple Silicon.\n\n    # TARGET: Optimize Metal Kernel for Grouped Query Attention (GQA)\n    # HARDWARE: Apple M-series GPUs with unified memory architecture\n    # GOAL: 5-15% performance improvement\n\n    # OPTIMIZATION OPPORTUNITIES:\n    **1. Memory Access Pattern Optimization:**\n    - Coalesced access patterns for Apple Silicon\n    - Vectorized loading using SIMD\n    - Pre-compute frequently used indices\n\n    **2. 
Algorithm Fusion:**\n    - Combine max finding with score computation\n    - Reduce number of passes through data\n\n    # CONSTRAINTS - CRITICAL SAFETY RULES:\n    **MUST NOT CHANGE:**\n    ❌ Kernel function signature\n    ❌ Template parameter names or types\n    ❌ Overall algorithm correctness\n\n    **ALLOWED TO OPTIMIZE:**\n    ✅ Memory access patterns and indexing\n    ✅ Computation order and efficiency\n    ✅ Vectorization and SIMD utilization\n    ✅ Apple Silicon specific optimizations\n```\n\n### Best Practices\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎨 Prompt Engineering Patterns\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Structure Your Message:** Start with role definition → Define task\u002Fcontext → List optimization opportunities → Set constraints → Success criteria\n\n**Use Specific Examples:**\n```yaml\n# Good: \"Focus on reducing memory allocations. Example: Replace `new Vector()` with pre-allocated arrays.\"\n# Avoid: \"Make the code faster\"\n```\n\n**Include Domain Knowledge:**\n```yaml\n# Good: \"For GPU kernels: 1) Memory coalescing 2) Occupancy 3) Shared memory usage\"\n# Avoid: \"Optimize the algorithm\"\n```\n\n**Set Clear Boundaries:**\n```yaml\nsystem_message: |\n  MUST NOT CHANGE: ❌ Function signatures ❌ Algorithm correctness ❌ External API\n  ALLOWED: ✅ Internal implementation ✅ Data structures ✅ Performance optimizations\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔬 Advanced Techniques\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Artifact-Driven Iteration:** Enable artifacts in config → Include common error patterns in system message → Add guidance based on stderr\u002Fwarning patterns\n\n**Multi-Phase Evolution:** Start broad (\"Explore different algorithmic approaches\"), then focus (\"Given successful simulated annealing, focus on parameter tuning\")\n\n**Template Stochasticity:** See the [Configuration section](#configuration) for complete template variation examples.\n\n\u003C\u002Fdetails>\n\n### Meta-Evolution: 
Using OpenEvolve to Optimize Prompts\n\n**You can use OpenEvolve to evolve your system messages themselves!** This powerful technique lets you optimize prompts for better LLM performance automatically.\n\nSee the [LLM Prompt Optimization example](examples\u002Fllm_prompt_optimization\u002F) for a complete implementation, including the HotpotQA case study with +23% accuracy improvement.\n\n### Common Pitfalls to Avoid\n\n- **Too Vague**: \"Make the code better\" → Specify exactly what \"better\" means\n- **Too Restrictive**: Over-constraining can prevent useful optimizations\n- **Missing Context**: Include relevant domain knowledge and terminology\n- **No Examples**: Concrete examples guide the LLM better than abstract descriptions\n- **Ignoring Artifacts**: Failing to refine prompts based on error feedback\n\n## Artifacts & Debugging\n\n**Artifacts side-channel** provides rich feedback to accelerate evolution:\n\n```python\n# Evaluator can return execution context\nfrom openevolve.evaluation_result import EvaluationResult\n\nreturn EvaluationResult(\n    metrics={\"performance\": 0.85, \"correctness\": 1.0},\n    artifacts={\n        \"stderr\": \"Warning: suboptimal memory access pattern\",\n        \"profiling_data\": {...},\n        \"llm_feedback\": \"Code is correct but could use better variable names\",\n        \"build_warnings\": [\"unused variable x\"]\n    }\n)\n```\n\n**Next generation prompt automatically includes:**\n\n```markdown\n## Previous Execution Feedback\n⚠️ Warning: suboptimal memory access pattern\n💡 LLM Feedback: Code is correct but could use better variable names\n🔧 Build Warnings: unused variable x\n```\n\nThis creates a **feedback loop** where each generation learns from previous mistakes!\n\n## Visualization\n\n**Real-time evolution tracking** with interactive web interface:\n\n```bash\n# Install visualization dependencies\npip install -r scripts\u002Frequirements.txt\n\n# Launch interactive visualizer\npython scripts\u002Fvisualizer.py\n\n# Or 
visualize specific checkpoint\npython scripts\u002Fvisualizer.py --path examples\u002Ffunction_minimization\u002Fopenevolve_output\u002Fcheckpoints\u002Fcheckpoint_100\u002F\n```\n\n**Features:**\n\n- 🌳 **Evolution tree** with parent-child relationships\n- 📈 **Performance tracking** across generations\n- 🔍 **Code diff viewer** showing mutations\n- 📊 **MAP-Elites grid** visualization\n- 🎯 **Multi-metric analysis** with custom dimensions\n\n![OpenEvolve Visualizer](openevolve-visualizer.png)\n\n## Roadmap\n\n### **🔥 Upcoming Features**\n\n- [ ] **Multi-Modal Evolution**: Images, audio, and text simultaneously\n- [ ] **Federated Learning**: Distributed evolution across multiple machines  \n- [ ] **AutoML Integration**: Hyperparameter and architecture evolution\n- [ ] **Benchmark Suite**: Standardized evaluation across domains\n\n### **🌟 Research Directions**\n\n- [ ] **Self-Modifying Prompts**: Evolution modifies its own prompting strategy\n- [ ] **Cross-Language Evolution**: Python → Rust → C++ optimization chains\n- [ ] **Neurosymbolic Reasoning**: Combine neural and symbolic approaches\n- [ ] **Human-AI Collaboration**: Interactive evolution with human feedback\n\nWant to contribute? 
Check out our [roadmap discussions](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fdiscussions\u002Fcategories\u002Froadmap)!\n\n## FAQ\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>💰 How much does it cost to run?\u003C\u002Fb>\u003C\u002Fsummary>\n\nSee the [Cost Estimation](#cost-estimation) section in Installation & Setup for detailed pricing information and cost-saving tips.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🆚 How does this compare to manual optimization?\u003C\u002Fb>\u003C\u002Fsummary>\n\n| Aspect | Manual | OpenEvolve |\n|--------|--------|------------|\n| **Initial Learning** | Weeks to understand domain | Minutes to start |\n| **Solution Quality** | Depends on expertise | Consistently explores novel approaches |\n| **Time Investment** | Days-weeks per optimization | Hours for complete evolution |\n| **Reproducibility** | Hard to replicate exact process | Perfect reproduction with seeds |\n| **Scaling** | Doesn't scale beyond human capacity | Parallel evolution across islands |\n\n**OpenEvolve shines** when you need to explore large solution spaces or optimize for multiple objectives simultaneously.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔧 Can I use my own LLM?\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Yes!** OpenEvolve supports any OpenAI-compatible API:\n\n- **Commercial**: OpenAI, Google, Cohere\n- **Local**: Ollama, vLLM, LM Studio, text-generation-webui\n- **Advanced**: OptiLLM for routing and test-time compute\n\nJust set the `api_base` in your config to point to your endpoint.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🚨 What if evolution gets stuck?\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Built-in mechanisms prevent stagnation:**\n\n- **Island migration**: Fresh genes from other populations\n- **Temperature control**: Exploration vs exploitation balance\n- **Diversity maintenance**: MAP-Elites prevents convergence\n- **Artifact feedback**: 
Error messages guide improvements\n- **Template stochasticity**: Randomized prompts break patterns\n\n**Manual interventions:**\n- Increase `num_diverse_programs` for more exploration\n- Add custom feature dimensions to diversify search\n- Use template variations to randomize prompts\n- Adjust migration intervals for more cross-pollination\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>📈 How do I measure success?\u003C\u002Fb>\u003C\u002Fsummary>\n\n**Multiple success metrics:**\n\n1. **Primary Metric**: Your evaluator's `combined_score` or metric average\n2. **Convergence**: Best score improvement over time\n3. **Diversity**: MAP-Elites grid coverage\n4. **Efficiency**: Iterations to reach target performance\n5. **Robustness**: Performance across different test cases\n\n**Use the visualizer** to track all metrics in real-time and identify when evolution has converged.\n\n\u003C\u002Fdetails>\n\n### **Contributors**\n\nThanks to all our amazing contributors who make OpenEvolve possible!\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_f5d5e74c902f.png\" \u002F>\n\u003C\u002Fa>\n\n### **Contributing**\n\nWe welcome contributions! Here's how to get started:\n\n1. 🍴 **Fork** the repository\n2. 🌿 **Create** your feature branch: `git checkout -b feat-amazing-feature`\n3. ✨ **Add** your changes and tests\n4. ✅ **Test** everything: `python -m unittest discover tests`\n5. 📝 **Commit** with a clear message\n6. 
🚀 **Push** and create a Pull Request\n\n**New to open source?** Check out our [Contributing Guide](CONTRIBUTING.md) and look for [`good-first-issue`](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) labels!\n\n### **Academic & Research**\n\n**Articles & Blog Posts About OpenEvolve**:\n- [Towards Open Evolutionary Agents](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fdriaforall\u002Ftowards-open-evolutionary-agents) - Evolution of coding agents and the open-source movement\n- [OpenEvolve: GPU Kernel Discovery](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fcodelion\u002Fopenevolve-gpu-kernel-discovery) - Automated discovery of optimized GPU kernels\n- [OpenEvolve: Evolutionary Coding with LLMs](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fcodelion\u002Fopenevolve) - Introduction to evolutionary algorithm discovery using large language models\n\n## Citation\n\nIf you use OpenEvolve in your research, please cite:\n\n```bibtex\n@software{openevolve,\n  title = {OpenEvolve: an open-source evolutionary coding agent},\n  author = {Asankhaya Sharma},\n  year = {2025},\n  publisher = {GitHub},\n  url = {https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve}\n}\n```\n---\n\n\u003Cdiv align=\"center\">\n\n### **🚀 Ready to evolve your code?**\n\n**Maintained by the OpenEvolve community**\n\n*If OpenEvolve helps you discover breakthrough algorithms, please consider starring this repository.*\n\n\u003C\u002Fdiv>\n","# OpenEvolve\n\n\u003Cdiv align=\"center\">\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_4a12647d6927.png\" alt=\"OpenEvolve Logo\" width=\"400\">\n\n**🧬 最先进的开源进化编码智能体**\n\n*将您的大语言模型 (LLMs) 转变为能够发现突破性算法的自主代码优化器*\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fstargazers\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Falgorithmicsuperintelligence\u002Fopenevolve?style=social\" alt=\"GitHub stars\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenevolve\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopenevolve\" alt=\"PyPI version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenevolve\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fopenevolve\" alt=\"PyPI downloads\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Falgorithmicsuperintelligence\u002Fopenevolve\" alt=\"License\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n[🚀 **快速开始**](#quick-start) • [**示例**](#examples-gallery) • [**系统消息**](#crafting-effective-system-messages) • [**讨论**](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fdiscussions)\n\n*从随机搜索到最先进 (State-of-the-art)：实时见证您的代码进化*\n\n\u003C\u002Fdiv>\n\n---\n\n## 为什么选择 OpenEvolve？\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd width=\"33%\">\n\n### **自主发现**\nLLM 不仅仅是优化——它们能**发现**全新的算法。无需人工指导。\n\n\u003C\u002Ftd>\n\u003Ctd width=\"33%\">\n\n### **已验证的成果**\n在真实硬件上实现 **2-3 倍加速**。**最先进**的圆打包问题。**突破性**优化。\n\n\u003C\u002Ftd>\n\u003Ctd width=\"33%\">\n\n### **研究级标准**\n完全可复现，内置广泛的评估流程和科学严谨性。\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n**OpenEvolve 与手动优化对比：**\n\n| Aspect | Manual Optimization | OpenEvolve |\n|--------|-------------------|------------|\n| **Time to Solution** | Days to weeks | Hours |\n| **Exploration Breadth** | Limited by human creativity | Unlimited LLM creativity |\n| **Reproducibility** | Hard to replicate | Fully deterministic |\n| **Multi-objective** | Complex tradeoffs | Automatic 帕累托 (Pareto) 优化 |\n| **Scaling** | Doesn't scale | Parallel evolution across 
islands |\n\n## 已验证的成果\n\n\u003Cdiv align=\"center\">\n\n| **领域** | **成就** | **示例** |\n|---------------|-------------------|----------------|\n| **GPU 优化** | Hardware-optimized kernel discovery | [MLX Metal Kernels](examples\u002Fmlx_metal_kernel_opt\u002F) |\n| **数学** | State-of-the-art circle packing (n=26) | [Circle Packing](examples\u002Fcircle_packing\u002F) |\n| **算法设计** | Adaptive sorting algorithms | [Rust Adaptive Sort](examples\u002Frust_adaptive_sort\u002F) |\n| **科学计算** | Automated filter design | [Signal Processing](examples\u002Fsignal_processing\u002F) |\n| **多语言** | Python, Rust, R, Metal shaders | [All Examples](examples\u002F) |\n\n\u003C\u002Fdiv>\n\n## 🚀 快速开始\n\n从零开始进化代码仅需 **30 秒**：\n\n```bash\n# Install OpenEvolve\npip install openevolve\n\n# The example uses Google Gemini by default (free tier available)\n# Get your API key from: https:\u002F\u002Faistudio.google.com\u002Fapikey\nexport OPENAI_API_KEY=\"your-gemini-api-key\"  # Yes, use OPENAI_API_KEY env var\n\n# Run your first evolution!\npython openevolve-run.py examples\u002Ffunction_minimization\u002Finitial_program.py \\\n  examples\u002Ffunction_minimization\u002Fevaluator.py \\\n  --config examples\u002Ffunction_minimization\u002Fconfig.yaml \\\n  --iterations 50\n```\n\n**注意：** 示例配置默认使用 Gemini，但您可以通过修改 `config.yaml` 使用任何兼容 OpenAI 的服务商。查看 [configs](configs\u002F) 以获取完整配置选项。\n\n### **库用法**\n\nOpenEvolve 可作为库使用，无需任何外部文件：\n\n```python\nfrom openevolve import run_evolution, evolve_function\n\n# Evolution with inline code (no files needed!)\nresult = run_evolution(\n    initial_program='''\n    def fibonacci(n):\n        if n \u003C= 1: return n\n        return fibonacci(n-1) + fibonacci(n-2)\n    ''',\n    evaluator=lambda path: {\"score\": benchmark_fib(path)},\n    iterations=100\n)\n\n# Evolve Python functions directly\ndef bubble_sort(arr):\n    for i in range(len(arr)):\n        for j in range(len(arr)-1):\n            if arr[j] > arr[j+1]:\n                arr[j], arr[j+1] = 
arr[j+1], arr[j] \n    return arr\n\nresult = evolve_function(\n    bubble_sort,\n    test_cases=[([3,1,2], [1,2,3]), ([5,2,8], [2,5,8])],\n    iterations=50\n)\nprint(f\"Evolved sorting algorithm: {result.best_code}\")\n```\n\n**偏好 Docker？** 请参阅 [安装与设置](#installation--setup) 部分了解 Docker 选项。\n\n## 实际演示\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>圆打包：从随机到最先进\u003C\u002Fb>\u003C\u002Fsummary>\n\n**实时观看 OpenEvolve 发现最优圆打包方案：**\n\n| Generation 1 | Generation 190 | Generation 460 (Final) |\n|--------------|----------------|----------------------|\n| ![Initial](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_005e43178ad0.png) | ![Progress](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_c1931dd1f1c8.png) | ![Final](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_428861e6221c.png) |\n| Random placement | Learning structure | **最先进结果** |\n\n**结果**：匹配 n=26 圆打包问题的已发布基准测试。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>GPU 内核进化\u003C\u002Fb>\u003C\u002Fsummary>\n\n**进化前（基线）**：\n```metal\n\u002F\u002F Standard attention implementation\nkernel void attention_baseline(\u002F* ... *\u002F) {\n    \u002F\u002F Generic matrix multiplication\n    float sum = 0.0;\n    for (int i = 0; i \u003C seq_len; i++) {\n        sum += query[tid] * key[i];\n    }\n}\n```\n\n**进化后（快 2.8 倍）**：\n```metal\n\u002F\u002F OpenEvolve discovered optimization\nkernel void attention_evolved(\u002F* ... *\u002F) {\n    \u002F\u002F Hardware-aware tiling + unified memory optimization\n    threadgroup float shared_mem[256];\n    \u002F\u002F ... 
evolved algorithm exploiting Apple Silicon architecture\n}\n```\n\n**性能影响**：在 Apple M1 Pro 上实现 2.8 倍加速，同时保持数值精度。\n\n\u003C\u002Fdetails>\n\n## OpenEvolve 工作原理\n\nOpenEvolve 实现了一个复杂的**进化编码流水线 (evolutionary coding pipeline)**，远超简单的优化：\n\n![OpenEvolve Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_e6b912937afa.png)\n\n### **核心创新**：MAP-Elites + LLMs\n\n- **质量多样性进化**：在特征维度上维持多样化的种群\n- **基于岛屿的架构**：多个种群防止过早收敛\n- **LLM 集成 (Ensemble)**：多个模型配合智能回退策略\n- **工件侧信道 (Artifact Side-Channel)**：错误反馈改进后续世代\n\n### **高级功能**\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>科学可复现性\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **全面种子设置**：每个组件（大语言模型 LLM、数据库、评估）均已进行种子设置\n- **默认种子=42**：开箱即用即可立即获得可复现的结果\n- **确定性进化**：跨机器运行可实现精确复现\n- **组件隔离**：基于哈希的隔离防止交叉污染\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>高级大语言模型（LLM）集成\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **通用 API**：适用于 OpenAI、Google、本地模型及代理\n- **智能集成（Ensembles）**：带有复杂回退机制的加权组合\n- **测试时计算（Test-Time Compute）**：通过代理系统增强推理能力（见 [OptiLLM 设置](#llm-provider-setup)）\n- **插件生态系统**：支持高级推理插件\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>进化算法创新\u003C\u002Fb>\u003C\u002Fsummary>\n\n- **双重选择**：针对性能与灵感使用不同的程序\n- **自适应特征维度**：自定义质量多样性指标\n- **迁移模式**：受控基因流的环状拓扑结构\n- **多策略采样**：精英、多样性和探索性选择\n\n\u003C\u002Fdetails>\n\n## 适用场景\n\n| **使用场景** | **OpenEvolve 为何表现出色** |\n|--------------|---------------------------|\n| **性能优化** | 发现人类遗漏的硬件特定优化 |\n| **算法发现** | 找到经典问题的新颖方法 |\n| **科学计算** | 自动化繁琐的手动调优过程 |\n| **编程竞赛** | 生成多种解决方案策略 |\n| **多目标问题** | 跨维度的帕累托最优（Pareto-optimal）解 |\n\n## 🛠 安装与设置\n\n### 依赖要求\n- **Python**：3.10+ \n- **LLM 访问**：任何兼容 OpenAI 的 API\n- **可选**：Docker 用于容器化运行\n\n### 安装选项\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>📦 PyPI（推荐）\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\npip install openevolve\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔧 开发环境安装\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\ngit clone 
https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve.git\ncd openevolve\npip install -e \".[dev]\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🐳 Docker\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\n# 拉取镜像\ndocker pull ghcr.io\u002Falgorithmicsuperintelligence\u002Fopenevolve:latest\n\n# 运行示例\ndocker run --rm -v $(pwd):\u002Fapp ghcr.io\u002Falgorithmicsuperintelligence\u002Fopenevolve:latest \\\n  examples\u002Ffunction_minimization\u002Finitial_program.py \\\n  examples\u002Ffunction_minimization\u002Fevaluator.py --iterations 100\n```\n\n\u003C\u002Fdetails>\n\n### 成本估算\n\n**成本取决于您的 LLM 提供商和迭代次数：**\n\n- **o3**：每次迭代约 $0.15-0.60（取决于代码大小）\n- **o3-mini**：每次迭代约 $0.03-0.12（更具成本效益）\n- **Gemini-2.5-Pro**：每次迭代约 $0.08-0.30\n- **Gemini-2.5-Flash**：每次迭代约 $0.01-0.05（最快且最便宜）\n- **本地模型**：设置后几乎免费\n- **OptiLLM**：使用更便宜的模型配合测试时计算以获得更好结果\n\n**节省成本技巧：**\n- 从较少的迭代次数开始（100-200）\n- 使用 o3-mini、Gemini-2.5-Flash 或本地模型进行探索\n- 使用级联评估提前过滤劣质程序\n- 初始配置较小的种群大小\n\n### 大语言模型提供商设置\n\nOpenEvolve 支持**任何兼容 OpenAI 的 API**：\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔥 OpenAI（直连）\u003C\u002Fb>\u003C\u002Fsummary>\n\n```bash\nexport OPENAI_API_KEY=\"sk-...\"\n# 默认使用 OpenAI 端点\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🤖 Google Gemini\u003C\u002Fb>\u003C\u002Fsummary>\n\n```yaml\n# config.yaml\nllm:\n  api_base: \"https:\u002F\u002Fgenerativelanguage.googleapis.com\u002Fv1beta\u002Fopenai\u002F\"\n  model: \"gemini-2.5-pro\"\n```\n\n```bash\nexport OPENAI_API_KEY=\"your-gemini-api-key\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🏠 本地模型（Ollama\u002FvLLM）\u003C\u002Fb>\u003C\u002Fsummary>\n\n```yaml\n# config.yaml\nllm:\n  api_base: \"http:\u002F\u002Flocalhost:11434\u002Fv1\"  # Ollama\n  model: \"codellama:7b\"\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>⚡ OptiLLM（高级）\u003C\u002Fb>\u003C\u002Fsummary>\n\n为了实现最大灵活性，包括速率限制、模型路由和测试时计算：\n\n```bash\n# 安装 OptiLLM\npip install 
optillm\n\n# 启动 OptiLLM 代理\noptillm --port 8000\n\n# 将 OpenEvolve 指向 OptiLLM\nexport OPENAI_API_KEY=\"your-actual-key\"\n```\n\n```yaml\nllm:\n  api_base: \"http:\u002F\u002Flocalhost:8000\u002Fv1\"\n  model: \"moa&readurls-o3\"  # 测试时计算 + Web 访问\n```\n\n\u003C\u002Fdetails>\n\n## 示例画廊\n\n\u003Cdiv align=\"center\">\n\n### **展示项目**\n\n| 项目 | 领域 | 成就 | 演示 |\n|---------|--------|-------------|------|\n| [**函数最小化**](examples\u002Ffunction_minimization\u002F) | 优化 | 随机 → 模拟退火 | [查看结果](examples\u002Ffunction_minimization\u002Fopenevolve_output\u002F) |\n| [**MLX GPU Kernels**](examples\u002Fmlx_metal_kernel_opt\u002F) | 硬件 | Apple Silicon 优化 | [基准测试](examples\u002Fmlx_metal_kernel_opt\u002FREADME.md) |\n| [**Rust 自适应排序**](examples\u002Frust_adaptive_sort\u002F) | 算法 | 数据感知排序 | [代码进化](examples\u002Frust_adaptive_sort\u002F) |\n| [**符号回归**](examples\u002Fsymbolic_regression\u002F) | 科学 | 自动方程发现 | [LLM-SRBench](examples\u002Fsymbolic_regression\u002F) |\n| [**Web 爬虫 + OptiLLM**](examples\u002Fweb_scraper_optillm\u002F) | AI 集成 | 测试时计算优化 | [智能爬取](examples\u002Fweb_scraper_optillm\u002F) |\n\n\u003C\u002Fdiv>\n\n### **快速示例**：函数最小化\n\n**观察 OpenEvolve 从随机搜索进化为复杂的优化算法：**\n\n```python\n# Initial Program (Random Search)\ndef minimize_function(func, bounds, max_evals=1000):\n    best_x, best_val = None, float('inf')\n    for _ in range(max_evals):\n        x = random_point_in_bounds(bounds)\n        val = func(x)\n        if val \u003C best_val:\n            best_x, best_val = x, val\n    return best_x, best_val\n```\n\n**进化过程**\n\n```python\n# Evolved Program (Simulated Annealing + Adaptive Cooling)\ndef minimize_function(func, bounds, max_evals=1000):\n    x = random_point_in_bounds(bounds)\n    temp = adaptive_initial_temperature(func, bounds)\n    \n    for i in range(max_evals):\n        neighbor = generate_neighbor(x, temp, bounds)\n        delta = func(neighbor) - func(x)\n        \n        if delta \u003C 0 or random.random() \u003C exp(-delta\u002Ftemp):\n            x = 
neighbor\n            \n        temp *= adaptive_cooling_rate(i, max_evals)  # Dynamic cooling\n    \n    return x, func(x)\n```\n\n**性能**：收敛速度提升 100 倍！\n\n### **高级示例**\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>提示词进化\u003C\u002Fb>\u003C\u002Fsummary>\n\n**进化提示词而非代码**以获得更好的 LLM 性能。查看 [LLM 提示词优化示例](examples\u002Fllm_prompt_optimization\u002F) 以获取完整的 HotpotQA 案例研究，准确率提升 +23%。\n\n[完整示例](examples\u002Fllm_prompt_optimization\u002F)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🏁 编程竞赛\u003C\u002Fb>\u003C\u002Fsummary>\n\n**自动生成解决方案**用于编程竞赛：\n\n```python\n# Problem: Find maximum subarray sum\n# OpenEvolve discovers multiple approaches:\n\n# Evolution Path 1: Brute Force → Kadane's Algorithm\n# 进化路径 2：分治法 → 优化的 Kadane 算法\n# 进化路径 3：动态规划 → 空间优化 DP\n```\n\n[在线判题系统集成](examples\u002Fonline_judge_programming\u002F)\n\n\u003C\u002Fdetails>\n\n## 配置\n\nOpenEvolve 为高级用户提供了广泛的配置选项：\n\n```yaml\n# 高级配置示例\nmax_iterations: 1000\nrandom_seed: 42  # 完全可复现\n\nllm:\n  # 集成配置\n  models:\n    - name: \"gemini-2.5-pro\"\n      weight: 0.6\n    - name: \"gemini-2.5-flash\"\n      weight: 0.4\n  temperature: 0.7\n\ndatabase:\n  # MAP-Elites 质量多样性算法\n  population_size: 500\n  num_islands: 5  # 并行进化\n  migration_interval: 20\n  feature_dimensions: [\"complexity\", \"diversity\", \"performance\"]\n\nevaluator:\n  enable_artifacts: true      # 向 LLM（大语言模型）提供错误反馈\n  cascade_evaluation: true    # 多阶段测试\n  use_llm_feedback: true      # AI 代码质量评估\n\nprompt:\n  # 复杂的灵感系统\n  num_top_programs: 3         # 表现最佳的程序\n  num_diverse_programs: 2     # 创造性探索\n  include_artifacts: true     # 执行反馈\n  \n  # 自定义模板\n  template_dir: \"custom_prompts\u002F\"\n  use_template_stochasticity: true  # 随机化提示词\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎯 特征工程\u003C\u002Fb>\u003C\u002Fsummary>\n\n**控制程序在质量多样性网格中的组织方式：**\n\n```yaml\ndatabase:\n  feature_dimensions: \n    - \"complexity\"      # 内置：代码长度\n    - \"diversity\"       # 内置：结构多样性\n    - \"performance\"     # 自定义：来自你的评估器\n    - 
\"memory_usage\"    # 自定义：来自你的评估器\n    \n  feature_bins:\n    complexity: 10      # 10 个复杂度层级\n    performance: 20     # 20 个性能区间\n    memory_usage: 15    # 15 个内存使用类别\n```\n\n**重要**：从评估器返回原始值，OpenEvolve 会自动处理分箱。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎨 自定义提示词模板\u003C\u002Fb>\u003C\u002Fsummary>\n\n**使用自定义模板的高级提示词工程：**\n\n```yaml\nprompt:\n  template_dir: \"custom_templates\u002F\"\n  use_template_stochasticity: true\n  template_variations:\n    greeting:\n      - \"Let's enhance this code:\"\n      - \"Time to optimize:\"\n      - \"Improving the algorithm:\"\n    improvement_suggestion:\n      - \"Here's how we could improve this code:\"\n      - \"I suggest the following improvements:\"\n      - \"We can enhance this code by:\"\n```\n\n**工作原理**：在模板中放置 `{greeting}` 或 `{improvement_suggestion}` 占位符，OpenEvolve 将在每一代中随机选择变体，为提示词增加多样性。\n\n查看 [提示词示例](examples\u002Fllm_prompt_optimization\u002Ftemplates\u002F) 以获取完整的模板定制说明。\n\n\u003C\u002Fdetails>\n\n## 编写有效的系统消息\n\n**系统消息是成功进化的关键。** 它们指导 LLM（大语言模型）理解你的领域、约束和优化目标。精心设计的系统消息可以决定是产生随机变异还是针对性改进。\n\n### 为什么系统消息很重要\n\n你的 `config.yaml` 中的系统消息可以说是进化成功最重要的组件：\n\n- **领域专业知识**：为 LLM 提供关于问题空间的具体知识\n- **约束感知**：定义进化过程中可以更改和不可更改的内容\n- **优化重点**：引导 LLM 朝向有意义的改进\n- **错误预防**：帮助避免常见陷阱和编译错误\n\n### 迭代创建过程\n\n基于成功的 OpenEvolve 实现，系统消息最好通过迭代创建：\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔄 逐步流程\u003C\u002Fb>\u003C\u002Fsummary>\n\n**第一阶段：初始草稿**\n\n1. 从一个描述你目标的基本系统消息开始\n2. 运行 20-50 次进化迭代以观察行为\n3. 记录系统在哪里“卡住”或做出糟糕的选择\n\n**第二阶段：细化**\n\n4. 根据观察到的问题添加具体指导\n5. 包含特定领域的术语和概念\n6. 定义清晰的约束和优化目标\n7. 运行另一批迭代\n\n**第三阶段：专业化**\n\n8. 添加好坏方法的详细示例\n9. 包含特定的库\u002F框架指导\n10. 添加你观察到的错误避免模式\n11. 根据执行产物反馈进行微调\n\n**第四阶段：优化**\n\n12. 考虑使用 OpenEvolve 本身来优化你的提示词\n13. 
使用组合分数指标衡量改进\n\n\u003C\u002Fdetails>\n\n### 按复杂度分类的示例\n\n#### **简单：通用优化**\n```yaml\nprompt:\n  system_message: |\n    You are an expert programmer specializing in optimization algorithms.\n    Your task is to improve a function minimization algorithm to find the\n    global minimum reliably, escaping local minima that might trap simple algorithms.\n```\n\n#### **中级：特定领域指导**\n```yaml\nprompt:\n  system_message: |\n    You are an expert prompt engineer. Your task is to revise prompts for LLMs.\n\n    Your improvements should:\n    * Clarify vague instructions and eliminate ambiguity\n    * Strengthen alignment between prompt and desired task outcome\n    * Improve robustness against edge cases\n    * Include formatting instructions and examples where helpful\n    * Avoid unnecessary verbosity\n\n    Return only the improved prompt text without explanations.\n```\n\n#### ⚡ **高级：硬件特定优化**\n```yaml\nprompt:\n  system_message: |\n    You are an expert Metal GPU programmer specializing in custom attention\n    kernels for Apple Silicon.\n\n    # TARGET: Optimize Metal Kernel for Grouped Query Attention (GQA)\n    # HARDWARE: Apple M-series GPUs with unified memory architecture\n    # GOAL: 5-15% performance improvement\n\n    # OPTIMIZATION OPPORTUNITIES:\n    **1. Memory Access Pattern Optimization:**\n    - Coalesced access patterns for Apple Silicon\n    - Vectorized loading using SIMD\n    - Pre-compute frequently used indices\n\n    **2. 
Algorithm Fusion:**\n    - Combine max finding with score computation\n    - Reduce number of passes through data\n\n    # CONSTRAINTS - CRITICAL SAFETY RULES:\n    **MUST NOT CHANGE:**\n    ❌ Kernel function signature\n    ❌ Template parameter names or types\n    ❌ Overall algorithm correctness\n\n    **ALLOWED TO OPTIMIZE:**\n    ✅ Memory access patterns and indexing\n    ✅ Computation order and efficiency\n    ✅ Vectorization and SIMD utilization\n    ✅ Apple Silicon specific optimizations\n```\n\n### 最佳实践\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🎨 提示词工程模式\u003C\u002Fb>\u003C\u002Fsummary>\n\n**结构化你的消息**：从角色定义开始 → 定义任务\u002F上下文 → 列出优化机会 → 设置约束 → 成功标准\n\n**使用具体示例**：\n```yaml\n# Good: \"Focus on reducing memory allocations. Example: Replace `new Vector()` with pre-allocated arrays.\"\n# Avoid: \"Make the code faster\"\n```\n\n**包含领域知识**：\n```yaml\n# Good: \"For GPU kernels: 1) Memory coalescing 2) Occupancy 3) Shared memory usage\"\n# 避免：“优化算法”\n```\n\n**Set Clear Boundaries:**\n```yaml\nsystem_message: |\n  MUST NOT CHANGE: ❌ Function signatures ❌ Algorithm correctness ❌ External API\n  ALLOWED: ✅ Internal implementation ✅ Data structures ✅ Performance optimizations\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔬 高级技巧\u003C\u002Fb>\u003C\u002Fsummary>\n\n**工件 (Artifact) 驱动迭代：** 在配置中启用工件 → 在系统消息中包含常见错误模式 → 基于 stderr\u002F警告模式添加指导\n\n**多阶段进化：** 从广泛开始（\"探索不同的算法方法\"），然后聚焦（\"鉴于成功的模拟退火，专注于参数调整\"）\n\n**模板随机性：** 参见 [配置部分](#configuration) 获取完整的模板变体示例。\n\n\u003C\u002Fdetails>\n\n### 元进化：使用 OpenEvolve 优化提示词\n\n**你可以使用 OpenEvolve 来进化你自己的系统消息！** 这种强大的技术让你能够自动优化提示词以获得更好的大语言模型 (LLM) 性能。\n\n参见 [LLM 提示词优化示例](examples\u002Fllm_prompt_optimization\u002F) 以获取完整实现，包括 HotpotQA 案例研究，准确率提升了 +23%。\n\n### 需要避免的常见陷阱\n\n- **过于模糊**：\"让代码更好\" → 具体说明\"更好\"意味着什么\n- **限制过多**：过度约束可能会阻止有用的优化\n- **缺少上下文**：包含相关的领域知识和术语\n- **缺乏示例**：具体的示例比抽象的描述更能引导 LLM\n- **忽略工件**：未根据错误反馈来优化提示词\n\n## 工件与调试\n\n**工件侧信道 (Artifacts side-channel)** 提供丰富的反馈以加速进化：\n\n```python\n# Evaluator can 
return execution context\nfrom openevolve.evaluation_result import EvaluationResult\n\nreturn EvaluationResult(\n    metrics={\"performance\": 0.85, \"correctness\": 1.0},\n    artifacts={\n        \"stderr\": \"Warning: suboptimal memory access pattern\",\n        \"profiling_data\": {...},\n        \"llm_feedback\": \"Code is correct but could use better variable names\",\n        \"build_warnings\": [\"unused variable x\"]\n    }\n)\n```\n\n**下一代提示词将自动包含：**\n\n```markdown\n## Previous Execution Feedback\n⚠️ Warning: suboptimal memory access pattern\n💡 LLM Feedback: Code is correct but could use better variable names\n🔧 Build Warnings: unused variable x\n```\n\n这创建了一个**反馈循环**，其中每一代都从之前的错误中学习！\n\n## 可视化\n\n**实时进化追踪** 带有交互式 Web 界面：\n\n```bash\n# Install visualization dependencies\npip install -r scripts\u002Frequirements.txt\n\n# Launch interactive visualizer\npython scripts\u002Fvisualizer.py\n\n# Or visualize specific checkpoint\npython scripts\u002Fvisualizer.py --path examples\u002Ffunction_minimization\u002Fopenevolve_output\u002Fcheckpoints\u002Fcheckpoint_100\u002F\n```\n\n**功能：**\n\n- 🌳 **进化树** 显示父子关系\n- 📈 **性能追踪** 跨代展示\n- 🔍 **代码差异查看器** 显示变异\n- 📊 **MAP-Elites (网格映射精英算法) 网格** 可视化\n- 🎯 **多指标分析** 带自定义维度\n\n![OpenEvolve Visualizer](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_c9f2ed8125bd.png)\n\n## 路线图\n\n### **🔥 即将推出的功能**\n\n- [ ] **多模态进化**：图像、音频和文本同时进行\n- [ ] **联邦学习**：跨多台机器的分布式进化  \n- [ ] **AutoML 集成**：超参数和架构进化\n- [ ] **基准测试套件**：跨领域的标准化评估\n\n### **🌟 研究方向**\n\n- [ ] **自修改提示词**：进化修改其自身的提示策略\n- [ ] **跨语言进化**：Python → Rust → C++ 优化链\n- [ ] **神经符号推理**：结合神经和符号方法\n- [ ] **人机协作**：带有人类反馈的交互式进化\n\n想贡献吗？查看我们的 [路线图讨论](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fdiscussions\u002Fcategories\u002Froadmap)!\n\n## 常见问题\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>💰 运行成本是多少？\u003C\u002Fb>\u003C\u002Fsummary>\n\n参见安装与设置部分的 
[成本估算](#cost-estimation)，了解详细的定价信息和节省成本的技巧。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🆚 与手动优化相比如何？\u003C\u002Fb>\u003C\u002Fsummary>\n\n| 方面 | 手动 | OpenEvolve |\n|--------|--------|------------|\n| **初始学习** | 数周才能理解领域 | 几分钟即可开始 |\n| **解决方案质量** | 取决于专业知识 | 持续探索新颖方法 |\n| **时间投入** | 每次优化需数天至数周 | 完整进化仅需数小时 |\n| **可复现性** | 难以完全复现过程 | 通过种子完美复现 |\n| **扩展性** | 无法超越人类能力范围扩展 | 跨岛屿并行进化 |\n\n**OpenEvolve 表现出色** 当你需要探索大型解空间或同时优化多个目标时。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🔧 我能使用自己的 LLM 吗？\u003C\u002Fb>\u003C\u002Fsummary>\n\n**可以！** OpenEvolve 支持任何 OpenAI 兼容 API：\n\n- **商业**：OpenAI, Google, Cohere\n- **本地**：Ollama, vLLM, LM Studio, text-generation-webui\n- **高级**：OptiLLM 用于路由和测试时计算\n\n只需在配置中设置 `api_base` 指向你的端点。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>🚨 如果进化停滞怎么办？\u003C\u002Fb>\u003C\u002Fsummary>\n\n**内置机制可防止停滞：**\n\n- **岛屿迁移 (Island migration)**：来自其他种群的新鲜基因\n- **温度控制 (Temperature control)**：探索与利用的平衡\n- **多样性维护**：MAP-Elites 防止收敛\n- **工件反馈**：错误消息指导改进\n- **模板随机性**：随机化提示词打破模式\n\n**手动干预：**\n- 增加 `num_diverse_programs` 以增加探索\n- 添加自定义特征维度以多样化搜索\n- 使用模板变体随机化提示词\n- 调整迁移间隔以增加交叉融合\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cb>📈 我如何衡量成功？\u003C\u002Fb>\u003C\u002Fsummary>\n\n**多种成功指标：**\n\n1. **主要指标**：评估器的 `combined_score` 或指标平均值\n2. **收敛**：随时间推移的最佳分数提升\n3. **多样性**：MAP-Elites 网格覆盖率\n4. **效率**：达到目标性能的迭代次数\n5. **鲁棒性**：不同测试用例的性能\n\n**使用可视化器** 实时跟踪所有指标并识别进化何时已收敛。\n\n\u003C\u002Fdetails>\n\n### **贡献者**\n\n感谢所有让 OpenEvolve 成为可能的杰出贡献者！\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_readme_f5d5e74c902f.png\" \u002F>\n\u003C\u002Fa>\n\n### **贡献指南**\n\n我们欢迎贡献！以下是开始步骤：\n\n1. 🍴 **Fork（仓库分叉）** 该仓库\n2. 🌿 **创建**你的功能分支：`git checkout -b feat-amazing-feature`\n3. ✨ **添加**你的更改和测试\n4. 
✅ **测试**所有内容：`python -m unittest discover tests`\n5. 📝 **Commit（提交）** 附带清晰的信息\n6. 🚀 **Push（推送）** 并创建 Pull Request（拉取请求）\n\n**开源新手？** 请查看我们的 [贡献指南](CONTRIBUTING.md)，并寻找 [`good-first-issue`](https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) 标签！\n\n### **学术与研究**\n\n**关于 OpenEvolve 的文章与博客**：\n- [迈向开放进化智能体](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fdriaforall\u002Ftowards-open-evolutionary-agents) - 编码智能体的演进与开源运动\n- [OpenEvolve：GPU 内核 (GPU Kernel) 发现](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fcodelion\u002Fopenevolve-gpu-kernel-discovery) - 优化后的 GPU 内核自动化发现\n- [OpenEvolve：使用大语言模型 (LLMs) 进行进化编码](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fcodelion\u002Fopenevolve) - 介绍如何使用大语言模型发现进化算法\n\n## 引用\n\n如果你在研究中使用 OpenEvolve，请引用：\n\n```bibtex\n@software{openevolve,\n  title = {OpenEvolve: an open-source evolutionary coding agent},\n  author = {Asankhaya Sharma},\n  year = {2025},\n  publisher = {GitHub},\n  url = {https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve}\n}\n```\n---\n\n\u003Cdiv align=\"center\">\n\n### **🚀 准备好进化你的代码了吗？**\n\n**由 OpenEvolve 社区维护**\n\n*如果 OpenEvolve 帮助你发现了突破性算法，请考虑为此仓库点亮星标。*\n\n\u003C\u002Fdiv>","# OpenEvolve 快速上手指南\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **Python 版本**：3.10 或更高版本\n- **网络环境**：需能访问 PyPI 及目标 LLM API 服务（如 OpenAI、Google Gemini 等）\n- **LLM 账户**：拥有支持 OpenAI 兼容协议的 LLM API Key（例如 Google Gemini API Key）\n\n> 💡 **提示**：国内用户建议使用国内镜像源加速安装（见下文）。\n\n## 安装步骤\n\n### 1. 通过 PyPI 安装（推荐）\n\n推荐使用国内镜像源以提升下载速度：\n\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple openevolve\n```\n\n### 2. 
其他安装方式\n\n- **开发模式安装**：\n  ```bash\n  git clone https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve.git\n  cd openevolve\n  pip install -e \".[dev]\"\n  ```\n\n- **Docker 容器运行**：\n  ```bash\n  docker pull ghcr.io\u002Falgorithmicsuperintelligence\u002Fopenevolve:latest\n  ```\n\n## 基本使用\n\n### 1. 配置环境变量\n\n设置您的 LLM API Key。本工具默认示例使用 Google Gemini，但兼容任何 OpenAI 兼容接口：\n\n```bash\nexport OPENAI_API_KEY=\"your-gemini-api-key\"\n```\n\n> 获取密钥地址：https:\u002F\u002Faistudio.google.com\u002Fapikey\n\n### 2. 命令行运行示例\n\n运行第一个进化任务（函数最小化），从随机搜索进化到模拟退火算法：\n\n```bash\npython openevolve-run.py examples\u002Ffunction_minimization\u002Finitial_program.py \\\n  examples\u002Ffunction_minimization\u002Fevaluator.py \\\n  --config examples\u002Ffunction_minimization\u002Fconfig.yaml \\\n  --iterations 50\n```\n\n### 3. 作为库使用\n\n您也可以直接在代码中调用 OpenEvolve 进行进化，无需外部文件：\n\n```python\nfrom openevolve import run_evolution, evolve_function\n\n# 内联代码进化（无需文件）\nresult = run_evolution(\n    initial_program='''\n    def fibonacci(n):\n        if n \u003C= 1: return n\n        return fibonacci(n-1) + fibonacci(n-2)\n    ''',\n    evaluator=lambda path: {\"score\": benchmark_fib(path)},\n    iterations=100\n)\n\n# 直接进化 Python 函数\ndef bubble_sort(arr):\n    for i in range(len(arr)):\n        for j in range(len(arr)-1):\n            if arr[j] > arr[j+1]:\n                arr[j], arr[j+1] = arr[j+1], arr[j] \n    return arr\n\nresult = evolve_function(\n    bubble_sort,\n    test_cases=[([3,1,2], [1,2,3]), ([5,2,8], [2,5,8])],\n    iterations=50\n)\nprint(f\"Evolved sorting algorithm: {result.best_code}\")\n```","某视频流媒体公司的后端工程师团队正在负责优化实时图像处理中的高斯模糊滤镜核心函数，目标是显著降低服务器端渲染延迟以提升整体用户体验。\n\n### 没有 openevolve 时\n- 手动重构代码逻辑往往耗时数天，且很难保证能找到全局最优解而非陷入局部最优。\n- 严重依赖工程师个人经验，容易陷入思维定势，从而错过潜在的关键性能提升点。\n- 在不同硬件环境下的性能表现极不稳定，难以验证和优化效果是否真正具有可复现性。\n- 需要在执行速度和图像精度之间反复权衡，人工调整参数效率极低且极易引入新错误。\n\n### 使用 openevolve 后\n- 自动运行进化算法，在短短几小时内就能发现比人工手写更快的代码结构变体。\n- 利用大模型的创造力探索无限代码变体，生成人类未曾设想的创新优化解决方案。\n- 
内置全自动化评估流程，确保生成的优化代码在不同硬件环境下都能稳定可复现。\n- 自动进行多目标帕累托优化，直接输出速度与精度平衡的最佳版本无需人工反复干预。\n\nopenevolve 不仅将原本需要数周的代码调优工作压缩至小时级，更能通过自主进化挖掘出超越人类经验的性能突破，彻底改变传统优化模式。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Falgorithmicsuperintelligence_openevolve_005e4317.png","algorithmicsuperintelligence","Algorithmic SuperIntelligence Labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Falgorithmicsuperintelligence_2f2eccb0.png","",null,"research@algorithmicsuperintelligence.ai","https:\u002F\u002Falgorithmicsuperintelligence.ai","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence",[81,85,89,93,97,101],{"name":82,"color":83,"percentage":84},"Python","#3572A5",83.6,{"name":86,"color":87,"percentage":88},"JavaScript","#f1e05a",12.3,{"name":90,"color":91,"percentage":92},"CSS","#663399",2.9,{"name":94,"color":95,"percentage":96},"HTML","#e34c26",0.8,{"name":98,"color":99,"percentage":100},"Makefile","#427819",0.3,{"name":102,"color":103,"percentage":104},"Dockerfile","#384d54",0.1,5934,943,"2026-04-10T18:31:55","Apache-2.0","未说明",{"notes":111,"python":112,"dependencies":113},"必须配置 LLM API Key（支持 OpenAI、Google Gemini 或本地模型如 Ollama\u002FvLLM）；支持 Docker 容器化运行；运行成本取决于迭代次数及所选 LLM 提供商；可结合 OptiLLM 优化推理成本与性能。","3.10+",[109],[13,14,35],[116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,64],"alphacode","coding-agent","deepmind","deepmind-lab","discovery","distributed-evolutionary-algorithms","evolutionary-algorithms","evolutionary-computation","genetic-algorithm","genetic-algorithms","iterative-methods","iterative-refinement","llm-engineering","llm-ensemble","llm-inference","optimize","alpha-evolve","alphaevolve",8,"2026-03-27T02:49:30.150509","2026-04-11T18:29:36.921425",[138,143,148,153,158,163],{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},2534,"如何在 OpenEvolve 中复现圆堆积（circle packing）问题的最优解？","最佳程序的信息存储在 `best_program_info.json` 文件中，其中包含当时的输出指标（如 sum_radii）。你可以运行 `python demo\u002Fget.py` 来获取实际的构建结果。注意，由于随机性，不同次运行的半径总和可能略有差异（例如 
2.626 或 2.635），具体取决于离线运行的结果。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F75",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},2535,"多岛屿模式下，为什么只有 Island 0 的代际数在增加？","这通常与 `parallel_evaluations` 配置设置有关。该问题已在版本 0.2.18 的新 PyPI 发布中修复。请确保升级到最新版本并检查并行评估的配置是否正确。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F289",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},2536,"运行示例（如 function_minimization）时出现代码解析或执行错误怎么办？","这通常是因为模型返回的代码不是有效的 Python，导致评估阶段解析失败（如 IndentationError 或 NameError）。建议尝试更新提示词（prompt）或使用能更好遵循指令的模型模式来提高生成代码的质量。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F19",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},2537,"LLM 提示词优化示例中，多个系统消息文件（如 `evaluator_system_message.txt`）如何区分和使用？","需要被重写的主提示词（`target_prompt`）实际上是作为 `full_rewrite_user` 变量的一部分传入的，对应模板中的 `{current_program}` 占位符。演化历史、性能指标等信息也会被注入到用户提示中，最终要求模型仅生成改进后的提示文本。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F243",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},2538,"MAP-Elite 算法在 OpenEvolve 中的实现是否有问题？","之前关于 `self.feature_map` 无用性的问题已在 PR #152 中修复。如果你遇到相关实现问题，请确保拉取了最新的代码仓库，并确认已签署 CLA 以便合并相关更改。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F150",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},2539,"使用默认设置配合 OpenAI API 时报错 401 Unauthorized 如何解决？","HTTP 401 Unauthorized 错误表明身份验证失败。请检查你的 API Key 是否正确配置，并确保请求端点（如 OpenRouter）支持该密钥。验证环境变量或配置文件中是否设置了正确的密钥值。","https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fissues\u002F69",[169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264],{"id":170,"version":171,"summary_zh":172,"released_at":173},200677,"v0.2.27","## What's Changed\r\n* fix(prompt): warn when custom template directory doesn't 
exist by @jiezhuzzz in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F388\r\n* Show actual SEARCH\u002FREPLACE content in diff summaries by @fangchenli in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F390\r\n* Fix island-based evolution not distributing programs across islands by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F392\r\n* Fix bugs by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F442\r\n\r\n## New Contributors\r\n* @jiezhuzzz made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F388\r\n* @fangchenli made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F390\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.26...v0.2.27","2026-03-18T12:25:39",{"id":175,"version":176,"summary_zh":177,"released_at":178},200678,"v0.2.26","## What's Changed\r\n* Add rich feedback mode to k_module_problem example by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F366\r\n* fix(logging): fix logging bug in reject sampling by @zigzagcai in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F382\r\n* Fixes #372: Reliability issues in `examples\u002Fmlx_metal_kernel_opt`  by @lanmogu98 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F377\r\n* Manual Mode Support: extended visualizer UI by @strangecreator in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F373\r\n* ARC-AGI-2 example + Event-based early stopping by @omkar-rjoglekar in 
https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F375\r\n* Large codebase support through LLM changes description + TSP example using this approach by @strangecreator in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F376\r\n* Fix visualization with -inf scores (from PR #380) by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F384\r\n* Fix Anthropic models error when both temperature and top_p are passed by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F385\r\n* Make max snapshot artifacts limit configurable by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F386\r\n\r\n## New Contributors\r\n* @lanmogu98 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F377\r\n* @strangecreator made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F373\r\n* @omkar-rjoglekar made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F375\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.25...v0.2.26","2026-01-28T04:30:57",{"id":180,"version":181,"summary_zh":182,"released_at":183},200679,"v0.2.25","## What's Changed\r\n* Refactor: Simplify `Config.from_dict` with `dacite` and integrate pre-commit by @yuxuan-z19 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F360\r\n* Feat update circle example by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F364\r\n* Feat update circle example by @codelion in 
https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F365\r\n* refactor(prompt): Replace hardcoded strings with template fragments by @FunMelon in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F363\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.24...v0.2.25","2025-12-23T18:01:06",{"id":185,"version":186,"summary_zh":187,"released_at":188},200680,"v0.2.24","## What's Changed\r\n* feat: ${VAR} for API key configuration by @Winston-503 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F336\r\n* Change config.yaml for function_minimization to use Gemini by default. by @codrut3 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F324\r\n* Add configurable diff_pattern by @HenriqueAssumpcao in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F352\r\n* Fix island population counting per child program by @lyx1237 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F354\r\n* fix for #356 by @cometta in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F357\r\n* bump version for new release by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F358\r\n\r\n## New Contributors\r\n* @Winston-503 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F336\r\n* @codrut3 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F324\r\n* @lyx1237 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F354\r\n* @cometta made their 
first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F357\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.23...v0.2.24","2025-12-18T03:05:33",{"id":190,"version":191,"summary_zh":192,"released_at":193},200681,"v0.2.23","## What's Changed\r\n* Make max_tasks_per_child configurable to fix CUDA memory leaks by @yuxuan-z19 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F331\r\n* Add SLDBench by @linhaowei1 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F334\r\n* fix(scripts\u002Fstatic\u002Fjs\u002Fsidebar.js): fixed unicode escape error in disp… by @FunMelon in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F338\r\n* Fix readme by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F341\r\n* Update controller.py by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F345\r\n\r\n## New Contributors\r\n* @yuxuan-z19 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F331\r\n* @FunMelon made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F338\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.22...v0.2.23","2025-12-11T02:40:23",{"id":195,"version":196,"summary_zh":197,"released_at":198},200682,"v0.2.22","## What's Changed\r\n* Fix key embedding model by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F329\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.21...v0.2.22","2025-11-26T00:28:31",{"id":200,"version":201,"summary_zh":202,"released_at":203},200683,"v0.2.21","## What's Changed\r\n* Fix embedding async error by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F328\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.20...v0.2.21","2025-11-25T08:34:56",{"id":205,"version":206,"summary_zh":207,"released_at":208},200684,"v0.2.20","## What's Changed\r\n* Update README.md by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F316\r\n* Simplify reasoning model detection logic by @codelion in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F322\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.19...v0.2.20","2025-11-18T03:34:12",{"id":210,"version":211,"summary_zh":212,"released_at":213},200685,"v0.2.19","## What's Changed\r\n* Align parameter names for the `generate_with_context` function by @chxu2000 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F296\r\n* Fix: Support OpenAI regional API endpoints (EU\u002FAPAC) for reasoning models by @Sahilraj3107 in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F300\r\n* Add all AlphaEvolve mathematical problems by @HenriqueAssumpcao in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F302\r\n* Fix rust example by @ifsheldon in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F305\r\n* Update _version.py by @codelion in 
https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F310\r\n\r\n## New Contributors\r\n* @chxu2000 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F296\r\n* @Sahilraj3107 made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F300\r\n* @ifsheldon made their first contribution in https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fpull\u002F305\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Falgorithmicsuperintelligence\u002Fopenevolve\u002Fcompare\u002Fv0.2.18...v0.2.19","2025-11-01T05:56:23",{"id":215,"version":216,"summary_zh":217,"released_at":218},200686,"v0.2.18","## What's Changed\r\n* Use combined_score for target_score analysis instead of an average of all numeric scores by @theahura in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F286\r\n* Add novelty rejection sampling feature by @bluebread in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F287\r\n* Allow setting programs_per_island from config by @theahura in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F291\r\n* Update _version.py by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F292\r\n\r\n## New Contributors\r\n* @bluebread made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F287\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.17...v0.2.18","2025-10-13T02:31:37",{"id":220,"version":221,"summary_zh":222,"released_at":223},200687,"v0.2.17","## What's Changed\r\n* Fixes islands by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F280\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.16...v0.2.17","2025-10-07T03:41:29",{"id":225,"version":226,"summary_zh":227,"released_at":228},200688,"v0.2.16","## What's Changed\r\n* Update README.md by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F273\r\n* escape html in sidebar by @amueller in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F274\r\n* UI: Resizable sidebar, view diff to parent in sidebar (vibecoded) by @amueller in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F275\r\n* Fix parallel sample island by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F279\r\n\r\n## New Contributors\r\n* @amueller made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F274\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.15...v0.2.16","2025-10-05T02:45:49",{"id":230,"version":231,"summary_zh":232,"released_at":233},200689,"v0.2.15","## What's Changed\r\n* Update process_parallel.py by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F272\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.14...v0.2.15","2025-09-16T22:51:41",{"id":235,"version":236,"summary_zh":237,"released_at":238},200690,"v0.2.14","## What's Changed\r\n* feature: Add evolution trace logging for RL training support by @totoluo in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F261\r\n* Update test_library_api.py by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F262\r\n* Fix link in example README.md by @Ag2S1 in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F263\r\n* Update README.md by @codelion in 
https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F265\r\n* Update README.md by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F266\r\n* Fix random seed by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F268\r\n\r\n## New Contributors\r\n* @totoluo made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F261\r\n* @Ag2S1 made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F263\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.13...v0.2.14","2025-09-14T00:53:31",{"id":240,"version":241,"summary_zh":242,"released_at":243},200691,"v0.2.13","## What's Changed\r\n* Add LLM config option to allow the use of custom LLM clients by @mmalmrud in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F254\r\n* Fix reasoning effort by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F260\r\n\r\n## New Contributors\r\n* @mmalmrud made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F254\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.12...v0.2.13","2025-09-06T23:40:41",{"id":245,"version":246,"summary_zh":247,"released_at":248},200692,"v0.2.12","## What's Changed\r\n* refine(trace): save prompts in the checkpoint output folder by @zigzagcai in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F251\r\n* Fix islands map conflict by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F256\r\n\r\n## New Contributors\r\n* @zigzagcai made their first contribution in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F251\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.11...v0.2.12","2025-09-04T23:17:25",{"id":250,"version":251,"summary_zh":252,"released_at":253},200693,"v0.2.11","## What's Changed\r\n* fix timeout by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F250\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.10...v0.2.11","2025-08-30T15:08:22",{"id":255,"version":256,"summary_zh":257,"released_at":258},200694,"v0.2.10","## What's Changed\r\n* fix early stopping by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F249\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.9...v0.2.10","2025-08-30T13:50:13",{"id":260,"version":261,"summary_zh":262,"released_at":263},200695,"v0.2.9","## What's Changed\r\n* Update README.md by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F240\r\n* Fix prompt optimizer example by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F244\r\n* Update openai.py by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F248\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.8...v0.2.9","2025-08-29T23:41:41",{"id":265,"version":266,"summary_zh":267,"released_at":268},200696,"v0.2.8","## What's Changed\r\n* add early stopping by @codelion in https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fpull\u002F238\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fcodelion\u002Fopenevolve\u002Fcompare\u002Fv0.2.7...v0.2.8","2025-08-26T10:17:07"]