[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ShishirPatil--gorilla":3,"tool-ShishirPatil--gorilla":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":78,"owner_twitter":78,"owner_website":81,"owner_url":82,"languages":83,"stars":120,"forks":121,"last_commit_at":122,"license":123,"difficulty_score":23,"env_os":124,"env_gpu":125,"env_ram":124,"env_deps":126,"category_tags":133,"github_topics":134,"view_count":143,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":144,"updated_at":145,"faqs":146,"releases":177},2296,"ShishirPatil\u002Fgorilla","gorilla","Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)","Gorilla 是一个专为提升大语言模型（LLM）函数调用能力而设计的开源项目，旨在让 AI 更精准、安全地连接和操作海量 API。它核心解决了大模型在面对复杂工具链时“选错工具”或“参数生成错误”的难题，通过专门的训练与评估体系，显著提高了模型在真实场景中执行任务的准确率。\n\n该项目不仅提供了一系列经过微调的模型，还推出了权威的“伯克利函数调用排行榜”（BFCL），持续追踪并评估业界模型在多轮对话、多步骤工作流及智能体记忆管理等高难度场景下的表现。其独特的技术亮点包括支持多跳推理、错误恢复机制，以及配套的 GoEx 运行时环境——后者能提供“事后验证”和“操作回滚”功能，有效管控自动执行代码或 API 时可能产生的风险，为构建完全自主的智能体系统奠定基础。\n\nGorilla 非常适合致力于开发 AI 智能体（Agent）的开发者、需要评估模型工具调用能力的研究人员，以及希望将大模型深度集成到现有业务系统中的企业技术团队。如果你正在探索如何让 AI 从“单纯聊天”进化为“真正办事”，Gorilla 提供了宝贵的模型资源、评测基准与安全执行框架。","# Gorilla: Large Language Model Connected with Massive APIs\n\n\u003Cdiv align=\"center\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShishirPatil_gorilla_readme_fc5a1aa21630.png\" width=\"50%\" height=\"50%\">\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n  \n[![Arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGorilla_Paper-2305.15334-\u003CCOLOR>.svg?style=flat-square)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15334) [![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1111172801899012102?label=Discord&logo=discord&logoColor=green&style=flat-square)](https:\u002F\u002Fdiscord.gg\u002FgrXXvj9Whz) [![Gorilla Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWebsite-gorilla.cs.berkeley.edu-blue?style=flat-square)](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002F) [![Gorilla Blog](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlog-gorilla.cs.berkeley.edu\u002Fblog.html-blue?style=flat-square)](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblog.html) [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-gorilla--llm-yellow.svg?style=flat-square)](https:\u002F\u002Fhuggingface.co\u002Fgorilla-llm)\n\n\u003C\u002Fdiv>\n\n## Latest Updates\n> 📢 Check out our detailed [Berkeley Function Calling Leaderboard changelog](\u002Fberkeley-function-call-leaderboard\u002FCHANGELOG.md) (Last updated: ![Last Updated](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FShishirPatil\u002Fgorilla?path=berkeley-function-call-leaderboard\u002FCHANGELOG.md)) for the latest dataset \u002F model updates to the Berkeley Function Calling Leaderboard!\n\n\n- 🤖 [07\u002F17\u002F2025] Announcing BFCL V4 Agentic! As function-calling forms the bedrock of Agentic systems, BFCL V4 Agentic benchmark focuses on tool-calling in real-world agentic settings, featuring web search with multi-hop reasoning and error recovery, agent memory management, and format sensitivity evaluation. 
[[Web-search Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F15_bfcl_v4_web_search.html)] [[Memory Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F16_bfcl_v4_memory.html)] [[Format Sensitivity Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F17_bfcl_v4_prompt_variation.html)] [[PR](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F1019)] [[Tweet](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1946020561626546176)]\n\n- 🎯 [10\u002F04\u002F2024] Introducing the Agent Arena by Gorilla X LMSYS Chatbot Arena! Compare different agents in tasks like search, finance, RAG, and beyond. Explore which models and tools work best for specific tasks through our novel ranking system and community-driven prompt hub. [[Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F14_agent_arena.html)] [[Arena](http:\u002F\u002Fagent-arena.com)] [[Leaderboard](http:\u002F\u002Fagent-arena.com\u002Fleaderboard)] [[Dataset](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fagent-arena#evaluation-directory)] [[Tweet](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1841876885757977044)]\n\n- 📣 [09\u002F21\u002F2024] Announcing BFCL V3 - Evaluating multi-turn and multi-step function calling capabilities! New state-based evaluation system tests models on handling complex workflows, sequential functions, and service states. [[Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F13_bfcl_v3_multi_turn.html)] [[Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html)] [[Code](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fberkeley-function-call-leaderboard)] [[Tweet](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1837205152132153803)]\n\n- 🚀 [08\u002F20\u002F2024] Released BFCL V2 • Live! 
The Berkeley Function-Calling Leaderboard now features enterprise-contributed data and real-world scenarios. [[Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F12_bfcl_v2_live.html)] [[Live Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard_live.html)] [[V2 Categories Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html)] [[Tweet](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1825577931697233999)]\n\n- ⚡️ [04\u002F12\u002F2024] Excited to release GoEx - a runtime for LLM-generated actions like code, API calls, and more. Featuring \"post-facto validation\" for assessing LLM actions after execution, \"undo\" and \"damage confinement\" abstractions to manage unintended actions & risks. This paves the way for fully autonomous LLM agents, enhancing interaction between apps & services with human-out-of-loop. [[Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F10_gorilla_exec_engine.html)] [[Code](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fgoex)] [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06921)] [[Tweet](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1778485140257452375)]\n\n- ⏰ [04\u002F01\u002F2024] Introducing cost and latency metrics into [Berkeley function calling leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard)!\n- :rocket: [03\u002F15\u002F2024] RAFT: Adapting Language Model to Domain Specific RAG is live! 
[[MSFT-Meta blog](https:\u002F\u002Ftechcommunity.microsoft.com\u002Ft5\u002Fai-ai-platform-blog\u002Fbg-p\u002FAIPlatformBlog)] [[Berkeley Blog](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F9_raft.html)]\n- :trophy: [02\u002F26\u002F2024] [Berkeley Function Calling Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard) is live!\n- :dart: [02\u002F25\u002F2024] [OpenFunctions v2](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F7_open_functions_v2.html) sets new SoTA for open-source LLMs!\n- :fire: [11\u002F16\u002F2023] Excited to release [Gorilla OpenFunctions](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F4_open_functions.html)\n- 💻 [06\u002F29\u002F2023] Released [gorilla-cli](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli), LLMs for your CLI!\n- 🟢 [06\u002F06\u002F2023] Released Commercially usable, Apache 2.0 licensed Gorilla models\n- :rocket: [05\u002F30\u002F2023] Provided the [CLI interface](inference\u002FREADME.md) to chat with Gorilla!\n- :rocket: [05\u002F28\u002F2023] Released Torch Hub and TensorFlow Hub Models!\n- :rocket: [05\u002F27\u002F2023] Released the first Gorilla model! [![Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1y78Zj7xHysX0xMpr9S468HYs12Mj6X1F?usp=sharing) or [:hugs:](https:\u002F\u002Fhuggingface.co\u002Fgorilla-llm\u002Fgorilla-7b-hf-delta-v0)!\n- :fire: [05\u002F27\u002F2023] We released the APIZoo contribution guide for community API contributions!\n- :fire: [05\u002F25\u002F2023] We release the APIBench dataset and the evaluation code of Gorilla!\n\n\n## About\n\n**Gorilla enables LLMs to use tools by invoking APIs. 
Given a natural language query, Gorilla comes up with the semantically and syntactically correct API to invoke.** \n\nWith Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. This repository contains [inference code](\u002Fgorilla\u002Finference) for running Gorilla finetuned models, [evaluation code](\u002Fgorilla\u002Feval) for reproducing results from our paper, and [APIBench](\u002Fdata) - the largest collection of APIs, curated and easy to train on!\n\nSince our initial release, we've served ~500k requests and witnessed incredible adoption by developers worldwide. The project has expanded to include tools, evaluations, leaderboards, end-to-end finetuning recipes, infrastructure components, and the Gorilla API Store:\n\n| Project | Type | Description (click to expand) |\n|---------|------|---------------------------|\n| [Gorilla Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15334) | 🤖 Model\u003Cbr>📝 Fine-tuning\u003Cbr>📚 Dataset\u003Cbr>📊 Evaluation\u003Cbr>🔧 Infra | \u003Cdetails>\u003Csummary>Large Language Model Connected with Massive APIs\u003C\u002Fsummary>• Novel finetuning approach for API invocation\u003Cbr>• Evaluation on 1,600+ APIs (APIBench)\u003Cbr>• Retrieval-augmented training for test-time adaptation\u003C\u002Fdetails> |\n| [Gorilla OpenFunctions-V2](openfunctions\u002F) | 🤖 Model | \u003Cdetails>\u003Csummary>Drop-in alternative for function calling, supporting multiple complex data types and parallel execution\u003C\u002Fsummary>• Multiple & parallel function execution with OpenAI-compatible endpoints\u003Cbr>• Native support for Python, Java, JavaScript, and REST APIs with expanded data types\u003Cbr>• Function relevance detection to reduce hallucinations\u003Cbr>• Enhanced RESTful API formatting capabilities\u003Cbr>• State-of-the-art performance among open-source models\u003C\u002Fdetails> |\n| [Berkeley Function Calling Leaderboard 
(BFCL)](berkeley-function-call-leaderboard\u002F) | 📊 Evaluation\u003Cbr>🏆 Leaderboard\u003Cbr>🔧 Function Calling Infra\u003Cbr>📚 Dataset | \u003Cdetails>\u003Csummary>Comprehensive evaluation of function-calling capabilities\u003C\u002Fsummary>• V1: Expert-curated dataset for evaluating single-turn function calling\u003Cbr>• V2: Enterprise-contributed data for real-world scenarios\u003Cbr>• V3: Multi-turn & multi-step function calling evaluation\u003Cbr>• Cost and latency metrics for all models\u003Cbr>• Interactive API explorer for testing\u003Cbr>• Community-driven benchmarking platform\u003C\u002Fdetails> |\n| [Agent Arena](agent-arena\u002F) | 📊 Evaluation\u003Cbr>🏆 Leaderboard | \u003Cdetails>\u003Csummary>Compare LLM agents across models, tools, and frameworks\u003C\u002Fsummary>• Head-to-head agent comparisons with ELO rating system\u003Cbr>• Framework compatibility testing (LangChain, AutoGPT)\u003Cbr>• Community-driven evaluation platform\u003Cbr>• Real-world task performance metrics\u003C\u002Fdetails> |\n| [Gorilla Execution Engine (GoEx)](goex\u002F) | 🔧 Infra | \u003Cdetails>\u003Csummary>Runtime for executing LLM-generated actions with safety guarantees\u003C\u002Fsummary>• Post-facto validation for verifying LLM actions after execution\u003Cbr>• Undo capabilities and damage confinement for risk mitigation\u003Cbr>• OAuth2 and API key authentication for multiple services\u003Cbr>• Support for RESTful APIs, databases, and filesystem operations\u003Cbr>• Docker-based sandboxed execution environment\u003C\u002Fdetails> |\n| [Retrieval-Augmented Fine-tuning (RAFT)](raft\u002F) | 📝 Fine-tuning\u003Cbr>🤖 Model | \u003Cdetails>\u003Csummary>Fine-tuning LLMs for robust domain-specific retrieval\u003C\u002Fsummary>• Novel fine-tuning recipe for domain-specific RAG\u003Cbr>• Chain-of-thought answers with direct document quotes\u003Cbr>• Training with oracle and distractor documents\u003Cbr>• Improved performance on PubMed, HotpotQA, and Gorilla 
benchmarks\u003Cbr>• Efficient adaptation of smaller models for domain QA\u003C\u002Fdetails> |\n| [Gorilla CLI](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli) | 🤖 Model\u003Cbr>🔧 Local CLI Infra | \u003Cdetails>\u003Csummary>LLMs for your command-line interface\u003C\u002Fsummary>• User-friendly CLI tool supporting ~1500 APIs (Kubernetes, AWS, GCP, etc.)\u003Cbr>• Natural language command generation with multi-LLM fusion\u003Cbr>• Privacy-focused with explicit execution approval\u003Cbr>• Command history and interactive selection interface\u003C\u002Fdetails> |\n| [Gorilla API Zoo](apizoo\u002F) | 📚 Dataset | \u003Cdetails>\u003Csummary>A community-maintained repository of up-to-date API documentation\u003C\u002Fsummary>• Centralized, searchable index of APIs across domains\u003Cbr>• Structured documentation format with arguments, versioning, and examples\u003Cbr>• Community-driven updates to keep pace with API changes\u003Cbr>• Rich data source for model training and fine-tuning\u003Cbr>• Enables retrieval-augmented training and inference\u003Cbr>• Reduces hallucination through up-to-date documentation\u003C\u002Fdetails> |\n\n## Getting Started\n\n### Quick Start\nTry Gorilla in your browser:\n- 🚀 [Gorilla Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1y78Zj7xHysX0xMpr9S468HYs12Mj6X1F?usp=sharing): Try the base Gorilla model\n- 🌐 [Gorilla Gradio Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgorilla-llm\u002Fgorilla-demo\u002F): Interactive web interface\n- 🔥 [OpenFunctions Colab Demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Td3_R5vPael9PnKYHcl-PxmZkZzA9TCo?usp=sharing): Try the latest OpenFunctions model\n- 🎯 [OpenFunctions Website Demo](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html#api-explorer): Experiment with function calling\n- 📊 [Berkeley Function Calling Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard): Compare function calling 
capabilities\n\n### Installation Options\n\n1. **Gorilla CLI** - Fastest way to get started\n```bash\npip install gorilla-cli\ngorilla generate 100 random characters into a file called test.txt\n```\n[Learn more about Gorilla CLI →](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli)\n\n2. **Run Gorilla Locally**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla.git\ncd gorilla\u002Finference\n```\n[Detailed local setup instructions →](\u002Fgorilla\u002Finference\u002FREADME.md)\n\n3. **Use OpenFunctions**\n```python\nimport openai\n\nopenai.api_key = \"EMPTY\"\nopenai.api_base = \"http:\u002F\u002Fluigi.millennium.berkeley.edu:8000\u002Fv1\"\n\n# Define your functions\nfunctions = [{\n    \"name\": \"get_current_weather\",\n    \"description\": \"Get weather in a location\",\n    \"parameters\": {\n        \"type\": \"object\",\n        \"properties\": {\n            \"location\": {\"type\": \"string\"},\n            \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]}\n        },\n        \"required\": [\"location\"]\n    }\n}]\n\n# Make API call\ncompletion = openai.ChatCompletion.create(\n    model=\"gorilla-openfunctions-v2\",\n    messages=[{\"role\": \"user\", \"content\": \"What's the weather in San Francisco?\"}],\n    functions=functions\n)\n```\n[OpenFunctions documentation →](\u002Fopenfunctions\u002FREADME.md)\n\n### 🔧 Other Quick Starts\n\n- 📊 **Evaluation & Benchmarking**\n  - [Berkeley Function Calling Leaderboard](\u002Fberkeley-function-call-leaderboard\u002FREADME.md): Compare function calling capabilities\n  - [Agent Arena](\u002Fagent-arena\u002FREADME.md): Evaluate agent workflows\n  - [Gorilla Paper Evaluation Scripts](\u002Fgorilla\u002Feval\u002FREADME.md): Run your own evaluations\n\n- 🛠️ **Development Tools**\n  - [GoEx](\u002Fgoex\u002FREADME.md): Safe execution of LLM-generated actions\n  - [RAFT](\u002Fraft\u002FREADME.md): Fine-tune models for domain-specific tasks\n  - 
[API Store](\u002Fdata\u002FREADME.md): Contribute and use APIs\n\n\n## Frequently Asked Questions\n1. I would like to use Gorilla commercially. Is there going to be an Apache 2.0 licensed version?\n\nYes! We now have models that you can use commercially without any obligations.\n\n\n2. Can we use Gorilla with other tools like LangChain, etc.?\n\nAbsolutely! You've highlighted a great aspect of our tools. Gorilla is an end-to-end model, specifically tailored to serve correct API calls (tools) without requiring any additional coding. It's designed to work as part of a wider ecosystem and can be flexibly integrated within agentic frameworks and other tools.\n\nLangChain is a versatile developer tool. Its \"agents\" can efficiently swap in any LLM, Gorilla included, making it a highly adaptable solution for various needs.\n\nThe beauty of these tools truly shines when they collaborate, complementing each other's strengths and capabilities to create an even more powerful and comprehensive solution. This is where your contribution can make a difference. We enthusiastically welcome any inputs to further refine and enhance these tools.\n\nCheck out our blog on [How to Use Gorilla: A Step-by-Step Walkthrough](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F5_how_to_gorilla.html) to see all the different ways you can integrate Gorilla in your projects.\n\n## Project Roadmap\nIn the immediate future, we plan to release the following:\n\n- [ ] Multimodal function-calling leaderboard\n- [ ] Agentic function-calling leaderboard\n- [ ] New batch of user-contributed live function calling evals
\n- [ ] BFCL metrics to evaluate contamination\n- [ ] Openfunctions-v3 model to support more languages and multi-turn capability\n- [x] Agent Arena to compare LLM agents across models, tools, and frameworks [10\u002F04\u002F2024]\n- [x] Multi-turn and multi-step function calling evaluation [09\u002F21\u002F2024]\n- [x] User-contributed Live Function Calling Leaderboard [08\u002F20\u002F2024]\n- [x] BFCL systems metrics including cost and latency [04\u002F01\u002F2024]\n- [x] Gorilla Execution Engine (GoEx) - Runtime for executing LLM-generated actions with safety guarantees [04\u002F12\u002F2024]\n- [x] Berkeley Function Calling Leaderboard (BFCL) for evaluating tool-calling\u002Ffunction-calling models [02\u002F26\u002F2024]\n- [x] Openfunctions-v2 with more languages (Java, JS, Python), relevance detection [02\u002F26\u002F2024]\n- [x] API Zoo Index for easy access to all APIs [02\u002F16\u002F2024]\n- [x] Openfunctions-v1, Apache 2.0, with parallel and multiple function calling [11\u002F16\u002F2023]\n- [x] Openfunctions-v0, Apache 2.0 function calling model [11\u002F16\u002F2023]\n- [x] Release a commercially usable, Apache 2.0 licensed Gorilla model [06\u002F05\u002F2023]\n- [x] Release weights for all APIs from APIBench [05\u002F28\u002F2023]\n- [x] Run Gorilla LLM locally [05\u002F28\u002F2023]\n- [x] Release weights for HF model APIs [05\u002F27\u002F2023]\n- [x] Hosted Gorilla LLM chat for HF model APIs [05\u002F27\u002F2023]\n- [x] Opening up the APIZoo for contributions from the community\n- [x] Dataset and Eval Code\n\n## License\n\nGorilla is Apache 2.0 licensed, making it suitable for both academic and commercial use.\n\n## Contact\n\n- 💬 Join our [Discord Community](https:\u002F\u002Fdiscord.gg\u002FgrXXvj9Whz)\n- 🐦 Follow us on [X](https:\u002F\u002Fx.com\u002Fshishirpatil_)\n\n## Citation\n\n```text\n@article{patil2023gorilla,\n  title={Gorilla: Large Language Model Connected with Massive APIs},\n  author={Shishir G. 
Patil and Tianjun Zhang and Xin Wang and Joseph E. Gonzalez},\n  year={2023},\n  journal={arXiv preprint arXiv:2305.15334},\n} \n```\n","# Gorilla：连接海量API的大语言模型\n\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShishirPatil_gorilla_readme_fc5a1aa21630.png\" width=\"50%\" height=\"50%\">\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n  \n[![Arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGorilla_Paper-2305.15334-\u003CCOLOR>.svg?style=flat-square)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15334) [![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1111172801899012102?label=Discord&logo=discord&logoColor=green&style=flat-square)](https:\u002F\u002Fdiscord.gg\u002FgrXXvj9Whz) [![Gorilla Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWebsite-gorilla.cs.berkeley.edu-blue?style=flat-square)](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002F) [![Gorilla Blog](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlog-gorilla.cs.berkeley.edu\u002Fblog.html-blue?style=flat-square)](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblog.html) [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-gorilla--llm-yellow.svg?style=flat-square)](https:\u002F\u002Fhuggingface.co\u002Fgorilla-llm)\n\n\u003C\u002Fdiv>\n\n## 最新动态\n> 📢 请查看我们详细的[Berkeley函数调用排行榜变更日志](\u002Fberkeley-function-call-leaderboard\u002FCHANGELOG.md)（最后更新时间：![最后更新](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002FShishirPatil\u002Fgorilla?path=berkeley-function-call-leaderboard\u002FCHANGELOG.md)），了解Berkeley函数调用排行榜的最新数据集和模型更新！\n\n\n- 🤖 [2025年7月17日] 宣布BFCL V4 Agentic！由于函数调用是Agentic系统的基础，BFCL V4 Agentic基准测试专注于现实世界中的代理式工具调用场景，包括多跳推理与错误恢复的网络搜索、代理内存管理以及格式敏感性评估。[[网络搜索博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F15_bfcl_v4_web_search.html)] [[内存博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F16_bfcl_v4_memory.html)] 
[[格式敏感性博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F17_bfcl_v4_prompt_variation.html)] [[PR](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F1019)] [[推文](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1946020561626546176)]\n\n- 🎯 [2024年10月4日] 推出由Gorilla X LMSYS聊天机器人竞技场打造的Agent Arena！在搜索、金融、RAG等任务中比较不同代理的表现。通过我们创新的排名系统和社区驱动的提示中心，探索哪些模型和工具最适合特定任务。[[博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F14_agent_arena.html)] [[竞技场](http:\u002F\u002Fagent-arena.com)] [[排行榜](http:\u002F\u002Fagent-arena.com\u002Fleaderboard)] [[数据集](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fagent-arena#evaluation-directory)] [[推文](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1841876885757977044)]\n\n- 📣 [2024年9月21日] 宣布BFCL V3——评估多轮次和多步骤的函数调用能力！全新的基于状态的评估体系测试模型处理复杂工作流、序列化函数及服务状态的能力。[[博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F13_bfcl_v3_multi_turn.html)] [[排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html)] [[代码](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fberkeley-function-call-leaderboard)] [[推文](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1837205152132153803)]\n\n- 🚀 [2024年8月20日] BFCL V2正式上线！伯克利函数调用排行榜现已纳入企业贡献的数据和真实场景。[[博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F12_bfcl_v2_live.html)] [[实时排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard_live.html)] [[V2分类排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html)] [[推文](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1825577931697233999)]\n\n- ⚡️ [2024年4月12日] 很高兴发布GoEx——一个用于LLM生成操作（如代码、API调用等）的运行时环境。该平台具备“事后验证”功能，可在执行后评估LLM的操作，并提供“撤销”和“损害限制”抽象机制，以管理意外行为和风险。这为完全自主的LLM代理铺平了道路，增强了应用程序和服务之间无需人工干预的交互能力。[[博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F10_gorilla_exec_engine.html)] 
[[代码](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Ftree\u002Fmain\u002Fgoex)] [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.06921)] [[推文](https:\u002F\u002Fx.com\u002Fshishirpatil_\u002Fstatus\u002F1778485140257452375)]\n\n- ⏰ [2024年4月1日] 将成本和延迟指标引入[伯克利函数调用排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard)！\n- :rocket: [2024年3月15日] RAFT：将语言模型适配到领域特定的RAG已上线！[[MSFT-Meta博客](https:\u002F\u002Ftechcommunity.microsoft.com\u002Ft5\u002Fai-ai-platform-blog\u002Fbg-p\u002FAIPlatformBlog)] [[伯克利博客](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F9_raft.html)]\n- :trophy: [2024年2月26日] [伯克利函数调用排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard)正式上线！\n- :dart: [2024年2月25日] [OpenFunctions v2](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F7_open_functions_v2.html)为开源LLM树立了新的SOTA！\n- :fire: [2023年11月16日] 很高兴发布[Gorilla OpenFunctions](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F4_open_functions.html)\n- 💻 [2023年6月29日] 发布[gorilla-cli](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli)，让LLM走进你的命令行界面！\n- 🟢 [2023年6月6日] 发布可商用、采用Apache 2.0许可的Gorilla模型\n- :rocket: [2023年5月30日] 提供了与Gorilla对话的[CLI接口](inference\u002FREADME.md)！\n- :rocket: [2023年5月28日] 发布Torch Hub和TensorFlow Hub模型！\n- :rocket: [2023年5月27日] 发布首个Gorilla模型！[![Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1y78Zj7xHysX0xMpr9S468HYs12Mj6X1F?usp=sharing)或[:hugs:](https:\u002F\u002Fhuggingface.co\u002Fgorilla-llm\u002Fgorilla-7b-hf-delta-v0)!\n- :fire: [2023年5月27日] 我们发布了APIZoo社区API贡献指南！\n- :fire: [2023年5月25日] 我们发布了APIBench数据集以及Gorilla的评估代码！\n\n## 关于\n\n**Gorilla 通过调用 API 使大型语言模型能够使用工具。给定自然语言查询，Gorilla 会生成语义和语法都正确的 API 调用。**\n\n借助 Gorilla，我们首次展示了如何利用大型语言模型准确地调用 1,600 多个（且数量仍在增长）API，并有效减少幻觉现象。本仓库包含用于运行 Gorilla 微调模型的 [推理代码](\u002Fgorilla\u002Finference)、用于复现我们论文结果的 [评估代码](\u002Fgorilla\u002Feval)，以及 [APIBench](\u002Fdata)——一个规模最大的 API 
集合，经过精心整理且易于训练！\n\n自我们首次发布以来，已处理约 50 万次请求，并在全球开发者中获得了广泛采用。该项目现已扩展至包括工具、评估、排行榜、端到端微调流程、基础设施组件以及 Gorilla API 商店：\n\n| 项目 | 类型 | 描述（点击展开） |\n|---------|------|---------------------------|\n| [Gorilla 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15334) | 🤖 模型\u003Cbr>📝 微调\u003Cbr>📚 数据集\u003Cbr>📊 评估\u003Cbr>🔧 基础设施 | \u003Cdetails>\u003Csummary>与海量 API 连接的大语言模型\u003C\u002Fsummary>• 创新的 API 调用微调方法\u003Cbr>• 在 1,600 多个 API 上进行评估（APIBench）\u003Cbr>• 检索增强型训练，实现测试时适应性\u003C\u002Fdetails> |\n| [Gorilla OpenFunctions-V2](openfunctions\u002F) | 🤖 模型 | \u003Cdetails>\u003Csummary>函数调用的即插即用替代方案，支持多种复杂数据类型及并行执行\u003C\u002Fsummary>• 兼容 OpenAI 的接口，可同时或并行执行多个函数\u003Cbr>• 原生支持 Python、Java、JavaScript 和 REST API，并扩展了数据类型支持\u003Cbr>• 函数相关性检测，以减少幻觉\u003Cbr>• 增强的 RESTful API 格式化能力\u003Cbr>• 开源模型中的最先进性能\u003C\u002Fdetails> |\n| [伯克利函数调用排行榜 (BFCL)](berkeley-function-call-leaderboard\u002F) | 📊 评估\u003Cbr>🏆 排行榜\u003Cbr>🔧 函数调用基础设施\u003Cbr>📚 数据集 | \u003Cdetails>\u003Csummary>全面评估函数调用能力\u003C\u002Fsummary>• V1：专家精选的数据集，用于评估单轮函数调用\u003Cbr>• V2：企业贡献的真实场景数据\u003Cbr>• V3：多轮、多步骤函数调用评估\u003Cbr>• 所有模型的成本和延迟指标\u003Cbr>• 交互式 API 探索器，可供测试使用\u003Cbr>• 社区驱动的基准测试平台\u003C\u002Fdetails> |\n| [Agent Arena](agent-arena\u002F) | 📊 评估\u003Cbr>🏆 排行榜 | \u003Cdetails>\u003Csummary>跨模型、工具和框架比较 LLM 代理\u003C\u002Fsummary>• 采用 ELO 评分系统进行代理间的直接对比\u003Cbr>• 测试框架兼容性（LangChain、AutoGPT）\u003Cbr>• 社区驱动的评估平台\u003Cbr>• 真实任务表现指标\u003C\u002Fdetails> |\n| [Gorilla 执行引擎 (GoEx)](goex\u002F) | 🔧 基础设施 | \u003Cdetails>\u003Csummary>用于执行 LLM 生成动作的安全保障运行时\u003C\u002Fsummary>• 执行后的事后验证，确保 LLM 动作的正确性\u003Cbr>• 支持撤销操作和损害限制，以降低风险\u003Cbr>• 支持 OAuth2 和 API 密钥认证，适用于多种服务\u003Cbr>• 支持 RESTful API、数据库和文件系统操作\u003Cbr>• 基于 Docker 的沙盒执行环境\u003C\u002Fdetails> |\n| [检索增强型微调 (RAFT)](raft\u002F) | 📝 微调\u003Cbr>🤖 模型 | \u003Cdetails>\u003Csummary>为强大的领域特定检索而微调 LLM\u003C\u002Fsummary>• 领域特定 RAG 的全新微调方案\u003Cbr>• 结合思维链的回答，并直接引用文档内容\u003Cbr>• 使用 oracle 文档和干扰文档进行训练\u003Cbr>• 在 PubMed、HotpotQA 和 Gorilla 基准测试上性能提升\u003Cbr>• 
可高效适配小型模型用于领域问答\u003C\u002Fdetails> |\n| [Gorilla CLI](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli) | 🤖 模型\u003Cbr>🔧 本地 CLI 基础设施 | \u003Cdetails>\u003Csummary>将 LLM 应用于你的命令行界面\u003C\u002Fsummary>• 用户友好的 CLI 工具，支持约 1,500 个 API（Kubernetes、AWS、GCP 等）\u003Cbr>• 多 LLM 融合生成自然语言指令\u003Cbr>• 注重隐私，需明确确认后才执行\u003Cbr>• 提供命令历史记录和交互式选择界面\u003C\u002Fdetails> |\n| [Gorilla API Zoo](apizoo\u002F) | 📚 数据集 | \u003Cdetails>\u003Csummary>由社区维护的最新 API 文档库\u003C\u002Fsummary>• 跨领域的集中式、可搜索 API 索引\u003Cbr>• 结构化的文档格式，包含参数、版本管理和示例\u003Cbr>• 社区驱动的更新，紧跟 API 变化\u003Cbr>• 丰富的数据来源，可用于模型训练和微调\u003Cbr>• 支持检索增强型训练和推理\u003Cbr>• 通过最新文档减少幻觉\u003C\u002Fdetails> |\n\n## 快速入门\n\n### 快速开始\n在浏览器中体验 Gorilla：\n- 🚀 [Gorilla Colab 演示](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1y78Zj7xHysX0xMpr9S468HYs12Mj6X1F?usp=sharing)：试用基础 Gorilla 模型\n- 🌐 [Gorilla Gradio 演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fgorilla-llm\u002Fgorilla-demo\u002F)：交互式网页界面\n- 🔥 [OpenFunctions Colab 演示](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Td3_R5vPael9PnKYHcl-PxmZkZzA9TCo?usp=sharing)：试用最新的 OpenFunctions 模型\n- 🎯 [OpenFunctions 官网演示](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html#api-explorer)：体验函数调用\n- 📊 [伯克利函数调用排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard)：比较函数调用能力\n\n### 安装选项\n\n1. **Gorilla CLI** - 最快的入门方式\n```bash\npip install gorilla-cli\ngorilla generate 100 random characters into a file called test.txt\n```\n[了解更多关于 Gorilla CLI →](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli)\n\n2. **本地运行 Gorilla**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla.git\ncd gorilla\u002Finference\n```\n[详细的本地设置说明 →](\u002Fgorilla\u002Finference\u002FREADME.md)\n\n3. 
**使用 OpenFunctions**\n```python\nimport openai\n\nopenai.api_key = \"EMPTY\"\nopenai.api_base = \"http:\u002F\u002Fluigi.millennium.berkeley.edu:8000\u002Fv1\"\n\n# 定义你的函数\nfunctions = [{\n    \"name\": \"get_current_weather\",\n    \"description\": \"获取某个地点的天气\",\n    \"parameters\": {\n        \"type\": \"object\",\n        \"properties\": {\n            \"location\": {\"type\": \"string\"},\n            \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]}\n        },\n        \"required\": [\"location\"]\n    }\n}]\n\n# 发起 API 调用\ncompletion = openai.ChatCompletion.create(\n    model=\"gorilla-openfunctions-v2\",\n    messages=[{\"role\": \"user\", \"content\": \"旧金山的天气怎么样？\"}],\n    functions=functions\n)\n```\n[OpenFunctions 文档 →](\u002Fopenfunctions\u002FREADME.md)\n\n### 🔧 其他快速入门\n\n- 📊 **评估与基准测试**\n  - [伯克利函数调用排行榜](\u002Fberkeley-function-call-leaderboard\u002FREADME.md)：比较函数调用能力\n  - [Agent Arena](\u002Fagent-arena\u002FREADME.md)：评估智能体工作流\n  - [Gorilla 论文评估脚本](\u002Fgorilla\u002Feval\u002FREADME.md)：运行您自己的评估\n\n- 🛠️ **开发工具**\n  - [GoEx](\u002Fgoex\u002FREADME.md)：安全执行大模型生成的操作\n  - [RAFT](\u002Fraft\u002FREADME.md)：针对特定领域任务微调模型\n  - [API 商店](\u002Fdata\u002FREADME.md)：贡献并使用 API\n\n\n## 常见问题解答\n1. 我希望将 Gorilla 用于商业用途。是否会推出 Apache 2.0 许可的版本？\n\n是的！我们现在提供了可以无任何限制地用于商业用途的模型。\n\n\n2. 
我们能否将 Gorilla 与其他工具（如 Langchain 等）一起使用？\n\n当然可以！这正是我们工具的一大亮点。Gorilla 是一个端到端模型，专门设计用于正确调用 API（工具），而无需额外编码。它旨在作为更广泛生态系统的一部分，能够灵活地集成到智能体框架及其他工具中。\n\nLangchain 是一款功能强大的开发者工具。“智能体”可以高效地替换任何大模型，包括 Gorilla，使其成为一种高度适应各种需求的解决方案。\n\n这些工具的魅力在于它们的协同合作，相互补充各自的优势与能力，从而打造出更加强大、全面的解决方案。而这正是您能够发挥作用的地方。我们热忱欢迎任何有助于进一步优化和提升这些工具的意见和建议。\n\n请查看我们的博客文章[如何使用 Gorilla：分步指南](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fblogs\u002F5_how_to_gorilla.html)，了解在您的项目中集成 Gorilla 的各种方式。\n\n## 项目路线图\n在接下来的一段时间内，我们计划发布以下内容：\n\n- [ ] 多模态函数调用排行榜\n- [ ] 智能体式函数调用排行榜\n- [ ] 新一批用户贡献的实时函数调用评估\n- [ ] BFCL 指标，用于评估污染情况\n- [ ] Openfunctions-v3 模型，以支持更多语言和多轮对话能力\n- [x] Agent Arena，用于跨模型、工具和框架比较 LLM 智能体 [2024年10月4日]\n- [x] 多轮及多步骤函数调用评估 [2024年9月21日]\n- [x] 用户贡献的实时函数调用排行榜 [2024年8月20日]\n- [x] 包括成本和延迟在内的 BFCL 系统指标 [2024年4月1日]\n- [x] Gorilla 执行引擎 (GoEx)——用于在安全保障下执行大模型生成操作的运行时环境 [2024年4月12日]\n- [x] 伯克利函数调用排行榜 (BFCL)，用于评估工具调用\u002F函数调用模型 [2024年2月26日]\n- [x] Openfunctions-v2，支持更多语言（Java、JS、Python），并具备相关性检测功能 [2024年2月26日]\n- [x] API 博物馆索引，方便访问所有 API [2024年2月16日]\n- [x] Openfunctions-v1，采用 Apache 2.0 许可，支持并行及多函数调用 [2023年11月16日]\n- [x] Openfunctions-v0，Apache 2.0 许可的函数调用模型 [2023年11月16日]\n- [X] 发布可商用的、Apache 2.0 许可的 Gorilla 模型 [2023年6月5日] \n- [X] 发布来自 APIBench 的所有 API 权重 [2023年5月28日]\n- [X] 在本地运行 Gorilla LLM [2023年5月28日]\n- [X] 发布适用于 HF 模型的 API 权重 [2023年5月27日]\n- [X] 为 HF 模型 API 提供托管的 Gorilla LLM 聊天服务 [2023年5月27日]\n- [X] 向社区开放 APIZoo，接受贡献\n- [X] 数据集和评估代码\n\n## 许可协议\n\nGorilla 采用 Apache 2.0 许可，因此既适合学术研究，也适合商业用途。\n\n## 联系方式\n\n- 💬 加入我们的 [Discord 社区](https:\u002F\u002Fdiscord.gg\u002FgrXXvj9Whz)\n- 🐦 关注我们在 [X](https:\u002F\u002Fx.com\u002Fshishirpatil_) 上的账号\n\n## 引用\n\n```text\n@article{patil2023gorilla,\n  title={Gorilla: Large Language Model Connected with Massive APIs},\n  author={Shishir G. Patil and Tianjun Zhang and Xin Wang and Joseph E. 
Gonzalez},\n  year={2023},\n  journal={arXiv preprint arXiv:2305.15334},\n} \n```","# Gorilla 快速上手指南\n\nGorilla 是一个能够连接大规模 API 的大型语言模型（LLM）。它能将自然语言查询转化为语义和语法正确的 API 调用，显著减少幻觉，支持 1600+ 种 API 的精准调用。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (WSL2 推荐)\n*   **Python 版本**：Python 3.8 或更高版本\n*   **依赖管理**：已安装 `pip` 和 `git`\n*   **硬件要求**：\n    *   **CLI 模式**：无特殊要求，依赖云端或本地现有 LLM。\n    *   **本地推理模式**：若需本地运行模型，建议配备 NVIDIA GPU (显存 16GB+ 推荐用于 7B 模型) 或使用 CPU 推理（速度较慢）。\n\n## 安装步骤\n\n您可以根据需求选择以下两种主要安装方式：\n\n### 方式一：安装 Gorilla CLI（推荐，最快上手）\n这是最简单的使用方式，无需配置复杂的推理环境，直接通过命令行调用。\n\n```bash\npip install gorilla-cli\n```\n\n### 方式二：本地部署推理环境\n如果您希望下载模型并在本地运行推理代码，请克隆仓库并进入推理目录：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla.git\ncd gorilla\u002Finference\n```\n> **注意**：本地部署需要进一步安装 Python 依赖（如 `torch`, `transformers` 等），具体请参考项目内 `\u002Fgorilla\u002Finference\u002FREADME.md` 的详细指引。国内用户建议使用清华源或阿里源加速 pip 包安装：\n> ```bash\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\n### 1. 使用 Gorilla CLI\n安装完成后，直接使用 `gorilla` 命令即可将自然语言转换为具体的 API 命令或代码。\n\n**示例：生成一个创建文件的命令**\n```bash\ngorilla generate 100 random characters into a file called test.txt\n```\n\n**示例：查询 Kubernetes 状态（需配置相应权限）**\n```bash\ngorilla get all pods in namespace default\n```\n\n### 2. 
使用 OpenFunctions (Python 调用)\nGorilla 提供了兼容 OpenAI 格式的接口，支持函数调用（Function Calling）。您可以将其作为本地服务运行或通过 API 调用。\n\n**Python 调用示例：**\n\n```python\nimport openai\n\n# 配置指向 Gorilla 服务端点 (如果是本地部署，请修改 api_base)\nopenai.api_key = \"EMPTY\"\nopenai.api_base = \"http:\u002F\u002Fluigi.millennium.berkeley.edu:8000\u002Fv1\"\n\n# 定义函数描述\nfunctions = [{\n    \"name\": \"get_current_weather\",\n    \"description\": \"Get weather in a location\",\n    \"parameters\": {\n        \"type\": \"object\",\n        \"properties\": {\n            \"location\": {\n                \"type\": \"string\",\n                \"description\": \"The city and state, e.g. San Francisco, CA\"\n            },\n            \"unit\": {\n                \"type\": \"string\",\n                \"enum\": [\"celsius\", \"fahrenheit\"]\n            }\n        },\n        \"required\": [\"location\"]\n    }\n}]\n\n# 发起请求\nresponse = openai.ChatCompletion.create(\n    model=\"gorilla-openfunctions-v1\",\n    messages=[{\"role\": \"user\", \"content\": \"What's the weather like in Boston?\"}],\n    functions=functions,\n    function_call=\"auto\"\n)\n\nprint(response.choices[0].message)\n```\n\n### 3. 
在线体验\n如果不想本地安装，可以直接通过以下链接在浏览器中体验：\n*   **Gorilla Colab 演示**: [点击跳转](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1y78Zj7xHysX0xMpr9S468HYs12Mj6X1F?usp=sharing)\n*   **OpenFunctions 演示**: [点击跳转](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Td3_R5vPael9PnKYHcl-PxmZkZzA9TCo?usp=sharing)\n*   **Berkeley 函数调用排行榜**: [查看评测与在线测试](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard)","某电商公司的后端团队需要构建一个智能客服助手，使其能根据用户自然语言指令自动调用库存查询、订单修改及物流追踪等数十个内部 API。\n\n### 没有 gorilla 时\n- 开发人员需为每个 API 手动编写复杂的规则解析代码，一旦接口参数变更，维护成本极高且容易出错。\n- 通用大模型常因不理解特定 API 的严格格式要求，生成错误的 JSON 参数或调用不存在的函数，导致服务频繁崩溃。\n- 面对“先查库存再下单”这类多步骤任务，模型缺乏状态管理能力，无法按正确顺序串联多个 API 调用。\n- 难以区分语义相似的意图（如“取消订单”与“退货”），往往需要大量人工标注数据微调模型，耗时数周。\n\n### 使用 gorilla 后\n- gorilla 经过海量 API 文档训练，能精准理解自然语言并自动生成符合规范的 API 调用代码，无需手写解析逻辑。\n- 凭借对工具调用的专项优化，gorilla 输出的参数格式准确率大幅提升，有效避免了因格式错误导致的运行时异常。\n- 依托其强大的多轮对话与状态追踪能力，gorilla 能自主规划并执行涉及多个 API 的复杂工作流，确保业务逻辑连贯。\n- 利用少样本学习特性，gorilla 能快速适应新上线的 API 接口，将新功能的集成周期从数周缩短至几小时。\n\ngorilla 将原本脆弱的规则匹配系统升级为高鲁棒性的自主代理，让大模型真正具备了精准操控大规模企业级 API 的能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FShishirPatil_gorilla_31b9a215.png","ShishirPatil","Shishir Patil","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FShishirPatil_fc6f5f0b.jpg",null,"UC Berkeley, Microsoft Research ","Berkeley","https:\u002F\u002Fshishirpatil.github.io\u002F","https:\u002F\u002Fgithub.com\u002FShishirPatil",[84,88,92,96,100,104,107,110,114,117],{"name":85,"color":86,"percentage":87},"Python","#3572A5",73.2,{"name":89,"color":90,"percentage":91},"Jupyter Notebook","#DA5B0B",18.4,{"name":93,"color":94,"percentage":95},"JavaScript","#f1e05a",7.8,{"name":97,"color":98,"percentage":99},"CSS","#663399",0.2,{"name":101,"color":102,"percentage":103},"Rust","#dea584",0.1,{"name":105,"color":106,"percentage":103},"Tree-sitter 
Query","#8ea64c",{"name":108,"color":109,"percentage":103},"Shell","#89e051",{"name":111,"color":112,"percentage":113},"HTML","#e34c26",0,{"name":115,"color":116,"percentage":113},"C++","#f34b7d",{"name":118,"color":119,"percentage":113},"Dockerfile","#384d54",12791,1343,"2026-04-03T04:39:47","Apache-2.0","未说明","未说明 (模型支持在 CPU 或 GPU 上运行，具体显存需求取决于所选模型大小，如 7B 模型通常建议 8GB+ 显存)",{"notes":127,"python":124,"dependencies":128},"README 中未直接列出具体的版本号和系统要求，但提供了多种使用方式：1. 通过 pip 安装 gorilla-cli 快速使用；2. 克隆仓库并在 inference 目录下运行（需参考该目录下的详细设置文档）；3. 使用 OpenFunctions 兼容接口；4. 提供 Colab 和 Hugging Face Spaces 在线演示。主要依赖包括 PyTorch、Transformers 和 Accelerate 等主流深度学习库。",[129,130,131,132],"torch","transformers","accelerate","openai",[26,13,51,53],[135,136,137,138,139,140,141,142],"api","llm","api-documentation","chatgpt","gpt-4-api","claude-api","openai-api","openai-functions",7,"2026-03-27T02:49:30.150509","2026-04-06T09:46:07.967217",[147,152,157,162,167,172],{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},10538,"执行 bfcl evaluate 命令后，生成的 score 目录下的指标文件（如 data_overall.csv）为空或没有具体数据，是什么原因？","这通常是由于网络连接问题导致的。错误日志中若出现 `HTTPSConnectionPool... 
Read timed out`，说明在下载模型或数据时连接超时。此外，如果相关的 API 服务暂时掉线，也会导致无法获取结果。建议检查网络连接，或等待 API 服务恢复后重试。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F784",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},10539,"运行 Gorilla OpenFunctions v2 时遇到 'SSL: CERTIFICATE_VERIFY_FAILED' 或证书过期错误怎么办？","该问题可能是暂时的网络配置错误或本地环境缓存问题。许多用户反馈稍后重试即可恢复正常。如果错误变为 `{\"detail\":\"Method Not Allowed\"}`，则可能不是证书问题，而是请求方法或端点使用不当，需检查调用方式是否正确。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F668",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},10540,"如何在没有 GPU 的本地环境（Mac\u002FLinux\u002FWSL）中运行 Gorilla 模型？","可以使用量化版本的模型配合 llama.cpp 等工具在 CPU 或 MPS (Mac) 上运行。注意：早期部分量化模型（如 gorilla-7b-hf-v1-ggml）曾因未正确合并权重导致推理效果差，且曾设为私有仓库导致 401 错误。目前维护者已将其公开，请确保拉取最新的公开模型版本。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F77",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},10541,"在哪里可以找到用于复现实验结果的 bm25 和 gpt-index 检索脚本？","这些脚本尚未直接包含在主仓库的默认发布包中，但社区贡献者已表示会提交 PR (Pull Request) 来补充缺失的代码。建议关注仓库的 PR 列表或直接向维护者询问最新进展，也可以参考 Issue 中的讨论自行基于 BM25 算法和 OpenAI Davinci v1 嵌入实现。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F58",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},10542,"GPT-4 的训练数据截止于 2021 年 9 月，这对 Gorilla 的评估结果有何影响？","Gorilla 团队在训练时已严格避免测试集数据泄露，但无法控制闭源模型（如 GPT-4）的训练数据。关于 API 截止时间的影响，官方建议社区利用开源的训练\u002F评估数据集，自行过滤出 2021 年 9 月之后发布的 API 进行对比实验，以验证新 API 对性能的具体影响。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F50",{"id":173,"question_zh":174,"answer_zh":175,"source_url":176},10543,"使用 bfcl generate 命令时，为什么提示 '--local-model-path' 选项不被识别？","该错误通常是因为命令格式错误或参数位置不对。在某些版本中，`--local-model-path` 可能需要特定的后端配置或与 `--model` 参数配合使用。如果命令仍强制尝试连接远程模型（如显示正在运行 vllm 拉取远程模型），请检查 BFCL 版本是否最新，并确认命令行中参数之间是否有空格遗漏（例如 `simple--local` 应为 `simple 
--local`）。","https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fissues\u002F1003",[178,183,188,193,198,203,208,213],{"id":179,"version":180,"summary_zh":181,"released_at":182},71088,"v1.3","## Highlights\r\n\r\n🏆 Stable release of Berkeley Function Calling Leaderboard V3 with Multi-step and Multi-turn function call evaluation\r\n\r\n## What's Changed\r\n* Gorilla README and repo structure revamp by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F799\r\n* [BFCL] Fix `live_parallel_multiple_9-8-0` copy-paste issue by @pkesseli in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F865\r\n* [BFCL] Fix Typo in `multi_turn_base_34` Ground Truth by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F876\r\n* Adding New Model Haha-7B by @ZydHaha in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F858\r\n* [BFCL Chore] Implement `retry_with_backoff` for Amazon Nova Handler by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F880\r\n* [BFCL] Fix `live_simple_183-108-0` by @pkesseli in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F872\r\n* [BFCL] Fix live_simple_165-98-0 by @pkesseli in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F871\r\n* [BFCL] Fix `live_simple_44-18-0` and `live_simple_45-18-1` by @pkesseli in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F870\r\n* [BFCL] Fix Nova Handler for Consecutive User Prompt Issue by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F881\r\n* Add support for QwQ and Sky-T1-32B-Preview by @SumanthRH in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F888\r\n* add handler for Bielik by @dominikabasaj in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F887\r\n* [BFCL Chore] Align 
Score File `id` with Result File Test Case IDs by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F893\r\n* Fix minor typo in default system prompt without func by @canyon289 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F895\r\n* Falcon3 support by @kirill-fedyanin in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F894\r\n* [BFCL] Update tool construction for Palmyra models by @samjulien in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F897\r\n* Added compute_exchange_rate to multi_turn_base entry 180 ground truth by @Raymond112514 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F892\r\n* [BFCL] Add New Model `o3-mini-2025-01-31` and `o3-mini-2025-01-31-FC` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F898\r\n* Add CALM models by @jgreer013 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F900\r\n* [BFCL] Add New Model `gemini-2.0-flash-001`, `gemini-2.0-flash-lite-preview-02-05`, `gemini-2.0-pro-exp-02-05`. 
by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F902\r\n* chore: added snippet for hf datasets compatibility by @alt-glitch in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F906\r\n* Update model_metadata.py by @jgreer013 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F907\r\n* Rename CALM to CoALM by @jgreer013 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F913\r\n* Bitagent 8b submission by @VectorForger in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F917\r\n* Bitagent 8b Metadata Change by @VectorForger in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F919\r\n* [BFCL] Add New Model `gpt-4.5-preview-2025-02-27`, `gpt-4.5-preview-2025-02-27-FC` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F922\r\n* [BFCL] fix bug in how score_dir is handled for bfcl evaluate by @liamcli in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F924\r\n* [BFCL] Add New Model `DeepSeek-R1` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F901\r\n* Make all import paths absolute. by @fvisin in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F935\r\n* Move logic to eval a task in a separate function. 
by @fvisin in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F933\r\n* Fix Gorilla Paper `requirements.txt` Location to Remove Global Dependency Confusion by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F937\r\n* [BFCL] Add _unused Suffix to Unused Dataset Files in the BFCL Benchmark by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F938\r\n* [BFCL] Support Local Inference for `deepseek-ai\u002FDeepSeek-R1` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F926\r\n* [BFCL] Add Support for `Qwen2.5` Models in Function Calling Mode by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F925\r\n* [BFCL] Add New Model `claude-3-7-sonnet-20250219`, `claude-3-7-sonnet-20250219-FC` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F923\r\n* [BFCL] Add handler and meta info for ToolACE-2-8B by @XuHwang in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F941\r\n* [BFCL] Reorganized All `constant.py` Files to a `constants` Folder by @catherineruoxiwu in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F944\r\n* [BFCL] Add New Models `gemini-2.0-flash-lite-001`, `gemini-2.0-flash-thinking-exp-01-21` by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F942\r\n* [BFCL] Add Google `Gemma-3` Series Models by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F939\r\n* [BFCL] Move `model_metadata.py` to `constants` folder by @catherineruoxiwu in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F949\r\n* Add Cohere Command A by @harry-cohere in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F951\r","2025-07-17T21:10:14",{"id":184,"version":185,"summary_zh":186,"released_at":187},71089,"v1.2","## Highlights\r\n\r\n🏆 Berkeley Function Calling Leaderboard V3 with Multi-step and Multi-turn function call evaluation\r\n\r\n## What's Changed\r\n* [BFCL] Package the Codebase by @devanshamin in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F565\r\n* Added python script named as raft_local.py to raft directory to run script completely locally using HF models by @himanshushukla12 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F605\r\n* RAFT Enhancements: Improved robustness, logging, checkpointing, threading, Llama support, Azure auth and eval by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F604\r\n* Fix\u002Fmerge commit #605 and #604 by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F609\r\n* Fix issue #614: [BFCL] ModuleNotFoundError after commit 70d6722 by @kobe0938 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F615\r\n* Fix some bugs in test case prompts\u002Fground truths by @aw632 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F608\r\n* [BFCL] Dataset and Possible Answer Fix by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F600\r\n* Add Salesforce xLAM model series by @zuxin666 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F616\r\n* Update gemini_handler.py to better handle NL+FC model output by @vandyxiaowei in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F617\r\n* [BFCL] Fix Decoding Issue in Nvidia Handler by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F623\r\n* [BFCL] Fix Llama Handler by @HuanzhiMao in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F626\r\n* [BFCL] add MadeAgents\u002FHammer-7b handler by @linqq9 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F627\r\n* [BFCL] Refactor Model Handler into OSS and Proprietary Components by @devanshamin in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F612\r\n* [BFCL] Hot Fix to Remove Extra Parameters for NoAPIKeyError by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F636\r\n* fix: bug for glm prompt format by @zhangch-ss in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F638\r\n* [BFCL] Add New Model `o1-preview-2024-09-12` and `o1-mini-2024-09-12`  by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F635\r\n* [BFCL] BFCL v3 by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F644\r\n* removed unnecessary comments in raft\u002Fraft_local.py by @himanshushukla12 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F654\r\n* [BFCL] Chore: Separate Change Log. 
by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F648\r\n* [BFCL] Bug Fix inference_single_turn_FC function for base_handler by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F656\r\n* [BFCL] Bug Fix parse_nested_value function for model_handler utils by @VishnuSuresh27 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F660\r\n* added Phi-3 handlers by @AndyChenYH in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F640\r\n* Update agent arena frontend and evals by @NithikYekollu in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F666\r\n* [BFCL] Speed Up Locally-hosted Model Inference Process by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F671\r\n* [BFCL] Fix Hanging Inference for OSS Models on GPU Platforms by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F663\r\n* [BFCL] Add gemini-1.5-pro-002, gemini-1.5-pro-002-FC, gemini-1.5-pro-001, gemini-1.5-pro-001-FC, gemini-1.5-flash-002, gemini-1.5-flash-002-FC, gemini-1.0-pro-002, gemini-1.0-pro-002-FC by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F658\r\n* [BFCL] Add Llama-3.2-1B-Instruct, Llama-3.2-3B-Instruct, Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F657\r\n* [BFCL] Add ToolACE handler for BFCL-v3 by @XuHwang in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F653\r\n* Add Qwen handler and fix mean_latency calculation error for OSS models by @zhangch-ss in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F642\r\n* update README.md by @leosun12 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F669\r\n* [BFCL] Chore: Various Improvements and 
Adjustments by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F673\r\n* [BFCL] Chore: Refactor File Path Handling and Automate apply_function_credential_config.py by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F675\r\n* docs: update README.md by @eltociear in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F676\r\n* [BFCL-v3]  Multi-Turn Possible Answer Order Change by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F679\r\n* update hammer handler and  add Hammer2.0 model by @linqq9 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F667\r\n* [BFCL] Chore: Improve Multi Turn Error Logs by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F689\r\n* Update google-cloud-aiplatform dependency by @jieru-hu in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F677\r\n* add minicpm3 4b by @Cppowboy in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F633\r\n* [BFCL-v2] Dataset and Possible Answer Fix by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F661\r\n* [BFCL] Add Gemma-2 models by @jacovki","2025-01-05T04:39:54",{"id":189,"version":190,"summary_zh":191,"released_at":192},71090,"v1.1","## Highlights\r\n🏆 Berkeley Function Calling Leaderboard V2 along with Live data \r\n\r\n## What's Changed\r\n* Added Agent Arena Frontend Client to Gorilla Repository  by @NithikYekollu in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F586\r\n* [BFCL] Add BFCL_V2_Live Dataset by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F580\r\n* Create an issue template for BFCL by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F599\r\n* [BFCL] Relocate Formatting Instructions 
and Function Documentation to System Prompt by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F593\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fcompare\u002Fv1.0...v1.1","2024-08-27T06:13:22",{"id":194,"version":195,"summary_zh":196,"released_at":197},71091,"v1.0","## Highlights\r\n\r\n🏆 We are thrilled to announce the stable v1.0 release of the Berkeley Function Calling Leaderboard data-set and eval-pipeline! A heartfelt thank you to all our contributors and users for your enthusiastic engagement and support throughout v1. We are just getting started! Buckle-up for v2 🚀 🚀 🚀\r\n\r\n## What's Changed\r\n* better handle float value comparison by @vandyxiaowei in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F407\r\n* Bump pymysql from 1.1.0 to 1.1.1 in \u002Fgoex by @dependabot in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F453\r\n* Fixes For NexusHandler by @VenkatKS in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F437\r\n* [BFCL] PR#407 Evaluation Pipeline Robustness Patch by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F462\r\n* Add firefunction-v2 to the leaderboard by @pgarbacki in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F470\r\n* [BFCL] Add Claude 3.5 Sonnet Function Calling Inference by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F480\r\n* [BFCL] Standardize Model Name Among handler_map and eval_runner_helper by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F439\r\n* Remove redundant tokens from GPT-handler by @hellovai in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F490\r\n* [GoEx] Undo Minor Bug Fix + README Minor Improvement by @royh02 in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F468\r\n* [BFCL] Add ability to evaluate Nemotron-4-340B-Instruct by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F489\r\n* fix some data issues in parallel\u002Fparallel multiple answers by @vandyxiaowei in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F423\r\n* [BFCL] Add Support for GLM-4-9B function calling inference by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F474\r\n* [BFCL] Sanity check is now optional by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F496\r\n* [BFCL] Improved tree-sitter java, javascript installation by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F505\r\n* [BFCL] Fix Possible Answer for AST Parallel and Parallel_Multiple Category by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F503\r\n* [BFCL] Add Test Dataset to Repository by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F504\r\n* [BFCL] Support Category-Specific Generation for OSS Model, Remove eval_data_compilation Step by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F512\r\n* [BFCL] Fix Double-Casting Issue in model_handler for Java and JS category.  
by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F516\r\n* [BFCL] Fix Dataset Issue for executable_parallel_multiple Category by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F522\r\n* [BFCL] add ibm-granite-20b-functioncallling model by @MayankAgarwal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F525\r\n* [BFCL] Overhaul apply_function_credential_config.py for Enhanced Usability by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F508\r\n* Fixed the warning message \"Setting `pad_token_id` to `eos_token_id`:1… by @dineshkumarsarangapani in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F110\r\n* [BFCL] Specify package version in requirements.txt by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F515\r\n* [BFCL] Standardize TEST_CATEGORY Among eval_runner.py and openfunctions_evaluation.py by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F506\r\n* fix line return by @fantasist in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F531\r\n* [BFCL] Apply Fix to Newly Introduced Model Handler Missed in Previous PR Merge by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F536\r\n* [RAFT] Fix Datapoint Field in Formatter for Data Generation by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F535\r\n* [BFCL] Fix language_specific_pre_processing for Java and JavaScript Test Category by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F538\r\n* [BFCL] Patch Generation Script for Locally Hosted OSS model by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F537\r\n* [BFCL] Support Multi-Model Multi-Category Generation; Add Index 
to Dataset; Handle vLLM Benign Error by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F540\r\n* Add NousResearch\u002F{Hermes-2-Pro-Llama-3-8B,Hermes-2-Theta-Llama-3-8B}  models by @alonsosilvaallende in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F542\r\n* [BFCL] Fix Dataset Pre-Processing for Java and JavaScript Test Category, Part 2 by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F545\r\n* Add Salesforce xLAM handler and fix minor issues by @zuxin666 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F532\r\n* Add NousResearch\u002FHermes-2-{Pro-Llama-3-80B,Theta-Llama-3-80B} by @alonsosilvaallende in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F556\r\n* Add Yi Handler by @fantasist in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F543\r\n* Add more descriptive error message in eval_runner.py by @alonsosilvaallende in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F552\r\n* [BFC","2024-08-15T04:35:13",{"id":199,"version":200,"summary_zh":201,"released_at":202},71092,"v0.3","😍 v0.3 release 🚀\r\n\r\n## Highlights\r\n\r\n⚡️ Released GoEx: A runtime that presents abstractions for safe execution of LLM generated code, APIs, actions, etc\r\n\r\n🏆 Updates to Berkeley Function Calling Leaderboard (aka Berkeley Tool Calling Leaderboard) : Newer models including GPT-4o, gemini-flash and 1.5-pro, Hermes-2-Pro, etc. All measured along P95 and P99 latency, and costs besides accuracy.\r\n\r\n## What's Changed\r\n* Fix Typos in Evaluation Script and System Prompt. 
Identify Errors in a Dataset by @zuxin666 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F335\r\n* BFCL April 8th Release by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F330\r\n* Initial goex commit by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F336\r\n* BFCL April 9th Release (Dataset Bug Fix) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F338\r\n* BFCL April 10th Release (API Sanity Check) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F339\r\n* Add Support for NousResearch\u002FHermes-2-Pro-Mistral-7B Function Calling by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F327\r\n* Update raft.py with default `p` to match paper by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F353\r\n* GoEx Import Issues by @royh02 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F354\r\n* BFCL April 11th Patch. Add Latency Statistics. 
by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F347\r\n* GoEx Gitignore User Credentials by @royh02 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F344\r\n* Fix Circular Import Issue for BFCL evluation pipeline by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F356\r\n* Added Docker to README by @Noppapon in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F355\r\n* [Bug fix] Add Hermes-2-Pro-Mistral-7B model to UNDERSCORE_TO_DOT to parse API properly  by @JasonZhu1313 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F364\r\n* Update requirements.txt by @viniciuslazzari in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F343\r\n* Fix script argument by @ricklamers in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F367\r\n* BFCL April 16th Release by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F366\r\n* Log error messages from API validation by @eitanturok in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F369\r\n* Update .gitignore by @eitanturok in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F370\r\n* BFCL April 18th Release (Pipeline only) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F375\r\n* Add missing argument to `OSSHandler`'s `_format_prompt` function by @eitanturok in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F373\r\n* Add FC + Prompt for Cohere command-r-plus by @harry-cohere in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F350\r\n* BFCL April 19th Release (Dataset & Pipeline) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F377\r\n* Azure OpenAI support in raft.py by @cedricvidal in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F381\r\n* BFCL April 25th Release (New Models) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F386\r\n* Colored logging configuration + displaying progress in logs by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F384\r\n* BFCL April 27th Release (Bug Fix in Cost\u002FLatency Calculation) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F390\r\n* BFCL April 28th Release (New Model: snowflake\u002Farctic) by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F397\r\n* RAFT Recovery Mode for interruptions  by @kaiwen129 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F410\r\n* Small corrections to possible_answers for simple test category by @aastroza in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F405\r\n* BFCL May 6th Release (Dataset Bug Fix) by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F412\r\n* RAFT DevContainer for GitHub Codespaces by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F379\r\n* RAFT Add support for configuring separate completion and embedding endpoints + pytest by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F396\r\n* RAFT Fix arbitrary code execution vulnerability in checkpoint feature by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F415\r\n* handle parallel function calls from gemini by @vandyxiaowei in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F406\r\n* RAFT Support for chat and completion model formats by @cedricvidal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F417\r\n* [RAFT] Edit encode prompt to 
include `\u003CANSWER>:` tag in label by @kaiwen129 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F422\r\n* [BFCL] Patch Gemini Handler  by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F421\r\n* BFCL May 14th Release (GPT-4o and Gemini) by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F426\r\n* [BFCL] update tree_sitter version in requirements.txt by @justinwangx in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F433\r\n* Fix indentation in leaderboard README by @polm-stability in https:\u002F\u002Fgithub.com\u002FShi","2024-06-05T05:43:19",{"id":204,"version":205,"summary_zh":206,"released_at":207},71093,"v0.2","😍 v0.2 release 🚀\r\n\r\n## Highlights\r\n\r\n🎯 Berkeley Function Calling Leaderboard (BFCL): How do models stack up for function calling? \r\n - Now includes latency and cost  \r\n - More open-source and closed-source models\r\n - Bug fixes in dataset. 
\r\n \r\nRAFT: Fine-tuning technique to improve LLMs for in-domain RAG!\r\n\r\n\r\n## What's Changed\r\n* Adding APIs of 9 Google Service to API Zoo by @meenakshi-mittal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F204\r\n* Github Actions to Maintain API Zoo Index by @ramanv0 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F188\r\n* Adding Zoom API to API Zoo by @meenakshi-mittal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F221\r\n* API Zoo Index Github Actions Fix by @ramanv0 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F261\r\n* Added Google Forms API by @elva01 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F185\r\n* RAFT + readme + small sample dataset by @kaiwen129 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F218\r\n* Sample data for RAFT by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F264\r\n* Docusign Additions by @dangeo773 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F194\r\n* [Bug Fix] Fix Executable Exact Match Condition Did not Meet by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F251\r\n* [Bug Fix] Fix Error in Parallel Function Possible Answer by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F252\r\n* [Bug Fix] Restrict AST checker on Boolean Variable by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F256\r\n* Adding 7 Oracle APIs to API Zoo by @meenakshi-mittal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F205\r\n* Adding Datadog API to API Zoo by @meenakshi-mittal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F206\r\n* Added Notion APIs (Block, Page, and Database) to APIZoo by @jennifer818 in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F195\r\n* removed testing code by @kaiwen129 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F281\r\n* feat: more type annotations for the functions by @UponTheSky in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F283\r\n* [Fix] java, javascript parsers in openfunctions-v2 by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F284\r\n* Leaderboard Update April 1 by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F299\r\n* Remove Large File from `.\u002Finference` by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F297\r\n* Typo in raft.py by @danielfleischer in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F311\r\n* Leaderboard April 3 release by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F309\r\n* Support OSS Evaluation for Leaderboard by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F318\r\n* Update README.md by @HuanzhiMao in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F320\r\n* Fix typos by @viniciuslazzari in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F323\r\n* Correction in BFCL README instruction, fixed path in instructions by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F329\r\n\r\n## New Contributors\r\n* @elva01 made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F185\r\n* @kaiwen129 made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F218\r\n* @jennifer818 made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F195\r\n* @UponTheSky 
made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F283\r\n* @danielfleischer made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F311\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fcompare\u002Fv0.1...v0.2","2024-04-11T03:38:11",{"id":209,"version":210,"summary_zh":211,"released_at":212},71094,"v0.1","😍 v0.1 release 🚀\r\n\r\n## Highlights\r\n* :dart: Berkeley Function Calling Leaderboard (BFCL): How do models stack up for function calling? Evaluation code for the Berkeley Function Calling Leaderboard. \r\n* :trophy: Gorilla OpenFunctions v2: Inference examples for OpenFunctions-v2 - SoTA open-source LLM for function calling. On par with GPT-4 :raised_hands: Supports more languages :ok_hand:. \r\n* API Zoo Index: An accessible collection of API documentation for humans to search through, and for LLMs to use as tools 👀 \r\n\r\nWe are excited about our long-overdue v0.1 release! 
Here's more: \r\n\r\n## What's Changed\r\n* Adding BM25 and GPT retrievers by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F61\r\n* update(anthropic):  #63 to (0.3.x) by @AmirAflak in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F64\r\n* Add inference support for Macbook silicon chip by @benjaminhuo in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F76\r\n* Update README.md by @eltociear in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F80\r\n* PR for Gradio WebUI Feature ([feature] Gradio webui - #102) by @TanmayDoesAI in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F105\r\n* Update README.md by @abhi-databricks in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F109\r\n* Adds wandb to eval files by @morganmcg1 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F114\r\n* Fix use_wandb in ast eval, responses file deletion, wandb artifacts renaming by @morganmcg1 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F115\r\n* sentence optimization in docstring and examples by @rajveer43 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F117\r\n* Gorilla OpenFunctions by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F142\r\n* Example on running it locally with Hugging Face 🤗 Transformers by @Danielskry in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F148\r\n* Added Gmail api to api zoo by @saikolasani in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F163\r\n* Add Google Maps API (python client) by @felixzhu555 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F164\r\n* Add support for the OpenWeatherMap API by @aryanvichare in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F159\r\n* Stripe Additions by @dangeo773 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F169\r\n* Added Kubernetes Pod API and Pod Template API by @saikolasani in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F170\r\n* Quantized Gorilla by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F160\r\n* Add a guide on how to self-host the OpenFunctions model by @ramanv0 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F157\r\n* Private Inference using Gorilla hosted endpoint on Replicate by @ramanv0 in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F162\r\n* added yfinance api to api zoo by @raywanb in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F161\r\n* Gorilla OpenFunctions run locally in Google Colab by @meenakshi-mittal in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F166\r\n* Fixed issue with Kubernetes Pod\u002FPod Template filename by @saikolasani in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F198\r\n* Create openfunctions-v2 issue template by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F203\r\n* Add support for the ServiceNow REST API by @aryanvichare in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F176\r\n* Berkeley Function Calling Leaderboard evaluation scripts and OpenFunctions v2 inference  by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F215\r\n* [Berkeley-Function-Calling-Leaderboard] Refactor leaderboard result generation and checking by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F223\r\n* Update openfunctions-v2 chatting format in README.md by @tianjunz in 
https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F239\r\n* Update BFCL README.md by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F241\r\n* Local Inference script for openfunctions v2 by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F242\r\n* [Update Gemini-1.0-Pro result checker] by @Fanjia-Yan in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F245\r\n* Update project roadmap and repository structure by @ShishirPatil in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F257\r\n\r\n## New Contributors\r\n* @AmirAflak made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F64\r\n* @benjaminhuo made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F76\r\n* @TanmayDoesAI made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F105\r\n* @abhi-databricks made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F109\r\n* @morganmcg1 made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F114\r\n* @rajveer43 made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F117\r\n* @Danielskry made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F148\r\n* @saikolasani made their first contribution in https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fpull\u002F163\r\n* @felixzhu555 made their first contribution in https:\u002F\u002Fgithub","2024-03-12T07:56:58",{"id":214,"version":215,"summary_zh":216,"released_at":217},71095,"v0.0.1","🦍 Gorilla: An API store for LLMs 🚀 \r\n\r\n🚀  After 50,000 user requests through our hosted APIs, we are happy to 
share the first release for Gorilla 💪 \r\n\r\n🤩 In this release:\r\n\r\n💻 [gorilla-cli](https:\u002F\u002Fgithub.com\u002Fgorilla-llm\u002Fgorilla-cli), LLMs for your CLI!\r\n🟢 Commercially usable, Apache 2.0 licensed Gorilla models\r\n🚀  [CLI interface](https:\u002F\u002Fgithub.com\u002FShishirPatil\u002Fgorilla\u002Fblob\u002Fmain\u002Finference\u002FREADME.md) to chat with Gorilla!\r\n🚀  Torch Hub and TensorFlow Hub Models!\r\n🚀 The first Gorilla model! Colab or [🤗](https:\u002F\u002Fhuggingface.co\u002Fgorilla-llm\u002Fgorilla-7b-hf-delta-v0)!\r\n🔥 APIZoo contribution guide for community API contributions!\r\n🔥 APIBench dataset and the evaluation code of Gorilla!\r\n","2023-07-18T08:14:15"]