[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-brainlid--langchain":3,"similar-brainlid--langchain":187},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":14,"owner_website":17,"owner_url":21,"languages":22,"stars":27,"forks":28,"last_commit_at":29,"license":30,"difficulty_score":31,"env_os":32,"env_gpu":32,"env_ram":32,"env_deps":33,"category_tags":39,"github_topics":44,"view_count":52,"oss_zip_url":17,"oss_zip_packed_at":17,"status":53,"created_at":54,"updated_at":55,"faqs":56,"releases":86},1015,"brainlid\u002Flangchain","langchain","Elixir implementation of a LangChain style framework that lets Elixir projects integrate with and leverage LLMs.","langchain 是一个专为 Elixir 语言打造的开源框架，灵感来源于流行的 LangChain 体系，旨在帮助 Elixir 项目轻松集成和利用大型语言模型（LLM）。单独使用 LLM 往往不足以构建强大的应用，langchain 解决了如何将语言模型与其他数据源、计算过程或服务连接起来的问题，让应用具备“数据感知”和“代理”能力。\n\nlangchain 适合熟悉 Elixir 生态的开发者，特别是希望在函数式编程环境中构建 AI 驱动应用的人群。langchain 支持 OpenAI、Anthropic、Google Gemini 等多种主流 AI 服务及本地模型（如 Ollama）。与 Python 或 JavaScript 版本不同，langchain 遵循 Elixir 的函数式编程范式，不强行套用面向对象设计，提供模块化组件和预制链式流程，方便定制与扩展。无论是快速搭建原型还是开发复杂应用，langchain 都能让 Elixir 开发者更高效地驾驭 AI 能力，将语言模型真正融入业务逻辑之中。","[![Elixir CI](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Factions\u002Fworkflows\u002Felixir.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Factions\u002Fworkflows\u002Felixir.yml)\n[![Module Version](https:\u002F\u002Fimg.shields.io\u002Fhexpm\u002Fv\u002Flangchain.svg)](https:\u002F\u002Fhex.pm\u002Fpackages\u002Flangchain)\n[![Hex Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fhex-docs-lightgreen.svg)](https:\u002F\u002Fhexdocs.pm\u002Flangchain)\n\n# ![Logo with chat chain 
links](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbrainlid_langchain_readme_4c41ef6900c6.png) Elixir LangChain\n\nElixir LangChain enables Elixir applications to integrate AI services and self-hosted models.\n\n**Currently supported AI services:**\n\n| Model | v0.3.x | v0.5.x |\n|-------|---------|---------|\n| OpenAI ChatGPT | ✓ | ✓ |\n| OpenAI DALL-e 2 (image generation) | ✓ | ? |\n| Anthropic Claude | ✓ | ✓ |\n| Anthropic Claude (thinking) | X | ✓ |\n| xAI Grok**** | X | ✓ |\n| Google Gemini | ✓ | ✓ |\n| Google Vertex AI* | ✓ | X |\n| Ollama | ✓ | ? |\n| Mistral | ✓ | X |\n| Bumblebee self-hosted models** | ✓ | ? |\n| LMStudio*** | ✓ | ? |\n| Perplexity | ✓ | ✓ |\n\n- *Google Vertex AI is Google's enterprise offering\n- **Bumblebee self-hosted models - including Llama, Mistral and Zephyr\n- ***[LMStudio](https:\u002F\u002Flmstudio.ai\u002Fdocs\u002Fapi\u002Fendpoints\u002Fopenai) via their OpenAI compatibility API\n- ****xAI Grok models including Grok-4, Grok-3-mini, Grok-4 Heavy (multi-agent)\n\n**LangChain** is short for Language Chain. An LLM, or Large Language Model, is the \"Language\" part. This library makes it easier for Elixir applications to \"chain\" or connect different processes, integrations, libraries, services, or functionality together with an LLM.\n\n**LangChain** is a framework for developing applications powered by language models. It enables applications that are:\n\n- **Data-aware:** connect a language model to other sources of data\n- **Agentic:** allow a language model to interact with its environment\n\nThe main value props of LangChain are:\n\n1. **Components:** abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not\n1. 
**Off-the-shelf chains:** a structured assembly of components for accomplishing specific higher-level tasks\n\nOff-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.\n\n## What is this?\n\nLarge Language Models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library is aimed at assisting in the development of those types of applications.\n\n## Documentation\n\nThe online documentation can be [found here](https:\u002F\u002Fhexdocs.pm\u002Flangchain).\n\n## Demo\n\nCheck out the [demo project](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain_demo) that you can download and review.\n\n## Relationship with JavaScript and Python LangChain\n\nThis library is written in [Elixir](https:\u002F\u002Felixir-lang.org\u002F) and intended to be used with Elixir applications. The original libraries are [LangChain JS\u002FTS](https:\u002F\u002Fjs.langchain.com\u002F) and [LangChain Python](https:\u002F\u002Fpython.langchain.com\u002F).\n\nThe JavaScript and Python projects aim to integrate with each other as seamlessly as possible. The intended integration is so strong that all objects (prompts, LLMs, chains, etc) are designed in a way where they can be serialized and shared between the two languages.\n\nThis Elixir version does not aim for parity with the JavaScript and Python libraries. Why not?\n\n- JavaScript and Python are both Object Oriented languages. Elixir is Functional. We're not going to force a design that doesn't apply.\n- The JS and Python versions started before conversational LLMs were standard. 
They put a lot of effort into preserving history (like a conversation) when the LLM didn't support it. We're not doing that here.\n\nThis library was heavily inspired by, and based on, the way the JavaScript library actually worked and interacted with an LLM.\n\n## Installation\n\n**Requirements:** Elixir 1.17 or higher\n\nThe package can be installed by adding `langchain` to your list of dependencies\nin `mix.exs`:\n\n```elixir\ndef deps do\n  [\n    {:langchain, \"~> 0.6.0\"}\n  ]\nend\n```\n\n## Configuration\n\nCurrently, the library is written to use the `Req` library for making API calls.\n\nYou can configure an _organization ID_, and _API key_ for OpenAI's API, but this library also works with [other compatible APIs](#alternative-openai-compatible-apis) as well as other services and even [local models running on Bumblebee](#bumblebee-chat-support).\n\n`config\u002Fruntime.exs`:\n\n```elixir\nconfig :langchain, openai_key: System.fetch_env!(\"OPENAI_API_KEY\")\nconfig :langchain, openai_org_id: System.fetch_env!(\"OPENAI_ORG_ID\")\n# OR\nconfig :langchain, openai_key: \"YOUR SECRET KEY\"\nconfig :langchain, openai_org_id: \"YOUR_OPENAI_ORG_ID\"\n\nconfig :langchain, :anthropic_key, System.fetch_env!(\"ANTHROPIC_API_KEY\")\nconfig :langchain, :xai_api_key, System.fetch_env!(\"XAI_API_KEY\")\n```\n\nIt's possible to use a function or a tuple to resolve the secret:\n\n```elixir\nconfig :langchain, openai_key: {MyApp.Secrets, :openai_api_key, []}\nconfig :langchain, openai_org_id: {MyApp.Secrets, :openai_org_id, []}\n# OR\nconfig :langchain, openai_key: fn -> System.fetch_env!(\"OPENAI_API_KEY\") end\nconfig :langchain, openai_org_id: fn -> System.fetch_env!(\"OPENAI_ORG_ID\") end\n```\n\nThe API keys should be treated as secrets and not checked into your repository.\n\nFor [fly.io](https:\u002F\u002Ffly.io), adding the secrets looks like this:\n\n```\nfly secrets set OPENAI_API_KEY=MyOpenAIApiKey\nfly secrets set ANTHROPIC_API_KEY=MyAnthropicApiKey\nfly secrets 
set XAI_API_KEY=MyXaiApiKey\n```\n\nA list of models to use:\n\n- [Anthropic Claude models](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fabout-claude\u002Fmodels)\n- [Anthropic models on AWS Bedrock](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fapi\u002Fclaude-on-amazon-bedrock#accessing-bedrock)\n- [OpenAI models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels)\n- [OpenAI models on Azure](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fmodels)\n- [xAI Grok models](https:\u002F\u002Fdocs.x.ai\u002Fdocs\u002Fmodels)\n- [Gemini AI models](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs\u002Fmodels\u002Fgemini)\n\n## Prompt caching\n\nChatGPT, Claude, and DeepSeek all offer prefix-based prompt caching, which can provide cost and performance benefits for longer prompts. Gemini offers context caching, which is similar.\n\n- [ChatGPT's prompt caching](https:\u002F\u002Fopenai.com\u002Findex\u002Fapi-prompt-caching\u002F) is automatic for prompts longer than 1024 tokens, caching the longest common prefix.\n- [Claude's prompt caching](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-caching) is not automatic. Its prefix caching processes tools, then the system prompt, then messages, in that order, up to and including the block designated with `{\"cache_control\": {\"type\": \"ephemeral\"}}`. See `LangChain.ChatModels.ChatAnthropicTest` for an example.\n- [DeepSeek's prompt caching](https:\u002F\u002Fapi-docs.deepseek.com\u002Fguides\u002Fkv_cache) provides automatic caching for repeated prompts and system messages, helping reduce costs and improve response times for longer conversations.\n- [Gemini's context caching](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs\u002Fcaching?lang=python) requires a separate call which is not supported by LangChain.\n\n## Usage\n\nThe central module in this library is `LangChain.Chains.LLMChain`. 
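For instance, a minimal exchange might look like this (a sketch assembled from the pieces shown elsewhere in this README; it assumes the OpenAI key configured above):\n\n```elixir\nalias LangChain.Chains.LLMChain\nalias LangChain.ChatModels.ChatOpenAI\nalias LangChain.Message\nalias LangChain.Utils.ChainResult\n\n# build a chain around the configured OpenAI chat model,\n# add one user message, and run it\n{:ok, updated_chain} =\n  LLMChain.new!(%{llm: ChatOpenAI.new!()})\n  |> LLMChain.add_message(Message.new_user!(\"Say hello!\"))\n  |> LLMChain.run()\n\n# print the assistant's reply\nIO.puts(ChainResult.to_string!(updated_chain))\n```\n\n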
Most other pieces are either inputs to this, or structures used by it. For understanding how to use the library, start there.\n\n### xAI Grok Support\n\nLangChain supports all xAI Grok models including the advanced Grok-4 variants:\n\n```elixir\nalias LangChain.ChatModels.ChatGrok\nalias LangChain.Chains.LLMChain\nalias LangChain.Message\n\n# Basic Grok-4 usage\n{:ok, grok} = ChatGrok.new(%{model: \"grok-4\", temperature: 0.7})\n\n{:ok, chain} =\n  LLMChain.new!(%{llm: grok})\n  |> LLMChain.add_message(Message.new_user!(\"Explain quantum computing\"))\n  |> LLMChain.run()\n\n# Fast and efficient Grok-3-mini\n{:ok, mini_grok} = ChatGrok.new(%{model: \"grok-3-mini\", temperature: 0.8})\n```\n\nGrok models offer unique capabilities:\n- **130K+ context window** for extensive conversations\n- **Multi-agent reasoning** (Grok-4 Heavy) where multiple agents collaborate\n- **Advanced reasoning mode** with first-principles thinking\n- **Specialized coding support** (Grok-4 Code)\n- **Multimodal capabilities** including vision and image analysis\n\n### Exposing a custom Elixir function to ChatGPT\n\nA really powerful feature of LangChain is making it easy to integrate an LLM into your application and expose features, data, and functionality _from_ your application to the LLM.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbrainlid_langchain_readme_fbb42c7e198d.png\" style=\"text-align: center;\" width=50% height=50% alt=\"Diagram showing LLM integration to application logic and data through a LangChain.Function\">\n\nA `LangChain.Function` bridges the gap between the LLM and our application code. 
We choose what to expose and, using `context`, we can ensure any actions are limited to what the user has permission to do and access.\n\nFor an interactive example, refer to the project [Livebook notebook \"LangChain: Executing Custom Elixir Functions\"](notebooks\u002Fcustom_functions.livemd).\n\nThe following is an example of a function that receives parameter arguments.\n\n```elixir\nalias LangChain.Function\nalias LangChain.Message\nalias LangChain.Chains.LLMChain\nalias LangChain.ChatModels.ChatOpenAI\nalias LangChain.Utils.ChainResult\n\n# map of data we want to be passed as `context` to the function when\n# executed.\ncustom_context = %{\n  \"user_id\" => 123,\n  \"hairbrush\" => \"drawer\",\n  \"dog\" => \"backyard\",\n  \"sandwich\" => \"kitchen\"\n}\n\n# a custom Elixir function made available to the LLM\ncustom_fn =\n  Function.new!(%{\n    name: \"custom\",\n    description: \"Returns the location of the requested element or item.\",\n    parameters_schema: %{\n      type: \"object\",\n      properties: %{\n        thing: %{\n          type: \"string\",\n          description: \"The thing whose location is being requested.\"\n        }\n      },\n      required: [\"thing\"]\n    },\n    function: fn %{\"thing\" => thing} = _arguments, context ->\n      # our context is a pretend item\u002Flocation map\n      {:ok, context[thing]}\n    end\n  })\n\n# create and run the chain\n{:ok, updated_chain} =\n  LLMChain.new!(%{\n    llm: ChatOpenAI.new!(),\n    custom_context: custom_context,\n    verbose: true\n  })\n  |> LLMChain.add_tools(custom_fn)\n  |> LLMChain.add_message(Message.new_user!(\"Where is the hairbrush located?\"))\n  |> LLMChain.run(mode: :while_needs_response)\n\n# print the LLM's answer\nIO.puts(ChainResult.to_string!(updated_chain))\n# => \"The hairbrush is located in the drawer.\"\n```\n\n### Alternative OpenAI compatible APIs\n\nThere are several services or self-hosted applications that provide an OpenAI compatible API for 
ChatGPT-like behavior. To use a service like that, the `endpoint` of the `ChatOpenAI` struct can be pointed to an API compatible `endpoint` for chats.\n\nFor example, if a locally running service provided that feature, the following code could connect to the service:\n\n```elixir\n{:ok, updated_chain} =\n  LLMChain.new!(%{\n    llm: ChatOpenAI.new!(%{endpoint: \"http:\u002F\u002Flocalhost:1234\u002Fv1\u002Fchat\u002Fcompletions\"}),\n  })\n  |> LLMChain.add_message(Message.new_user!(\"Hello!\"))\n  |> LLMChain.run()\n```\n\n### Bumblebee Chat Support\n\nBumblebee hosted chat models are supported. There is built-in support for Llama 2, Mistral, and Zephyr models.\n\nCurrently, function calling is only supported for Llama 3.1. JSON tool calling for Llama 2, Mistral, and Zephyr is NOT supported.\nThere is an example notebook in the notebook folder.\n\n    ChatBumblebee.new!(%{\n      serving: @serving_name,\n      template_format: @template_format,\n      receive_timeout: @receive_timeout,\n      stream: true\n    })\n\nThe `serving` is the module name of the `Nx.Serving` that is hosting the model.\n\nSee the [`LangChain.ChatModels.ChatBumblebee` documentation](https:\u002F\u002Fhexdocs.pm\u002Flangchain\u002FLangChain.ChatModels.ChatBumblebee.html) for more details.\n\n## Testing\n\nBefore you can run live API tests, you need to provide your API keys. 
Copy the example file and populate it with your values:\n\n```\ncp .env.example .env\n# Edit .env with your private API keys\n```\n\nThe `.env` file is gitignored and is loaded automatically by the test suite via [dotenvy](https:\u002F\u002Fhex.pm\u002Fpackages\u002Fdotenvy) — no shell setup or external tools required.\n\nTo run all the tests, including the ones that perform live calls against the AI service APIs, use the following commands:\n\n```\nmix test --include live_call\nmix test --include live_open_ai\nmix test --include live_ollama_ai\nmix test --include live_anthropic\nmix test --include live_mistral_ai\nmix test --include live_grok\nmix test --include live_vertex_ai\nmix test test\u002Ftools\u002Fcalculator_test.exs --include live_call\n```\n\nNOTE: This will use the configured API credentials, which creates billable events.\n\nOtherwise, running the following will only run local tests making no external API calls:\n\n```\nmix test\n```\n\nRunning a specific test directly will execute it even if it is tagged `live_call`, potentially creating a billable event.\n\n**Multi-modal support:**\n\nLangChain now supports multi-modal messages and tool results. This means you can include text, images, files, and even \"thinking\" blocks in a single message using ContentParts. See the module docs for details. Support for this depends on the LLM and service. Not all models support all modalities yet.\n\n## Evaluating Agent Behavior\n\nWhen building agent systems, the final answer is only part of the story. Two agents can produce the same answer through very different reasoning paths — one might make a single efficient tool call while another makes five redundant ones. 
LangChain provides `LangChain.Trajectory` to evaluate the *process*, not just the outcome.\n\nA trajectory captures the structured sequence of tool calls produced during an `LLMChain` run, enabling regression testing, cost control, safety verification, and debugging of agent workflows.\n\n### Capturing a Trajectory\n\nAfter running a chain, extract its trajectory:\n\n```elixir\nalias LangChain.Trajectory\n\n{:ok, chain} =\n  LLMChain.new!(%{llm: llm})\n  |> LLMChain.add_tools(my_tools)\n  |> LLMChain.add_message(Message.new_user!(\"What's the weather in Paris?\"))\n  |> LLMChain.run(mode: :while_needs_response)\n\ntrajectory = Trajectory.from_chain(chain)\ntrajectory.tool_calls\n#=> [%{name: \"search\", arguments: %{\"query\" => \"weather paris\"}},\n#    %{name: \"get_forecast\", arguments: %{\"city\" => \"Paris\"}}]\n```\n\n### Matching Tool Call Sequences\n\nUse `Trajectory.matches?\u002F3` to compare actual tool calls against expected patterns:\n\n```elixir\n# Strict: exact order and arguments\nTrajectory.matches?(trajectory, [\n  %{name: \"search\", arguments: %{\"query\" => \"weather paris\"}},\n  %{name: \"get_forecast\", arguments: %{\"city\" => \"Paris\"}}\n])\n\n# Wildcard arguments: pass nil to match any arguments\nTrajectory.matches?(trajectory, [\n  %{name: \"search\", arguments: nil},\n  %{name: \"get_forecast\", arguments: nil}\n])\n\n# Unordered: same calls in any order\nTrajectory.matches?(trajectory, expected, mode: :unordered)\n\n# Superset: actual contains at least all expected calls\nTrajectory.matches?(trajectory, [%{name: \"search\", arguments: nil}], mode: :superset)\n\n# Subset args: expected arguments are a subset of actual\nTrajectory.matches?(trajectory, expected, args: :subset)\n```\n\n### ExUnit Assertions\n\n`LangChain.Trajectory.Assertions` provides `assert_trajectory` and `refute_trajectory` macros with informative failure diffs:\n\n```elixir\nuse LangChain.Trajectory.Assertions\n\ntest \"agent calls the right tools in order\" do\n  
trajectory = Trajectory.from_chain(chain)\n\n  assert_trajectory trajectory, [\n    %{name: \"search\", arguments: %{\"query\" => \"weather\"}},\n    %{name: \"get_forecast\", arguments: nil}\n  ]\nend\n\ntest \"agent does not call dangerous tools\" do\n  trajectory = Trajectory.from_chain(chain)\n\n  refute_trajectory trajectory, [\n    %{name: \"delete_all\", arguments: nil}\n  ], mode: :superset\nend\n```\n\nBoth macros also accept an `LLMChain` directly, extracting the trajectory automatically.\n\n### Golden-File Testing\n\nSave a known-good trajectory and compare future runs against it to catch regressions:\n\n```elixir\n# Save the golden file\ngolden = chain |> Trajectory.from_chain() |> Trajectory.to_map()\nFile.write!(\"test\u002Ffixtures\u002Fweather_agent.json\", Jason.encode!(golden))\n\n# In your test\ngolden_map = \"test\u002Ffixtures\u002Fweather_agent.json\" |> File.read!() |> Jason.decode!()\nexpected = Trajectory.from_map(golden_map)\nactual = Trajectory.from_chain(chain)\n\nassert_trajectory actual, expected\n```\n\n### Inspecting Trajectories\n\nFilter and group tool calls for deeper analysis:\n\n```elixir\n# All calls to a specific tool\nTrajectory.calls_by_name(trajectory, \"search\")\n\n# Group calls by conversation turn\nTrajectory.calls_by_turn(trajectory)\n#=> [{0, [%{name: \"search\", ...}]}, {1, [%{name: \"get_forecast\", ...}]}]\n\n# Check aggregated token usage\ntrajectory.token_usage\n#=> %TokenUsage{input: 150, output: 45}\n\n# Check metadata\ntrajectory.metadata\n#=> %{model: \"gpt-4\", llm_module: LangChain.ChatModels.ChatOpenAI}\n```\n\nSee `LangChain.Trajectory` and `LangChain.Trajectory.Assertions` module docs for the full API reference.\n","[![Elixir CI](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Factions\u002Fworkflows\u002Felixir.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Factions\u002Fworkflows\u002Felixir.yml)\n[![Module 
Version](https:\u002F\u002Fimg.shields.io\u002Fhexpm\u002Fv\u002Flangchain.svg)](https:\u002F\u002Fhex.pm\u002Fpackages\u002Flangchain)\n[![Hex Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fhex-docs-lightgreen.svg)](https:\u002F\u002Fhexdocs.pm\u002Flangchain)\n\n# ![Logo with chat chain links](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbrainlid_langchain_readme_4c41ef6900c6.png) Elixir LangChain\n\nElixir LangChain 使 Elixir 应用程序能够将 AI 服务和自托管模型（self-hosted models）集成到应用中。\n\n**当前支持的 AI 服务：**\n\n| 模型 | v0.3.x | v0.5.x |\n|-------|---------|---------|\n| OpenAI ChatGPT | ✓ | ✓ |\n| OpenAI DALL-e 2（图像生成） | ✓ | ? |\n| Anthropic Claude | ✓ | ✓ |\n| Anthropic Claude（思考模式） | X | ✓ |\n| xAI Grok**** | X | ✓ |\n| Google Gemini | ✓ | ✓ |\n| Google Vertex AI* | ✓ | X |\n| Ollama | ✓ | ? |\n| Mistral | ✓ | X |\n| Bumblebee 自托管模型** | ✓ | ? |\n| LMStudio*** | ✓ | ? |\n| Perplexity | ✓ | ✓ |\n\n- *Google Vertex AI 是 Google 的企业级产品\n- **Bumblebee 自托管模型 - 包括 Llama、Mistral 和 Zephyr\n- ***[LMStudio](https:\u002F\u002Flmstudio.ai\u002Fdocs\u002Fapi\u002Fendpoints\u002Fopenai) 通过其 OpenAI 兼容 API（应用程序接口）\n- ****xAI Grok 模型，包括 Grok-4、Grok-3-mini、Grok-4 Heavy（多代理）\n\n**LangChain** 是 Language Chain 的缩写。LLM（大型语言模型，Large Language Model）是其中的\"Language\"部分。该库使 Elixir 应用程序更容易将不同的流程、集成、库、服务或功能与 LLM“链”接或连接在一起。\n\n**LangChain** 是一个用于开发由语言模型驱动的应用程序的框架。它使应用程序具备以下能力：\n\n- **数据感知（Data-aware）：** 将语言模型连接到其他数据源\n- **代理（Agentic）：** 允许语言模型与其环境交互\n\nLangChain 的主要价值主张是：\n\n1. **组件（Components）：** 用于使用语言模型的抽象，以及每个抽象的实现集合。无论您是否使用 LangChain 框架的其余部分，组件都是模块化且易于使用的\n1. 
**开箱即用的链（Off-the-shelf chains）：** 为实现特定高级任务而构建的结构化组件组装\n\n开箱即用的链让您轻松上手。对于更复杂的应用程序和细微的用例，组件可以轻松定制现有链或构建新链。\n\n## 这是什么？\n\n大型语言模型（LLM）正成为一种变革性技术，使开发人员能够构建以前无法构建的应用程序。但孤立地使用这些 LLM 通常不足以创建真正强大的应用程序——真正的力量来自于将它们与其他计算或知识来源相结合。\n\n本库旨在协助开发此类应用程序。\n\n## 文档\n\n在线文档可以[在此找到](https:\u002F\u002Fhexdocs.pm\u002Flangchain)。\n\n## 演示\n\n查看您可以下载和审查的[演示项目](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain_demo)。\n\n## 与 JavaScript 和 Python LangChain 的关系\n\n本库使用 [Elixir](https:\u002F\u002Felixir-lang.org\u002F) 编写，旨在与 Elixir 应用程序一起使用。原始库是 [LangChain JS\u002FTS](https:\u002F\u002Fjs.langchain.com\u002F) 和 [LangChain Python](https:\u002F\u002Fpython.langchain.com\u002F)。\n\nJavaScript 和 Python 项目旨在尽可能无缝地相互集成。预期的集成非常强大，以至于所有对象（提示词、LLM、链等）的设计方式都使它们可以在两种语言之间序列化（serialized）和共享。\n\n这个 Elixir 版本并不旨在与 JavaScript 和 Python 库完全对等。为什么不？\n\n- JavaScript 和 Python 都是面向对象（Object Oriented）语言。Elixir 是函数式（Functional）语言。我们不会强制应用不适用的设计。\n- JS 和 Python 版本在对话式 LLM 成为标准之前就开始开发了。当 LLM 不支持时，他们投入了大量精力来保留历史（如对话）。我们这里不这样做。\n\n本库深受 JavaScript 库实际工作方式和与 LLM 交互方式的启发，并基于此构建。\n\n## 安装\n\n**要求：** Elixir 1.17 或更高版本\n\n可以通过将 `langchain` 添加到 `mix.exs` 中的依赖列表来安装该包：\n\n```elixir\ndef deps do\n  [\n    {:langchain, \"~> 0.6.0\"}\n  ]\nend\n```\n\n## 配置\n\n目前，该库编写为使用 `Req` 库进行 API（应用程序接口）调用。\n\n您可以为 OpenAI 的 API 配置 _组织 ID_ 和 _API 密钥_，但该库也适用于 [其他兼容 API](#alternative-openai-compatible-apis) 以及其他服务，甚至 [在 Bumblebee 上运行的本地模型](#bumblebee-chat-support)。\n\n`config\u002Fruntime.exs`：\n\n```elixir\nconfig :langchain, openai_key: System.fetch_env!(\"OPENAI_API_KEY\")\nconfig :langchain, openai_org_id: System.fetch_env!(\"OPENAI_ORG_ID\")\n# 或\nconfig :langchain, openai_key: \"YOUR SECRET KEY\"\nconfig :langchain, openai_org_id: \"YOUR_OPENAI_ORG_ID\"\n\nconfig :langchain, :anthropic_key, System.fetch_env!(\"ANTHROPIC_API_KEY\")\nconfig :langchain, :xai_api_key, System.fetch_env!(\"XAI_API_KEY\")\n```\n\n可以使用函数或元组来解析密钥：\n\n```elixir\nconfig :langchain, openai_key: {MyApp.Secrets, :openai_api_key, []}\nconfig :langchain, 
openai_org_id: {MyApp.Secrets, :openai_org_id, []}\n# 或\nconfig :langchain, openai_key: fn -> System.fetch_env!(\"OPENAI_API_KEY\") end\nconfig :langchain, openai_org_id: fn -> System.fetch_env!(\"OPENAI_ORG_ID\") end\n```\n\nAPI 密钥应被视为机密，不应提交到您的存储库中。\n\n对于 [fly.io](https:\u002F\u002Ffly.io)，添加机密如下所示：\n\n```\nfly secrets set OPENAI_API_KEY=MyOpenAIApiKey\nfly secrets set ANTHROPIC_API_KEY=MyAnthropicApiKey\nfly secrets set XAI_API_KEY=MyXaiApiKey\n```\n\n要使用的模型列表：\n\n- [Anthropic Claude 模型](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fabout-claude\u002Fmodels)\n- [AWS Bedrock 上的 Anthropic 模型](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fapi\u002Fclaude-on-amazon-bedrock#accessing-bedrock)\n- [OpenAI 模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels)\n- [Azure 上的 OpenAI 模型](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fconcepts\u002Fmodels)\n- [xAI Grok 模型](https:\u002F\u002Fdocs.x.ai\u002Fdocs\u002Fmodels)\n- [Gemini AI 模型](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs\u002Fmodels\u002Fgemini)\n\n## 提示词缓存 (Prompt caching)\n\nChatGPT、Claude 和 DeepSeek 都提供基于前缀的提示词缓存 (prefix-based prompt caching)，这对于较长的提示词可以提供成本和性能优势。Gemini 提供与之类似的上下文缓存 (context caching)。\n\n- [ChatGPT 的提示词缓存](https:\u002F\u002Fopenai.com\u002Findex\u002Fapi-prompt-caching\u002F) 对于超过 1024 tokens (令牌) 的提示词是自动的，缓存最长的公共前缀。\n- [Claude 的提示词缓存](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fprompt-caching) 不是自动的。它按顺序前缀化处理工具 (tools)、系统 (system) 和消息 (messages)，直至并包括标记为 `{\"cache_control\": {\"type\": \"ephemeral\"}}` 的块。参见 `LangChain.ChatModels.ChatAnthropicTest` 以获取示例。\n- [DeepSeek 的提示词缓存](https:\u002F\u002Fapi-docs.deepseek.com\u002Fguides\u002Fkv_cache) 为重复的提示词和系统消息提供自动缓存，有助于降低更长对话的成本并改善响应时间。\n- [Gemini 的上下文缓存 (context caching)](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs\u002Fcaching?lang=python) 需要单独的调用，LangChain 不支持。\n\n## 用法\n\n本库的核心模块 (module) 是 
`LangChain.Chains.LLMChain`。大多数其他部分要么是它的输入，要么是它使用的结构。要了解如何使用本库，请从这里开始。\n\n### xAI Grok 支持\n\nLangChain 支持所有 xAI Grok 模型，包括高级的 Grok-4 变体：\n\n```elixir\nalias LangChain.ChatModels.ChatGrok\nalias LangChain.Chains.LLMChain\nalias LangChain.Message\n\n# Basic Grok-4 usage\n{:ok, grok} = ChatGrok.new(%{model: \"grok-4\", temperature: 0.7})\n\n{:ok, chain} =\n  LLMChain.new!(%{llm: grok})\n  |> LLMChain.add_message(Message.new_user!(\"Explain quantum computing\"))\n  |> LLMChain.run()\n\n# Fast and efficient Grok-3-mini\n{:ok, mini_grok} = ChatGrok.new(%{model: \"grok-3-mini\", temperature: 0.8})\n```\n\nGrok 模型提供独特的功能：\n- **130K+ 上下文窗口 (context window)** 用于广泛的对话\n- **多代理推理 (Multi-agent reasoning)** (Grok-4 Heavy) 多个代理协作\n- **高级推理模式 (Advanced reasoning mode)** 具有第一性原理思维\n- **专用编码支持 (Specialized coding support)** (Grok-4 Code)\n- **多模态能力 (Multimodal capabilities)** 包括视觉和图像分析\n\n### 向 ChatGPT 暴露自定义 Elixir 函数\n\nLangChain 的一个非常强大的功能是能够轻松地将大型语言模型 (LLM) 集成到您的应用程序中，并将功能、数据和功能从您的应用程序暴露给 LLM。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbrainlid_langchain_readme_fbb42c7e198d.png\" style=\"text-align: center;\" width=50% height=50% alt=\"Diagram showing LLM integration to application logic and data through a LangChain.Function\">\n\n`LangChain.Function` 弥合了 LLM 和我们应用程序代码之间的差距。我们选择暴露什么，并使用 `context`（上下文），我们可以确保任何操作都限制在用户有权执行和访问的范围内。\n\n有关交互式示例，请参阅项目 [Livebook 笔记本 (notebook) \"LangChain: Executing Custom Elixir Functions\"](notebooks\u002Fcustom_functions.livemd)。\n\n下面是一个接收参数的函数示例。\n\n```elixir\nalias LangChain.Function\nalias LangChain.Message\nalias LangChain.Chains.LLMChain\nalias LangChain.ChatModels.ChatOpenAI\nalias LangChain.Utils.ChainResult\n\n# map of data we want to be passed as `context` to the function when\n# executed.\ncustom_context = %{\n  \"user_id\" => 123,\n  \"hairbrush\" => \"drawer\",\n  \"dog\" => \"backyard\",\n  \"sandwich\" => \"kitchen\"\n}\n\n# a custom Elixir function made available to the LLM\ncustom_fn =\n  
Function.new!(%{\n    name: \"custom\",\n    description: \"Returns the location of the requested element or item.\",\n    parameters_schema: %{\n      type: \"object\",\n      properties: %{\n        thing: %{\n          type: \"string\",\n          description: \"The thing whose location is being requested.\"\n        }\n      },\n      required: [\"thing\"]\n    },\n    function: fn %{\"thing\" => thing} = _arguments, context ->\n      # our context is a pretend item\u002Flocation map\n      {:ok, context[thing]}\n    end\n  })\n\n# create and run the chain\n{:ok, updated_chain} =\n  LLMChain.new!(%{\n    llm: ChatOpenAI.new!(),\n    custom_context: custom_context,\n    verbose: true\n  })\n  |> LLMChain.add_tools(custom_fn)\n  |> LLMChain.add_message(Message.new_user!(\"Where is the hairbrush located?\"))\n  |> LLMChain.run(mode: :while_needs_response)\n\n# print the LLM's answer\nIO.puts(ChainResult.to_string!(updated_chain))\n# => \"The hairbrush is located in the drawer.\"\n```\n\n### 替代 OpenAI 兼容 API (应用程序接口)\n\n有几个服务或自托管应用程序提供用于类似 ChatGPT 行为的 OpenAI 兼容 API (OpenAI compatible API)。要使用此类服务，`ChatOpenAI` 结构体 (struct) 的 `endpoint`（端点）可以指向用于聊天的 API 兼容 `endpoint`。\n\n例如，如果本地运行的服务提供该功能，以下代码可以连接到该服务：\n\n```elixir\n{:ok, updated_chain} =\n  LLMChain.new!(%{\n    llm: ChatOpenAI.new!(%{endpoint: \"http:\u002F\u002Flocalhost:1234\u002Fv1\u002Fchat\u002Fcompletions\"}),\n  })\n  |> LLMChain.add_message(Message.new_user!(\"Hello!\"))\n  |> LLMChain.run()\n```\n\n### Bumblebee 聊天支持\n\n支持 Bumblebee 托管的聊天模型。内置支持 Llama 2、Mistral 和 Zephyr 模型。\n\n目前，函数调用 (function calling) 仅支持 Llama 3.1；不支持 Llama 2、Mistral 和 Zephyr 的 JSON 工具调用。\nnotebook 文件夹中有一个示例 notebook。\n\n    ChatBumblebee.new!(%{\n      serving: @serving_name,\n      template_format: @template_format,\n      receive_timeout: @receive_timeout,\n      stream: true\n    })\n\n`serving` 是托管模型的 `Nx.Serving` 的模块名称。\n\n详见 [`LangChain.ChatModels.ChatBumblebee` 
文档](https:\u002F\u002Fhexdocs.pm\u002Flangchain\u002FLangChain.ChatModels.ChatBumblebee.html)。\n\n## 测试\n\n在运行实时 API 测试之前，您需要提供 API 密钥 (API keys)。复制示例文件并用您的值填充它：\n\n```\ncp .env.example .env\n# 使用您的私有 API 密钥编辑 .env\n```\n\n`.env` 文件已被 Git 忽略 (gitignored)，并通过 [dotenvy](https:\u002F\u002Fhex.pm\u002Fpackages\u002Fdotenvy) 由测试套件自动加载——无需 shell 设置或外部工具。\n\n要运行所有测试，包括针对 OpenAI API 执行实时调用 (live calls) 的测试，请使用以下命令：\n\n```\nmix test --include live_call\nmix test --include live_open_ai\nmix test --include live_ollama_ai\nmix test --include live_anthropic\nmix test --include live_mistral_ai\nmix test --include live_grok\nmix test --include live_vertex_ai\nmix test test\u002Ftools\u002Fcalculator_test.exs --include live_call\n```\n\n**注意：** 这将使用配置好的 API 凭证，从而产生计费事件。\n\n否则，运行以下命令将仅运行本地测试，不进行任何外部 API 调用：\n\n```\nmix test\n```\n\n执行特定测试，无论是否为 `live_call`，都会执行该测试并可能产生计费事件。\n\n**多模态 (Multi-modal) 支持：**\n\nLangChain 现在支持多模态 (multi-modal) 消息和工具结果。这意味着您可以使用 ContentParts 在单条消息中包含文本、图像、文件，甚至“思考”块。详见模块文档。此支持取决于 LLM（大型语言模型）和服务。并非所有模型目前都支持所有模态。\n\n## 评估 Agent (智能体) 行为\n\n在构建 Agent 系统时，最终答案只是故事的一部分。两个 Agent 可以通过非常不同的推理路径产生相同的答案——一个可能进行单次高效的工具调用，而另一个可能进行五次冗余调用。LangChain 提供 `LangChain.Trajectory` 来评估*过程*，而不仅仅是结果。\n\n轨迹 (trajectory) 捕获 `LLMChain` 运行期间产生的工具调用结构化序列，支持回归测试、成本控制、安全验证和 Agent 工作流调试。\n\n### 捕获轨迹 (Trajectory)\n\n运行 chain 后，提取其轨迹：\n\n```elixir\nalias LangChain.Trajectory\n\n{:ok, chain} =\n  LLMChain.new!(%{llm: llm})\n  |> LLMChain.add_tools(my_tools)\n  |> LLMChain.add_message(Message.new_user!(\"What's the weather in Paris?\"))\n  |> LLMChain.run(mode: :while_needs_response)\n\ntrajectory = Trajectory.from_chain(chain)\ntrajectory.tool_calls\n#=> [%{name: \"search\", arguments: %{\"query\" => \"weather paris\"}},\n#    %{name: \"get_forecast\", arguments: %{\"city\" => \"Paris\"}}]\n```\n\n### 匹配工具调用序列\n\n使用 `Trajectory.matches?\u002F3` 将实际工具调用与预期模式进行比较：\n\n```elixir\n# Strict: exact order and arguments\nTrajectory.matches?(trajectory, [\n  %{name: \"search\", arguments: 
%{\"query\" => \"weather paris\"}},\n  %{name: \"get_forecast\", arguments: %{\"city\" => \"Paris\"}}\n])\n\n# Wildcard arguments: pass nil to match any arguments\nTrajectory.matches?(trajectory, [\n  %{name: \"search\", arguments: nil},\n  %{name: \"get_forecast\", arguments: nil}\n])\n\n# Unordered: same calls in any order\nTrajectory.matches?(trajectory, expected, mode: :unordered)\n\n# Superset: actual contains at least all expected calls\nTrajectory.matches?(trajectory, [%{name: \"search\", arguments: nil}], mode: :superset)\n\n# Subset args: expected arguments are a subset of actual\nTrajectory.matches?(trajectory, expected, args: :subset)\n```\n\n### ExUnit 断言\n\n`LangChain.Trajectory.Assertions` 提供 `assert_trajectory` 和 `refute_trajectory` 宏，带有信息丰富的失败差异对比：\n\n```elixir\nuse LangChain.Trajectory.Assertions\n\ntest \"agent calls the right tools in order\" do\n  trajectory = Trajectory.from_chain(chain)\n\n  assert_trajectory trajectory, [\n    %{name: \"search\", arguments: %{\"query\" => \"weather\"}},\n    %{name: \"get_forecast\", arguments: nil}\n  ]\nend\n\ntest \"agent does not call dangerous tools\" do\n  trajectory = Trajectory.from_chain(chain)\n\n  refute_trajectory trajectory, [\n    %{name: \"delete_all\", arguments: nil}\n  ], mode: :superset\nend\n```\n\n这两个宏也直接接受 `LLMChain`，自动提取轨迹。\n\n### 黄金文件 (Golden-File) 测试\n\n保存一个已知良好的轨迹，并将未来的运行结果与之比较以捕捉回归问题：\n\n```elixir\n# Save the golden file\ngolden = chain |> Trajectory.from_chain() |> Trajectory.to_map()\nFile.write!(\"test\u002Ffixtures\u002Fweather_agent.json\", Jason.encode!(golden))\n\n# In your test\ngolden_map = \"test\u002Ffixtures\u002Fweather_agent.json\" |> File.read!() |> Jason.decode!()\nexpected = Trajectory.from_map(golden_map)\nactual = Trajectory.from_chain(chain)\n\nassert_trajectory actual, expected\n```\n\n### 检查轨迹 (Trajectories)\n\n过滤和分组工具调用以进行更深入的分析：\n\n```elixir\n# All calls to a specific tool\nTrajectory.calls_by_name(trajectory, \"search\")\n\n# Group calls by conversation 
turn\nTrajectory.calls_by_turn(trajectory)\n#=> [{0, [%{name: \"search\", ...}]}, {1, [%{name: \"get_forecast\", ...}]}]\n\n# Check aggregated token usage\ntrajectory.token_usage\n#=> %TokenUsage{input: 150, output: 45}\n\n# Check metadata\ntrajectory.metadata\n#=> %{model: \"gpt-4\", llm_module: LangChain.ChatModels.ChatOpenAI}\n```\n\n请参阅 `LangChain.Trajectory` 和 `LangChain.Trajectory.Assertions` 模块文档以获取完整的 API 参考。","# Elixir LangChain 快速上手指南\n\nElixir LangChain 是一个专为 Elixir 应用程序设计的框架，用于集成 AI 服务（如 OpenAI、Anthropic 等）和自托管模型。它支持将语言模型与数据源连接，并允许模型与环境交互。\n\n> **注意**：此库是 Elixir 版本，不同于常见的 Python 或 JavaScript 版 LangChain。\n\n## 环境准备\n\n- **Elixir 版本**：1.17 或更高版本\n- **依赖项**：库内部使用 `Req` 进行 API 调用，安装时会自动处理。\n\n## 安装步骤\n\n1. 打开项目的 `mix.exs` 文件。\n2. 在 `deps` 函数中添加 `langchain` 依赖：\n\n```elixir\ndef deps do\n  [\n    {:langchain, \"~> 0.6.0\"}\n  ]\nend\n```\n\n3. 获取依赖：\n\n```bash\nmix deps.get\n```\n\n## 配置 API 密钥\n\n在 `config\u002Fruntime.exs` 中配置所需的 AI 服务密钥。建议使用环境变量管理敏感信息。\n\n```elixir\nconfig :langchain, openai_key: System.fetch_env!(\"OPENAI_API_KEY\")\nconfig :langchain, openai_org_id: System.fetch_env!(\"OPENAI_ORG_ID\")\n\n# 如果使用 Anthropic\nconfig :langchain, :anthropic_key, System.fetch_env!(\"ANTHROPIC_API_KEY\")\n```\n\n也可以直接使用函数或元组来解析密钥：\n\n```elixir\nconfig :langchain, openai_key: fn -> System.fetch_env!(\"OPENAI_API_KEY\") end\n```\n\n## 基本使用\n\n核心模块为 `LangChain.Chains.LLMChain`。以下是一个使用 OpenAI 模型进行对话的最小示例：\n\n```elixir\nalias LangChain.ChatModels.ChatOpenAI\nalias LangChain.Chains.LLMChain\nalias LangChain.Message\nalias LangChain.Utils.ChainResult\n\n# 创建链并配置 LLM\n{:ok, updated_chain} =\n  LLMChain.new!(%{\n    llm: ChatOpenAI.new!(),\n    verbose: true\n  })\n  |> LLMChain.add_message(Message.new_user!(\"Hello!\"))\n  |> LLMChain.run()\n\n# 输出结果\nIO.puts(ChainResult.to_string!(updated_chain))\n```\n\n### 使用自定义工具（Function Calling）\n\n你可以将 Elixir 函数暴露给 LLM，使其能够调用本地逻辑：\n\n```elixir\nalias LangChain.Function\n\ncustom_fn =\n  Function.new!(%{\n    name: 
\"get_item_location\",\n    description: \"Returns the location of the requested item.\",\n    parameters_schema: %{\n      type: \"object\",\n      properties: %{\n        item: %{type: \"string\", description: \"The item to locate.\"}\n      },\n      required: [\"item\"]\n    },\n    function: fn %{\"item\" => item}, _context ->\n      {:ok, \"The #{item} is in the drawer.\"}\n    end\n  })\n\n# 将工具添加到链中\n{:ok, chain} =\n  LLMChain.new!(%{llm: ChatOpenAI.new!()})\n  |> LLMChain.add_tools(custom_fn)\n  |> LLMChain.add_message(Message.new_user!(\"Where is the hairbrush?\"))\n  |> LLMChain.run(mode: :while_needs_response)\n```\n\n更多详细用法请参考 [官方文档](https:\u002F\u002Fhexdocs.pm\u002Flangchain)。","一家基于 Elixir Phoenix 框架的 SaaS 团队，希望在其客服系统中引入 AI 助手，自动处理用户关于账单和技术文档的咨询，以提升响应速度。\n\n### 没有 langchain 时\n- 开发者需为不同大模型单独编写 API 调用代码，切换模型时重构成本高。\n- 难以将内部数据库信息安全地传递给模型，导致回答缺乏依据且容易产生幻觉。\n- 复杂任务流程（如先查账单再解释）需要手动串联多个步骤，代码耦合度高。\n- 维护多个供应商的 SDK 增加了项目依赖负担和调试难度。\n\n### 使用 langchain 后\n- 通过统一抽象接口，可灵活切换 OpenAI、Claude 或本地 Ollama 模型，无需修改业务逻辑。\n- 利用数据感知组件轻松连接数据库，让 AI 基于真实账单信息生成准确回复。\n- 使用预置链条（Chains）快速搭建“检索 - 分析 - 生成”工作流，显著简化复杂任务处理。\n- 模块化设计降低了代码重复率，团队能更专注于业务场景而非底层 API 对接。\n\nlangchain 帮助 Elixir 开发者将大模型与现有系统无缝连接，大幅降低集成门槛，快速构建数据驱动的智能应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbrainlid_langchain_f13cffa5.png","brainlid","Mark Ericksen","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbrainlid_714da7a4.png",null,"@superfly","Utah","brainlid@gmail.com","https:\u002F\u002Fgithub.com\u002Fbrainlid",[23],{"name":24,"color":25,"percentage":26},"Elixir","#6e4a7e",100,1129,193,"2026-04-04T19:41:52","NOASSERTION",2,"未说明",{"notes":34,"python":35,"dependencies":36},"这是 LangChain 的 Elixir 实现版本，并非 Python 或 JavaScript 版本。主要作为客户端集成外部 AI 服务（如 OpenAI、Anthropic），也支持通过 Bumblebee、Ollama 等集成本地模型。运行需配置相应的 API 密钥。本地模型的具体硬件需求取决于外部服务配置而非本库本身。","不需要 (基于 Elixir)",[37,38],"Elixir 
1.17+","Req",[40,41,42,43],"图像","Agent","语言模型","开发框架",[45,46,6,47,48,49,50,51],"chatgpt","elixir","llm","ai","anthropic","bumblebee","claude-ai",3,"ready","2026-03-27T02:49:30.150509","2026-04-06T08:17:34.844051",[57,62,67,71,76,81],{"id":58,"question_zh":59,"answer_zh":60,"source_url":61},4517,"如何配置项目以支持 Azure OpenAI？","Azure OpenAI 现已支持。需要通过 PR #93 合并的更改，主要区别在于认证头。Azure 需要使用 `api-key` 头而不是标准的 `Authorization` 头。代码修改示例：`headers: %{\"api-key\" => get_api_key()}`。目前的实现会同时发送两者以兼容 Azure 和标准 OpenAI。","https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fissues\u002F28",{"id":63,"question_zh":64,"answer_zh":65,"source_url":66},4518,"该库是否支持 OpenAI Assistants API？","OpenAI 已宣布将在 2026 年弃用 Assistants API，并将所有 Agent 相关工作转移到新的 \"Responses\" API。Responses API 是 chat completions API 的超集。因此，该功能支持计划已关闭，建议关注新的 Responses API。","https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fissues\u002F33",{"id":68,"question_zh":69,"answer_zh":70,"source_url":66},4519,"Elixir LangChain 库是否内置支持 RAG 数据存储？","目前没有任何内置或提供的 RAG 数据存储支持。最简单的选项是直接编码使用 OpenAI 的 Assistants API（自带 RAG 支持和对话管理），或者参考在线示例自行在 Elixir 中实现。维护者欢迎相关的 PR 贡献。",{"id":72,"question_zh":73,"answer_zh":74,"source_url":75},4520,"如何验证 GoogleAI (Gemini) 的工具调用 API 代码？","可以通过运行特定测试命令验证：`mix test test\u002Fchat_models\u002Fchat_google_ai_test.exs --include live_google_ai`。注意 GoogleAI 的 API 文档较少，可能需要设置 function calling mode 为 `ANY`。库已更新 `LangChain.Utils.ChainResult.to_string` 以处理 GoogleAI 返回的内容部分（parts）。","https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fissues\u002F107",{"id":77,"question_zh":78,"answer_zh":79,"source_url":80},4521,"遇到 \"Invalid parameter: messages with role tool\" 错误如何解决？","这是 v0.3.0\u002Fv0.3.1 版本中的问题。修复代码已在 `main` 分支或 PR #248 中可用。该错误通常发生在 tool 消息没有跟随在带有 `tool_calls` 的消息之后。建议更新到包含修复的版本或切换到 main 分支测试。","https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fissues\u002F244",{"id":82,"question_zh":83,"answer_zh":84,"source_url":85},4522,"如何在库中处理 OpenAI 的速率限制（Rate 
Limits）？","目前库没有内置自动重试机制。可以通过响应头获取限制信息，例如 `x-ratelimit-remaining-tokens`（剩余 tokens）和 `x-ratelimit-reset-requests`（重置时间）。社区建议通过回调函数来暴露这些信息，以便用户自行实现重试或负载均衡逻辑。","https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fissues\u002F31",[87,92,97,102,107,112,117,122,127,132,137,142,147,152,157,162,167,172,177,182],{"id":88,"version":89,"summary_zh":90,"released_at":91},113667,"v0.8.0","## What's Changed\r\n* feat: add top-level cache_control support to ChatAnthropic by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F509\r\n* Guard WebSocket code behind Code.ensure_loaded?(Mint.WebSocket) by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F510\r\n* Expand error detection and handling across LLMChain streaming pipeline by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F511\r\n* prep for v0.8.0 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F512\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.7.0...v0.8.0","2026-04-04T19:42:32",{"id":93,"version":94,"summary_zh":95,"released_at":96},113668,"v0.7.0","## What's Changed\r\n* feat: support streaming tool calls for ChatOllamaAI by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F498\r\n* feat: add WebSocket transport for ChatOpenAIResponses by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F497\r\n* test: add regression tests for streaming token usage with non-empty choices by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F499\r\n* fix: handle streaming overloaded_error from Anthropic API by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F500\r\n* Handle {:interrupt, chain, data} and {:pause, chain} in try_chain_with_llm by @srikanthkyatham in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F502\r\n* fix: disable Req-level retry to prevent compounding with LangChain retries by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F504\r\n* feat: make retry_count configurable on chat model structs by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F505\r\n* prep for v0.7.0 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F506\r\n\r\n## New Contributors\r\n* @srikanthkyatham made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F502\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.6.3...v0.7.0","2026-04-02T20:37:22",{"id":98,"version":99,"summary_zh":100,"released_at":101},113669,"v0.6.3","## What's Changed\r\n* Handle empty LLM response in do_run\u002F1 (thinking model streaming fix) by @lurielle-studio in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F484\r\n* fix: skip Azure keepalive events in streaming responses by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F485\r\n* feat: Langchain.Trajectory for easier evaluation of agents by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F481\r\n* fix: add additionalProperties: false to tool schemas by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F490\r\n* Support multimodal tool results in ChatVertexAI by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F491\r\n* fix: reduce excessive Logger.error usage across the library by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F492\r\n* test: add regression tests for malformed tool call JSON loop (#443) by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F493\r\n* feat: add logprobs and top_logprobs support to ChatOpenAI by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F494\r\n* fix: handle Mistral responses without tool_calls key by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F495\r\n* prep for v0.6.3 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F496\r\n\r\n## New Contributors\r\n* @lurielle-studio made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F484\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.6.2...v0.6.3","2026-03-28T01:11:36",{"id":103,"version":104,"summary_zh":105,"released_at":106},113670,"v0.6.2","## What's Changed\r\n* feat: FileUploader by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F477\r\n* Add ChatReqLLM adapter - Experimental by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F486\r\n* Prep for v0.6.2 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F487\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.6.1...v0.6.2","2026-03-13T01:53:05",{"id":108,"version":109,"summary_zh":110,"released_at":111},113671,"v0.6.1","## What's Changed\r\n* feat: add verbosity parameter to ChatOpenAIResponses by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F470\r\n* feat(gemini): adding inline_data support for pdf and csv by @conpat in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F478\r\n* fix: Fix verbosity parameter and better error handling by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F476\r\n* feat: add structured 
output support to ChatAnthropic by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F474\r\n* fix: handle nil content in run_message_processors by @redmandarin in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F473\r\n* feat: add file_id support to ChatAnthropic by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F475\r\n* feat: add tool result interrupt\u002Fresume support by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F479\r\n* Handle streaming errors (content moderation, etc.) in LLMChain by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F480\r\n* feat: Add ModelsLabImage provider for text-to-image generation by @adhikjoshi in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F468\r\n* prep for v0.6.1 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F482\r\n\r\n## New Contributors\r\n* @conpat made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F478\r\n* @redmandarin made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F473\r\n* @adhikjoshi made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F468\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.6.0...v0.6.1","2026-03-05T05:05:48",{"id":113,"version":114,"summary_zh":115,"released_at":116},113672,"v0.6.0","## What's Changed\r\n* Add citations support by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F461\r\n* add req_config to vertexai by @kylewhite21 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F457\r\n* Fix exception handling in run_message_processors by @Munksgaard in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F460\r\n* feat: Consistently include `original` in \"Unexpected response\" errors by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F392\r\n* fixed error handling issues in ChatAnthropic by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F462\r\n* Add explicit Auth error handler to ChatAnthropic by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F464\r\n* feat: Add streaming callbacks to reasoning summaries by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F456\r\n* Customizable run modes via Mode behaviour by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F469\r\n* prep for v0.6.0 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F471\r\n\r\n## New Contributors\r\n* @kylewhite21 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F457\r\n* @Munksgaard made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F460\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.5.2...v0.6.0","2026-02-22T15:21:33",{"id":118,"version":119,"summary_zh":120,"released_at":121},113673,"v0.5.2","## What's Changed\r\n* Prep for v0.5.1 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F454\r\n* fix(ChatVertexAI): Fix tool calls for ChatVertexAI and support for Gemini 3 models by @abh1shek-sh in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F452\r\n* Revising tool detection by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F458\r\n* prep for v0.5.2 release by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F459\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.5.1...v0.5.2","2026-02-11T01:44:19",{"id":123,"version":124,"summary_zh":125,"released_at":126},113674,"v0.5.1","## What's Changed\r\n* fix(ChatMistralAI): Handle broken tool names that kill LLMChain by @xu-chris in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F448\r\n* Handle thought content parts from Google Vertex by @mattmatters in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F430\r\n* feat: Verify function parameters before executing, reject faulty message deltas by @xu-chris in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F449\r\n* fix(ChatOpenAIResponses): Handle failed response status from OpenAI Responses API by @xu-chris in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F450\r\n* Expanding callbacks for tool detection and UI feedback by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F453\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.5.0...v0.5.1","2026-01-31T15:15:27",{"id":128,"version":129,"summary_zh":130,"released_at":131},113675,"v0.5.0","## What's Changed\r\n* Add `req_config` to `ChatOpenAIResponses` by @xxdavid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F415\r\n* Add thinking config to vertex ai by @mattmatters in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F423\r\n* Add support for OpenAI Response API Stateful context by @cjimison in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F425\r\n* Fixes image file_id content type for ChatOpenAIResponses by @sezaru in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F438\r\n* fix(ChatMistralAI): missing error 
handling and fallback mechanism on server outages (#434) by @xu-chris in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F435\r\n* feat(ChatMistralAI): add support for parallel tool calls  by @xu-chris in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F433\r\n* use elixir 1.17 by @nbw in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F427\r\n* feat(GoogleChatAI): add thought_signature support for Gemini 3 function calls by @abh1shek-sh in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F431\r\n* Don't include top_p for gpt-5.2+ in ChatOpenAIResponses by @montebrown in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F428\r\n* Fix Mistral 'thinking' content parts by @arjan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F418\r\n* Add verbose_api field to ChatPerplexity and ChatMistralAI by @arjan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F416\r\n* Add support for OpenAI reasoning\u002Fthinking events in ChatOpenAIResponses by @arjan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F421\r\n* Add new reasoning effort values to ChatOpenAIResponses by @xxdavid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F419\r\n* fix \"Support reasoning_content of deepseek model\" introducing UI bug … by @kayuapi in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F429\r\n* Support json schema in vertex ai by @mattmatters in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F424\r\n* Base work for new agent library by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F442\r\n* fix ChatGrok tool call arguments and message flattening by @KristerV in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F420\r\n* Revert \"fix ChatGrok tool call arguments and 
message flattening\" by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F445\r\n* prep for v0.5.0 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F451\r\n\r\n## New Contributors\r\n* @cjimison made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F425\r\n* @sezaru made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F438\r\n* @xu-chris made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F435\r\n* @nbw made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F427\r\n* @abh1shek-sh made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F431\r\n* @kayuapi made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F429\r\n* @KristerV made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F420\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.4.1...v0.5.0","2026-01-28T00:28:13",{"id":133,"version":134,"summary_zh":135,"released_at":136},113676,"v0.4.1","## What's Changed\r\n* OpenAI responses API improvements by @arjan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F391\r\n* Support Anthropic `disable_parallel_tool_use` tool_choice setting by @vlymar in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F390\r\n* Update gettext dependency version to 1.0 by @bijanbwb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F393\r\n* Add DeepSeek chat model integration by @gilbertwong96 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F394\r\n* Loosen the gettext dependency by @montebrown in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F399\r\n* add MessageDelta.merge_deltas\u002F2 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F401\r\n* formatting update by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F402\r\n* Added an :cache_messages option for ChatAnthropic, can improve cache utilization. by @montebrown in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F398\r\n* Add support for Anthropic API PDF reading to the ChatAnthropic model. by @jadengis in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F403\r\n* feat: Add support for `:file_url` to ChatAnthropic too by @jadengis in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F404\r\n* Support reasoning_content of deepseek model by @gilbertwong96 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F407\r\n* Add req_opts to ChatAnthropic by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F408\r\n* Open AI Responses API: Add support for file_url with link to file by @reetou in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F395\r\n* Add strict tool use support to ChatAnthropic by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F409\r\n* Add strict to function of ChatModels.ChatOpenAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F301\r\n* Allow multi-part tool responses. 
by @montebrown in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F410\r\n* prep for v0.4.1 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F411\r\n\r\n## New Contributors\r\n* @vlymar made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F390\r\n* @bijanbwb made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F393\r\n* @gilbertwong96 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F394\r\n* @reetou made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F395\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.4.0...v0.4.1","2025-12-02T05:06:24",{"id":138,"version":139,"summary_zh":140,"released_at":141},113677,"v0.4.0","## What's Changed since v0.3.3\r\n* Add OpenAI and Claude thinking support - v0.4.0-rc.0 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F297\r\n* vertex ai file url support by @ahsandar in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F296\r\n* Update docs for Vertex AI by @ahsandar in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F304\r\n* Fix ContentPart migration by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F309\r\n* Fix tests for content_part_for_api\u002F2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F300\r\n* Fix `tool_calls` `nil` messages by @udoschneider in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F314\r\n* feat: Add structured output support to ChatMistralAI  by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F312\r\n* feat: add 
configurable tokenizer to text splitters by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F310\r\n* simple formatting issue by @Bodhert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F307\r\n* Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F315\r\n* Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F316\r\n* feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F317\r\n* expanded docs and test coverage for prompt caching by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F325\r\n* Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F327\r\n* significant updates for v0.4.0-rc.1 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F328\r\n* filter out empty lists in message responses by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F333\r\n* fix: Require gettext ~> 0.26 by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F332\r\n* Add `retry: transient` to Req for Anthropic models in stream mode by @jonator in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F329\r\n* fixed issue with poorly matching list in case by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F334\r\n* feat: Add organization ID as a parameter by @hjemmel in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F337\r\n* Add missing verbose_api field to ChatOllamaAI for streaming compatibility by 
@gur-xyz in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F341\r\n* Added usage data to the VertexAI Message response. by @raulchedrese in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F335\r\n* feat: add run mode: step by @CaiqueMitsuoka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F343\r\n* feat: add support for multiple tools in run_until_tool_used by @fortmarek in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F345\r\n* Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F342\r\n* expanded logging for ChatAnthropic API errors by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F349\r\n* Prevent crash when ToolResult with string in ChatGoogleAI.for_api\u002F1 by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F352\r\n* Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F356\r\n* added xAI Grok chat model support by @alexfilatov in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F338\r\n* Support thinking to ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F354\r\n* Add req_config to ChatMode.ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F357\r\n* Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F353\r\n* Expose full response headers through a new on_llm_response_headers callback by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F358\r\n* only include \"user\" with OpenAI request when a value is provided by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F364\r\n* Handle no content parts responses in ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F365\r\n* Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F360\r\n* Prep for release v0.4.0-rc.2 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F366\r\n* fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F367\r\n* Add support for native tool calls to ChatVertexAI by @raulchedrese in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F359\r\n* Adds should_continue? optional function to mode step by @CaiqueMitsuoka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F361\r\n* Add OpenAI Deep Research integration by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F336\r\n* Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F371\r\n* Add optional AWS session token handling in BedrockHelpers by @q","2025-10-02T02:13:45",{"id":143,"version":144,"summary_zh":145,"released_at":146},113678,"v0.4.0-rc.3","## What's Changed\r\n* fix: handle missing finish_reason in streaming responses for LiteLLM compatibility by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F367\r\n* Add support for native tool calls to ChatVertexAI by @raulchedrese in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F359\r\n* Adds should_continue? 
optional function to mode step by @CaiqueMitsuoka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F361\r\n* Add OpenAI Deep Research integration by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F336\r\n* Add `parallel_tool_calls` option to `ChatOpenAI` model by @martosaur in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F371\r\n* Add optional AWS session token handling in BedrockHelpers by @quangngd in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F372\r\n* fix: handle LiteLLM responses with null b64_json in OpenAIImage by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F368\r\n* Add Orq AI chat by @arjan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F377\r\n* Add req_config to ChatModels.ChatOpenAI by @koszta in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F376\r\n* fix(ChatGoogleAI): Handle cumulative token usage by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F373\r\n* fix(ChatGoogleAI): Prevent error from thinking content parts by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F374\r\n* feat(ChatGoogleAI): Full thinking config by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F375\r\n* Support verbosity parameter for ChatOpenAI by @rohan-b99 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F379\r\n* add retry_on_fallback? 
to chat model definition and all models by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F350\r\n* Prep for v0.4.0-rc.3 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F380\r\n\r\n## New Contributors\r\n* @martosaur made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F371\r\n* @quangngd made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F372\r\n* @arjan made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F377\r\n* @koszta made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F376\r\n* @rohan-b99 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F379\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.4.0-rc.2...v0.4.0-rc.3","2025-09-19T04:22:34",{"id":148,"version":149,"summary_zh":150,"released_at":151},113679,"v0.4.0-rc.2","## What's Changed\r\n* filter out empty lists in message responses by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F333\r\n* fix: Require gettext ~> 0.26 by @mweidner037 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F332\r\n* Add `retry: transient` to Req for Anthropic models in stream mode by @jonator in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F329\r\n* fixed issue with poorly matching list in case by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F334\r\n* feat: Add organization ID as a parameter by @hjemmel in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F337\r\n* Add missing verbose_api field to ChatOllamaAI for streaming compatibility by @gur-xyz in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F341\r\n* Added usage data to the VertexAI Message response. by @raulchedrese in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F335\r\n* feat: add run mode: step by @CaiqueMitsuoka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F343\r\n* feat: add support for multiple tools in run_until_tool_used by @fortmarek in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F345\r\n* Fix ChatOllamaAI stop sequences: change from string to array type by @gur-xyz in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F342\r\n* expanded logging for ChatAnthropic API errors by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F349\r\n* Prevent crash when ToolResult with string in ChatGoogleAI.for_api\u002F1 by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F352\r\n* Bedrock OpenAI-compatible API compatibility fix by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F356\r\n* added xAI Grok chat model support by @alexfilatov in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F338\r\n* Support thinking to ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F354\r\n* Add req_config to ChatMode.ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F357\r\n* Clean up treating MessageDelta in ChatModels.ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F353\r\n* Expose full response headers through a new on_llm_response_headers callback by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F358\r\n* only include \"user\" with OpenAI request when a value is provided by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F364\r\n* Handle no content parts responses in ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F365\r\n* Adds support for gpt-image-1 in LangChain.Images.OpenAIImage by @Ven109 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F360\r\n* Prep for release v0.4.0-rc.2 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F366\r\n\r\n## New Contributors\r\n* @mweidner037 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F332\r\n* @jonator made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F329\r\n* @hjemmel made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F337\r\n* @gur-xyz made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F341\r\n* @CaiqueMitsuoka made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F343\r\n* @fortmarek made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F345\r\n* @alexfilatov made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F338\r\n* @Ven109 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F360\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.4.0-rc.1...v0.4.0-rc.2","2025-08-27T14:15:02",{"id":153,"version":154,"summary_zh":155,"released_at":156},113680,"v0.4.0-rc.1","Refer to the CHANGELOG.md for notes on breaking changes and migrating.\r\n\r\n## What's Changed\r\n* vertex ai file url support by @ahsandar in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F296\r\n* Update docs for Vertex AI by @ahsandar in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F304\r\n* Fix ContentPart migration by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F309\r\n* Fix tests for content_part_for_api\u002F2 of ChatOpenAI in v0.4.0-rc0 by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F300\r\n* Fix `tool_calls` `nil` messages by @udoschneider in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F314\r\n* feat: Add structured output support to ChatMistralAI  by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F312\r\n* feat: add configurable tokenizer to text splitters by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F310\r\n* simple formatting issue by @Bodhert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F307\r\n* Update Message.new_system spec to accurately accept [ContentPart.t()]… by @rtorresware in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F315\r\n* Fix: Add token usage to ChatGoogleAI message metadata by @mathieuripert in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F316\r\n* feat: include raw API responses in LLM error objects for better debug… by @TwistingTwists in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F317\r\n* expanded docs and test coverage for prompt caching by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F325\r\n* Fix AWS Bedrock stream decoder ordering issue by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F327\r\n* significant updates for v0.4.0-rc.1 by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F328\r\n\r\n## New Contributors\r\n* @ahsandar made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F296\r\n* @mathieuripert made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F309\r\n* @udoschneider made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F314\r\n* @Bodhert made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F307\r\n* @rtorresware made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F315\r\n* @TwistingTwists made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F317\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.4.0-rc.0...v0.4.0-rc.1","2025-07-03T20:29:22",{"id":158,"version":159,"summary_zh":160,"released_at":161},113681,"v0.4.0-rc.0","## What's Changed\r\n* Add OpenAI and Claude thinking support - v0.4.0-rc.0 by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F297\r\n\r\nIntroduces breaking changes while adding expanded support for thinking models. 
\r\n\r\n**NOTE**: See the `CHANGELOG.md` for more details\r\n**IMPORTANT**: Not all models are supported with this RC.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.3.3...v0.4.0-rc.0","2025-04-23T03:51:27",{"id":163,"version":164,"summary_zh":165,"released_at":166},113682,"v0.3.3","## What's Changed\r\n* upgrade gettext and migrate by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F271\r\n* Support caching tool results for Anthropic calls by @ci in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F269\r\n* Fix OpenAI verbose_api by @aaparmeggiani in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F274\r\n* Support choice of Anthropic beta headers by @ci in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F273\r\n* Fix specifying media uris for google vertex by @mattmatters in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F242\r\n* feat: add support for pdf content with OpenAI model by @bwan-nan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F275\r\n* feat: File urls for Google by @vasspilka in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F286\r\n* support streaming responses from mistral by @manukall in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F287\r\n* Support for json_response in ChatModels.ChatGoogleAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F277\r\n* Fix options being passed to the ollama chat api by @alappe in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F179\r\n* Support for file with file_id in ChatOpenAI by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F283\r\n* added LLMChain.run_until_tool_used\u002F3 by @brainlid in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F292\r\n* adds telemetry by @epinault in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F284\r\n\r\n## New Contributors\r\n* @ci made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F269\r\n* @aaparmeggiani made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F274\r\n* @mattmatters made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F242\r\n* @vasspilka made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F286\r\n* @manukall made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F287\r\n* @epinault made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F284\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.3.2...v0.3.3","2025-04-23T01:28:48",{"id":168,"version":169,"summary_zh":170,"released_at":171},113683,"v0.3.2","## What's Changed\r\n* add on_message_processed callback when tool response is created by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F248\r\n* typos: Update Example for Syntax Issues by @bradschwartz in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F249\r\n* ensure consistent capitalization by @JoaquinIglesiasTurina in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F257\r\n* adds tool calls and usage for mistral ai. 
by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F253\r\n* Feature\u002Fsupport sys instruction vertexai by @vseng in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F260\r\n* Enable tool support for ollama by @alappe in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F164\r\n* Adds Perplexity AI by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F261\r\n* Fix typos by @kianmeng in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F264\r\n* Feat\u002Fadd text splitter by @JoaquinIglesiasTurina in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F256\r\n* CI housekeeping by @kianmeng in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F265\r\n* Redact api-key from models by @raulpe7eira in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F266\r\n* add native tool functionality (e.g. 
`google_search` for Gemini) by @avergin in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F250\r\n* prep for v0.3.2 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F270\r\n\r\n## New Contributors\r\n* @bradschwartz made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F249\r\n* @JoaquinIglesiasTurina made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F257\r\n* @vseng made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F260\r\n* @kianmeng made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F264\r\n* @raulpe7eira made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F266\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.3.1...v0.3.2","2025-03-18T00:15:26",{"id":173,"version":174,"summary_zh":175,"released_at":176},113684,"v0.3.1","## What's Changed\r\n* support LMStudio when using ChatOpenAI by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F243\r\n* Include stacktrace context in messages for caught exceptions from LLM functions & function callbacks. 
by @montebrown in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F241\r\n* fix issue with OpenAI converting a message to JSON by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F245\r\n* prep for v0.3.1 release by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F246\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.3.0...v0.3.1","2025-02-05T15:09:01",{"id":178,"version":179,"summary_zh":180,"released_at":181},113685,"v0.3.0","Lots of changes that includes the RC releases as well.\r\n\r\n## What's Changed\r\n* fix openai content part media by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F112\r\n* ContentPart image media option updates by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F113\r\n* Updates for ContentPart images with messages to support ChatGPT's \"detail\" level option by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F114\r\n* add openai image endpoint support (aka DALL-E-2 & DALL-E-3) by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F116\r\n* allow PromptTemplates to convert to ContentParts by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F117\r\n* Fix elixir 1.17 warnings by @MrYawe in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F123\r\n* updates to README by @petrus-jvrensburg in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F125\r\n* Add ChatVertexAI by @raulchedrese in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F124\r\n* Major update. 
Preparing for v0.3.0-rc.0 - breaking changes by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F131\r\n* update calculator tool by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F132\r\n* support receiving rate limit info by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F133\r\n* upgrade abacus dep by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F134\r\n* add support for TokenUsage through callbacks by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F137\r\n* Big update - RC ready by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F138\r\n* Improvements to docs by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F145\r\n* ChatGoogleAI fixes and updates by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F152\r\n* fix: typespec error on Message.new_user\u002F1 by @bwan-nan in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F151\r\n* Convert to use mimic for mocking calls by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F155\r\n* Remove ApiOverride reference in mix.exs project.docs by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F157\r\n* Fix OpenAI chat stream hanging by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F156\r\n* Fix streaming error when using Azure OpenAI Service by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F158\r\n* Update Azure OpenAI Service streaming fix by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F161\r\n* Fix ChatOllamaAI streaming response by @alappe in 
https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F162\r\n* Fix PromptTemplate example by @joelpaulkoch in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F167\r\n* adds OpenAI project authentication. by @fbettag in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F166\r\n* Anthropic support for streamed tool calls with parameters by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F169\r\n* change return of LLMChain.run\u002F2 - breaking change by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F170\r\n* 🐛 cast tool_calls arguments correctly inside message_deltas by @rparcus in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F175\r\n* Do not duplicate tool call parameters if they are identical by @michalwarda in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F174\r\n* Structured Outputs by supplying `strict: true` in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F173\r\n* feat: add OpenAI's new structured output API by @monotykamary in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F180\r\n* Support system instructions for Google AI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F182\r\n* Handle empty text parts from GoogleAI responses by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F181\r\n* Handle missing token usage fields for Google AI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F184\r\n* Handle functions with no parameters for Google AI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F183\r\n* Add AWS Bedrock support to ChatAnthropic by @stevehodgkiss in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F154\r\n* Handle all possible 
finishReasons for ChatGoogleAI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F188\r\n* Remove unused assignment from ChatGoogleAI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F187\r\n* Add support for passing safety settings to Google AI by @elliotb in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F186\r\n* Add tool_choice for OpenAI and Anthropic by @avergin in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F142\r\n* add support for examples to title chain by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F191\r\n* add \"processed_content\" to ToolResult struct and support storing Elixir data from function results by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F192\r\n* Revamped error handling and handles Anthropic's \"overload_error\" by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F194\r\n* Documenting AWS Bedrock support with Anthropic Claude by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F195\r\n* Cancel a message delta when we receive \"overloaded\" error by @","2025-01-22T20:21:43",{"id":183,"version":184,"summary_zh":185,"released_at":186},113686,"v0.3.0-rc.2","This release includes a breaking change. 
See [`CHANGELOG.md`](https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fblob\u002Fmain\u002FCHANGELOG.md#v030-rc2-2025-01-08) for more details and migration instructions.\r\n\r\n## What's Changed\r\n* add explicit message support in summarizer by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F220\r\n* Change abacus to optional dep by @nallwhy in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F223\r\n* Remove constraint of alternating user, assistant by @GenericJam in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F222\r\n* Breaking change: consolidate LLM callback functions by @brainlid in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F228\r\n* feat: Enable :inet6 for Req.new for Ollama by @mpope9 in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F227\r\n* fix: enable verbose_deltas by @cristineguadelupe in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F197\r\n\r\n## New Contributors\r\n* @nallwhy made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F223\r\n* @GenericJam made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F222\r\n* @mpope9 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fpull\u002F227\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbrainlid\u002Flangchain\u002Fcompare\u002Fv0.3.0-rc.1...v0.3.0-rc.2","2025-01-09T03:37:44",[188,196,204,212,220,233],{"id":189,"name":190,"github_repo":191,"description_zh":192,"stars":193,"difficulty_score":52,"last_commit_at":194,"category_tags":195,"status":53},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 
绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[43,40,41],{"id":197,"name":198,"github_repo":199,"description_zh":200,"stars":201,"difficulty_score":31,"last_commit_at":202,"category_tags":203,"status":53},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,"2026-04-05T23:32:43",[43,41,42],{"id":205,"name":206,"github_repo":207,"description_zh":208,"stars":209,"difficulty_score":31,"last_commit_at":210,"category_tags":211,"status":53},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[43,40,41],{"id":213,"name":214,"github_repo":215,"description_zh":216,"stars":217,"difficulty_score":31,"last_commit_at":218,"category_tags":219,"status":53},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[43,42],{"id":221,"name":222,"github_repo":223,"description_zh":224,"stars":225,"difficulty_score":31,"last_commit_at":226,"category_tags":227,"status":53},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[40,228,229,230,41,231,42,43,232],"数据工具","视频","插件","其他","音频",{"id":234,"name":235,"github_repo":236,"description_zh":237,"stars":238,"difficulty_score":52,"last_commit_at":239,"category_tags":240,"status":53},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 
通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[41,40,43,42,231]]