[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bespokelabsai--curator":3,"tool-bespokelabsai--curator":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":75,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":23,"env_os":103,"env_gpu":103,"env_ram":103,"env_deps":104,"category_tags":112,"github_topics":113,"view_count":125,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":126,"updated_at":127,"faqs":128,"releases":162},554,"bespokelabsai\u002Fcurator","curator","Synthetic data curation for post-training and structured data extraction","curator 是一款专为大模型后训练设计的开源数据合成与整理工具。它通过批量推理和可扩展的数据处理管道，帮助用户高效生成高质量的合成数据及结构化信息。\n\n在大模型微调过程中，获取优质训练数据往往面临成本高、流程复杂的挑战。curator 旨在解决这一痛点，简化从数据筛选、推理到提取的整个工作流。它不仅支持 OpenAI、Anthropic、Google Gemini 等多种主流模型接口，还引入了代码执行环境（支持本地、Ray、Docker 等后端），让数据生成过程更加灵活可靠。此外，其批量处理能力能有效降低 Token 消耗成本，配合 Tinker SDK 可实现从数据到 LoRA 微调模型的快速落地。\n\n目前，许多热门的高质量推理数据集（如 OpenThoughts 系列）均基于 curator 构建。无论是个人研究者还是企业团队，若计划进行垂直领域的模型优化或希望提升数据质量，curator 都是值得尝试的高效助手。","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fbespokelabs.ai\u002F\" target=\"_blank\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: light)\" width=\"100px\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_d5fdbb6997d6.png\">\n      \u003Cimg alt=\"Bespoke Labs Logo\" width=\"100px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_d5fdbb6997d6.png\">\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">Bespoke Curator\u003C\u002Fh1>\n\u003Ch3 align=\"center\" style=\"font-size: 20px; margin-bottom: 4px\">Bulk Inference and Scalable Data Curation for Post-Training\u003C\u002Fh3>\n\u003Cbr\u002F>\n\n\u003Cdiv align=\"center\">\n\n[![Github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCurator-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002F) [![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F@BespokeLabsai-white?style=for-the-badge&logo=X&logoColor=white&color=000)](https:\u002F\u002Fx.com\u002Fbespokelabsai) [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBespokeLabs-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor)](https:\u002F\u002Fhuggingface.co\u002Fbespokelabs) 
[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBespoke_Labs-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.gg\u002FKqpXvpzVBS) \n\u003Cbr>\n[![Docs](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocs-docs.bespokelabs.ai-blue?style=for-the-badge&link=https%3A%2F%2Fdocs.bespokelabs.ai&labelColor=000)](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fgetting-started) [![Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSite-bespokelabs.ai-blue?style=for-the-badge&link=https%3A%2F%2Fbespokelabs.ai&labelColor=000)](https:\u002F\u002Fbespokelabs.ai\u002F) [![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fbespokelabs-curator?style=for-the-badge&labelColor=000)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fbespokelabs-curator\u002F)\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n[ English | \u003Ca href=\"docs\u002FREADME_zh.md\">中文\u003C\u002Fa> ]\n\u003C\u002Fdiv>\n\n## 🎉 What's New\n* **[2026.03.14]** [Tinker integration for fine-tuning](examples\u002Fpoem_finetuning_example.py): Go from curated data to a LoRA fine-tuned model in a few lines of Python using the Tinker SDK.\n* **[2025.12.05]** [Launched OpenThoughts-Agents](https:\u002F\u002Fwww.open-thoughts.ai\u002Fblog\u002Fagent) whose data was curated using Curator.\n* **[2025.04.09]** [Launching Reasoning Datasets Competition](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fbespokelabs\u002Freasoning-datasets-competition) with HuggingFace and Together.ai. Win $5000 USD worth of prizes!\n* **[2025.04.03]** We used Bespoke Curator to create [OpenThoughts2-1M](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopen-thoughts\u002FOpenThoughts2-1M) dataset, which was used to train OpenThinker2-32B that outperforms DeepSeek-R1-32B. The dataset started trending on HuggingFace.\n* **[2025.03.12]** [Gemini Batch support added](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Feffortless-gemini-batch-processing-with-curator): Gemini batch API is extremely challenging, and we made it much simpler! :)\n* **[2025.03.05]** [Claude 3.7 Sonnet Thinking and batch mode support added](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Fclaude-3-7-sonnet-thinking-mode-in-curator).\n* **[2025.02.26]** [Code Execution Support added](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Flaunching-code-executor): You can now run code (generated by Curator) using CodeExecutor. We support four backends: local (called multiprocessing), Ray, Docker and e2b.\n* **[2025.02.06]** We used Bespoke Curator to create [s1K-1.1]( https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fsimplescaling\u002Fs1K-1.1), a high-quality sample-efficient reasoning dataset.\n* **[2025.01.30]** [Batch Processing Support for OpenAI, Anthropic, and other compatible APIs](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Fbatch-processing-with-curator): Cut Token Costs in Half 🔥🔥🔥. Through our partnership with kluster.ai, new users using Curator can access open-source models like DeepSeek-R1 and receive a **$25 credit** (limits apply). 
EDIT: Promotion has come to an end.\n* **[2025.01.27]** We used Bespoke Curator to create [OpenThoughts-114k](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopen-thoughts\u002FOpenThoughts-114k), a high-quality reasoning dataset (trending on HuggingFace).\n* **[2025.01.22]** We used Bespoke Curator to create [Bespoke-Stratos-17k](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fbespokelabs\u002FBespoke-Stratos-17k), a high-quality reasoning dataset (trending on HuggingFace).\n* **[2025.01.15]** Curator launched 🎉\n\n## Overview\n\nBespoke Curator makes it easy to create synthetic data pipelines. Whether you are training a model or extracting structured data, Curator will prepare high-quality data quickly and robustly.\n\n* Rich Python based library for generating and curating synthetic data.\n* Viewer to monitor data while it is being generated.\n* First class support for structured outputs.\n* Built-in performance optimizations for asynchronous operations, caching, and fault recovery at every scale.\n* Support for a wide range of inference options via LiteLLM, vLLM, and popular batch APIs.\n\n![CLI in action](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_ef3e28969a3b.gif)\n\nCheck out our full documentation for [getting started](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fgetting-started), [tutorials](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Ftutorials), [guides](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides) and detailed [reference](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fapi-reference\u002Fllm-api-documentation).\n\n## 🛠️ Installation\n\n```bash\npip install bespokelabs-curator\n```\n## 📕 Examples\n\n### Finetuning\u002FDistillation\n| **Task** | **Link(s)** | **Goal** |\n|----------|--------------|-------------|\n| **Product feature extraction** | \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1YoA23-cBcWpaSErULzBI2bo2LPGo37GQ\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> | Finetuning a model to identify features of a product |\n| **Sentiment analysis** | \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Zfl3g7POsqqYQqkzXdyhYRSAymLhZugn?usp=sharing\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa> | Aspect-based sentiment analysis of restaurant reviews and finetuning using Together.ai |\n| **RAFT for domain-specific RAG** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fblocks\u002Fraft\" target=\"_blank\">Code\u003C\u002Fa> | Implement Retrieval Augmented Fine-Tuning (RAFT) that processes domain-specific documents, generates questions, and prepares data for fine-tuning LLMs. 
|\n| **Poem generation & LoRA fine-tuning** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fblob\u002Fmain\u002Fexamples\u002Fpoem_finetuning_example.py\" target=\"_blank\">Code\u003C\u002Fa> | End-to-end pipeline: curate poem data with Curator, then LoRA fine-tune with TinkerTrainer |\n\n### Data Generation\n| **Task** | **Link(s)** | **Goal** |\n|----------|--------------|-------------|\n| **Reasoning dataset generation (Bespoke Stratos)** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fbespoke-stratos-data-generation\" target=\"_blank\">Code\u003C\u002Fa> | Generate the Bespoke-Stratos-17k dataset, focusing on reasoning traces from math, coding, and problem-solving datasets. |\n| **Reasoning dataset generation (Open Thoughts)** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopen-thoughts\u002Fopen-thoughts\" target=\"_blank\">Code\u003C\u002Fa> | Generate the Open-Thoughts-114k dataset, focusing on reasoning traces from math, coding, and problem-solving datasets.|\n| **Multimodal** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fmultimodal\" target=\"_blank\">Code\u003C\u002Fa> | Demonstrates multimodal capabilities by generating recipes from food images |\n| **Ungrounded Question Answer generation** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fungrounded-qa\" target=\"_blank\">Code\u003C\u002Fa> | Generate diverse question-answer pairs using techniques similar to the CAMEL paper |\n| **Code Execution** | \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1YKj1-BC66-3LgNkf1m5AEPswIYtpOU-k\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>| Execute code generated with Curator |\n| **3Blue1Brown video generation** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fcode-execution\u002Fmath-animation\" target=\"_blank\">Code\u003C\u002Fa> | Generate videos similar to 3Blue1Brown and render them using code execution! |\n| **Synthetic charts** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fblob\u002Fmain\u002Fexamples\u002Fcode-execution\u002Fchart-generation\u002Fcharts.py\" target=\"_blank\">Code\u003C\u002Fa> | Generate charts synthetically.\n| **Function calling** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Ffunction-calling\" target=\"_blank\">Code\u003C\u002Fa> | Generate data for finetuning for function calling. |\n\n\n\n## 🚀 Quickstart\n\n### Using `curator.LLM` for Bulk Inference\n\n```python\nfrom typing import Dict\nfrom bespokelabs import curator\nfrom datasets import Dataset\nfrom pydantic import BaseModel, Field\nfrom typing import Literal\n\nclass Sentiment(BaseModel):\n  sentiment: Literal[\"positive\", \"negative\", \"neutral\"] = Field(\n    description=\"Sentiment of the review\")\n\nclass SentimentAnalyzer(curator.LLM):\n\n  def prompt(self, product: Dict):\n    return f\"Determine the sentiment of the product from the review: {product['review']}\"\n\n  def parse(self, product: Dict, response: Sentiment):\n    return [{\"name\": product[\"name\"], \"sentiment\": response.sentiment}]\n\n# You can easily have a million rows here. 
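\n# Because response_format=Sentiment is passed to the analyzer below, Curator\n# requests structured output, so parse() receives a validated Sentiment\n# instance rather than raw text.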
\n# Curator takes care of parallelism, retries, and caches responses.\ndataset = [{\"name\": \"Curator\", \"review\": \"Already saved hours in one day of use.\"},\n           {\"name\": \"Bespoke MiniCheck\", \"review\": \"Hallucination rates dropped by 90%.\"}]\n\n# You can set batch=True to instantly use batch mode and save 50% of the costs.\nanalyzer = SentimentAnalyzer(\n    model_name=\"gpt-4o-mini\", response_format=Sentiment, batch=False)\nreviews = analyzer(dataset)\nprint(reviews.to_pandas())\n```\nOutput:\n```\n                name sentiment\n0            Curator  positive\n1  Bespoke MiniCheck  positive\n```\n\nIn the `SentimentAnalyzer` class:\n* `prompt` takes the input (`product`) and returns the prompt for the LLM.\n* `parse` takes the input (`product`) and the structured output (`response`) and converts it to a list of dictionaries. This is so that we can easily convert the output to a HuggingFace Dataset object.\n\nInstead of a list, you can pass a HuggingFace Dataset object as well (see below for more details).\n\n### Using `curator.LLM` for data generation\n\nHere's an example of using structured outputs and chaining together two curator.LLM blocks to generate diverse poems.\n\n\n```python\nfrom typing import Dict, List\nfrom bespokelabs import curator\nfrom pydantic import BaseModel, Field\n\nclass Topics(BaseModel):\n    topics_list: List[str] = Field(description=\"A list of topics.\")\n\nclass TopicGenerator(curator.LLM):\n  response_format = Topics\n\n  def prompt(self, subject):\n    return f\"Return 3 topics related to {subject}\"\n\n  def parse(self, input: str, response: Topics):\n    return [{\"topic\": t} for t in response.topics_list]\n\n\nclass Poem(BaseModel):\n    title: str = Field(description=\"The title of the poem.\")\n    poem: str = Field(description=\"The content of the poem.\")\n\nclass Poet(curator.LLM):\n    response_format = Poem\n\n    def prompt(self, input: Dict) -> str:\n        return f\"Write two poems about {input['topic']}.\"\n\n    def parse(self, input: Dict, response: Poem) -> Dict:\n        return [{\"title\": response.title, \"poem\": response.poem}]\n\ntopic_generator = TopicGenerator(model_name=\"gpt-4o-mini\")\npoet = Poet(model_name=\"gpt-4o-mini\")\n# Start generation\ntopics = topic_generator(\"Mathematics\")\npoems = poet(topics)\n```\nOutput:\n```\n \ttitle                     poem\n0\tThe Language of Algebra\t  In symbols and signs, truths intertwine,..\n1\tThe Geometry of Space\t  In the world around us, shapes do collide,..\n2\tThe Language of Logic\t  In circuits and wires where silence speaks,..\n```\n\nYou can see more examples in the [examples](examples) directory.\n\nSee the [docs](https:\u002F\u002Fdocs.bespokelabs.ai\u002F) for more details as well as\nfor troubleshooting information.\n\n> [!TIP]\n> If you are generating large datasets, you may want to use [batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fsave-usdusdusd-on-llm-inference) to save costs. Currently batch APIs from [OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fbatch) and [Anthropic](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fmessage-batches) are supported. 
With curator this is as simple as setting `batch=True` in the `LLM` class.\n\n> [!NOTE]\n> Retries and caching are enabled by default to help you rapidly iterate your data pipelines.\n> So now if you run the same prompt again, you will get the same response, pretty much instantly.\n> You can delete the cache at `~\u002F.cache\u002Fcurator` or disable it with `export CURATOR_DISABLE_CACHE=true`.\n\n\n> [!IMPORTANT]\n> Make sure to set your API keys as environment variables for the model you are calling. For example running `export OPENAI_API_KEY=sk-...` and `export ANTHROPIC_API_KEY=ant-...` will allow you to run the previous two examples. A full list of supported models and their associated environment variable names can be found [in the litellm docs](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders).\n### Anonymized Telemetry\n\nWe collect minimal, anonymized usage telemetry to help prioritize new features and improvements that benefit the Curator community. You can opt out by setting the `TELEMETRY_ENABLED` environment variable to `False`. \n\n## 📖 Providers\nCurator supports a wide range of providers, including OpenAI, Anthropic, and many more. \n\n### OpenAI backend\n```python\nllm = curator.LLM(\n    model_name=\"gpt-4o-mini\",\n)\n```\nFor other models that support OpenAI-compatible APIs, you can use the `openai` backend:\n```python\nllm = curator.LLM(\n    model_name=\"gpt-4o-mini\",\n    backend=\"openai\",\n    backend_params={\n        \"base_url\": \"https:\u002F\u002Fyour-openai-compatible-api-url\",\n        \"api_key\": \u003CYOUR_OPENAI_COMPATIBLE_SERVICE_API_KEY>,\n    },\n)\n```\n\n\n### LiteLLM (Anthropic, Gemini, together.ai, etc.)\nHere is an example of using Gemini with the litellm backend:\n```python\nllm = curator.LLM(\n    model_name=\"gemini\u002Fgemini-1.5-flash\",\n    backend=\"litellm\",\n    backend_params={\n        \"max_requests_per_minute\": 2_000,\n        \"max_tokens_per_minute\": 4_000_000\n    },\n)\n```\n[Documentation](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-litellm-for-diverse-providers)\n\n### Ollama\n```python\nllm = curator.LLM(\n    model_name=\"ollama\u002Fllama3.1:8b\",  # Ollama model identifier\n    backend_params={\"base_url\": \"http:\u002F\u002Flocalhost:11434\"},\n)\n```\n[Documentation](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-ollama-with-curator#id-2.-configure-the-ollama-backend)\n\n### vLLM\n\n```python\nllm = curator.LLM( \n    model_name=\"Qwen\u002FQwen2.5-3B-Instruct\", \n    backend=\"vllm\", \n    backend_params={ \n        \"tensor_parallel_size\": 1, # Adjust based on GPU count \n        \"gpu_memory_utilization\": 0.7 \n    }\n)\n```\n[Documentation](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-vllm-with-curator#id-3-initialize-and-use-the-generator)\n### DeepSeek\nDeepSeek offers an OpenAI-compatible API that you can use with the `openai` backend.\n> [!IMPORTANT]\n> The DeepSeek API is experiencing intermittent issues and will return empty responses during times of high traffic. 
We recommend\ncalling the DeepSeek API through the `openai` backend, with a high max retries so that we can retry failed requests upon empty\nresponse and a reasonable max requests and tokens per minute so we don't retry too aggressively and overwhelm the API.\n\n```python\nllm = curator.LLM(\n    model_name=\"deepseek-reasoner\",\n    generation_params={\"temp\": 0.0},\n    backend_params={\n        \"max_requests_per_minute\": 100,\n        \"max_tokens_per_minute\": 10_000_000,\n        \"base_url\": \"https:\u002F\u002Fapi.deepseek.com\u002F\",\n        \"api_key\": \u003CYOUR_DEEPSEEK_API_KEY>,\n        \"max_retries\": 50,\n    },\n    backend=\"openai\",\n)\n```\n\n### kluster.ai\n```python\nllm = curator.LLM(\n    model_name=\"deepseek-ai\u002FDeepSeek-R1\", \n    backend=\"klusterai\",\n)\n```\n[Documentation](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-kluster.ai-for-batch-inference)\n## 📦 Batch Mode\nSeveral providers offer about 50% discount on token usage when using batch mode. Curator makes it easy to use batch mode with a wide range of providers.\n\nExample with OpenAI ([docs reference](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-openai-for-batch-inference)):\n```python\nllm = curator.LLM(model_name=\"gpt-4o-mini\", batch=True)\n```\n\nSee documentation:\n* [OpenAI batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-openai-for-batch-inference)\n* [Anthropic batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-anthropic-for-batch-inference)\n* [Gemini batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-gemini-for-batch-inference)\n* [kluster.ai batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-kluster.ai-for-batch-inference)\n* [Mistral batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-mistral-for-batch-inference)\n\n## 🔧 Fine-Tuning with Tinker\n\nCurator integrates with the [Tinker SDK](https:\u002F\u002Fgithub.com\u002Fthinking-machines-lab\u002Ftinker-cookbook) so you can go from curated data to a LoRA fine-tuned model in a few lines of Python.\n\n```bash\npip install bespokelabs-curator tinker\nexport TINKER_API_KEY=\"your-tinker-key\"\n```\n\n```python\nfrom bespokelabs.curator import TinkerTrainer, TinkerTrainerConfig\n\n# Configure training\nconfig = TinkerTrainerConfig(\n    base_model=\"Qwen\u002FQwen3-8B\",\n    epochs=3,\n    batch_size=4,\n    lora_config={\"rank\": 16, \"alpha\": 32, \"dropout\": 0.05},\n    checkpoint_every_epoch=True,\n)\n\n# Training data is a list of chat-format dicts (or a HuggingFace Dataset)\ntraining_data = [\n    {\"messages\": [\n        {\"role\": \"user\", \"content\": \"What is Python?\"},\n        {\"role\": \"assistant\", \"content\": \"Python is a programming language.\"},\n    ]},\n    # ...\n]\n\n# Train\ntrainer = TinkerTrainer(config)\nresult = trainer.train(training_data)\nprint(f\"Final loss: {result.final_loss:.4f}\")\n\n# Sample from the fine-tuned model\nresponse = trainer.sample(\"Explain recursion in Python\")\nprint(response)\n```\n\n### Checkpoint Resume\n\nTraining can be resumed from any saved checkpoint. 
The trainer restores both model weights and optimizer state, then continues from where it left off; no data is replayed.\n\n```python\n# Resume from an earlier run's checkpoint\ncheckpoints = result.checkpoints  # list of CheckpointInfo\ntrainer = TinkerTrainer(config)\ntrainer.load_checkpoint(checkpoints[-1])\nresult = trainer.train(training_data)  # continues from the checkpoint\n```\n\n### Custom Data Formats\n\nSubclass `TinkerTrainer` to handle non-standard data layouts:\n\n```python\nclass MyTrainer(TinkerTrainer):\n    def format_example(self, row):\n        return TrainingExample.from_dict_messages([\n            {\"role\": \"user\", \"content\": row[\"question\"]},\n            {\"role\": \"assistant\", \"content\": row[\"answer\"]},\n        ])\n\ntrainer = MyTrainer(config)\nresult = trainer.train([{\"question\": \"What is 2+2?\", \"answer\": \"4\"}, ...])\n```\n\nSee the full [poem fine-tuning example](examples\u002Fpoem_finetuning_example.py) for an end-to-end pipeline that curates data with `curator.LLM` and then fine-tunes with `TinkerTrainer`.\n\n## Bespoke Curator Viewer\nThe hosted curator viewer is a rich interface to visualize data, and makes visually inspecting the data much easier.\n\nYou can enable it as follows:\n\nBash:\n\n```shell\nexport CURATOR_VIEWER=1\n```\nPython\u002FColab:\n```python\nimport os\nos.environ[\"CURATOR_VIEWER\"]=\"1\"\n```\n\nWith this enabled, as curator generates data, it gets uploaded and you can see the responses streaming in the viewer. The URL for the viewer is displayed right next to the rich progress display.\n\n### Authenticate with a Bespoke Labs API key\n\nBy default, datasets are accessible to anyone with the link. To keep your datasets private, you can associate them with a Bespoke Labs account. Doing so also allows you to:\n\n1. Track all datasets associated with your account\n2. Share datasets with collaborators\n3. Analyze data generation costs over time\n\nYou can enable authentication as follows:\n\n1. [Sign up](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fauth\u002Fsignup) for a Bespoke Labs account.\n2. Create an API key from the [API Key](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fkeys) page.\n3. Set the `BESPOKE_API_KEY` and `CURATOR_VIEWER` environment variables:\n\n```shell\nexport BESPOKE_API_KEY=\u003CYOUR_API_KEY>\nexport CURATOR_VIEWER=1\n```\n\nWith the environment variables set, all your datasets will be streamed to the hosted viewer and linked to your Bespoke Labs account. You can visit the [Datasets](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fdatasets) page to see datasets generated with your API keys or shared with you by others, and the [Cost Report](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fcosts)\npage to see the data generation costs for a given period.\n\n## Environment Variables\n\nWe support a range of environment variables to customize the behavior of Curator.\n\nHere is a complete table of environment variables:\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `CURATOR_VIEWER` | Enables the Curator viewer for visualizing data curation when `True`. | `False` |\n| `CURATOR_DISABLE_CACHE` | Disables caching for `curator.LLM` generations when `True`. Useful for fresh runs. | `False` |\n| `CURATOR_CACHE_DIR` | Sets the cache directory used for `curator.LLM` generations. 
| `~\u002F.cache\u002Fcurator` |\n| `CURATOR_DISABLE_RICH_DISPLAY` | When `True`, disables [Rich CLI](https:\u002F\u002Fgithub.com\u002FTextualize\u002Frich) output (and falls back to [tqdm](https:\u002F\u002Ftqdm.github.io\u002F) logging) for local data generation monitoring. This is useful when debugging with inline breakpoints or interactive debuggers like `pdb`, where Rich's dynamic output can interfere with terminal input. | `False` |\n| `TELEMETRY_ENABLED` | Enable telemetry for curator usage tracking when `True` | `True` |\n\n## Contributing\nThank you to all the contributors for making this project possible!\nPlease follow [these instructions](docs\u002FCONTRIBUTING.md) on how to contribute.\n\n## Citation\nIf you find Curator useful, please consider citing us!\n\n```\n@software{Curator: A Tool for Synthetic Data Creation,\n  author = {Marten, Ryan* and Vu, Trung* and Ji, Charlie Cheng-Jie and Sharma, Kartik and Pimpalgaonkar, Shreyas and Dimakis, Alex and Sathiamoorthy, Maheswaran},\n  month = jan,\n  title = {{Curator}},\n  year = {2025},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator}}\n}\n```\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fbespokelabs.ai\u002F\" target=\"_blank\">\n    \u003Cpicture>\n      \u003Csource media=\"(prefers-color-scheme: light)\" width=\"100px\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_d5fdbb6997d6.png\">\n      \u003Cimg alt=\"Bespoke Labs 标志\" width=\"100px\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_d5fdbb6997d6.png\">\n    \u003C\u002Fpicture>\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">Bespoke Curator\u003C\u002Fh1>\n\u003Ch3 align=\"center\" style=\"font-size: 20px; margin-bottom: 4px\">用于后训练的大规模推理与可扩展数据整理\u003C\u002Fh3>\n\u003Cbr\u002F>\n\n\u003Cdiv align=\"center\">\n\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCurator-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002F) [![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F@BespokeLabsai-white?style=for-the-badge&logo=X&logoColor=white&color=000)](https:\u002F\u002Fx.com\u002Fbespokelabsai) [![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBespokeLabs-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor)](https:\u002F\u002Fhuggingface.co\u002Fbespokelabs) [![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBespoke_Labs-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.gg\u002FKqpXvpzVBS) \n\u003Cbr>\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocs-docs.bespokelabs.ai-blue?style=for-the-badge&link=https%3A%2F%2Fdocs.bespokelabs.ai&labelColor=000)](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fgetting-started) [![网站](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSite-bespokelabs.ai-blue?style=for-the-badge&link=https%3A%2F%2Fbespokelabs.ai&labelColor=000)](https:\u002F\u002Fbespokelabs.ai\u002F) [![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fbespokelabs-curator?style=for-the-badge&labelColor=000)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fbespokelabs-curator\u002F)\n\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n[ English | \u003Ca href=\"docs\u002FREADME_zh.md\">中文\u003C\u002Fa> ]\n\u003C\u002Fdiv>\n\n## 🎉 最新动态\n* **[2026.03.14]** [Tinker 
微调集成](examples\u002Fpoem_finetuning_example.py)：使用 Tinker SDK (软件开发工具包)，仅需几行 Python 代码即可从整理好的数据得到 LoRA (低秩自适应) 微调模型。\n* **[2025.12.05]** [推出 OpenThoughts-Agents](https:\u002F\u002Fwww.open-thoughts.ai\u002Fblog\u002Fagent)，其数据由 Curator 整理。\n* **[2025.04.09]** 与 HuggingFace 和 Together.ai 合作 [推出推理数据集竞赛](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fbespokelabs\u002Freasoning-datasets-competition)。赢取价值 5000 美元的奖品！\n* **[2025.04.03]** 我们使用 Bespoke Curator 创建了 [OpenThoughts2-1M](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopen-thoughts\u002FOpenThoughts2-1M) 数据集，该数据集用于训练 OpenThinker2-32B，其表现优于 DeepSeek-R1-32B。该数据集开始在 HuggingFace 上流行。\n* **[2025.03.12]** [新增 Gemini Batch 支持](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Feffortless-gemini-batch-processing-with-curator)：Gemini 批处理 API (应用程序编程接口) 极具挑战性，但我们让它变得简单多了！:)\n* **[2025.03.05]** [新增 Claude 3.7 Sonnet 思维模式和批处理模式支持](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Fclaude-3-7-sonnet-thinking-mode-in-curator)。\n* **[2025.02.26]** [新增代码执行支持](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Flaunching-code-executor)：现在可以使用 CodeExecutor 运行代码（由 Curator 生成）。我们支持四种后端：本地（称为 multiprocessing）、Ray、Docker 和 e2b。\n* **[2025.02.06]** 我们使用 Bespoke Curator 创建了 [s1K-1.1]( https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fsimplescaling\u002Fs1K-1.1)，这是一个高质量、样本高效的推理数据集。\n* **[2025.01.30]** [支持 OpenAI、Anthropic 及其他兼容 API (应用程序编程接口) 的批处理](https:\u002F\u002Fwww.bespokelabs.ai\u002Fblog\u002Fbatch-processing-with-curator)：Token (令牌) 成本减半 🔥🔥🔥。通过与 kluster.ai 的合作，使用 Curator 的新用户可以访问 DeepSeek-R1 等开源模型并获得 **25 美元积分**（有限制）。编辑：促销活动已结束。\n* **[2025.01.27]** 我们使用 Bespoke Curator 创建了 [OpenThoughts-114k](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fopen-thoughts\u002FOpenThoughts-114k)，这是一个高质量的推理数据集（在 HuggingFace 上流行）。\n* **[2025.01.22]** 我们使用 Bespoke Curator 创建了 [Bespoke-Stratos-17k](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fbespokelabs\u002FBespoke-Stratos-17k)，这是一个高质量的推理数据集（在 HuggingFace 上流行）。\n* **[2025.01.15]** Curator 正式发布 🎉\n\n## 概述\n\nBespoke Curator 使得创建合成数据管道变得轻松。无论您是在训练模型还是提取结构化数据，Curator 都能快速且稳健地准备高质量数据。\n\n* 基于 Python 的丰富库，用于生成和整理合成数据。\n* 查看器，可在数据生成过程中进行监控。\n* 对结构化输出提供一流支持。\n* 内置性能优化，适用于异步操作、缓存和故障恢复，支持各种规模。\n* 通过 LiteLLM、vLLM 和流行的批处理 API (应用程序编程接口) 支持广泛的推理选项。\n\n![CLI (命令行界面) 实际操作](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_readme_ef3e28969a3b.gif)\n\n查看我们的完整文档，包括 [入门指南](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fgetting-started)、[教程](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Ftutorials)、[指南](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides) 和详细的 [参考](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fapi-reference\u002Fllm-api-documentation)。\n\n## 🛠️ 安装\n\n```bash\npip install bespokelabs-curator\n```\n## 📕 示例\n\n### 微调\u002F蒸馏\n| **任务** | **链接** | **目标** |\n|----------|--------------|-------------|\n| **产品特征提取** | \u003Ca target=\"_blank\" href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1YoA23-cBcWpaSErULzBI2bo2LPGo37GQ\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> | 微调模型以识别产品特征 |\n| **情感分析** | \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Zfl3g7POsqqYQqkzXdyhYRSAymLhZugn?usp=sharing\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa> | 对餐厅评论进行基于方面的情感分析，并使用 Together.ai 
进行微调 |\n| **针对特定领域的 RAFT** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fblocks\u002Fraft\" target=\"_blank\">代码\u003C\u002Fa> | 实现检索增强微调 (RAFT)，处理特定领域文档，生成问题，并为大语言模型 (LLM) 微调准备数据。 |\n| **诗歌生成与 LoRA 微调** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fblob\u002Fmain\u002Fexamples\u002Fpoem_finetuning_example.py\" target=\"_blank\">代码\u003C\u002Fa> | 端到端管道：使用 Curator 整理诗歌数据，然后使用 TinkerTrainer 进行 LoRA (低秩自适应) 微调 |\n\n### 数据生成\n| **任务** | **链接** | **目标** |\n|----------|--------------|-------------|\n| **推理数据集生成 (Bespoke Stratos)** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fbespoke-stratos-data-generation\" target=\"_blank\">代码\u003C\u002Fa> | 生成 Bespoke-Stratos-17k 数据集，专注于来自数学、编码和问题解决数据集的推理轨迹。 |\n| **推理数据集生成 (Open Thoughts)** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopen-thoughts\u002Fopen-thoughts\" target=\"_blank\">代码\u003C\u002Fa> | 生成 Open-Thoughts-114k 数据集，专注于来自数学、编码和问题解决数据集的推理轨迹。 |\n| **多模态** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fmultimodal\" target=\"_blank\">代码\u003C\u002Fa> | 通过从食物图像生成食谱来展示多模态能力 |\n| **无依据问答生成 (Ungrounded QA)** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fungrounded-qa\" target=\"_blank\">代码\u003C\u002Fa> | 使用类似于 CAMEL 论文的技术生成多样化的问答对 |\n| **代码执行** | \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1YKj1-BC66-3LgNkf1m5AEPswIYtpOU-k\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>| 执行使用 Curator 生成的代码 |\n| **3Blue1Brown 视频生成** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Fcode-execution\u002Fmath-animation\" target=\"_blank\">代码\u003C\u002Fa> | 生成类似 3Blue1Brown 的视频并使用代码执行进行渲染！ |\n| **合成图表** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fblob\u002Fmain\u002Fexamples\u002Fcode-execution\u002Fchart-generation\u002Fcharts.py\" target=\"_blank\">代码\u003C\u002Fa> | 合成生成图表。 |\n| **函数调用** | \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Ftree\u002Fmain\u002Fexamples\u002Ffunction-calling\" target=\"_blank\">代码\u003C\u002Fa> | 生成用于函数调用微调 (finetuning) 的数据。 |\n\n\n\n## 🚀 快速开始\n\n### 使用 `curator.LLM` 进行批量推理\n\n```python\nfrom typing import Dict\nfrom bespokelabs import curator\nfrom datasets import Dataset\nfrom pydantic import BaseModel, Field\nfrom typing import Literal\n\nclass Sentiment(BaseModel):\n  sentiment: Literal[\"positive\", \"negative\", \"neutral\"] = Field(\n    description=\"Sentiment of the review\")\n\nclass SentimentAnalyzer(curator.LLM):\n\n  def prompt(self, product: Dict):\n    return f\"Determine the sentiment of the product from the review: {product['review']}\"\n\n  def parse(self, product: Dict, response: Sentiment):\n    return [{\"name\": product[\"name\"], \"sentiment\": response.sentiment}]\n\n# You can easily have a million rows here. 
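\n# Retries and caching are enabled by default, so re-running the same prompts\n# reuses cached responses from ~\u002F.cache\u002Fcurator instead of calling the API again.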
\n# Curator takes care of parallelism, retries, and caches responses.\ndataset = [{\"name\": \"Curator\", \"review\": \"Already saved hours in one day of use.\"},\n           {\"name\": \"Bespoke MiniCheck\", \"review\": \"Hallucination rates dropped by 90%.\"}]\n\n# You can set batch=True to instantly use batch mode and save 50% of the costs.\nanalyzer = SentimentAnalyzer(\n    model_name=\"gpt-4o-mini\", response_format=Sentiment, batch=False)\nreviews = analyzer(dataset)\nprint(reviews.to_pandas())\n```\n输出:\n```\n                name sentiment\n0            Curator  positive\n1  Bespoke MiniCheck  positive\n```\n\n在 `SentimentAnalyzer` 类中：\n* `prompt` 接收输入（`product`）并返回发送给 LLM (大语言模型) 的 Prompt (提示词)。\n* `parse` 接收输入（`product`）和结构化输出（`response`）并将其转换为字典列表。这样我们可以轻松地将输出转换为 HuggingFace Dataset (数据集) 对象。\n\n除了列表之外，您也可以传递 HuggingFace Dataset 对象（详见下文）。\n\n### 使用 `curator.LLM` 进行数据生成\n\n这是一个使用结构化输出并将两个 `curator.LLM` 块连接起来生成多样化诗歌的示例。\n\n\n```python\nfrom typing import Dict, List\nfrom bespokelabs import curator\nfrom pydantic import BaseModel, Field\n\nclass Topics(BaseModel):\n    topics_list: List[str] = Field(description=\"A list of topics.\")\n\nclass TopicGenerator(curator.LLM):\n  response_format = Topics\n\n  def prompt(self, subject):\n    return f\"Return 3 topics related to {subject}\"\n\n  def parse(self, input: str, response: Topics):\n    return [{\"topic\": t} for t in response.topics_list]\n\n\nclass Poem(BaseModel):\n    title: str = Field(description=\"The title of the poem.\")\n    poem: str = Field(description=\"The content of the poem.\")\n\nclass Poet(curator.LLM):\n    response_format = Poem\n\n    def prompt(self, input: Dict) -> str:\n        return f\"Write two poems about {input['topic']}.\"\n\n    def parse(self, input: Dict, response: Poem) -> Dict:\n        return [{\"title\": response.title, \"poem\": response.poem}]\n\ntopic_generator = TopicGenerator(model_name=\"gpt-4o-mini\")\npoet = Poet(model_name=\"gpt-4o-mini\")\n# Start generation\ntopics = topic_generator(\"Mathematics\")\npoems = poet(topics)\n```\n输出:\n```\n \ttitle                     poem\n0\tThe Language of Algebra\t  In symbols and signs, truths intertwine,..\n1\tThe Geometry of Space\t  In the world around us, shapes do collide,..\n2\tThe Language of Logic\t  In circuits and wires where silence speaks,..\n```\n\n您可以在 [examples](examples) 目录中看到更多示例。\n\n有关更多详细信息以及故障排除信息，请参阅 [docs](https:\u002F\u002Fdocs.bespokelabs.ai\u002F)。\n\n> [!TIP]\n> 如果您正在生成大型数据集，您可能希望使用 [batch mode](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fsave-usdusdusd-on-llm-inference) (批处理模式) 以节省成本。目前支持来自 [OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fbatch) 和 [Anthropic](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fbuild-with-claude\u002Fmessage-batches) 的批处理 API。使用 curator 只需在 `LLM` 类中设置 `batch=True` 即可。\n\n> [!NOTE]\n> 重试和缓存默认启用，以帮助您快速迭代数据管道。\n> 所以现在如果您再次运行相同的提示词，您将几乎瞬间得到相同的响应。\n> 您可以删除 `~\u002F.cache\u002Fcurator` 下的缓存，或使用 `export CURATOR_DISABLE_CACHE=true` 禁用它。\n\n\n> [!IMPORTANT]\n> 请确保将您的 API keys (API 密钥) 设置为调用模型的环境变量 (environment variables)。例如运行 `export OPENAI_API_KEY=sk-...` 和 `export ANTHROPIC_API_KEY=ant-...` 将允许您运行前两个示例。支持的模型完整列表及其关联的环境变量名称可在 [litellm docs](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders) 中找到。\n### 匿名遥测\n\n我们收集最小化的匿名使用遥测 (telemetry) 数据，以帮助优先考虑对 Curator 社区有益的新功能和改进。您可以通过将 `TELEMETRY_ENABLED` 环境变量设置为 `False` 来退出。 \n\n## 📖 提供商\nCurator 支持广泛的提供商 (providers)，包括 OpenAI、Anthropic 等。\n\n### OpenAI 后端 (backend)\n```python\nllm = curator.LLM(\n    
model_name=\"gpt-4o-mini\",\n)\n```\n对于支持 OpenAI 兼容 API (应用程序编程接口) 的其他模型，您可以使用 `openai` 后端：\n```python\nllm = curator.LLM(\n    model_name=\"gpt-4o-mini\",\n    backend=\"openai\",\n    backend_params={\n        \"base_url\": \"https:\u002F\u002Fyour-openai-compatible-api-url\",\n        \"api_key\": \u003CYOUR_OPENAI_COMPATIBLE_SERVICE_API_KEY>,\n    },\n)\n```\n\n\n### LiteLLM (Anthropic, Gemini, together.ai 等)\n以下是使用 litellm 后端配合 Gemini 的示例：\n```python\nllm = curator.LLM(\n    model_name=\"gemini\u002Fgemini-1.5-flash\",\n    backend=\"litellm\",\n    backend_params={\n        \"max_requests_per_minute\": 2_000,\n        \"max_tokens_per_minute\": 4_000_000\n    },\n)\n```\n[文档](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-litellm-for-diverse-providers)\n\n### Ollama\n```python\nllm = curator.LLM(\n    model_name=\"ollama\u002Fllama3.1:8b\",  # Ollama model identifier\n    backend_params={\"base_url\": \"http:\u002F\u002Flocalhost:11434\"},\n)\n```\n[文档](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-ollama-with-curator#id-2.-configure-the-ollama-backend)\n\n### vLLM\n\n```python\nllm = curator.LLM( \n    model_name=\"Qwen\u002FQwen2.5-3B-Instruct\", \n    backend=\"vllm\", \n    backend_params={ \n        \"tensor_parallel_size\": 1, # Adjust based on GPU count \n        \"gpu_memory_utilization\": 0.7 \n    }\n)\n```\n[文档](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-vllm-with-curator#id-3-initialize-and-use-the-generator)\n### DeepSeek\nDeepSeek 提供 OpenAI 兼容 API，您可以使用 `openai` 后端进行调用。\n> [!IMPORTANT]\n> DeepSeek API 目前存在间歇性问题，在流量高峰期会返回空响应。我们建议\n通过 `openai` 后端调用 DeepSeek API，设置较高的最大重试次数 (max retries)，以便在收到空\n响应时重试失败的请求，并设置合理的每分钟最大请求数和令牌数以避免过于激进地重试而压垮 API。\n\n```python\nllm = curator.LLM(\n    model_name=\"deepseek-reasoner\",\n    generation_params={\"temp\": 0.0},\n    backend_params={\n        \"max_requests_per_minute\": 100,\n        \"max_tokens_per_minute\": 10_000_000,\n        \"base_url\": \"https:\u002F\u002Fapi.deepseek.com\u002F\",\n        \"api_key\": \u003CYOUR_DEEPSEEK_API_KEY>,\n        \"max_retries\": 50,\n    },\n    backend=\"openai\",\n)\n```\n\n### kluster.ai\n```python\nllm = curator.LLM(\n    model_name=\"deepseek-ai\u002FDeepSeek-R1\", \n    backend=\"klusterai\",\n)\n```\n[文档](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-kluster.ai-for-batch-inference)\n## 📦 批处理模式 (Batch Mode)\n多个提供商在使用批处理模式时提供约 50% 的令牌使用折扣。Curator 使得与广泛提供商一起使用批处理模式变得简单。\n\nOpenAI 示例 ([文档参考](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-openai-for-batch-inference)):\n```python\nllm = curator.LLM(model_name=\"gpt-4o-mini\", batch=True)\n```\n\n查看文档：\n* [OpenAI 批处理模式](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-openai-for-batch-inference)\n* [Anthropic 批处理模式](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-anthropic-for-batch-inference)\n* [Gemini 批处理模式](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-gemini-for-batch-inference)\n* [kluster.ai 批处理模式](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-kluster.ai-for-batch-inference)\n* [Mistral 批处理模式](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fhow-to-guides\u002Fusing-mistral-for-batch-inference)\n\n## 🔧 使用 Tinker 进行微调 (Fine-Tuning)\n\nCurator 集成了 
[Tinker SDK](https:\u002F\u002Fgithub.com\u002Fthinking-machines-lab\u002Ftinker-cookbook)，让您只需几行 Python 代码即可从整理好的数据过渡到 LoRA (低秩自适应) 微调模型。\n\n```bash\npip install bespokelabs-curator tinker\nexport TINKER_API_KEY=\"your-tinker-key\"\n```\n\n```python\nfrom bespokelabs.curator import TinkerTrainer, TinkerTrainerConfig\n\n# Configure training\nconfig = TinkerTrainerConfig(\n    base_model=\"Qwen\u002FQwen3-8B\",\n    epochs=3,\n    batch_size=4,\n    lora_config={\"rank\": 16, \"alpha\": 32, \"dropout\": 0.05},\n    checkpoint_every_epoch=True,\n)\n\n# Training data is a list of chat-format dicts (or a HuggingFace Dataset)\ntraining_data = [\n    {\"messages\": [\n        {\"role\": \"user\", \"content\": \"What is Python?\"},\n        {\"role\": \"assistant\", \"content\": \"Python is a programming language.\"},\n    ]},\n    # ...\n]\n\n# Train\ntrainer = TinkerTrainer(config)\nresult = trainer.train(training_data)\nprint(f\"Final loss: {result.final_loss:.4f}\")\n\n# Sample from the fine-tuned model\nresponse = trainer.sample(\"Explain recursion in Python\")\nprint(response)\n```\n\n### 检查点恢复 (Checkpoint Resume)\n\n训练可以从任何保存的检查点 (checkpoint) 恢复。训练器将恢复模型权重和优化器状态 (optimizer state)，然后从中断处继续——不会重放任何数据。\n\n```python\n# Resume from an earlier run's checkpoint\ncheckpoints = result.checkpoints  # list of CheckpointInfo\ntrainer = TinkerTrainer(config)\ntrainer.load_checkpoint(checkpoints[-1])\nresult = trainer.train(training_data)  # continues from the checkpoint\n```\n\n### 自定义数据格式\n\n子类化 `TinkerTrainer` 以处理非标准数据布局：\n\n```python\nclass MyTrainer(TinkerTrainer):\n    def format_example(self, row):\n        return TrainingExample.from_dict_messages([\n            {\"role\": \"user\", \"content\": row[\"question\"]},\n            {\"role\": \"assistant\", \"content\": row[\"answer\"]},\n        ])\n\ntrainer = MyTrainer(config)\nresult = trainer.train([{\"question\": \"What is 2+2?\", \"answer\": \"4\"}, ...])\n```\n\n查看完整的 [诗歌微调示例](examples\u002Fpoem_finetuning_example.py) 以了解使用 `curator.LLM` 整理数据然后使用 `TinkerTrainer` 进行微调的端到端流程。\n\n## Bespoke Curator 查看器\n托管的 Curator 查看器是一个丰富的数据可视化界面——使视觉检查数据变得更加容易。\n\n您可以按以下方式启用它：\n\nBash:\n\n```shell\nexport CURATOR_VIEWER=1\n```\nPython\u002FColab:\n```python\nimport os\nos.environ[\"CURATOR_VIEWER\"]=\"1\"\n```\n\n启用后，随着 Curator 生成数据，它会被上传，您可以在查看器中看到流式传输的响应。查看器的 URL 显示在丰富的进度旁边。\n\n### 使用 Bespoke Labs API 密钥进行身份验证\n\n默认情况下，任何拥有链接的人都可以访问数据集。为了保持您的数据集私密，您可以将它们与 Bespoke Labs 账户关联。这样做还可以让您：\n\n1. 跟踪所有与您账户关联的数据集\n2. 与协作者共享数据集\n3. 分析随时间推移的数据生成成本\n\n您可以按以下方式启用身份验证：\n\n1. [注册](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fauth\u002Fsignup) 一个 Bespoke Labs 账户。\n2. 从 [API 密钥](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fkeys) 页面创建 API 密钥。\n3. 
设置 `BESPOKE_API_KEY` 和 `CURATOR_VIEWER` 环境变量：\n\n```shell\nexport BESPOKE_API_KEY=\u003CYOUR_API_KEY>\nexport CURATOR_VIEWER=1\n```\n\n设置环境变量后，您所有的数据集都将流式传输到托管查看器并链接到您的 Bespoke Labs 账户。您可以访问 [数据集](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fdatasets) 页面查看使用您的 API 密钥生成的或由他人共享给您的数据集，并访问 [成本报告](https:\u002F\u002Fcurator.bespokelabs.ai\u002Fhome\u002Fcosts) 页面查看特定时期的数据生成成本。\n\n## 环境变量\n\n我们支持一系列环境变量来定制 Curator 的行为。\n\n以下是环境变量的完整表格：\n\n| 变量 | 描述 | 默认值 |\n|----------|-------------|---------|\n| `CURATOR_VIEWER` | 当为 `True` 时，启用用于可视化数据策展的 Curator 查看器。 | `False` |\n| `CURATOR_DISABLE_CACHE` | 当为 `True` 时，禁用 `curator.LLM` 生成的缓存。适用于全新运行。 | `False` |\n| `CURATOR_CACHE_DIR` | 设置用于 `curator.LLM` 生成的缓存目录。 | `~\u002F.cache\u002Fcurator` |\n| `CURATOR_DISABLE_RICH_DISPLAY` | 当为 `True` 时，禁用 [Rich CLI](https:\u002F\u002Fgithub.com\u002FTextualize\u002Frich) 输出（并回退到 [tqdm](https:\u002F\u002Ftqdm.github.io\u002F) 日志记录），用于本地数据生成监控。在使用内联断点或像 `pdb` 这样的交互式调试器进行调试时很有用，因为 Rich 的动态输出可能会干扰终端输入。 | `False` |\n| `TELEMETRY_ENABLED` | 当为 `True` 时，启用用于 Curator 使用追踪的遥测功能。 | `True` |\n\n## 贡献\n感谢所有让这个项目成为可能的贡献者！\n请遵循 [这些说明](docs\u002FCONTRIBUTING.md) 了解如何贡献。\n\n## 引用\n如果您觉得 Curator 有用，请考虑引用我们！\n\n```\n@software{Curator: A Tool for Synthetic Data Creation,\n  author = {Marten, Ryan* and Vu, Trung* and Ji, Charlie Cheng-Jie and Sharma, Kartik and Pimpalgaonkar, Shreyas and Dimakis, Alex and Sathiamoorthy, Maheswaran},\n  month = jan,\n  title = {{Curator}},\n  year = {2025},\n  howpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator}}\n}\n```","# Bespoke Curator 快速上手指南\n\nBespoke Curator 是一个强大的 Python 库，专为大规模推理（Bulk Inference）和可扩展的数据整理（Data Curation）设计。它支持合成数据生成、结构化输出处理以及异步操作优化，适用于模型训练前的数据准备及后训练场景。\n\n## 环境准备\n\n*   **操作系统**：Linux \u002F macOS \u002F Windows\n*   **Python 版本**：建议 Python 3.8 及以上\n*   **依赖说明**：核心功能的依赖由安装包自动管理；示例代码中使用的 `datasets` 和 `pydantic` 通常作为依赖项包含，必要时可单独安装以运行完整示例。\n\n## 安装步骤\n\n使用 pip 安装 Bespoke Curator：\n\n```bash\npip install bespokelabs-curator\n```\n\n如需更新到最新版本：\n\n```bash\npip install --upgrade bespokelabs-curator\n```\n\n## 基本使用\n\nCurator 的核心类是 `curator.LLM`。通过继承该类并定义 `prompt` 和 `parse` 方法，你可以轻松构建数据处理流水线。以下是一个简单的情感分析批量推理示例。\n\n### 示例代码\n\n```python\nfrom typing import Dict\nfrom bespokelabs import curator\nfrom datasets import Dataset\nfrom pydantic import BaseModel, Field\nfrom typing import Literal\n\nclass Sentiment(BaseModel):\n  sentiment: Literal[\"positive\", \"negative\", \"neutral\"] = Field(\n    description=\"Sentiment of the review\")\n\nclass SentimentAnalyzer(curator.LLM):\n\n  def prompt(self, product: Dict):\n    return f\"Determine the sentiment of the product from the review: {product['review']}\"\n\n  def parse(self, product: Dict, response: Sentiment):\n    return [{\"name\": product[\"name\"], \"sentiment\": response.sentiment}]\n\n# You can easily have a million rows here. 
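\n# Setting batch=True here switches the same pipeline to the provider's\n# discounted batch API with no other code changes.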
\n# Curator takes care of parallelism, retries, and caches responses.\ndataset = [{\"name\": \"Curator\", \"review\": \"Already saved hours in one day of use.\"},\n           {\"name\": \"Bespoke MiniCheck\", \"review\": \"Hallucination rates dropped by 90%.\"}]\n\n# You can set batch=True to instantly use batch mode and save 50% of the costs.\nanalyzer = SentimentAnalyzer(\n    model_name=\"gpt-4o-mini\", response_format=Sentiment, batch=False)\nreviews = analyzer(dataset)\nprint(reviews.to_pandas())\n```\n\n### 预期输出\n\n```text\n                name sentiment\n0            Curator  positive\n1  Bespoke MiniCheck  positive\n```\n\n### 核心概念说明\n\n*   **`prompt`**: 接收输入数据，返回发送给大模型的提示词。\n*   **`parse`**: 接收原始输入和大模型的结构化响应，将其转换为列表格式以便导出为 HuggingFace Dataset 对象。\n*   **`batch=True`**: 启用批处理模式，可显著降低 Token 成本。\n\n## 更多资源\n\n*   **官方文档**：[docs.bespokelabs.ai](https:\u002F\u002Fdocs.bespokelabs.ai\u002Fbespoke-curator\u002Fgetting-started)\n*   **GitHub 仓库**：[github.com\u002Fbespokelabsai\u002Fcurator](https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002F)\n*   **PyPI 页面**：[pypi.org\u002Fproject\u002Fbespokelabs-curator](https:\u002F\u002Fpypi.org\u002Fproject\u002Fbespokelabs-curator\u002F)","某金融科技公司团队计划基于开源模型微调一个专业财务分析助手，急需构建包含复杂推理步骤的高质量数据集以优化模型表现。\n\n### 没有 curator 时\n- 手动编写脚本逐个调用大模型 API，速度慢且极易触发频率限制，开发周期长。\n- 不同模型返回格式不统一，人工清洗非结构化文本耗时巨大，数据质量参差不齐。\n- 按请求量付费导致 Token 成本过高，难以支撑百万级数据的生产需求，预算压力大。\n- 缺乏代码执行环境，无法验证生成答案中的数学计算是否准确，存在幻觉风险。\n\n### 使用 curator 后\n- curator 支持批量推理接口，自动处理并发与重试，数据生成效率提升十倍，不再受限于单线程。\n- 内置结构化提取功能，直接将输出转为标准 JSON，省去大量清洗步骤，确保数据一致性。\n- 开启批处理模式后 Token 费用减半，大幅降低了大规模数据生产成本，使预算可控。\n- 集成 CodeExecutor 后端，可运行 Python 代码验证财务计算逻辑的正确性，显著提升数据可信度。\n\ncurator 将原本繁琐的数据工程转化为高效流水线，显著加速了垂直领域模型的迭代进程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbespokelabsai_curator_80efbc81.png","bespokelabsai","Bespoke Labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbespokelabsai_9ac15e5c.png","",null,"bespokelabs.ai","https:\u002F\u002Fgithub.com\u002Fbespokelabsai",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",99.7,{"name":88,"color":89,"percentage":90},"JavaScript","#f1e05a",0.2,{"name":92,"color":93,"percentage":94},"Makefile","#427819",0.1,{"name":96,"color":97,"percentage":98},"Shell","#89e051",0,1654,136,"2026-04-02T07:10:41","Apache-2.0","未说明",{"notes":105,"python":103,"dependencies":106},"基于 Python 的开源库，用于批量推理和数据整理。通过 pip 安装。支持多种推理后端（LiteLLM、vLLM、批处理 API）。具备异步操作、缓存和故障恢复功能。支持结构化输出（Pydantic）。代码执行支持本地多进程、Ray、Docker 和 e2b。需用户自行配置 LLM API 密钥或本地模型资源。",[107,108,109,110,111],"bespokelabs-curator","datasets","pydantic","litellm","vllm",[13,15,51,26],[114,115,116,117,118,119,120,121,122,123,124],"synthetic-data","agents","llm","prompt","python","synthetic-dataset-generation","deep-learning","fine-tuning","instruction-tuning","machine-learning","natural-language-processing",4,"2026-03-27T02:49:30.150509","2026-04-06T07:05:48.446761",[129,134,138,143,148,152,157],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},2250,"大规模运行中遇到文件上传或批次发送的速率限制错误怎么办？","这通常是因为上游达到了文件上传的速率限制。OpenAI Batch API 在同一时间只能排队 1,000,000 个请求。建议通过以下代码开启详细日志监控批次处理情况：\n```\nimport logging\nlogger = logging.getLogger(\"bespokelabs.curator\")\nlogger.setLevel(logging.INFO)\n```\n对于更大规模的任务，最佳实践是建立一个数据库来跟踪批次状态，以便轻松恢复任务，并确保代码感知速率限制。","https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fissues\u002F116",{"id":135,"question_zh":136,"answer_zh":137,"source_url":133},2251,"OpenAI Batch API 的请求队列限制具体是多少？","根据社区反馈和实际测试，OpenAI Batch API 在同一时间只能排队 1,000,000 
个请求。如果遇到此限制，可能需要调整作业大小或分批处理。测试表明，使用实际数据而非玩具示例，且批次较小时，上传数据的耗时可能会帮助缓解达到速率限制的问题。",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},2252,"Gemini 批次处理中出现 `KeyError: 'parts'` 解析错误如何解决？","该问题已在 PR #612 中修复。由于修复尚未合并到主库版本中，建议在发布前从分支进行测试。如果无法立即更新，可以暂时从相关分支拉取代码验证修复效果。","https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fissues\u002F611",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},2253,"如何使用 Anthropic 的批量模式进行结构化输出？","支持使用 `instructor` 库处理结构化输出。注意 Claude 可能会在 JSON 响应前添加友好前言（preamble），导致解析失败（如警告信息所示）。建议参考 Anthropic 文档尝试预填充（prefill）模型响应的开头部分以增强一致性，避免解析错误。","https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fissues\u002F225",{"id":149,"question_zh":150,"answer_zh":151,"source_url":147},2254,"Anthropic 批量请求失败时返回的错误格式是什么样的？","失败的请求会包含 `result.error` 字段。例如：\n```\n{\n  \"custom_id\": \"2\",\n  \"result\": {\n    \"error\": {\n      \"error\": {\n        \"message\": \"max_tokens: Field required\",\n        \"type\": \"invalid_request_error\"\n      }\n    }\n  }\n}\n```\n成功请求则显示 `type: \"succeeded\"` 及消息详情。",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},2255,"如何为未优化的模型添加支持？","可以使用默认的 `LiteLLMOnlineRequestProcessor`。它作为“万能”处理器，适用于所有未自行优化或实现的模型。此外，可以通过 `instructor` 扩展结构化输出覆盖范围，并利用 `hidden_params` 获取原生响应成本信息。","https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fissues\u002F74",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},2256,"如何在测试中使用模拟服务器而不调用真实 API？","针对 OpenAI，可参考其官方贡献指南中的方法。该项目已通过 PR #318 实现了 Mock server 功能用于测试，开发者可以直接使用该功能进行本地验证。","https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fissues\u002F238",[163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258],{"id":164,"version":165,"summary_zh":166,"released_at":167},101782,"v0.1.27","## What's New\r\n\r\n### Tinker Integration for Fine-tuning (#710)\r\n\r\n  - Added support for Tinker SDK integration, enabling LoRA fine-tuning directly after generating data with Curator. When a TINKER_API_KEY is provided, training runs against the Tinker service with real\r\n  forward\u002Fbackward passes. Features include:\r\n  - Checkpointing support (per-epoch and per-N-steps) with dashboard visibility\r\n  - Gradient accumulation, LoRA alpha\u002Fdropout\u002Ftarget_modules configuration\r\n  - Checkpoint resume support\r\n  - Falls back to mock mode when SDK\u002Fkey is not available\r\n  - New examples in examples\u002Ftinker\u002F and examples\u002Fpoem_finetuning_example.py\r\n\r\n### Claude 4.x and 3.7 Model Support (#704)\r\n\r\n  Added multimodal and JSON format support for new Anthropic models:\r\n  - Claude 4.5 (Sonnet, Haiku, Opus)\r\n  - Claude 4.0\u002F4.1 (Sonnet, Opus)\r\n  - Claude 3.7 Sonnet\r\n\r\n### Bug Fixes\r\n\r\n  - Fixed LiteLLM multi-hosted_vllm request address mixture (#701) — Resolved an issue where requests could be routed to the wrong vLLM host when using multiple hosted_vllm backends.\r\n  - Fixed broken tests (#706)\r\n\r\n### Other Changes\r\n\r\n  - Code cleanup in OpenAI online request processor (#708)\r\n  - README updates (#707)","2026-03-15T17:57:15",{"id":169,"version":170,"summary_zh":171,"released_at":172},101783,"v0.1.26","## What's Changed\r\n* fix: viewer url bug in curator response by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F682\r\n* feat: add support to download dataset from viewer by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F652\r\n* If the model is not known, let the cost be None. 
by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F681\r\n* feat: add stopping criteron in agent by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F684\r\n* fix error when torch isn't installed by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F663\r\n* ref: check structured outputs support with litellm by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F683\r\n* Ref agent multiturn minor by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F685\r\n* chore: bump 0.1.26 by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F686\r\n* fix: resolve Pydantic issues by @emmanuel-ferdman in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F689\r\n* feat: auto batch mode by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F691\r\n* ref: add version tag in lib by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F696\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.25...v0.1.26","2025-07-29T18:39:32",{"id":174,"version":175,"summary_zh":176,"released_at":177},101784,"v0.1.25","## What's Changed\r\n* docs: clean up wording in README by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F662\r\n* Feat\u002Fopenai\u002Fdeepseek api by @fenglui in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F640\r\n* Fix\u002Fcurator-cli-batch-update-freq-increase by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F632\r\n* feat: add example of prescription extraction by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F661\r\n* fix: migrate to `logger.warning` by @emmanuel-ferdman in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F666\r\n* perf: make download batch lazy in gemini processor by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F654\r\n* Feat\u002Fagentic\u002Fmultiturn by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F658\r\n* update agentic multi turn example by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F667\r\n* Clean up _factory.py by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F668\r\n* updated examples by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F670\r\n* feat: add response object in agent by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F671\r\n* make agentic curation response format as openai by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F673\r\n* Feat\u002Fadd\u002Fcurator\u002Flink\u002Fagent by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F672\r\n* Delete HOSTED_CURATOR_VIEWER environment variable. 
by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F674\r\n* Add o3 to list of models supporting structured outputs by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F679\r\n* chore: bump 0.1.25 by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F680\r\n\r\n## New Contributors\r\n* @fenglui made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F640\r\n* @emmanuel-ferdman made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F666\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.24...v0.1.25","2025-05-30T17:59:06",{"id":179,"version":180,"summary_zh":181,"released_at":182},101785,"v0.1.24","## What's Changed\r\n* fix: multiple bugs in batch cancellation by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F614\r\n* bump version and remove curator viewer related release code by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F627\r\n* fix\u002Fcurator CLI display - rpm display and frequent tqdm update for tqdm for online by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F629\r\n* Fix\u002Fdeepseek\u002Frpm by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F630\r\n* Update README to add new updates and a better example by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F628\r\n* improve projected total, projected remaining UI in online by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F631\r\n* update metadata schema with cost by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F626\r\n* feat: add param to disable metadata db by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F583\r\n* add gpt 4.1 structured output support from curator by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F637\r\n* feat: fix batch mode structure outputs bug by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F639\r\n* cost streaming in batch mode and delayed streaming support by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F638\r\n* Add gpt-4.1 structured output support by @Mithil467 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F642\r\n* fix\u002Ffinal_statistics_display_fixes by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F644\r\n* feature: set dtype flag by @sky-2002 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F645\r\n* Fix casting issue in token accounting with anthropic by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F649\r\n* ref: check job state before download in gemini by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F646\r\n* ref: support anthropic backend for multimodal input by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F656\r\n* chore: bump 0.1.24 by @kartik4949 in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F659\r\n* feat: add response object in curator by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F643\r\n* docs: updated README with information about authentication by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F660\r\n\r\n## New Contributors\r\n* @Mithil467 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F642\r\n* @sky-2002 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F645\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.23...v0.1.24","2025-05-06T16:45:57",{"id":184,"version":185,"summary_zh":186,"released_at":187},101786,"v0.1.23.post1","## What's Changed\r\n* fix: multiple bugs in batch cancellation by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F614\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.23...v0.1.23.post1","2025-04-11T23:11:14",{"id":189,"version":190,"summary_zh":191,"released_at":192},101787,"v0.1.23","## What's Changed\r\n* Adding Mistral batch to examples\u002Fproviders by @sarthakwer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F609\r\n* Feat\u002Fretry\u002Fbatch by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F585\r\n* Feat\u002Frecipe\u002Fsimplestrat by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F598\r\n* Update curator rich cli gif  by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F617\r\n* remove split from card template by @davanstrien in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F619\r\n* feat\u002Fenv-disable-rich-cli by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F600\r\n* update config map for kluster by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F620\r\n* feat: fix gemini batch parts key missing by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F612\r\n* Add llama4 models from klusterAI by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F622\r\n* feat: add failed requests jsonl by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F615\r\n* Feat\u002Fopenai\u002Fdeepseek api by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F579\r\n* Disable loud log by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F624\r\n* fix: add finish reason in gemini batch by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F621\r\n* bump 0.1.23 by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F625\r\n\r\n## New Contributors\r\n* @sarthakwer made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F609\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.22...v0.1.23","2025-04-11T16:28:58",{"id":194,"version":195,"summary_zh":196,"released_at":197},101788,"v0.1.22","## What's Changed\r\n* bump: 0.1.21 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F595\r\n* Update citation in zh, and move some files to docs folder. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F596\r\n* Add a table of examples. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F597\r\n* Add one more tutorial and organize the examples better. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F601\r\n* ref: update gemini example by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F602\r\n* Fix\u002Flive colab display error by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F599\r\n* feat: implement Mistral batch request processor by @Saharsh1005 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F584\r\n* If the input is a string or a list of strings for prompt(), maintain the same for parse() by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F604\r\n* Delete the local curator viewer and rename variable HOSTED_CURATOR_VIER by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F605\r\n* ref: make prapogate false in logger by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F590\r\n* feat: add authentication flow in curator client by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F606\r\n* chore: bump version (0.1.22) by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F608\r\n\r\n## New Contributors\r\n* @Saharsh1005 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F584\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.21...v0.1.22","2025-03-31T06:56:51",{"id":199,"version":200,"summary_zh":201,"released_at":202},101789,"v0.1.21","## What's Changed\r\n* fix: publish package install command by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F555\r\n* add max in progress (concurrent request) in online status tracker by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F568\r\n* fix: make process_response mandatory in create_dataset files by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F574\r\n* Cost Estimation revamp [1\u002Fn] Online processors by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F546\r\n* Ref\u002Fpush to viewer by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F576\r\n* Ref\u002Fupdate\u002Fratelimits\u002Fgemini by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F578\r\n* fix: use model name from config in  cost processor by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F575\r\n* feat: extract mime_type in Image types by @kartik4949 in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F582\r\n* Feat\u002Fraft by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F571\r\n* Update README.md by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F587\r\n* Ensure that when a list or a string is passed without a dictionary, we handle it appropriately. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F588\r\n* Add o3-mini models as a supported model series for structured output. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F593\r\n* fix: use semaphore to gate create new rows to prevent OOMs by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F591\r\n* Cleaup: Minor refactoring. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F592\r\n* bug: wrap estimate tokens with int in litellm backend and mime type fix. by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F586\r\n* fix: output unicode directly for gemini batch processor by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F594\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.20...v0.1.21","2025-03-19T23:21:12",{"id":204,"version":205,"summary_zh":206,"released_at":207},101790,"v0.1.20","## What's Changed\r\n* Update README.md to add code execution launch news by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F547\r\n* Update README.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F551\r\n* Feat\u002Fresume\u002Fhosted curator viewer by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F533\r\n* ref: add gen params in testcall litellm backend by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F552\r\n* Claude 3.7 Reasoning by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F536\r\n* 0.1.20 release by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F554\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.19.post1...v0.1.20","2025-02-28T19:30:56",{"id":209,"version":210,"summary_zh":211,"released_at":212},101791,"v0.1.19.post1","## What's Changed\r\n* Update badges by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F501\r\n* Update README.md to stop kluster.ai promo by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F514\r\n* ref: summarize errors in online mode by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F515\r\n* Rich hyperlink for curator viewer by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F516\r\n* fix: expanduser in telemetry random id creationg by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F530\r\n* Update\u002Flitellm by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F509\r\n* Code executor enhancements + tests by @shreyaspimpalgaonkar in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F495\r\n* modify telemetry config by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F502\r\n* Feat\u002Flogger by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F518\r\n* Feat\u002Fpush to viewer by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F520\r\n* Fix curator tag by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F539\r\n* Minor changes to code executor by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F541\r\n* update examples and minor fix to docker backend by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F540\r\n* bump version by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F545\r\n* ref: make batch response file method async by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F532\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002F0.1.19...v0.1.19.post1","2025-02-26T20:26:21",{"id":214,"version":215,"summary_zh":216,"released_at":217},101792,"0.1.19","## What's Changed\r\n* fix: make explicit garbage collection for rich live objects by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F469\r\n* bump 0.1.18.post4 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F465\r\n* Update README with better poem example, more news items, and more providers. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F476\r\n* test: raw prompt as list of dict by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F466\r\n* feat: added generation_params per row by @richardzhuang0412 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F443\r\n* typo by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F473\r\n* Add link to readme for gemini batch mode by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F478\r\n* logging (only log cost retrieval failure in debug) by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F460\r\n* Reasoning with OpenRouter examples by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F430\r\n* Fix\u002Flocal url\u002Ffile by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F477\r\n* feat: support manual cost map by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F481\r\n* Update README.md author name by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F496\r\n* Ref\u002Fratelimit\u002Ftogetherai by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F490\r\n* Update README.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F497\r\n* Fix\u002Fbatch status tracker\u002Ftelemetry by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F499\r\n* Feat\u002Fcurator\u002Fclient by @kartik4949 in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F491\r\n* fix: make request count in gemini batch processor max-requests-per-batch by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F500\r\n* Feat\u002Fbatch\u002Fcurator client by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F503\r\n* fix: update generic response type by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F507\r\n* Revamp CLI progress bar UI by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F493\r\n* docs: update litellm example by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F510\r\n* 0.1.19 bump version by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F512\r\n* fixing function calling example by commenting out intended error by @richardzhuang0412 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F513\r\n\r\n## New Contributors\r\n* @richardzhuang0412 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F443\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.18.post4...0.1.19","2025-02-17T20:23:39",{"id":219,"version":220,"summary_zh":221,"released_at":222},101793,"v0.1.18.post4","## What's Changed\r\n* Feat\u002Fmultimodal\u002Flitellm by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F445\r\n* Feat\u002Fmultimodal\u002Flocal urls by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F454\r\n* hotfix: fix prompt formatter by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F461\r\n* update readme by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F462\r\n* fix: handle empty response by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F463\r\n* add `curator` tag to hf_card_template by @davanstrien in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F456\r\n* add inference.net as provider by @samheutmaker in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F459\r\n* fix: Rich error log overlap by progress bar fix by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F457\r\n\r\n## New Contributors\r\n* @davanstrien made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F456\r\n* @samheutmaker made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F459\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.18...v0.1.18.post4","2025-02-07T06:10:50",{"id":224,"version":225,"summary_zh":226,"released_at":227},101794,"v0.1.18","## What's Changed\r\n* Override push_to_hub() by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F420\r\n* Pass generation params in gemini batch by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F418\r\n* Kluster.ai backend by @RyanMarten in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F422\r\n* feat: add cost processor for litellm and external provider by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F424\r\n* bump: 0.1.17.post1 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F425\r\n* docs: add news section by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F426\r\n* fix: remove args[0] in cost by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F428\r\n* fix: check model in litellm.cost instead of try catching by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F429\r\n* Verifiers for Code by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F407\r\n* Update authorship by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F431\r\n* Fix kluster online example by @CharlieJCJ in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F434\r\n* ref: add a cost and rate limit default map json by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F437\r\n* bug: check  max tokens from the generations params in output estimation by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F435\r\n* Anon telemetry by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F439\r\n* Ref\u002Fgeneral by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F438\r\n* Feat\u002Fmulti modal support by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F427\r\n* ci: cache by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F411\r\n* fix: remove kwargs so push_to_hub works on private datasets by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F448\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.17...v0.1.18","2025-02-06T00:03:36",{"id":229,"version":230,"summary_zh":231,"released_at":232},101795,"v0.1.17.post1","## What's Changed\r\n* Override push_to_hub() by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F420\r\n* Pass generation params in gemini batch by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F418\r\n* Kluster.ai backend by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F422\r\n* feat: add cost processor for litellm and external provider by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F424\r\n* bump: 0.1.17.post1 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F425\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.17...v0.1.17.post1","2025-01-30T17:02:37",{"id":234,"version":235,"summary_zh":236,"released_at":237},101796,"v0.1.17","## What's Changed\r\n* bump: 0.1.16 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F391\r\n* Data generation example by @shreyaspimpalgaonkar in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F394\r\n* Update data generation readme by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F395\r\n* Rename Bespoke-Stratos example directory by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F396\r\n* docs: fix link in citation by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F397\r\n* ref: block capacity by max_tokens for anthropic online by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F387\r\n* Update requirements.txt by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F402\r\n* Update README.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F404\r\n* Update README.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F405\r\n* feat: support max parallel request processor by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F378\r\n* ref: do not use request file info from batch metadata by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F406\r\n* Remove upper bound from tiktoken by @kartik4949 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F409\r\n* Feat\u002Fgemini batch processor by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F403\r\n* bump: 0.1.17 by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F417\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.16...v0.1.17","2025-01-28T22:54:22",{"id":239,"version":240,"summary_zh":241,"released_at":242},101797,"v0.1.16","## What's Changed\r\n* Update README by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F366\r\n* fix: divide cost by minutes instead of num requests in rate\u002Fminute by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F367\r\n* docs: fix typo in first example by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F368\r\n* Allow lists of messages as simple input by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F371\r\n* feat: support separate rate limits for input and output tokens and add moving average estimate of output tokens by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F373\r\n* Update CONTRIBUTING.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F382\r\n* Organize dependencies by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F383\r\n* feat: add support for returning the full completions object + some changes to support deepseek models by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F384\r\n* fix: check whether a model cost is available before getting cost by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F385\r\n* ref: make `invalid_finish_reasons` configurable by @adamoptimizer in 
https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F380\r\n* ref: free extra capacity in online processor by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F381\r\n* Fix\u002Fllm\u002Fdataset by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F393\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.15...v0.1.16","2025-01-21T19:33:23",{"id":244,"version":245,"summary_zh":246,"released_at":247},101798,"v0.1.15.post1","## What's Changed\r\n* Update README by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F366\r\n* fix: divide cost by minutes instead of num requests in rate\u002Fminute by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F367\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.15...v0.1.15.post1","2025-01-15T16:06:49",{"id":249,"version":250,"summary_zh":251,"released_at":252},101799,"v0.1.15","## What's Changed\r\n* fix: allow special tokens in tiktoken encoding by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F316\r\n* feat: make cache directory configurable via CURATOR_CACHE_DIR environment variable by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F321\r\n* Move prompt_formatter test file to the correct location by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F323\r\n* Add schema validation to prevent DB schema mismatches by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F322\r\n* Closes #297 Support local models via vLLM by @marianna13 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F298\r\n* fix: update poetry.lock to reflect pyproject.toml changes by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F326\r\n* Relax VLLM version requirements by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F329\r\n* Replace black with Ruff and add pre-commit hooks by @shreyaspimpalgaonkar in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F302\r\n* Fix circular import by @GeorgiosSmyrnis in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F330\r\n* Refactor: Factory Pattern, Base URL via Env, and Backend Determination Fix by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F317\r\n* Test\u002Fintegration\u002Fbasic setup by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F320\r\n* test: overload integration tests with other backends by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F334\r\n* Update unittests and add coverage atleast 80 by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F337\r\n* fix: set backend to None for SimpleLLM class by default by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F338\r\n* feat: add ability to disable caching via CURATOR_DISABLE_CACHE by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F335\r\n* fix: make sure we 
post process dataset file by sorting by and removing the __original_row_idx column by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F340\r\n* Rynam\u002Fbatch retry fix by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F343\r\n* Retry failed requests within batches by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F295\r\n* feat: detailed progress tracking via cli for online request processors by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F342\r\n* fix: safe open file - closes #344 by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F346\r\n* test: add anthropic integration tests by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F350\r\n* docs: add contribution md by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F353\r\n* Update readme and mute camel test by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F355\r\n* feat: detailed progress tracking for batch processing by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F349\r\n* docs: add citation by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F357\r\n* fix: broken vllm offline processor + add test by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F358\r\n* ref: refactor LLM params into backend params by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F345\r\n* feat: Updated LLM class interface by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F360\r\n* Add fix_json check by @GeorgiosSmyrnis in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F359\r\n* perf: lazy import litellm, datasets by @adamoptimizer in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F361\r\n* docs: add Kartik to CITATION.cff  by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F363\r\n* Add venv activation instructions to contributing.md by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F364\r\n\r\n## New Contributors\r\n* @marianna13 made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F298\r\n* @shreyaspimpalgaonkar made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F302\r\n* @adamoptimizer made their first contribution in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F317\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.14...v0.1.15","2025-01-14T22:59:51",{"id":254,"version":255,"summary_zh":256,"released_at":257},101800,"v0.1.14","## What's Changed\r\n* Fix bug in batch mapping and get right order for outputs. 
by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F289\r\n* refactor: use os.path.join consistently for path handling by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F291\r\n* Add Anthropic batch and general refactor by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F243\r\n* Remove duplicate resource limit by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F299\r\n* Merge dev into main by @vutrung96 in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F306\r\n* Re-do docstrings for batch request processors by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F308\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002F0.1.13...v0.1.14","2025-01-07T02:32:10",{"id":259,"version":260,"summary_zh":261,"released_at":262},101801,"0.1.13","## What's Changed\r\n1. Fix issues around litellm, to support Gemini Flash Thinking model.\r\n2. Add support for o1.\r\n\r\n## Details\r\n* Ryan marten patch 1 by @RyanMarten in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F273\r\n* Clean ups in llm.py by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F274\r\n* Put the examples in respective folders and add requirements.txt everywhere by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F275\r\n* Catch catch-all Exception since litellm doesn't throw specific error. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F281\r\n* feat: add o1 model structured output support by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F284\r\n* Bump to 0.1.13 by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F285\r\n* Merge dev into main for 0.1.13 release. by @madiator in https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fpull\u002F286\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fbespokelabsai\u002Fcurator\u002Fcompare\u002Fv0.1.12...0.1.13","2024-12-23T07:42:07"]