[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-IDSIA--sacred":3,"tool-IDSIA--sacred":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":99,"env_deps":100,"category_tags":107,"github_topics":108,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":116,"updated_at":117,"faqs":118,"releases":147},3134,"IDSIA\u002Fsacred","sacred","Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.","Sacred 是一款专为机器学习实验设计的开源工具，旨在帮助研究人员和开发者高效地配置、组织、记录并复现实验结果。在科研过程中，手动管理大量超参数、记录每次运行的环境细节以及确保结果可复现往往繁琐且容易出错，Sacred 正是为了解决这些痛点而生。\n\n它通过独特的“配置作用域”机制，让开发者能像定义普通局部变量一样轻松设定实验参数，并支持自动将这些参数注入到函数的任何位置，极大简化了代码结构。Sacred 还内置了强大的命令行接口，允许用户无需修改代码即可灵活调整参数运行不同变体。此外，其“观察者”功能会自动捕获实验依赖、系统环境、配置信息及最终结果，并可无缝保存至 MongoDB 数据库，方便后续查询与分析。配合自动随机种子管理，Sacred 确保了实验的高度可复现性。\n\n这款工具特别适合从事深度学习、数据科学研究的科研人员，以及需要严谨管理实验流程的算法工程师。如果你希望从琐碎的实验管理工作中解脱出来，专注于核心算法创新，Sacred 将是一个得力的助手。","Sacred\n======\n\n    | *Every experiment is sacred*\n    | *Every experiment is great*\n    | *If an experiment is wasted*\n    | *God gets quite irate*\n\n|pypi| |py_versions| |license| |rtfd| |doi|\n\n|build| |coverage| 
|code_quality| |black|\n\n\n\n\nSacred is a tool to help you configure, organize, log and reproduce experiments.\nIt is designed to do all the tedious overhead work that you need to do around\nyour actual experiment in order to:\n\n- keep track of all the parameters of your experiment\n- easily run your experiment for different settings\n- save configurations for individual runs in a database\n- reproduce your results\n\nSacred achieves this through the following main mechanisms:\n\n-  **Config Scopes**: A very convenient way of using the local variables in a\n   function to define the parameters your experiment uses.\n-  **Config Injection**: You can access all parameters of your configuration\n   from every function. They are automatically injected by name.\n-  **Command-line interface**: You get a powerful command-line interface for each\n   experiment that you can use to change parameters and run different variants.\n-  **Observers**: Sacred provides Observers that log all kinds of information\n   about your experiment, its dependencies, the configuration you used,\n   the machine it is run on, and of course the result. These can be saved\n   to a MongoDB, for easy access later.\n-  **Automatic seeding** helps control the randomness in your experiments,\n   such that the results remain reproducible.\n\nExample\n-------\n+------------------------------------------------+--------------------------------------------+\n| **Script to train an SVM on the iris dataset** | **The same script as a Sacred experiment** |\n+------------------------------------------------+--------------------------------------------+\n| .. code:: python                               | .. 
code:: python                           |\n|                                                |                                            |\n|  from numpy.random import permutation          |   from numpy.random import permutation     |\n|  from sklearn import svm, datasets             |   from sklearn import svm, datasets        |\n|                                                |   from sacred import Experiment            |\n|                                                |   ex = Experiment('iris_rbf_svm')          |\n|                                                |                                            |\n|                                                |   @ex.config                               |\n|                                                |   def cfg():                               |\n|  C = 1.0                                       |     C = 1.0                                |\n|  gamma = 0.7                                   |     gamma = 0.7                            |\n|                                                |                                            |\n|                                                |   @ex.automain                             |\n|                                                |   def run(C, gamma):                       |\n|  iris = datasets.load_iris()                   |     iris = datasets.load_iris()            |\n|  perm = permutation(iris.target.size)          |     per = permutation(iris.target.size)    |\n|  iris.data = iris.data[perm]                   |     iris.data = iris.data[per]             |\n|  iris.target = iris.target[perm]               |     iris.target = iris.target[per]         |\n|  clf = svm.SVC(C=C, kernel='rbf',              |     clf = svm.SVC(C=C, kernel='rbf',       |\n|          gamma=gamma)                          |             gamma=gamma)                   |\n|  clf.fit(iris.data[:90],                       |     clf.fit(iris.data[:90],                |\n|          
iris.target[:90])                     |             iris.target[:90])              |\n|  print(clf.score(iris.data[90:],               |     return clf.score(iris.data[90:],       |\n|                  iris.target[90:]))            |                      iris.target[90:])     |\n+------------------------------------------------+--------------------------------------------+\n\nDocumentation\n-------------\nThe documentation is hosted at `ReadTheDocs \u003Chttp:\u002F\u002Fsacred.readthedocs.org\u002F>`_. You can also `Ask Sacred Guru \u003Chttps:\u002F\u002Fgurubase.io\u002Fg\u002Fsacred>`_, it is a Sacred-focused AI to answer your questions.\n\nInstalling\n----------\nYou can directly install it from the Python Package Index with pip:\n\n    pip install sacred\n\nOr if you want to do it manually you can checkout the current version from git\nand install it yourself:\n\n   | git clone https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred.git\n   | cd sacred\n   | python setup.py install\n\nYou might want to also install the ``numpy`` and the ``pymongo`` packages. 
They are\noptional dependencies but they offer some cool features:\n\n    pip install numpy pymongo\n\nTests\n-----\nThe tests for sacred use the `pytest \u003Chttp:\u002F\u002Fpytest.org\u002Flatest\u002F>`_ package.\nYou can execute them by running ``pytest`` in the sacred directory like this:\n\n    pytest\n\nThere is also a config file for `tox \u003Chttps:\u002F\u002Ftox.readthedocs.io\u002Fen\u002Flatest\u002F>`_ so you\ncan automatically run the tests for various Python versions like this:\n\n    tox\n\nUpdate pytest version\n+++++++++++++++++++++\n\nIf you update or change the pytest version, the following files need to be changed:\n\n- ``dev-requirements.txt``\n- ``tox.ini``\n- ``test\u002Ftest_utils.py``\n- ``setup.py``\n\nContributing\n------------\nIf you find a bug, have a feature request, or want to discuss something general, you are welcome to open an\n`issue \u003Chttps:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues>`_. If you have a specific question related\nto the usage of sacred, please ask a question on StackOverflow under the\n`python-sacred tag \u003Chttps:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fpython-sacred>`_. We value documentation\na lot. If you find something that should be included in the documentation, please\ndocument it or let us know what's missing. If you are using Sacred in one of your projects and want to share\nyour code with others, put your repo in the `Projects using Sacred \u003Cdocs\u002Fprojects_using_sacred.rst>`_ list.\nPull requests are highly welcome!\n\nFrontends\n---------\nAt this point there are several frontends to the database entries created by sacred (that I'm aware of).\nThey are developed externally as separate projects.\n\n`Omniboard \u003Chttps:\u002F\u002Fgithub.com\u002Fvivekratnavel\u002Fomniboard>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fomniboard-table.png\n.. 
image:: docs\u002Fimages\u002Fomniboard-metric-graphs.png\n\nOmniboard is a web dashboard that helps in visualizing the experiments and metrics \u002F logs collected by sacred.\nOmniboard is written with React, Node.js, Express and Bootstrap.\n\n\n`Incense \u003Chttps:\u002F\u002Fgithub.com\u002FJarnoRFB\u002Fincense>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fincense-artifact.png\n.. image:: docs\u002Fimages\u002Fincense-metric.png\n\nIncense is a Python library to retrieve runs stored in a MongoDB and interactively display metrics and artifacts\nin Jupyter notebooks.\n\n`Sacredboard \u003Chttps:\u002F\u002Fgithub.com\u002Fchovanecm\u002Fsacredboard>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fsacredboard.png\n\nSacredboard is a web-based dashboard interface to the sacred runs stored in a\nMongoDB.\n\n`Neptune \u003Chttps:\u002F\u002Fneptune.ai\u002F>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fneptune-compare.png\n.. image:: docs\u002Fimages\u002Fneptune-collaboration.png\n\nNeptune is a metadata store for MLOps, built for teams that run a lot of experiments.\nIt gives you a single place to log, store, display, organize, compare, and query all your model-building metadata via API available for both Python and R programming languages:\n\n.. image:: docs\u002Fimages\u002Fneptune-query-api.png\n\nIn order to log your sacred experiments to Neptune, all you need to do is add an observer:\n\n.. 
code-block:: python\n\n    from neptune.new.integrations.sacred import NeptuneObserver\n    ex.observers.append(NeptuneObserver(api_token='\u003CYOUR_API_TOKEN>',\n                                        project='\u003CYOUR_WORKSPACE\u002FYOUR_PROJECT>'))\n\nFor more info, check the `Neptune + Sacred integration guide \u003Chttps:\u002F\u002Fdocs.neptune.ai\u002Fintegrations-and-supported-tools\u002Fexperiment-tracking\u002Fsacred>`_.\n\n`SacredBrowser \u003Chttps:\u002F\u002Fgithub.com\u002Fmichaelwand\u002FSacredBrowser>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fsacred_browser.png\n\nSacredBrowser is a PyQt4 application to browse the MongoDB entries created by\nsacred experiments.\nFeatures include custom queries, sorting of the results,\naccess to the stored source-code, and many more.\nNo installation is required and it can connect to a local\ndatabase or over the network.\n\n\n`Prophet \u003Chttps:\u002F\u002Fgithub.com\u002FQwlouse\u002Fprophet>`_\n+++++++++++++++++++++++++++++++++++++++++++++++\nProphet is an early prototype of a webinterface to the MongoDB entries created by\nsacred experiments, that is discontinued.\nIt requires you to run `RestHeart \u003Chttp:\u002F\u002Frestheart.org>`_ to access the database.\n\n\nRelated Projects\n----------------\n\n`Sumatra \u003Chttps:\u002F\u002Fpythonhosted.org\u002FSumatra\u002F>`_\n++++++++++++++++++++++++++++++++++++++++++++++\n   | Sumatra is a tool for managing and tracking projects based on numerical\n   | simulation and\u002For analysis, with the aim of supporting reproducible research.\n   | It can be thought of as an automated electronic lab notebook for\n   | computational projects.\n\nSumatra takes a different approach by providing commandline tools to initialize\na project and then run arbitrary code (not just python).\nIt tracks information about all runs in a SQL database and even provides a nice browser tool.\nIt integrates less tightly 
with the code to be run, which makes it easily\napplicable to non-python experiments.\nBut that also means it requires more setup for each experiment and\nconfiguration needs to be done using files.\nUse this project if you need to run non-python experiments, or are ok with the additional setup\u002Fconfiguration overhead.\n\n\n`Future Gadget Laboratory \u003Chttps:\u002F\u002Fgithub.com\u002FKaixhin\u002FFGLab>`_\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n   | FGLab is a machine learning dashboard, designed to make prototyping\n   | experiments easier. Experiment details and results are sent to a database,\n   | which allows analytics to be performed after their completion. The server\n   | is FGLab, and the clients are FGMachines.\n\nSimilar to Sumatra, FGLab is an external tool that can keep track of runs from\nany program. Projects are configured via a JSON schema and the program needs to\naccept these configurations via command-line options.\nFGLab also takes the role of a basic scheduler by distributing runs over several\nmachines.\n\n\nLicense\n-------\nThis project is released under the terms of the `MIT license \u003Chttp:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT>`_.\n\n\nCiting Sacred\n-------------\n`K. Greff, A. Klein, M. Chovanec, F. Hutter, and J. Schmidhuber, ‘The Sacred Infrastructure for Computational Research’,\nin Proceedings of the 15th Python in Science Conference (SciPy 2017), Austin, Texas, 2017, pp. 49–56\n\u003Chttp:\u002F\u002Fconference.scipy.org\u002Fproceedings\u002Fscipy2017\u002Fklaus_greff.html>`_.\n\n\n.. |pypi| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fsacred.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsacred\n    :alt: Current PyPi Version\n\n.. |py_versions| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fsacred.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsacred\n    :alt: Supported Python Versions\n\n.. 
|license| image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.png\n    :target: http:\u002F\u002Fchoosealicense.com\u002Flicenses\u002Fmit\u002F\n    :alt: MIT licensed\n\n.. |rtfd| image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fsacred\u002Fbadge\u002F?version=latest&style=flat\n    :target: https:\u002F\u002Fsacred.readthedocs.io\u002Fen\u002Fstable\u002F\n    :alt: ReadTheDocs\n\n.. |doi| image:: https:\u002F\u002Fzenodo.org\u002Fbadge\u002Fdoi\u002F10.5281\u002Fzenodo.16386.svg\n    :target: http:\u002F\u002Fdx.doi.org\u002F10.5281\u002Fzenodo.16386\n    :alt: DOI for this release\n\n.. |build| image:: https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg\n    :target: https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg\n    :alt: Github Actions PyTest\n\n.. |coverage| image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002FIDSIA\u002Fsacred\u002Fbadge.svg\n    :target: https:\u002F\u002Fcoveralls.io\u002Fr\u002FIDSIA\u002Fsacred\n    :alt: Coverage Report\n\n.. |code_quality| image:: https:\u002F\u002Fscrutinizer-ci.com\u002Fg\u002FIDSIA\u002Fsacred\u002Fbadges\u002Fquality-score.png?b=master\n    :target: https:\u002F\u002Fscrutinizer-ci.com\u002Fg\u002FIDSIA\u002Fsacred\u002F\n    :alt: Code Scrutinizer Quality\n\n.. 
|black| image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\n    :target: https:\u002F\u002Fgithub.com\u002Fambv\u002Fblack\n    :alt: Code style: black\n","Sacred\n======\n\n    | *每个实验都是神圣的*\n    | *每个实验都很棒*\n    | *如果一个实验被浪费了*\n    | *上帝会非常生气*\n\n|pypi| |py_versions| |license| |rtfd| |doi|\n\n|build| |coverage| |code_quality| |black|\n\n\n\n\nSacred 是一款帮助你配置、组织、记录和复现实验的工具。它旨在处理围绕实际实验所需的所有繁琐的额外工作，以便：\n\n- 跟踪实验的所有参数\n- 轻松地以不同设置运行实验\n- 将每次运行的配置保存到数据库中\n- 复现你的结果\n\nSacred 通过以下主要机制实现这些目标：\n\n-  **配置作用域**：一种非常方便的方式，允许在函数中使用局部变量来定义实验所使用的参数。\n-  **配置注入**：你可以从任何函数访问配置中的所有参数，它们会按名称自动注入。\n-  **命令行界面**：为每个实验提供一个功能强大的命令行界面，可用于更改参数并运行不同的变体。\n-  **观察者**：Sacred 提供观察者，用于记录关于实验的各种信息，包括其依赖项、所使用的配置、运行机器以及结果等。这些信息可以保存到 MongoDB 中，便于日后访问。\n-  **自动种子设置**有助于控制实验中的随机性，从而确保结果可重复。\n\n示例\n-------\n+------------------------------------------------+--------------------------------------------+\n| **在鸢尾花数据集上训练 SVM 的脚本**           | **相同的脚本作为 Sacred 实验**            |\n+------------------------------------------------+--------------------------------------------+\n| .. code:: python                               | .. 
code:: python                           |\n|                                                |                                            |\n|  from numpy.random import permutation          |   from numpy.random import permutation     |\n|  from sklearn import svm, datasets             |   from sklearn import svm, datasets        |\n|                                                |   from sacred import Experiment            |\n|                                                |   ex = Experiment('iris_rbf_svm')          |\n|                                                |                                            |\n|                                                |   @ex.config                               |\n|                                                |   def cfg():                               |\n|  C = 1.0                                       |     C = 1.0                                |\n|  gamma = 0.7                                   |     gamma = 0.7                            |\n|                                                |                                            |\n|                                                |   @ex.automain                             |\n|                                                |   def run(C, gamma):                       |\n|  iris = datasets.load_iris()                   |     iris = datasets.load_iris()            |\n|  perm = permutation(iris.target.size)          |     per = permutation(iris.target.size)    |\n|  iris.data = iris.data[perm]                   |     iris.data = iris.data[per]             |\n|  iris.target = iris.target[perm]               |     iris.target = iris.target[per]         |\n|  clf = svm.SVC(C=C, kernel='rbf',              |     clf = svm.SVC(C=C, kernel='rbf',       |\n|          gamma=gamma)                          |             gamma=gamma)                   |\n|  clf.fit(iris.data[:90],                       |     clf.fit(iris.data[:90],                |\n|          
iris.target[:90])                     |             iris.target[:90])              |\n|  print(clf.score(iris.data[90:],               |     return clf.score(iris.data[90:],       |\n|                  iris.target[90:]))            |                      iris.target[90:])     |\n+------------------------------------------------+--------------------------------------------+\n\n文档\n-------------\n文档托管在 `ReadTheDocs \u003Chttp:\u002F\u002Fsacred.readthedocs.org\u002F>`_ 上。你也可以 `向 Sacred Guru 提问 \u003Chttps:\u002F\u002Fgurubase.io\u002Fg\u002Fsacred>`_，这是一个专注于 Sacred 的 AI 助手，可以回答你的问题。\n\n安装\n----------\n你可以直接从 Python 包索引使用 pip 安装：\n\n    pip install sacred\n\n或者，如果你想手动安装，可以从 git 克隆当前版本并自行安装：\n\n   | git clone https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred.git\n   | cd sacred\n   | python setup.py install\n\n你可能还需要安装 ``numpy`` 和 ``pymongo`` 包。它们是可选依赖项，但提供了许多有用的功能：\n\n    pip install numpy pymongo\n\n测试\n-----\nSacred 的测试使用 `pytest \u003Chttp:\u002F\u002Fpytest.org\u002Flatest\u002F>`_ 包。你可以在 Sacred 目录下运行 ``pytest`` 来执行测试：\n\n    pytest\n\n此外，还有一个针对 `tox \u003Chttps:\u002F\u002Ftox.readthedocs.io\u002Fen\u002Flatest\u002F>`_ 的配置文件，因此你可以自动为不同版本的 Python 运行测试：\n\n    tox\n\n更新 pytest 版本\n+++++++++++++++++++++\n\n如果你更新或更改 pytest 版本，需要修改以下文件：\n\n- ``dev-requirements.txt``\n- ``tox.ini``\n- ``test\u002Ftest_utils.py``\n- ``setup.py``\n\n贡献\n------------\n如果你发现了一个 bug、有一个功能请求，或者想讨论一些通用的问题，欢迎在 `GitHub 仓库的议题页面 \u003Chttps:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues>`_ 上提交。如果你有关于 Sacred 使用的具体问题，请在 StackOverflow 上使用 `python-sacred 标签 \u003Chttps:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fpython-sacred>`_ 提问。我们非常重视文档工作。如果你发现有需要添加到文档的内容，请自行补充文档或告诉我们缺少什么。如果你在自己的项目中使用了 Sacred，并希望与他人分享代码，请将你的仓库添加到 `使用 Sacred 的项目列表 \u003Cdocs\u002Fprojects_using_sacred.rst>`_ 中。我们非常欢迎 Pull 请求！\n\n前端\n---------\n目前有三个我所知的 Sacred 数据库条目的前端界面，它们都是由外部独立开发的项目。\n\n`Omniboard 
\u003Chttps:\u002F\u002Fgithub.com\u002Fvivekratnavel\u002Fomniboard>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fomniboard-table.png\n.. image:: docs\u002Fimages\u002Fomniboard-metric-graphs.png\n\nOmniboard 是一个 Web 仪表板，用于可视化 Sacred 收集的实验、指标和日志。Omniboard 使用 React、Node.js、Express 和 Bootstrap 构建。\n\n\n`Incense \u003Chttps:\u002F\u002Fgithub.com\u002FJarnoRFB\u002Fincense>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fincense-artifact.png\n.. image:: docs\u002Fimages\u002Fincense-metric.png\n\nIncense 是一个 Python 库，用于从 MongoDB 中检索存储的运行记录，并在 Jupyter 笔记本中交互式地展示指标和工件。\n\n`Sacredboard \u003Chttps:\u002F\u002Fgithub.com\u002Fchovanecm\u002Fsacredboard>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fsacredboard.png\n\nSacredboard 是一个基于 Web 的仪表板界面，用于查看存储在 MongoDB 中的 Sacred 运行记录。\n\n`Neptune \u003Chttps:\u002F\u002Fneptune.ai\u002F>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. image:: docs\u002Fimages\u002Fneptune-compare.png\n.. image:: docs\u002Fimages\u002Fneptune-collaboration.png\n\nNeptune 是一个面向 MLOps 的元数据存储平台，专为频繁进行实验的团队设计。它提供了一个统一的平台，允许您通过 Python 和 R 编程语言的 API 来记录、存储、展示、组织、比较和查询所有模型构建相关的元数据：\n\n.. image:: docs\u002Fimages\u002Fneptune-query-api.png\n\n要将您的 Sacred 实验日志记录到 Neptune，您只需添加一个观察者即可：\n\n.. code-block:: python\n\n    from neptune.new.integrations.sacred import NeptuneObserver\n    ex.observers.append(NeptuneObserver(api_token='\u003CYOUR_API_TOKEN>',\n                                        project='\u003CYOUR_WORKSPACE\u002FYOUR_PROJECT>'))\n\n更多信息，请参阅 `Neptune + Sacred 集成指南 \u003Chttps:\u002F\u002Fdocs.neptune.ai\u002Fintegrations-and-supported-tools\u002Fexperiment-tracking\u002Fsacred>`_。\n\n`SacredBrowser \u003Chttps:\u002F\u002Fgithub.com\u002Fmichaelwand\u002FSacredBrowser>`_\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n.. 
image:: docs\u002Fimages\u002Fsacred_browser.png\n\nSacredBrowser 是一个基于 PyQt4 的应用程序，用于浏览由 Sacred 实验创建的 MongoDB 条目。其功能包括自定义查询、结果排序、访问存储的源代码等。该工具无需安装，可以连接本地数据库或通过网络连接。\n\n`Prophet \u003Chttps:\u002F\u002Fgithub.com\u002FQwlouse\u002Fprophet>`_\n++++++++++++++++++++++++++++++++++++++++++++++\nProphet 是一个早期的原型，用于查看由 Sacred 实验创建的 MongoDB 条目，但目前已停止维护。它需要您运行 `RestHeart \u003Chttp:\u002F\u002Frestheart.org>`_ 才能访问数据库。\n\n相关项目\n----------------\n\n`Sumatra \u003Chttps:\u002F\u002Fpythonhosted.org\u002FSumatra\u002F>`_\n++++++++++++++++++++++++++++++++++++++++++++++\n   | Sumatra 是一种用于管理和跟踪基于数值模拟和\u002F或分析的项目的工具，旨在支持可重复的研究。\n   | 它可以被视为计算项目中的自动化电子实验记录本。\n   \nSumatra 采用不同的方法，提供命令行工具来初始化项目并运行任意代码（而不仅仅是 Python）。它会将所有运行的信息记录到 SQL 数据库中，并提供一个友好的浏览器工具。与要运行的代码集成度较低，因此更容易应用于非 Python 实验。不过，这也意味着每个实验需要更多的设置，且配置需通过文件完成。如果您需要运行非 Python 实验，或者不介意额外的设置和配置开销，可以考虑使用该项目。\n\n`Future Gadget Laboratory \u003Chttps:\u002F\u002Fgithub.com\u002FKaixhin\u002FFGLab>`_\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n   | FGLab 是一个机器学习仪表板，旨在简化实验原型设计。实验的详细信息和结果会被发送到数据库，从而可以在实验完成后进行分析。服务器端是 FGLab，客户端则是 FGMachines。\n   \n与 Sumatra 类似，FGLab 是一个外部工具，可以跟踪来自任何程序的运行。项目通过 JSON 模式进行配置，程序需要通过命令行选项接受这些配置。FGLab 还充当一个基本的调度器，负责将运行分配到多台机器上。\n\n许可证\n-------\n本项目根据 `MIT 许可证 \u003Chttp:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT>`_ 发布。\n\n\n引用 Sacred\n-------------\n`K. Greff, A. Klein, M. Chovanec, F. Hutter, and J. Schmidhuber, ‘The Sacred Infrastructure for Computational Research’,\nin Proceedings of the 15th Python in Science Conference (SciPy 2017), Austin, Texas, 2017, pp. 49–56\n\u003Chttp:\u002F\u002Fconference.scipy.org\u002Fproceedings\u002Fscipy2017\u002Fklaus_greff.html>`_.\n\n\n.. |pypi| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fsacred.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsacred\n    :alt: 当前 PyPI 版本\n\n.. 
|py_versions| image:: https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fsacred.svg\n    :target: https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fsacred\n    :alt: 支持的 Python 版本\n\n.. |license| image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-blue.png\n    :target: http:\u002F\u002Fchoosealicense.com\u002Flicenses\u002Fmit\u002F\n    :alt: MIT 许可证\n\n.. |rtfd| image:: https:\u002F\u002Freadthedocs.org\u002Fprojects\u002Fsacred\u002Fbadge\u002F?version=latest&style=flat\n    :target: https:\u002F\u002Fsacred.readthedocs.io\u002Fen\u002Fstable\u002F\n    :alt: ReadTheDocs\n\n.. |doi| image:: https:\u002F\u002Fzenodo.org\u002Fbadge\u002Fdoi\u002F10.5281\u002Fzenodo.16386.svg\n    :target: http:\u002F\u002Fdx.doi.org\u002F10.5281\u002Fzenodo.16386\n    :alt: 本版本的 DOI\n\n.. |build| image:: https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg\n    :target: https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Factions\u002Fworkflows\u002Ftest.yml\u002Fbadge.svg\n    :alt: Github Actions PyTest\n\n.. |coverage| image:: https:\u002F\u002Fcoveralls.io\u002Frepos\u002FIDSIA\u002Fsacred\u002Fbadge.svg\n    :target: https:\u002F\u002Fcoveralls.io\u002Fr\u002FIDSIA\u002Fsacred\n    :alt: 覆盖率报告\n\n.. |code_quality| image:: https:\u002F\u002Fscrutinizer-ci.com\u002Fg\u002FIDSIA\u002Fsacred\u002Fbadges\u002Fquality-score.png?b=master\n    :target: https:\u002F\u002Fscrutinizer-ci.com\u002Fg\u002FIDSIA\u002Fsacred\u002F\n    :alt: Code Scrutinizer 质量评分\n\n.. 
|black| image:: https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\n    :target: https:\u002F\u002Fgithub.com\u002Fambv\u002Fblack\n    :alt: 代码风格：Black","# Sacred 快速上手指南\n\nSacred 是一个用于配置、组织、记录复现机器学习实验的 Python 工具。它能自动追踪实验参数、保存配置到数据库，并确保结果可复现。\n\n## 环境准备\n\n- **操作系统**：Linux, macOS, Windows\n- **Python 版本**：支持 Python 3.6+\n- **前置依赖**：\n  - 基础功能无需额外依赖\n  - 若需使用数据库存储（推荐）或数值计算功能，建议安装 `pymongo` 和 `numpy`\n\n## 安装步骤\n\n### 方式一：通过 pip 安装（推荐）\n\n直接使用国内镜像源加速安装：\n\n```bash\npip install sacred -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n若需使用 MongoDB 存储实验记录及数值处理功能：\n\n```bash\npip install pymongo numpy -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从源码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred.git\ncd sacred\npython setup.py install\n```\n\n## 基本使用\n\n以下是将普通脚本转换为 Sacred 实验的最小示例。对比展示了如何通过装饰器管理配置和注入参数。\n\n### 1. 定义实验与配置\n\n创建一个 Python 文件（例如 `train.py`），内容如下：\n\n```python\nfrom sklearn import svm, datasets\nfrom numpy.random import permutation\nfrom sacred import Experiment\n\n# 初始化实验\nex = Experiment('iris_rbf_svm')\n\n# 定义配置作用域：局部变量即为实验参数\n@ex.config\ndef cfg():\n    C = 1.0\n    gamma = 0.7\n\n# 定义主运行函数：参数自动注入\n@ex.automain\ndef run(C, gamma):\n    iris = datasets.load_iris()\n    perm = permutation(iris.target.size)\n    iris.data = iris.data[perm]\n    iris.target = iris.target[perm]\n    \n    clf = svm.SVC(C=C, kernel='rbf', gamma=gamma)\n    clf.fit(iris.data[:90], iris.target[:90])\n    \n    return clf.score(iris.data[90:], iris.target[90:])\n```\n\n### 2. 运行实验\n\n在终端中直接运行脚本，Sacred 会自动解析命令行参数并执行：\n\n```bash\npython train.py\n```\n\n### 3. 修改参数运行\n\n无需修改代码，直接通过命令行覆盖配置参数：\n\n```bash\n# 修改 C 和 gamma 参数\npython train.py with C=2.0 gamma=0.5\n\n# 查看帮助信息\npython train.py help\n```\n\n### 4. 
进阶：保存结果到数据库\n\n若要记录实验详情到 MongoDB，只需添加 Observer：\n\n```python\nfrom sacred.observers import MongoObserver\n\n# 确保已启动 MongoDB 服务\nex.observers.append(MongoObserver(url='localhost:27017'))\n```\n\n运行后，所有参数、依赖、日志和结果将自动存入数据库，便于后续复现和分析。","某算法团队正在大规模调优深度学习模型的超参数，试图在有限算力下找到最优配置并复现最佳结果。\n\n### 没有 sacred 时\n- 研究人员手动修改代码中的硬编码参数（如学习率、批次大小），每次调整都需重新编辑脚本，极易出错且难以回溯具体使用了哪组参数。\n- 实验日志散落在不同的文本文件或终端截图里，缺乏统一结构，导致后期无法快速对比不同配置下的模型性能差异。\n- 由于随机种子未统一管理，即使使用相同的参数设置，多次运行的结果也存在波动，团队成员之间无法精确复现彼此的实验结论。\n- 缺少自动化的依赖记录机制，当需要重新运行旧实验时，往往因环境变更或代码版本迭代而失败。\n\n### 使用 sacred 后\n- 通过 Config Scopes 将超参数定义为函数局部变量，配合强大的命令行接口，无需改动代码即可动态切换数百种实验组合，大幅提升调试效率。\n- 借助 Observers 自动将所有运行细节（包括参数配置、主机信息、依赖包版本及最终指标）结构化存入 MongoDB，形成可查询的实验数据库，对比分析一目了然。\n- 内置的自动播种机制确保了随机性的可控性，只要指定相同配置和种子，任何人在任何机器上都能得到完全一致的运行结果，彻底解决复现难题。\n- 系统自动捕获并保存代码快照与依赖关系，即便数月后需要复盘，也能一键还原当时的完整实验环境，确保实验资产不流失。\n\nsacred 将繁琐的实验管理自动化，让研究人员从“运维式”的参数记录中解放出来，专注于核心算法的创新与验证。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIDSIA_sacred_3ed4061a.png","IDSIA","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FIDSIA_36c681d0.png","Istituto Dalle Molle di Studi sull'Intelligenza Artificiale ",null,"https:\u002F\u002Fidsia.ch\u002F","https:\u002F\u002Fgithub.com\u002FIDSIA",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.7,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.3,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0,4357,390,"2026-04-02T08:31:34","MIT","","未说明",{"notes":101,"python":99,"dependencies":102},"Sacred 是一个用于配置、组织、记录和复现实验的通用 Python 工具，本身不依赖特定硬件（如 GPU）。核心功能无需额外依赖，但若需将实验数据保存至数据库，建议安装 pymongo；若使用自动种子功能或数值计算，建议安装 numpy。该工具支持多种前端可视化面板（如 Omniboard, Incense 等），部分前端可能需要 Node.js 或其他环境。",[103,104,105,106],"numpy (可选)","pymongo (可选)","pytest (测试用)","tox 
(测试用)",[54,13],[109,110,111,112,113,114,115],"python","machine-learning","infrastructure","reproducible-research","reproducibility","reproducible-science","mongodb","2026-03-27T02:49:30.150509","2026-04-06T11:31:01.296596",[119,124,129,134,139,143],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},14449,"Sacred 的“魔法”式接口太难理解，是否有面向对象的替代方案？","Sacred 本身的设计倾向于减少样板代码，因此使用了较多装饰器等“魔法”特性，这对不熟悉该库的用户确实存在学习门槛。目前官方尚未提供原生的面向对象接口。不过，社区推荐了一个名为 Machinable 的替代项目（https:\u002F\u002Fgithub.com\u002Fmachinable-org\u002Fmachinable），它具有与 Sacred 类似的功能列表，但采用了完全不同的面向对象 API。在 Machinable 中，配置在项目范围的配置文件中指定，而功能则在子类中实现，子类可以重写如 'on_create' 等事件。这对于需要更高可维护性和扩展性的生产工作流可能更合适。","https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues\u002F193",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},14450,"如何在 Jupyter Notebook 中以交互方式运行 Sacred 实验？","虽然 Sacred 主要设计用于命令行脚本，但也可以用于交互式环境。基本流程是：首先初始化 Experiment 并添加观察者（如 FileStorageObserver）；然后定义配置和捕获函数；接着手动调用 `ex.start()` 来完成配置并启动观察者；在循环中执行实验逻辑并使用 `ex.log_scalar` 记录指标；最后调用 `ex.stop(result=...)` 结束实验。这种模式适合探索性实验和低复杂度任务，允许用户手动控制实验的各个阶段。","https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues\u002F663",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},14451,"Sacred 的配置系统是否支持非字符串键（如整数键）或修改列表元素？","目前的 Sacred 配置系统存在局限性：字典的键必须是有效的 Python 标识符（即字符串），否则无法从命令行设置，这导致无法直接支持像 sklearn 中使用整数键指定 `class_weights` 的情况。此外，目前只能从命令行修改字典，不能直接修改列表元素。变通方法是将所有数据在配置中存为字典，然后在实验代码中转换为列表，但这并不优雅。社区正在讨论重构配置过程以解决这些问题，包括支持非字符串键和直接操作列表元素。","https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues\u002F610",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},14452,"为什么导入 Sacred 时在 TensorFlow 1.14.0 环境下会报错？","这是一个已知的兼容性问题。TensorFlow 1.14.0 版本中，其包的 `__spec__` 属性被设置为 None，导致 Sacred 内部调用 `pkgutil.find_loader` 时抛出异常。这通常表现为导入 Sacred 时失败。虽然这可能是 TensorFlow 的一个行为变更，但它破坏了 Sacred 的正常加载。解决方法包括降级 TensorFlow 到 1.13 或更早版本，或者等待 Sacred 发布修复此问题的新版本（如果有的话）。在 Docker 环境中复现该问题时，可以看到安装 sacred 
后直接导入即报错。","https:\u002F\u002Fgithub.com\u002FIDSIA\u002Fsacred\u002Fissues\u002F493",{"id":140,"question_zh":141,"answer_zh":142,"source_url":133},14453,"如何更好地组织关于 Sacred 未来工作流和配置的讨论？","为了更有效地推进 Sacred 的改进，维护者建议将大型讨论拆分为多个专门的 Issue。例如，将“工作流”（如交互式、脚本化、批处理）、“配置系统重构”和“配置更新机制”分别列为独立议题（见 Issues #663, #664, #665）。这种方式比在单个提案文件下评论更清晰，便于社区针对不同方面深入讨论。用户可以参与这些具体议题，提出自己对理想工作流或配置语法的看法，共同塑造 Sacred 的未来发展方向。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":133},14454,"Sacred 的配置对象能否像函数一样被调用以解析配置？","社区曾探讨过让配置对象支持 `__call__` 方法，以便通过调用来解析配置。有两种主要设计方案：一是 `__call__(cmdline_modifications, named_modifications)`，这种方式能清晰区分命令行传入和命名配置传入的参数，类似于当前的 `Experiment.run`，但不利于作为 ingredient 的替代品；二是 `__call__(named_modifications=(), **config)`，这种方式简化了 ingredient 的使用，允许双向传递配置值，甚至可以根据主实验的配置选择 ingredient 中的命名配置，但缺点是难以追踪配置值的来源。目前这仍处于设计讨论阶段，尚未实现。",[148,153,158,163,168,173,178,183,188,193,198,203,208,213,218],{"id":149,"version":150,"summary_zh":151,"released_at":152},81356,"0.8.7","小幅错误修复版本，修复了上一版本中与 NumPy 2.0 兼容变更相关的问题。\n\n* 修复：恢复 `is_different` 的旧行为 (#933，感谢 @n-gao)\n* 文档：将 Sacred 添加到 Gurubase (#935，@kursataktas)","2024-11-26T07:16:29",{"id":154,"version":155,"summary_zh":156,"released_at":157},81357,"0.8.6","次要版本更新，以兼容 NumPy>=2.0\n\n* 添加对 NumPy>=2.0 的支持 (#928)\n* 从 docopt（已停止维护）切换到 docopt-ng（有维护的分支）(#927，感谢 @n-gao)","2024-08-26T09:22:38",{"id":159,"version":160,"summary_zh":161,"released_at":162},81358,"0.8.5","一个小版本更新，包含若干小修复。\n\n* 功能：新增默认心跳间隔的配置项\n* 修复：不再忽略配置文件中无法加载的类 (#902，感谢 @ernestum)\n* 修复：修复导致 conda-forge 构建失败的导入错误 (#921，感谢 @n-gao)\n* 文档：更新已不再存在的 CDE 工具说明，并修复入门示例 (#905, #906，感谢 @zhimin-z)","2023-11-13T07:23:02",{"id":164,"version":165,"summary_zh":166,"released_at":167},81359,"0.8.4","一个小版本更新，包含若干小修复。\n\n* 更新测试和支持的 Python 版本：sacred 现已正式支持 Python 3.8–3.11（#872、#892，感谢 @jnphilipp）\n* 功能：允许在配置作用域中使用类型注解，并通过使用 `ast` 而非复杂的正则表达式来使配置作用域更具未来兼容性（感谢 @vnmabus）\n* 功能：在 `MongoObserver` 中公开 `MongoClient`（感谢 @Gracecr）\n* 修复：通过改用 Python 内置类型而非 `np.*` 别名，以支持新的 NumPy 版本（#870，感谢 @Kaushalya）\n* 修复：当通过 
`ipython` 以非交互模式运行时，允许将 `*.ipynb` 文件作为源文件\n* 内部改进：为代码库中的许多错误添加错误原因（#894、#898，感谢 @cool-RR）\n* 内部改进：使用 GitHub Actions 替代 Azure Pipelines 进行测试，以获得更强的控制能力（#896）\n* 内部改进：使用 GitHub Actions 自动化 PyPI 上的发布","2023-01-25T17:03:54",{"id":169,"version":170,"summary_zh":171,"released_at":172},81360,"0.8.3","一个包含多项小改进并支持 Python 3.10 的次要版本。\n\n* 功能：支持新的 NumPy 随机 API（`np.random.Generator`）；自 NumPy 1.19 起弃用旧的 `np.random.RandomState` (#779，感谢 @jnphilipp)\n* 功能：为 mypy 等类型检查器添加 `py.typed` 文件 (#849，感谢 @neophnx)\n* 功能：验证 Sacred 配置 (#774)\n* 功能：更新 CLI 选项：将运行 ID 改为通过命令行指定 (#798，感谢 @jnphilipp)\n* 功能：记录命名配置及配置更新 (#823)\n* 功能：在 FileStorageObserver 中新增保存源代码和复制资源的选项 (#806，感谢 @patrick-kidger)\n* 功能：支持 NVIDIA 多实例 GPU (#865，感谢 @j3soon)\n* 修复：将测试用例更新至 Python 3.6 及以上版本；更新依赖项（例如，tinydb 4+、pytest 6.2.1、pymongo 4.0）(#799, #819, #821，非常感谢 @jnphilipp)\n* 修复：修复符号链接的处理问题 (#791，感谢 @MaxSchambach)\n* 修复：修复 Docker 示例 (#829，感谢 @ahallermed)\n* 文档：对文档进行了一些修复和更新 (#778, #792, #793, #797, #804, #842, #856，感谢 @daliasen @aaronsnoswell @schmitts @Blaizzy)","2022-03-28T13:51:55",{"id":174,"version":175,"summary_zh":176,"released_at":177},81361,"0.8.2","一个小的修复版本，解决了 Python 3.8 及更高版本中的一些问题，以及只读容器类型的相关问题。\n\n- 功能：为只读容器添加了 pickle 序列化和 YAML 序列化支持 (#775, #737)\n- 功能：为 SqlObserver 添加了 Git 集成 (#741)\n- 功能：为 MongoObserver 添加了集合前缀支持 (#704)\n- 修复：修复 Python 3.8 下的 `print_config` 命令 (#719)\n- 修复：修复 `save_config` 命令 (#765)\n- 修复：命名配置更新现在会在配置创建过程中正确分发 (#769, #777)\n- 修复：nvidia_smi 输出的解析现在也支持进程名中包含非 Unicode 字符（例如中文）(#776)\n- 修复：修复 MongoObserver 的类型注解 (#762)\n- 修复：在超时情况下终止 tee。这是一个临时解决方案，用于防止因输出捕获而导致的程序崩溃 (#740)\n- 修复：改进配置作用域的解析 (#699, #764)\n- 修复：修复在配置作用域中抛出 `ConfigErrors` 时的错误跟踪问题 (#733)\n- 修复：使 Git 导入成为可选功能 (#724)","2020-11-26T21:37:36",{"id":179,"version":180,"summary_zh":181,"released_at":182},81362,"0.8.0","重大发布，包含多项破坏性变更。\n\n* API 变更：停止对 Python 2 的支持\n* API 变更：Git 信息的收集功能现已默认启用 #595\n* API 变更：所有观察器的构造函数已从 Observer.create(...) 
改为 Observer(...)\n* API 变更：修改了自定义主机信息收集的接口 #569\n* API 变更：修改了 CLI 选项的定义接口。#572\n* 功能新增：新增 S3 文件观察器 #542\n* 功能新增：为 TelegramObserver 添加了 `started_text` 选项 #494\n* 功能新增：为只读容器添加了 copy\u002Fdeepcopy 支持 #500\n* Bug 修复：FileStorage 观察器在并行执行下的可靠性得到提升 #503\n* Bug 修复：当工件可能覆盖重要文件时，FileStorageObserver 现在会抛出错误 #647\n* Bug 修复：修复了配置嵌套行为不一致的问题 #409 #505\n* Bug 修复：针对 TensorFlow 集成进行了多项修复\n* Bug 修复：修复了因部分机器缺少品牌键而导致的崩溃问题 #512\n* 内部变更：将 CI 服务器迁移到 Azure\n* 内部变更：添加了 pre-commit 钩子，用于进行 pep 8 检查和使用 python black 进行代码自动格式化\n* 内部变更：在许多地方开始使用 pathlib.Path 替代 os.path\n\n","2019-10-14T14:59:34",{"id":184,"version":185,"summary_zh":186,"released_at":187},81363,"0.7.5","这是最后一个支持 Python 2.7 的版本。\n\n* 功能：错误报告大幅改进（感谢 @thequilo）\n* 功能：新增 print_named_configs 命令\n* 功能：新增为 artifacts 添加元数据的选项（感谢 @jarnoRFB）\n* 功能：artifacts 的内容类型检测（感谢 @jarnoRFB）\n* 功能：PyTorch 的自动种子设置（感谢 @srossi93）\n* 功能：为 Telegram 观察器添加代理支持（感谢 @brickerino）\n* 功能：使 MongoObserver 的转储目录可配置（感谢 @jarnoRFB）\n* 功能：新增基于队列的观察器，更好地处理不稳定的连接（感谢 @jarnoRFB）\n* 修复：对 stdout 捕获的一些修复\n* 修复：FileStorageObserver 现在仅在启动运行时创建目录 (#329；感谢 @thomasjpfan)\n* 修复：修复了 config_hooks 问题 (#326；感谢 @thomasjpfan)\n* 修复：修复了用字典覆盖非字典型配置项时导致的崩溃问题 (#325；感谢 @thomasjpfan)\n* 修复：解决了在 Conda 环境中运行的问题 (#341)\n* 修复：支持感知 NumPy 的配置变更检测 (#344)\n* 修复：允许依赖项为编译好的库（感谢 @jnphilipp）\n* 修复：输出颜色化现在适用于 256 色和 16 色终端（感谢 @bosr）\n* 修复：修复了 TinyDB 观察器日志记录的问题 (#327；感谢 @michalgregor）\n* 修复：忽略与 named_config 同名的文件夹（感谢 @boeddeker）\n* 修复：setup 不再覆盖预先配置的根日志记录器（感谢 @thequilo）\n* 修复：兼容 TensorFlow 2.0（感谢 @tarik、@gabrieldemarmiesse）\n* 修复：修复了在没有 tee 可用于 stdout 捕获时抛出的异常（感谢 @greg-farquhar）\n* 修复：修复了 FileStorageObserver 的并发问题（感谢 @dekuenstle）","2019-06-20T14:11:49",{"id":189,"version":190,"summary_zh":191,"released_at":192},81364,"0.7.4","一个小的 bug 修复版本，解决了配料与命名配置交互的一些问题。\n\n* Bugfix：修复了 SQLObserver 的 PostgreSQL 后端问题（感谢 @bensternlieb）\n* Bugfix：修复了配料与命名配置交互的问题\n* Feature：为 FileStorageObserver 添加了指标日志记录功能（感谢 @ummavi）\n","2018-06-12T06:04:59",{"id":194,"version":195,"summary_zh":196,"released_at":197},81365,"0.7.3","重大 bug 
修复版本，解决了多个关键问题，包括：实验有时无法正常退出、FileStorage 和 MongoObserver 中的竞态条件，以及若干 stdout 捕获相关的问题。\n\n* 功能：支持自定义实验基础目录（感谢 @anibali）\n* 功能：新增选项，可将现有 MongoClient 传递给 MongoObserver（感谢 @rueberger）\n* 功能：允许从命名配置中设置配置文档字符串\n* 功能：添加了 py-cpuinfo 作为获取 CPU 信息的后备方案（感谢 @serv-inc）\n* 功能：在配置函数中增加了对 _log 参数的支持\n* Bug 修复：堆栈跟踪过滤现能正确处理链式异常（感谢 @kamo-naoyuki）\n* Bug 修复：解决了 stdout 捕获有时会丢失最后几行的问题\n* Bug 修复：修复了 MongoObserver 的覆盖选项\n* Bug 修复：修复了心跳有时无法结束的问题\n* Bug 修复：修复了交互模式下运行时的错误\n* Bug 修复：添加了对配料路径不唯一性的检查（感谢 @boeddeker）\n* Bug 修复：修复了多个 UTF-8 解码相关的问题（感谢 @LukasDrude、@wjp）\n* Bug 修复：修复了 _config 的嵌套结构问题（感谢 @boeddeker）\n* Bug 修复：修复了在空仓库中使用 Git 集成时的崩溃问题（感谢 @ramon-oliveira）\n* Bug 修复：修复了首次使用 SQLite 后端时的崩溃问题\n* Bug 修复：修复了测试中的多个问题（感谢 @thomasjpfan）\n* Bug 修复：修复了 FileStorageObserver 中的竞态条件（感谢 @boeddeker）\n* Bug 修复：修复了覆盖配料的命名配置时出现的问题（感谢 @pimdh）\n* Bug 修复：移除了已弃用的 inspect.getargspec() 调用\n* Bug 修复：修复了配置更新和命名配置中空字典消失的问题（感谢 @TomVeniat）\n* Bug 修复：修复了当程序名包含空格时命令行解析的问题\n* Bug 修复：配置相关的警告现在会考虑日志级别设置\n* Bug 修复：在指标日志记录中正确处理 NumPy 类型\n","2018-05-06T20:44:10",{"id":199,"version":200,"summary_zh":201,"released_at":202},81366,"0.7.2","Minor features release:\r\n* API Change: added host_info to queued_event\r\n* Feature: improved and configurable dependency discovery system\r\n* Feature: improved and configurable source-file discovery system\r\n* Feature: better error messages for missing or misspelled commands\r\n* Feature: -m flag now supports passing an id for a run to overwrite\r\n* Feature: allow captured functions to be called outside of a run (thanks @berleon)\r\n* Bugfix: fixed issue with telegram imports (thanks @millawell)\r\n\r\n","2018-05-06T13:24:20",{"id":204,"version":205,"summary_zh":206,"released_at":207},81367,"0.7.1","Bugfixes and improved Tensorflow support.\r\n\r\n* Refactor: lazy importing of many optional dependencies\r\n* Feature: added metrics API for adding live monitoring information to the MongoDB\r\n* Feature: added integration with tensorflow for automatic capturing of LogWriter paths\r\n* Feature: 
set seed of tensorflow if it is imported\r\n* Feature: named_configs can now affect the config of ingredients\r\n* Bugfix: failed runs now return with exit code 1 by default\r\n* Bugfix: fixed a problem with UTF-8 symbols in stdout\r\n* Bugfix: fixed a threading issue with the SQLObserver\r\n* Bugfix: fixed a problem with consecutive ids in the SQLObserver\r\n* Bugfix: heartbeat events now also serialize the intermediate results\r\n* Bugfix: repeatedly calling run from python with an option for adding an\r\n          observer, no longer duplicates observers\r\n* Bugfix: fixed a problem where **kwargs of captured functions might be modified\r\n* Bugfix: fixed an encoding problem with the FileStorageObserver\r\n* Bugfix: fixed an issue where determining the version of some packages would crash\r\n* Bugfix: fixed handling of relative filepaths in the SQLObserver and the TinyDBObserver","2018-05-06T13:22:53",{"id":209,"version":210,"summary_zh":211,"released_at":212},81368,"0.7.0","Major feature release that breaks backwards compatibility in a few cases.\r\n\r\n* Feature: host info now contains information about NVIDIA GPUs (if available)\r\n* Feature: git integration: sacred now collects info about the git repository\r\n           of the experiment (if available and if gitpython is installed)\r\n* Feature: new ``--enforce-clean`` flag that cancels a run if the\r\n           git repository is dirty\r\n* Feature: added new TinyDbObserver and TinyDbReader (thanks to @MrKriss)\r\n* Feature: added new SqlObserver\r\n* Feature: added new FileStorageObserver\r\n* Feature: added new SlackObserver\r\n* Feature: added new TelegramObserver (thanks to @black-puppydog)\r\n* Feature: added save_config command\r\n* Feature: added queue flag to just queue a run instead of executing it\r\n* Feature: added TimeoutInterrupt to signal that a run timed out\r\n* Feature: experiments can now be run in Jupyter notebook, but will fail with\r\n           an error by default, which can be 
deactivated using interactive=True\r\n* Feature: allow to pass unparsed commandline string to ``ex.run_commandline``.\r\n* Feature: improved stdout\u002Fstderr capturing: it now also collects non-python\r\n           outputs and logging.\r\n* Feature: observers now share the id of a run and it is available during\r\n           runtime as ``run._id``.\r\n* Feature: new ``--print_config`` flag to always print config first\r\n* Feature: added sacred.SETTINGS as a place to configure some of the behaviour\r\n* Feature: ConfigScopes now extract docstrings and line comments and display\r\n           them when calling ``print_config``\r\n* Feature: observers are now run in order of priority (settable)\r\n* Feature: new ``--name=NAME`` option to set the name of experiment for this run\r\n* Feature: the heartbeat event now stores an intermediate result (if set).\r\n* Feature: ENVIRONMENT variables can be captured as part of host info.\r\n* Feature: sped up the applying_lines_and_backfeeds stdout filter. 
(thanks to @remss)\r\n* Feature: adding resources by name (thanks to @d4nst)\r\n* API Change: all times are now in UTC\r\n* API Change: significantly changed the mongoDB layout\r\n* API Change: MongoObserver and FileStorageObserver now use consecutive\r\n              integers as _id\r\n* API Change: the name passed to Experiment is now optional and defaults to the\r\n              name of the file in which it was instantiated.\r\n              (The name is still required for interactive mode)\r\n* API Change: Artifacts can now be named, and are stored by the observers under\r\n              that name.\r\n* API Change: Experiment.run_command is deprecated in favor of run, which now\r\n              also takes a command_name parameter.\r\n* API Change: Experiment.run now takes an options argument to add\r\n              commandline-options also from python.\r\n* API Change: Experiment.get_experiment_info() now returns source-names as\r\n              relative paths and includes a separate base_dir entry\r\n* Dependencies: Migrated from six to future, to avoid conflicts with old\r\n                preinstalled versions of six.\r\n* Bugfix: fixed a problem when trying  to set the loglevel to DEBUG\r\n* Bugfix: type conversions from None to some other type are now correctly ignored\r\n* Bugfix: fixed a problem with stdout capturing breaking tools that access\r\n          certain attributes of ``sys.stdout`` or ``sys.stderr``.\r\n* Bugfix: @main, @automain, @command and @capture now support functions with\r\n           Python3 style annotations.\r\n* Bugfix: fixed a problem with config-docs from ingredients not being propagated\r\n* Bugfix: fixed setting seed to 0 being ignored\r\n\r\n","2017-05-07T23:08:03",{"id":214,"version":215,"summary_zh":216,"released_at":217},81369,"0.6.10","A minor release to incorporate a few bugfixes and minor features before the upcoming big 0.7 release\n- Bugfix: fixed a problem when trying  to set the loglevel to DEBUG\n- Bugfix: fixed a 
random crash of the heartbeat thread (see #101).\n- Feature: added --force\u002F-f option to disable errors and warnings concerning\n         suspicious changes. (thanks to Yannic Kilcher)\n- Feature: experiments can now be run in Jupyter notebook, but will fail with\n         an error by default, which can be deactivated using interactive=True\n- Feature: added support for adding a captured out filter, and a filter that\n         applies backspaces and linefeeds before saving like a terminal\n         would. (thanks to Kevin McGuinness)\n","2016-08-08T13:45:57",{"id":219,"version":220,"summary_zh":221,"released_at":222},81370,"0.6.8","## 0.6.8 (2016-01-14)\n- Feature: Added automatic conversion of `pandas` datastructures in the\n         custom info dict to json-format in the MongoObserver.\n- Feature: Fail if a new config entry is added but it is not used anywhere\n- Feature: Added a warning if no observers were added to the experiment.\n         Added also an `unobserved` keyword to commands and a\n         `--unobserved` commandline option to silence that warning\n- Feature: Split the debug flag `-d` into two flags: `-d` now only disables\n         stacktrace filtering, while `-D` adds post-mortem debugging.\n- API change: renamed `named_configs_to_use` kwarg in `ex.run_command`\n            method to `named_configs`\n- API change: changed the automatic conversion of numpy arrays in the\n            MongoObserver from pickle to human readable nested lists.\n- Bugfix: Fixed a problem with debugging experiments.\n- Bugfix: Fixed a problem with numpy datatypes in the configuration\n- Bugfix: More helpful error messages when using `return` or `yield` in a\n        config scope\n- Bugfix: Be more helpful when using -m\u002F--mongo_db and pymongo is not installed\n","2016-01-13T18:59:56"]