[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-minimaxir--automl-gs":3,"tool-minimaxir--automl-gs":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":23,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":105,"github_topics":106,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":139},1324,"minimaxir\u002Fautoml-gs","automl-gs","Provide an input CSV and a target field to predict, generate a model + code to run it.","automl-gs 是一款“零代码”自动机器学习小助手。只需给它一份 CSV 文件并指出想预测的列，它就能自动完成数据清洗、特征工程、模型选择与调参，最终生成可直接运行的 Python 脚本和训练好的高性能模型。你无需再为日期格式、缺失值、类别编码或超参数搜索头疼，也不必担心平台锁定——生成的代码完全开源、可读、可改，随时能嵌入现有系统或继续迭代。\n\n它特别适合统计背景不深的数据分析师、产品经理、开发工程师，以及希望快速拿到 baseline 的研究人员。亮点包括：支持 TensorFlow 和 XGBoost，未来还将加入 LightGBM、CatBoost；可一键在 Google Colab 免费 TPU 上训练；每步流程拆成带文档的函数，方便二次开发；训练进度实时可见，结果以 CSV 详细记录，随时中断也能续跑。","# automl-gs\n\n![console gif](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_17d8afeb7031.gif)\n\nGive an input CSV file and a target field you want to predict to automl-gs, and get a trained high-performing machine learning or deep learning model plus native Python code 
pipelines allowing you to integrate that model into any prediction workflow. No black box: you can see *exactly* how the data is processed, how the model is constructed, and you can make tweaks as necessary.\n\n![demo output](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_873694f58df6.png)\n\nautoml-gs is an AutoML tool which, unlike Microsoft's [NNI](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni), Uber's [Ludwig](https:\u002F\u002Fgithub.com\u002Fuber\u002Fludwig), and [TPOT](https:\u002F\u002Fgithub.com\u002FEpistasisLab\u002Ftpot), offers a *zero code\u002Fmodel definition interface* to getting an optimized model and data transformation pipeline in multiple popular ML\u002FDL frameworks, with minimal Python dependencies (pandas + scikit-learn + your framework of choice). automl-gs is designed for citizen data scientists and engineers without a deep statistical background under the philosophy that you don't need to know any modern data preprocessing and machine learning engineering techniques to create a powerful prediction workflow.\n\nNowadays, the cost of computing many different models and hyperparameters is much lower than the opportunity cost of a data scientist's time. automl-gs is a Python 3 module designed to abstract away the common approaches to transforming tabular data, architecting machine learning\u002Fdeep learning models, and performing random hyperparameter searches to identify the best-performing model. This allows data scientists and researchers to better utilize their time on model performance optimization.\n\n* Generates native Python code; no platform lock-in, and no need to use automl-gs after the model script is created.\n* Train model configurations super-fast *for free* using a **TPU** and TensorFlow in Google Colaboratory. 
(in Beta: you can access the Colaboratory notebook [here](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1sbF8cqnOsdzN9Bdt74eER5s_xXcdvatV)).\n* Handles messy datasets that normally require manual intervention, such as datetime\u002Fcategorical encoding and spaced\u002Fparenthesized column names.\n* Each part of the generated model pipeline is its own function w\u002F docstrings, making it much easier to integrate into production workflows.\n* Extremely detailed metrics reporting for every trial stored in a tidy CSV, allowing you to identify and visualize model strengths and weaknesses.\n* Correct serialization of data pipeline encoders on disk (i.e. no pickled Python objects!)\n* Retrain the generated model on new data without making any code\u002Fpipeline changes.\n* Quit the hyperparameter search at any time, as the results are saved after each trial.\n* Training progress bars with ETAs for both the overall experiment and per-epoch during the experiment.\n\nThe models generated by automl-gs are intended to give a very strong *baseline* for solving a given problem; they're not the end-all-be-all that often accompanies the AutoML hype, but the resulting code is easily tweakable to improve from the baseline.\n\nYou can view the hyperparameters and their values [here](automl_gs\u002Fhyperparameters.yml), and the metrics that can be optimized [here](automl_gs\u002Fmetrics.yml). 
Some of the more controversial design decisions for the generated models are noted in [DESIGN.md](DESIGN.md).\n\n## Framework Support\n\nCurrently automl-gs supports the generation of models for regression and classification problems using the following Python frameworks:\n\n* TensorFlow (via `tf.keras`) | `tensorflow`\n* XGBoost (w\u002F histogram binning) | `xgboost`\n\nTo be implemented:\n\n* Catboost | `catboost`\n* LightGBM | `lightgbm`\n\n## How to Use\n\nautoml-gs can be installed [via pip](https:\u002F\u002Fpypi.org\u002Fproject\u002Fautoml_gs\u002F):\n\n```shell\npip3 install automl_gs\n```\n\nYou will also need to install the corresponding ML\u002FDL framework (e.g. `tensorflow`\u002F`tensorflow-gpu` for TensorFlow, `xgboost` for xgboost, etc.)\n\nAfter that, you can run it directly from the command line. For example, with the [famous Titanic dataset](http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Farchive\u002Fcs\u002Fcs109\u002Fcs109.1166\u002Fproblem12.html):\n\n```shell\nautoml_gs titanic.csv Survived\n```\n\nIf you want to use a different framework or configure the training, you can do it with flags:\n\n```shell\nautoml_gs titanic.csv Survived --framework xgboost --num_trials 1000\n```\n\nYou may also invoke automl-gs directly from Python. (e.g. via a Jupyter Notebook)\n\n```python\nfrom automl_gs import automl_grid_search\n\nautoml_grid_search('titanic.csv', 'Survived')\n```\n\nThe output of the automl-gs training is:\n\n* A timestamped folder (e.g. 
`automl_tensorflow_20190317_020434`) which contains:\n  * `model.py`: The generated model file.\n  * `pipeline.py`: The generated pipeline file.\n  * `requirements.txt`: The generated requirements file.\n  * `\u002Fencoders`: A folder containing JSON-serialized encoder files\n  * `\u002Fmetadata`: A folder containing training statistics + other cool stuff not yet implemented!\n  * The model itself (format depends on framework)\n* `automl_results.csv`: A CSV containing the training results after each epoch and the hyperparameters used to train at that time.\n\nOnce the training is done, you can run the generated files from the command line within the generated folder above.\n\nTo predict:\n\n```shell\npython3 model.py -d data.csv -m predict\n```\n\nTo retrain the model on new data:\n\n```shell\npython3 model.py -d data.csv -m train\n```\n\n## CLI Arguments\u002FFunction Parameters\n\nYou can view these at any time by running `automl_gs -h` in the command line.\n\n* `csv_path`: Path to the CSV file (must be in the current directory) [Required]\n* `target_field`: Target field to predict [Required]\n* `target_metric`: Target metric to optimize [Default: Automatically determined depending on problem type]\n* `framework`: Machine learning framework to use [Default: 'tensorflow']\n* `model_name`: Name of the model (if you want to train models with different names) [Default: 'automl']\n* `num_trials`: Number of trials \u002F different hyperparameter combos to test. [Default: 100]\n* `split`: Train-validation split when training the models [Default: 0.7]\n* `num_epochs`: Number of epochs \u002F passes through the data when training the models. [Default: 20]\n* `col_types`: Dictionary of fields:data types to use to override automl-gs's guesses. 
(only when using in Python) [Default: {}]\n* `gpu`: For non-TensorFlow frameworks and Pascal-or-later GPUs, boolean to determine whether to use GPU-optimized training methods (TensorFlow can detect it automatically) [Default: False]\n* `tpu_address`: For TensorFlow, hardware address of the TPU on the system. [Default: None]\n\n## Examples\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_a8bffdc30d89.png)\n\nFor a quick Hello World on how to use automl-gs, see [this Jupyter Notebook](docs\u002Fautoml_gs_tutorial.ipynb).\n\nDue to the size of some examples w\u002F generated code and accompanying data visualizations, they are maintained in a [separate repository](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs-examples). (They also explain why there are two distinct \"levels\" in the example viz above!)\n\n## How automl-gs Works\n\nTL;DR: automl-gs generates raw Python code using Jinja templates and trains a model using the generated code in a subprocess: repeat using different hyperparameters until done and save the best model.\n\nautoml-gs loads a given CSV and infers the data type of each column to be fed into the model. Then it tries an ETL strategy for each column field as determined by the hyperparameters; for example, a Datetime field has its `hour` and `dayofweek` binary-encoded by default, but hyperparameters may dictate the encoding of `month` and `year` as additional model fields. ETL strategies are optimized for frameworks; TensorFlow for example will use text embeddings, while other frameworks will use CountVectorizers to encode text (when training, TensorFlow will also use a shared text encoder via Keras's functional API). automl-gs then creates a statistical model with the specified framework. Both the model ETL functions and model construction functions are saved as a generated Python script.\n\nautoml-gs then runs the generated training script as if it were a typical user. 
Once the model is trained, automl-gs saves the training results in its own CSV, along with all the hyperparameters used to train the model. automl-gs then repeats the task with another set of hyperparameters, until the specified number of trials is hit or the user kills the script.\n\nThe best model Python script is kept after each trial, which can then easily be integrated into other scripts, or run directly to get the prediction results on a new dataset.\n\n## Helpful Notes\n\n* *It is the user's responsibility to ensure the input dataset is high-quality.* No model hyperparameter search will provide good results on flawed\u002Funbalanced datasets. Relatedly, hyperparameter optimization may provide optimistic predictions on the validation set, which may not necessarily match the model performance in the real world.\n* *A neural network approach alone may not necessarily be the best approach*. Try using `xgboost`. The results may surprise you!\n* *automl-gs is only attempting to solve tabular data problems.* If you have a more complicated problem to solve (e.g. predicting a sequence of outputs), I recommend using Microsoft's [NNI](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) and Uber's [Ludwig](https:\u002F\u002Fgithub.com\u002Fuber\u002Fludwig) as noted in the introduction.\n\n## Known Issues\n\n* Issues when using Anaconda ([#8](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F8)). 
Use an installed Python if possible.\n* Issues when using Windows ([#13](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F13))\n* Issues when a field name in the input dataset starts with a number ([#18](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F18))\n\n## Future Work\n\nFeature development will continue on automl-gs as long as there is interest in the package.\n\n### Top Priority\n\n* Add more frameworks\n* Results visualization (via `plotnine`)\n* Holiday support for datetimes\n* Remove redundant generated code\n* Native distributed\u002Fhigh level automation support (Polyaxon\u002FKubernetes, Airflow)\n* Image field support (both as a CSV column field, and a special flow mode to take advantage of hyperparameter tuning)\n* PyTorch model code generation.\n\n### Elsework\n\n* Generate script given an explicit set of hyperparameters\n* More hyperparameters.\n* Bayesian hyperparameter search for standalone version.\n* Support for generating model code for R\u002FJulia\n* Tool for generating a Flask\u002FStarlette REST API from a trained model script\n* Allow passing an explicit, pre-defined test set CSV.\n\n## Maintainer\u002FCreator\n\nMax Woolf ([@minimaxir](http:\u002F\u002Fminimaxir.com))\n\n*Max's open-source projects are supported by his [Patreon](https:\u002F\u002Fwww.patreon.com\u002Fminimaxir). 
If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use.*\n\n## License\n\nMIT\n\nThe code generated by automl-gs is unlicensed; the owner of the generated code can decide the license.\n","# automl-gs\n\n![控制台动图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_17d8afeb7031.gif)\n\n只需提供一个输入的CSV文件和您希望预测的目标字段，automl-gs即可为您生成一个训练有素、性能优异的机器学习或深度学习模型，并附带原生的Python代码流水线，使您能够将该模型无缝集成到任何预测工作流中。全程透明：您可以清晰地看到数据是如何被处理的、模型是如何构建的，并且可以根据需要进行调整。\n\n![演示输出](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_873694f58df6.png)\n\nautoml-gs是一款AutoML工具，与微软的[NNI](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni)、优步的[Ludwig](https:\u002F\u002Fgithub.com\u002Fuber\u002Fludwig)以及[TPOT](https:\u002F\u002Fgithub.com\u002FEpistasisLab\u002Ftpot)不同，它提供了一个“零代码\u002F模型定义”的界面，让您能够在多种主流的ML\u002FDL框架中快速获得经过优化的模型及数据转换流水线，同时仅需极少的Python依赖（pandas + scikit-learn + 您选择的框架）。automl-gs专为不具备深厚统计学背景的“公民数据科学家”和工程师而设计，其核心理念是：您无需掌握任何现代的数据预处理与机器学习工程技巧，也能构建出强大的预测工作流。\n\n如今，运行多种模型及其超参数组合的成本已远低于数据科学家时间的机会成本。automl-gs是一个基于Python 3的模块，旨在抽象化常见的表格数据转换方法、机器学习\u002F深度学习模型架构设计以及随机超参数搜索流程，从而帮助您专注于模型性能的优化。\n\n* 生成原生的Python代码；无平台锁定，且在生成模型脚本后无需继续使用automl-gs。\n* 在Google Colaboratory中利用**TPU**和TensorFlow，以极快的速度免费训练模型配置。（处于Beta阶段：您可在此访问Colaboratory笔记本[这里](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1sbF8cqnOsdzN9Bdt74eER5s_xXcdvatV)）。\n* 能够处理通常需要人工干预的混乱数据集，例如日期时间\u002F分类编码以及带有空格或括号的列名。\n* 生成的模型流水线的每个部分都独立为一个函数，并配有文档字符串，因此更容易集成到生产工作流中。\n* 对每次试验都提供极为详尽的指标报告，并以整洁的CSV格式存储，便于您识别并可视化模型的优势与不足。\n* 确保数据流水线编码器在磁盘上的正确序列化（即不使用pickle保存Python对象！）。\n* 无需修改任何代码或流水线，即可在新数据上重新训练生成的模型。\n* 可随时停止超参数搜索，因为每次试验的结果都会被保存。\n* 提供整体实验及每轮迭代的进度条与预计完成时间。\n\nautoml-gs生成的模型旨在为解决特定问题提供一个非常强大的*基准*；它们并非AutoML热潮中常伴随的“终极解决方案”，但生成的代码却易于调整，以便在基准基础上进一步优化。\n\n您可以在[这里](automl_gs\u002Fhyperparameters.yml)查看超参数及其取值，以及可在[这里](automl_gs\u002Fmetrics.yml)查看可优化的指标。此外，一些关于生成模型的更具争议的设计决策已在[DESIGN.md](DESIGN.md)中予以说明。\n\n## 
框架支持\n\n目前，automl-gs支持使用以下Python框架生成回归与分类问题的模型：\n\n* TensorFlow（通过`tf.keras`）| `tensorflow`\n* XGBoost（采用直方图分箱）| `xgboost`\n\n待实现的框架包括：\n* Catboost | `catboost`\n* LightGBM | `lightgbm`\n\n## 使用方法\n\nautoml-gs可通过[pip](https:\u002F\u002Fpypi.org\u002Fproject\u002Fautoml_gs\u002F)安装：\n\n```shell\npip3 install automl_gs\n```\n\n此外，您还需要安装相应的ML\u002FDL框架（例如，TensorFlow需安装`tensorflow`或`tensorflow-gpu`，xgboost需安装`xgboost`等）。\n\n安装完成后，您即可直接在命令行中运行。例如，使用著名的泰坦尼克号数据集（来源：http:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Farchive\u002Fcs\u002Fcs109\u002Fcs109.1166\u002Fproblem12.html）：\n\n```shell\nautoml_gs titanic.csv Survived\n```\n\n如果您希望使用其他框架或自定义训练设置，可以通过添加标志来实现：\n\n```shell\nautoml_gs titanic.csv Survived --framework xgboost --num_trials 1000\n```\n\n您也可以直接从Python调用automl-gs。（例如，在Jupyter Notebook中）\n\n```python\nfrom automl_gs import automl_grid_search\n\nautoml_grid_search('titanic.csv', 'Survived')\n```\n\nautoml-gs训练后的输出包括：\n* 一个带时间戳的文件夹（例如`automl_tensorflow_20190317_020434`），其中包含：\n  * `model.py`：生成的模型文件。\n  * `pipeline.py`：生成的流水线文件。\n  * `requirements.txt`：生成的依赖文件。\n  * `\u002Fencoders`：存放JSON序列化编码器文件的文件夹。\n  * `\u002Fmetadata`：存放训练统计信息及其他尚未实现的酷炫内容的文件夹。\n  * 模型本身（格式取决于所选框架）。\n* `automl_results.csv`：一份CSV文件，记录每次迭代的训练结果以及当时使用的超参数。\n\n训练完成后，您可以在上述生成的文件夹内通过命令行运行生成的文件。\n\n预测时：\n```shell\npython3 model.py -d data.csv -m predict\n```\n\n重新训练模型时：\n```shell\npython3 model.py -d data.csv -m train\n```\n\n## CLI参数\u002F函数参数\n\n您可随时通过在命令行中运行`automl_gs -h`查看这些参数。\n\n* `csv_path`：CSV文件的路径（必须位于当前目录）【必填】\n* `target_field`：目标预测字段【必填】\n* `target_metric`：目标优化指标【默认：根据问题类型自动确定】\n* `framework`：使用的机器学习框架【默认：'tensorflow'】\n* `model_name`：模型名称（若您希望训练不同名称的模型）【默认：'automl'】\n* `num_trials`：要测试的试验次数\u002F不同的超参数组合数量【默认：100】\n* `split`：训练模型时的训练-验证拆分比例【默认：0.7】\n* `num_epochs`：训练模型时的迭代次数\u002F遍历数据的次数【默认：20】\n* `col_types`：字段与数据类型的字典，用于覆盖automl-gs的默认猜测。（仅在Python中使用时生效）【默认：{}】\n* `gpu`：对于非TensorFlow框架及Pascal及以上GPU，布尔值，用于决定是否使用GPU优化的训练方法（TensorFlow可自动检测）【默认：False】\n* 
`tpu_address`：对于TensorFlow，系统中TPU的硬件地址【默认：None】\n\n## 示例\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_readme_a8bffdc30d89.png)\n\n要快速了解如何使用 automl-gs 的“Hello World”示例，请参阅[此 Jupyter Notebook](docs\u002Fautoml_gs_tutorial.ipynb)。\n\n由于某些包含生成代码及相应数据可视化示例的文件较大，因此它们被单独维护在一个[独立的仓库](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs-examples)中。（这也解释了为何上述示例可视化中存在两个不同的“层级”！）\n\n## automl-gs 的工作原理\n\n简而言之：automl-gs 使用 Jinja 模板生成原始 Python 代码，并在子进程中利用该生成的代码训练模型；重复这一过程，采用不同的超参数设置，直至完成并保存最佳模型。\n\nautoml-gs 会加载给定的 CSV 文件，并推断每一列的数据类型以供模型使用。随后，它会根据超参数设定为每列字段尝试相应的 ETL 策略；例如，对于日期时间字段，默认会将其 `hour` 和 `dayofweek` 进行二进制编码，但超参数也可能指定将 `month` 和 `year` 编码为额外的模型特征。ETL 策略针对不同框架进行了优化；例如，TensorFlow 会使用文本嵌入，而其他框架则会使用 CountVectorizer 对文本进行编码（在训练时，TensorFlow 还会通过 Keras 的函数式 API 使用共享的文本编码器）。之后，automl-gs 会基于指定的框架构建统计模型。模型的 ETL 函数与模型构建函数均会被保存为一个生成的 Python 脚本。\n\n随后，automl-gs 会像普通用户一样运行该生成的训练脚本。模型训练完成后，automl-gs 会将训练结果连同用于训练的所有超参数一起保存到其自身的 CSV 文件中。接着，automl-gs 会使用另一组超参数重复这一过程，直到达到指定的试验次数或用户手动终止脚本为止。\n\n每次试验后都会保留最优模型的 Python 脚本，该脚本可轻松集成到其他脚本中，或直接运行以在新数据集上获得预测结果。\n\n## 有用提示\n\n* *确保输入数据集的质量是用户的职责。* 无论何种模型超参数搜索，都无法在存在缺陷或不平衡的数据集上取得良好的研究结果。同样地，超参数优化可能会在验证集上给出过于乐观的预测，而这些预测未必能反映模型在真实环境中的表现。\n* *仅采用神经网络方法并不一定是最优选择。* 建议尝试使用 `xgboost`。其结果可能会令您惊喜！\n* *automl-gs 仅致力于解决表格型数据问题。* 如果您面临更为复杂的问题（例如预测一系列输出），如引言所述，建议使用 Microsoft 的 [NNI](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) 和 Uber 的 [Ludwig](https:\u002F\u002Fgithub.com\u002Fuber\u002Fludwig)。\n\n## 已知问题\n\n* 使用 Anaconda 时出现的问题([#8](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F8))。可以改用已安装的 Python。\n* 在 Windows 上使用时出现的问题([#13](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F13))\n* 当输入数据集中的字段名以数字开头时出现的问题([#18](https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F18))\n\n## 未来工作\n\n只要对该软件包仍有兴趣，automl-gs 的功能开发将持续进行。\n\n### 首要优先级\n\n* 增加更多框架支持\n* 结果可视化（通过 `plotnine`）\n* 日期时间字段的节假日支持\n* 删除冗余的生成代码\n* 
原生分布式\u002F高级自动化支持（Polyaxon\u002FKubernetes、Airflow）\n* 图像字段支持（既作为 CSV 列字段，又提供专门的流程模式以充分利用超参数调优）\n* PyTorch 模型代码生成。\n\n### 其他工作\n\n* 根据明确的超参数集合生成脚本\n* 增加更多超参数\n* 为独立版本提供贝叶斯超参数搜索\n* 支持为 R\u002FJulia 生成模型代码\n* 提供工具，可根据训练好的模型脚本生成 Flask\u002FStarlette REST API\n* 允许传入明确且预先定义的测试集 CSV 文件。\n\n## 维护者\u002F创建者\n\nMax Woolf ([@minimaxir](http:\u002F\u002Fminimaxir.com))\n\n*Max 的开源项目由他的[Patreon](https:\u002F\u002Fwww.patreon.com\u002Fminimaxir)资助。如果您觉得该项目有所帮助，欢迎向 Patreon 提供任何资金支持，这些捐款都将用于富有创意的用途。*\n\n## 许可证\n\nMIT\n\nautoml-gs 生成的代码未附带许可证；生成代码的所有者可自行决定其许可方式。","# automl-gs 快速上手指南\n\n## 环境准备\n- **系统要求**：Linux \u002F macOS（Windows 暂有兼容性问题，建议用 WSL2）\n- **Python**：3.6 及以上\n- **前置依赖**（任选其一即可）\n  - TensorFlow：`pip3 install tensorflow` 或 `tensorflow-gpu`\n  - XGBoost：`pip3 install xgboost`\n\n> 国内用户可临时使用清华镜像加速：  \n> `pip3 install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple automl_gs`\n\n## 安装步骤\n```shell\n# 1. 安装 automl-gs\npip3 install automl_gs\n\n# 2. 按需安装框架（示例：TensorFlow）\npip3 install tensorflow\n```\n\n## 基本使用\n### 命令行一行启动\n```shell\n# 以 Titanic 数据集为例，预测 Survived 列\nautoml_gs titanic.csv Survived\n```\n\n### Python 代码调用\n```python\nfrom automl_gs import automl_grid_search\n\n# 最小示例\nautoml_grid_search('titanic.csv', 'Survived')\n\n# 指定框架与搜索次数\nautoml_grid_search('titanic.csv', 'Survived',\n                   framework='xgboost',\n                   num_trials=1000)\n```\n\n### 训练完成后\n- 结果目录：`automl_tensorflow_时间戳\u002F`\n  - `model.py`：可直接运行的模型脚本\n  - `pipeline.py`：数据预处理脚本\n- 预测新数据：\n  ```shell\n  cd automl_tensorflow_时间戳\n  python3 model.py -d new_data.csv -m predict\n  ```\n- 增量训练：\n  ```shell\n  python3 model.py -d new_data.csv -m train\n  ```","一家 20 人的跨境电商初创公司，运营经理小林每天要从 8 个广告渠道、50 万条投放记录里预测次日 GMV，以便实时调整预算。\n\n### 没有 automl-gs 时\n- 小林得先写脚本清洗日期、货币、渠道名称等脏数据，光这一步就占掉 3 小时。\n- 接着在 Jupyter 里手动试 LightGBM、XGBoost、神经网络，调 20 多组超参数，跑一晚 GPU 只得出 3 个候选模型。\n- 模型效果不稳定，周一训练好的模型周三就掉 8% 准确率，重新训练又得重来一遍。\n- 工程师把模型封装成 API 时，发现预处理代码散落在 4 个 notebook，根本合不进 CI\u002FCD。\n- 
老板临时要看“如果砍掉 Facebook 预算会怎样”，小林只能干瞪眼，因为模型不支持快速重训。\n\n### 使用 automl-gs 后\n- 一条命令 `automl_gs ads.csv GMV --framework tensorflow`，10 分钟自动生成清洗+建模脚本，脏数据自动搞定。\n- 同一晚 GPU 时间，automl-gs 逐组跑完 120 组超参数，直接给出 AUC 0.87 的 tf.keras 模型，附带完整 Python 文件。\n- 生成的 `model.py` 支持增量训练，每天凌晨 2 点定时跑新数据，准确率波动 \u003C1%，无需人工干预。\n- 代码按函数拆分，预处理、模型、后处理各自独立，工程师 30 分钟就把脚本嵌入现有 Flask 服务上线。\n- 老板临时提问时，小林把最新 CSV 扔给 automl-gs，15 分钟后拿到新模型，直接回答“砍掉 Facebook 预算预计 GMV 下降 12%”。\n\nautoml-gs 让没有算法背景的运营经理也能在一天内拥有可迭代的生产级预测系统。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fminimaxir_automl-gs_9ff22a0f.png","minimaxir","Max Woolf","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fminimaxir_3ab20437.jpg","Senior Data Scientist @buzzfeed. Plotter of pretty charts.","@buzzfeed ","San Francisco","max@minimaxir.com",null,"https:\u002F\u002Fminimaxir.com","https:\u002F\u002Fgithub.com\u002Fminimaxir",[86],{"name":87,"color":88,"percentage":89},"Python","#3572A5",100,1869,182,"2026-04-02T08:38:20","MIT","Linux, macOS","可选；TensorFlow 可自动检测 GPU，XGBoost 需 Pascal 或更新架构的 NVIDIA GPU","未说明",{"notes":98,"python":99,"dependencies":100},"Windows 与 Anaconda 环境存在已知兼容性问题，建议使用原生 Python；TPU 支持通过 Google Colab 免费使用；生成代码无平台锁定，可直接脱离 automl-gs 运行","3",[101,102,103,104],"pandas","scikit-learn","tensorflow","xgboost",[13],[107,103,108,104,109,110],"python","keras","machine-learning","automl","2026-03-27T02:49:30.150509","2026-04-06T07:16:07.175514",[114,119,124,129,134],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},6053,"运行时报 YAMLLoadWarning 并干扰进度条，如何解决？","升级到 0.2.1 版本即可修复；如仍有问题，可手动在代码中把 `yaml.load(f)` 改为 `yaml.load(f, Loader=yaml.SafeLoader)`。","https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F19",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},6054,"automl-gs 只能做分类任务吗？回归任务能用吗？","可以用于回归任务，无需额外配置。","https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F29",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},6055,"xgboost 是否支持 
GPU？","已添加 GPU 支持，但需手动启用 `tree_method='gpu_hist'`；注意 Colab 的 K80 GPU 不满足 Pascal 架构要求，会回退到 CPU。","https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F2",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},6052,"automl-gs 中的 \"gs\" 是什么意思？","\"gs\" 代表 grid search（网格搜索）。","https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F21",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},6056,"在 Colab 运行示例时出现 FileNotFoundError: tpu_train\u002Fmetadata\u002Fresults.csv 不存在，怎么办？","这是已知问题，与 Issue #10 相同，建议先检查是否正确上传了 titanic.csv 并确认 `model_name` 参数设置无误。","https:\u002F\u002Fgithub.com\u002Fminimaxir\u002Fautoml-gs\u002Fissues\u002F38",[140,145],{"id":141,"version":142,"summary_zh":143,"released_at":144},105698,"v0.2.1","Resolves Windows support (hopefully) and YAML warnings.\r\n\r\nThanks to all the PRs from the contributors!","2019-04-05T06:51:04",{"id":146,"version":147,"summary_zh":82,"released_at":148},105699,"v0.2","2019-03-26T05:47:21"]