[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Diyago--Tabular-data-generation":3,"tool-Diyago--Tabular-data-generation":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":79,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":92,"env_os":93,"env_gpu":94,"env_ram":93,"env_deps":95,"category_tags":106,"github_topics":107,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":117,"updated_at":118,"faqs":119,"releases":149},2764,"Diyago\u002FTabular-data-generation","Tabular-data-generation","GANs are well known for their success in realistic image generation, but they can also be applied to tabular data generation. 
We will review and examine some recent papers about tabular GANs in action.","TabGAN 是一款专注于生成高质量合成表格数据的开源工具。它旨在解决现实场景中原始数据稀缺、分布不均或涉及隐私敏感无法直接共享的难题，通过算法创造出既保留真实数据统计特征又具备多样性的“虚拟”数据。\n\n这款工具非常适合数据科学家、机器学习工程师以及需要构建测试数据集的研究人员使用。无论是处理金融风控中的不平衡样本，还是为医疗研究生成脱敏数据，TabGAN 都能提供强有力的支持。\n\n其核心亮点在于统一的调用接口，让用户能轻松切换多种前沿生成技术：包括擅长处理混合数据类型的条件表格生成对抗网络（CTGAN）、适用于结构化数据的高保真扩散模型（ForestDiffusion），以及能捕捉语义依赖的大语言模型框架（GReaT）。此外，TabGAN 内置了基于 LightGBM 的对抗过滤机制，自动剔除不符合真实分布的异常样本，确保生成数据的可靠性。它还支持一键对比原始与合成数据的分布差异，并能自动评估不同生成器的效果以推荐最佳方案。对于希望快速验证想法或保护数据隐私的团队来说，TabGAN 是一个高效且灵活的选择。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_150d961e5513.png\" height=\"120\" alt=\"TabGAN logo\">\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">TabGAN\u003C\u002Fh1>\n\u003Cp align=\"center\">\u003Cstrong>High-quality synthetic tabular data generation\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftabgan\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftabgan.svg\" alt=\"PyPI Version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftabgan\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftabgan?v=3.0.2\" alt=\"Python Version\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftabgan\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_3b433e9f0db0.png\" alt=\"Downloads\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\" alt=\"License\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\" alt=\"Code style: 
black\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fdiyago\u002Ftabular-data-generation\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_9ee0cb95ac54.png\" alt=\"CodeFactor\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdiyago\u002FTabular-data-generation\u002Factions\u002Fworkflows\u002Fcodeql.yml\">\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fdiyago\u002FTabular-data-generation\u002Fworkflows\u002FCodeQL\u002Fbadge.svg\" alt=\"CodeQL\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Finsafq-tabgan.hf.space\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Spaces-TabGAN%20Demo-blue\" alt=\"HF Space\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FDiyago\u002FTabular-data-generation\u002Fblob\u002Fmaster\u002Fexamples\u002Ftabgan_examples.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n## Overview\n\nTabGAN provides a unified Python interface for generating synthetic tabular data using multiple state-of-the-art generative approaches:\n\n| Approach | Backend | Strengths |\n|----------|---------|-----------|\n| **GANs** | Conditional Tabular GAN (CTGAN) | Mixed data types, complex multivariate distributions |\n| **Diffusion Models** | ForestDiffusion (tree-based gradient boosting) | High-fidelity generation for structured data |\n| **Large Language Models** | GReaT framework | Capturing semantic dependencies, conditional text generation |\n| **Baseline** | Random sampling with replacement | Quick benchmarking and comparison |\n\nAll generators share a common pipeline: **generate &rarr; post-process &rarr; adversarial filter**, ensuring synthetic data stays close to the real data distribution.\n\n*Based on the paper: [Tabular 
GANs for uneven distribution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00638) (arXiv:2010.00638)*\n\n## Key Features\n\n- **Unified API** &mdash; switch between GANs, diffusion models, and LLMs with a single parameter change\n- **Adversarial filtering** &mdash; built-in LightGBM-based validation keeps synthetic samples distribution-consistent\n- **Mixed data types** &mdash; native handling of continuous, categorical, and free-text columns\n- **Conditional generation** &mdash; generate text conditioned on categorical attributes via LLM prompting\n- **LLM API support** &mdash; integrate with LM Studio, OpenAI, Ollama, or any OpenAI-compatible endpoint\n- **Quality validation** &mdash; compare original and synthetic distributions with a single function call\n- **AutoSynth** &mdash; automatically run all generators, compare quality & privacy, pick the best one\n- **HuggingFace integration** &mdash; synthesize any HF dataset in one call, push results back to Hub\n- **[Live Demo](https:\u002F\u002Finsafq-tabgan.hf.space)** &mdash; try it in browser on HuggingFace Spaces\n\n## Installation\n\n```bash\npip install tabgan\n```\n\n## Quick Start\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom tabgan.sampler import GANGenerator\n\ntrain = pd.DataFrame(np.random.randint(-10, 150, size=(150, 4)), columns=list(\"ABCD\"))\ntarget = pd.DataFrame(np.random.randint(0, 2, size=(150, 1)), columns=list(\"Y\"))\ntest = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list(\"ABCD\"))\n\nnew_train, new_target = GANGenerator().generate_data_pipe(train, target, test)\n```\n\n## Available Generators\n\n| Generator | Description | Best For |\n|-----------|-------------|----------|\n| `GANGenerator` | CTGAN-based generation | General tabular data with mixed types |\n| `ForestDiffusionGenerator` | Diffusion models with tree-based methods | Complex tabular structures |\n| `BayesianGenerator` | Gaussian Copula with marginal preservation | Fast, 
correlation-preserving generation |\n| `LLMGenerator` | Large Language Model based | Semantic dependencies, text columns |\n| `OriginalGenerator` | Baseline random sampler | Benchmarking and comparison |\n\n## API Reference\n\n### Common Parameters\n\nAll generators accept the following parameters:\n\n| Parameter | Type | Default | Description |\n|-----------|------|---------|-------------|\n| `gen_x_times` | `float` | `1.1` | Multiplier for synthetic sample count relative to training size |\n| `cat_cols` | `list` | `None` | Column names to treat as categorical |\n| `bot_filter_quantile` | `float` | `0.001` | Lower quantile for post-processing filters |\n| `top_filter_quantile` | `float` | `0.999` | Upper quantile for post-processing filters |\n| `is_post_process` | `bool` | `True` | Enable quantile-based post-filtering |\n| `pregeneration_frac` | `float` | `2` | Oversampling factor before filtering |\n| `only_generated_data` | `bool` | `False` | Return only synthetic rows (exclude originals) |\n| `gen_params` | `dict` | See below | Generator-specific hyperparameters |\n\n### Generator-Specific Parameters (`gen_params`)\n\n**GANGenerator:**\n```python\n{\"batch_size\": 500, \"patience\": 25, \"epochs\": 500}\n```\n\n**LLMGenerator:**\n```python\n{\"batch_size\": 32, \"epochs\": 4, \"llm\": \"distilgpt2\", \"max_length\": 500}\n```\n\n### `generate_data_pipe` Method\n\n```python\nnew_train, new_target = generator.generate_data_pipe(\n    train_df,           # pd.DataFrame - training features\n    target,             # pd.DataFrame - target variable (or None)\n    test_df,            # pd.DataFrame - test features for distribution alignment\n    deep_copy=True,     # bool - copy input DataFrames\n    only_adversarial=False,  # bool - skip generation, only filter\n    use_adversarial=True,    # bool - enable adversarial filtering\n)\n```\n\n**Returns:** `Tuple[pd.DataFrame, pd.DataFrame]` &mdash; `(new_train, new_target)`\n\n## Data Format\n\nTabGAN accepts 
`pandas.DataFrame` inputs with:\n\n- **Continuous columns** &mdash; any real-valued numerical data\n- **Categorical columns** &mdash; discrete columns with a finite set of values\n\n> **Note:** TabGAN processes values as floating-point internally. Apply rounding after generation for integer-valued outputs.\n\n## Examples\n\n### Basic Usage with All Generators\n\n```python\nfrom tabgan.sampler import (\n    OriginalGenerator, GANGenerator, ForestDiffusionGenerator,\n    BayesianGenerator, LLMGenerator,\n)\nimport pandas as pd\nimport numpy as np\n\ntrain = pd.DataFrame(np.random.randint(-10, 150, size=(150, 4)), columns=list(\"ABCD\"))\ntarget = pd.DataFrame(np.random.randint(0, 2, size=(150, 1)), columns=list(\"Y\"))\ntest = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list(\"ABCD\"))\n\nnew_train1, new_target1 = OriginalGenerator().generate_data_pipe(train, target, test)\nnew_train2, new_target2 = GANGenerator(\n    gen_params={\"batch_size\": 500, \"epochs\": 10, \"patience\": 5}\n).generate_data_pipe(train, target, test)\nnew_train3, new_target3 = ForestDiffusionGenerator().generate_data_pipe(train, target, test)\nnew_train4, new_target4 = BayesianGenerator().generate_data_pipe(train, target, test)\nnew_train5, new_target5 = LLMGenerator(\n    gen_params={\"batch_size\": 32, \"epochs\": 4, \"llm\": \"distilgpt2\", \"max_length\": 500}\n).generate_data_pipe(train, target, test)\n```\n\n### Full Parameter Example\n\n```python\nnew_train, new_target = GANGenerator(\n    gen_x_times=1.1,\n    cat_cols=None,\n    bot_filter_quantile=0.001,\n    top_filter_quantile=0.999,\n    is_post_process=True,\n    adversarial_model_params={\n        \"metrics\": \"AUC\", \"max_depth\": 2, \"max_bin\": 100,\n        \"learning_rate\": 0.02, \"random_state\": 42, \"n_estimators\": 100,\n    },\n    pregeneration_frac=2,\n    only_generated_data=False,\n    gen_params={\"batch_size\": 500, \"patience\": 25, \"epochs\": 500},\n).generate_data_pipe(\n    train, 
target, test,\n    deep_copy=True,\n    only_adversarial=False,\n    use_adversarial=True,\n)\n```\n\n### LLM Conditional Text Generation\n\nGenerate synthetic rows with novel text values conditioned on categorical attributes:\n\n```python\nimport pandas as pd\nfrom tabgan.sampler import LLMGenerator\n\ntrain = pd.DataFrame({\n    \"Name\": [\"Anna\", \"Maria\", \"Ivan\", \"Sergey\", \"Olga\", \"Boris\"],\n    \"Gender\": [\"F\", \"F\", \"M\", \"M\", \"F\", \"M\"],\n    \"Age\": [25, 30, 35, 40, 28, 32],\n    \"Occupation\": [\"Engineer\", \"Doctor\", \"Artist\", \"Teacher\", \"Manager\", \"Pilot\"],\n})\n\nnew_train, _ = LLMGenerator(\n    gen_x_times=1.5,\n    text_generating_columns=[\"Name\"],      # columns to generate novel text for\n    conditional_columns=[\"Gender\"],         # columns that condition text generation\n    gen_params={\"batch_size\": 32, \"epochs\": 4, \"llm\": \"distilgpt2\", \"max_length\": 500},\n    is_post_process=False,\n).generate_data_pipe(train, target=None, test_df=None, only_generated_data=True)\n```\n\n**How it works:**\n1. Sample conditional column values from their empirical distributions\n2. Impute remaining non-text columns using the fitted GReaT model\n3. Generate novel text via prompt-based generation\n4. 
Ensure generated text values differ from the original data\n\n### LLM API-Based Text Generation\n\nUse external LLM APIs (LM Studio, OpenAI, Ollama) instead of local models:\n\n```python\nimport pandas as pd\nfrom tabgan.sampler import LLMGenerator\nfrom tabgan.llm_config import LLMAPIConfig\n\ntrain = pd.DataFrame({\n    \"Name\": [\"Anna\", \"Maria\", \"Ivan\", \"Sergey\", \"Olga\", \"Boris\"],\n    \"Gender\": [\"F\", \"F\", \"M\", \"M\", \"F\", \"M\"],\n    \"Age\": [25, 30, 35, 40, 28, 32],\n    \"Occupation\": [\"Engineer\", \"Doctor\", \"Artist\", \"Teacher\", \"Manager\", \"Pilot\"],\n})\n\n# LM Studio\napi_config = LLMAPIConfig.from_lm_studio(\n    base_url=\"http:\u002F\u002Flocalhost:1234\",\n    model=\"google\u002Fgemma-3-12b\",\n    timeout=90,\n)\n\n# Or OpenAI:  LLMAPIConfig.from_openai(api_key=\"...\", model=\"gpt-4\")\n# Or Ollama:  LLMAPIConfig.from_ollama(model=\"llama3\")\n\nnew_train, _ = LLMGenerator(\n    gen_x_times=1.5,\n    text_generating_columns=[\"Name\"],\n    conditional_columns=[\"Gender\"],\n    gen_params={\"batch_size\": 32, \"epochs\": 4, \"llm\": \"distilgpt2\", \"max_length\": 500},\n    llm_api_config=api_config,\n    is_post_process=False,\n).generate_data_pipe(train, target=None, test_df=None, only_generated_data=True)\n```\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>LLM API Configuration Options\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n| Parameter | Type | Default | Description |\n|-----------|------|---------|-------------|\n| `base_url` | `str` | `\"http:\u002F\u002Flocalhost:1234\"` | API server base URL |\n| `model` | `str` | `\"google\u002Fgemma-3-12b\"` | Model identifier |\n| `api_key` | `str` | `None` | API key for authentication |\n| `timeout` | `int` | `90` | Request timeout in seconds |\n| `max_tokens` | `int` | `256` | Maximum tokens to generate |\n| `temperature` | `float` | `0.7` | Sampling temperature |\n| `system_prompt` | `str` | `None` | System prompt for generation |\n\n**Testing the 
connection:**\n\n```python\nfrom tabgan.llm_config import LLMAPIConfig\nfrom tabgan.llm_api_client import LLMAPIClient\n\nconfig = LLMAPIConfig.from_lm_studio()\nwith LLMAPIClient(config) as client:\n    print(f\"API available: {client.check_connection()}\")\n    print(f\"Generated: {client.generate('Generate a female name: ')}\")\n```\n\n\u003C\u002Fdetails>\n\n### Improving Model Performance\n\n```python\nimport sklearn\nimport pandas as pd\nfrom tabgan.sampler import GANGenerator\n\ndef evaluate(clf, X_train, y_train, X_test, y_test):\n    clf.fit(X_train, y_train)\n    return sklearn.metrics.roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])\n\ndataset = sklearn.datasets.load_breast_cancer()\nclf = sklearn.ensemble.RandomForestClassifier(n_estimators=25, max_depth=6)\nX_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(\n    pd.DataFrame(dataset.data),\n    pd.DataFrame(dataset.target, columns=[\"target\"]),\n    test_size=0.33, random_state=42,\n)\n\nprint(\"Baseline:\", evaluate(clf, X_train, y_train, X_test, y_test))\n\nnew_train, new_target = GANGenerator().generate_data_pipe(X_train, y_train, X_test)\nprint(\"With GAN:\", evaluate(clf, new_train, new_target, X_test, y_test))\n```\n\n### Time-Series Data Generation\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom tabgan.utils import get_year_mnth_dt_from_date, collect_dates\nfrom tabgan.sampler import GANGenerator\n\ntrain = pd.DataFrame(np.random.randint(-10, 150, size=(100, 4)), columns=list(\"ABCD\"))\nmin_date, max_date = pd.to_datetime(\"2019-01-01\"), pd.to_datetime(\"2021-12-31\")\nd = (max_date - min_date).days + 1\ntrain[\"Date\"] = min_date + pd.to_timedelta(np.random.randint(d, size=100), unit=\"d\")\ntrain = get_year_mnth_dt_from_date(train, \"Date\")\n\nnew_train, _ = GANGenerator(\n    gen_x_times=1.1, cat_cols=[\"year\"],\n    bot_filter_quantile=0.001, top_filter_quantile=0.999,\n    is_post_process=True, 
pregeneration_frac=2,\n).generate_data_pipe(train.drop(\"Date\", axis=1), None, train.drop(\"Date\", axis=1))\n\nnew_train = collect_dates(new_train)\n```\n\n## Quality Report\n\nGenerate a self-contained HTML report comparing original and synthetic data across multiple quality axes: column statistics, PSI, correlation heatmaps, distribution plots, and ML utility (TSTR vs TRTR).\n\n```python\nfrom tabgan import QualityReport\n\nreport = QualityReport(\n    original_df, synthetic_df,\n    cat_cols=[\"gender\"],\n    target_col=\"target\",      # enables ML utility evaluation\n).compute()\n\n# Export to a single HTML file (charts embedded as base64)\nreport.to_html(\"quality_report.html\")\n\n# Or access metrics programmatically\nsummary = report.summary()\nprint(f\"Overall score: {summary['overall_score']}\")\nprint(f\"Mean PSI: {summary['psi']['mean']}\")\nprint(f\"ML utility ratio: {summary['ml_utility']['utility_ratio']}\")\n```\n\nFor a quick comparison without the full report:\n\n```python\nfrom tabgan.utils import compare_dataframes\n\nscore = compare_dataframes(original_df, generated_df)  # 0.0 (poor) to 1.0 (excellent)\n```\n\n## Constraints\n\nEnforce business rules on generated data. 
Constraints are applied as a post-generation step — invalid rows are repaired or filtered out.\n\n```python\nfrom tabgan import GANGenerator, RangeConstraint, UniqueConstraint, FormulaConstraint, RegexConstraint\n\nnew_train, new_target = GANGenerator(gen_x_times=1.5).generate_data_pipe(\n    train, target, test,\n    constraints=[\n        RangeConstraint(\"age\", min_val=0, max_val=120),\n        UniqueConstraint(\"email\"),\n        FormulaConstraint(\"end_date > start_date\"),\n        RegexConstraint(\"zip_code\", r\"\\d{5}\"),\n    ],\n)\n```\n\n**Available constraints:**\n\n| Constraint | Description | Fix strategy |\n|------------|-------------|--------------|\n| `RangeConstraint` | Numeric values within `[min, max]` | Clips values to bounds |\n| `UniqueConstraint` | No duplicate values in a column | Drops duplicate rows |\n| `FormulaConstraint` | Boolean expression via `df.eval()` | Filters violating rows |\n| `RegexConstraint` | String values match a regex pattern | Filters non-matching rows |\n\nThe `ConstraintEngine` supports two strategies: `\"fix\"` (repair then filter) and `\"filter\"` (drop violations only):\n\n```python\nfrom tabgan import ConstraintEngine, RangeConstraint\n\nengine = ConstraintEngine(\n    constraints=[RangeConstraint(\"price\", min_val=0)],\n    strategy=\"fix\",  # or \"filter\"\n)\ncleaned_df = engine.apply(generated_df)\n```\n\n## Privacy Metrics\n\nAssess re-identification risk of synthetic data before sharing. 
Includes Distance to Closest Record (DCR), Nearest Neighbor Distance Ratio (NNDR), and membership inference risk.\n\n```python\nfrom tabgan import PrivacyMetrics\n\npm = PrivacyMetrics(original_df, synthetic_df, cat_cols=[\"gender\"])\nsummary = pm.summary()\n\nprint(f\"Overall privacy score: {summary['overall_privacy_score']}\")  # 0 (risky) to 1 (private)\nprint(f\"DCR mean: {summary['dcr']['mean']}\")\nprint(f\"NNDR mean: {summary['nndr']['mean']}\")\nprint(f\"Membership inference AUC: {summary['membership_inference']['auc']}\")  # closer to 0.5 = better\n```\n\n**Metrics explained:**\n\n| Metric | What it measures | Good value |\n|--------|-----------------|------------|\n| **DCR** | Distance from each synthetic row to nearest real row | Higher = more private |\n| **NNDR** | Ratio of 1st\u002F2nd nearest neighbor distances | Closer to 1.0 |\n| **MI AUC** | Can a classifier tell if a record was in training data? | Closer to 0.5 |\n\n## sklearn Pipeline Integration\n\nUse `TabGANTransformer` to insert synthetic data augmentation into an sklearn `Pipeline`:\n\n```python\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom tabgan import TabGANTransformer\n\npipe = Pipeline([\n    (\"augment\", TabGANTransformer(gen_x_times=1.5, cat_cols=[\"gender\"])),\n    (\"model\", RandomForestClassifier()),\n])\n\n# fit() generates synthetic data and trains the model on augmented data\npipe.fit(X_train, y_train)\n```\n\nWorks with any generator and supports constraints:\n\n```python\nfrom tabgan import TabGANTransformer, GANGenerator, RangeConstraint\n\ntransformer = TabGANTransformer(\n    generator_class=GANGenerator,\n    gen_x_times=2.0,\n    gen_params={\"batch_size\": 500, \"epochs\": 10, \"patience\": 5},\n    constraints=[RangeConstraint(\"age\", min_val=0, max_val=120)],\n)\n\nX_augmented = transformer.fit_transform(X_train, y_train)\ny_augmented = transformer.get_augmented_target()\n```\n\n## AutoSynth\n\nDon't know which 
generator works best for your data? **AutoSynth** runs all of them and picks the winner based on quality and privacy scores:\n\n```python\nfrom tabgan import AutoSynth\n\nresult = AutoSynth(df, target_col=\"label\").run()\n\nprint(result.report)\n#   Generator          Status  Score  Quality  Privacy  Rows  Time (s)\n# 0 GAN (CTGAN)        OK      0.847  0.891    0.743    165   12.3\n# 1 Forest Diffusion   OK      0.812  0.834    0.761    165   45.1\n# 2 Random Baseline    OK      0.654  0.621    0.732    165   0.1\n\nbest_synthetic = result.best_data\nprint(f\"Winner: {result.best_name}\")\n```\n\nCustomize scoring weights:\n\n```python\nresult = AutoSynth(\n    df,\n    target_col=\"label\",\n    quality_weight=0.5,   # equal weight\n    privacy_weight=0.5,\n).run()\n```\n\n## HuggingFace Hub Integration\n\nSynthesize any tabular dataset from HuggingFace Hub in one call:\n\n```python\nfrom tabgan import synthesize_hf_dataset\n\n# Load → Generate → Evaluate automatically\nresult = synthesize_hf_dataset(\"scikit-learn\u002Firis\", target_col=\"target\")\nprint(result.synthetic_df.head())\nprint(f\"Quality: {result.quality_summary['overall_score']}\")\n\n# Push synthetic dataset back to Hub\nresult = synthesize_hf_dataset(\n    \"scikit-learn\u002Firis\",\n    target_col=\"target\",\n    push_to_hub=True,\n    hub_repo_id=\"your-username\u002Firis-synthetic\",\n)\n```\n\n## Command-Line Interface\n\n```bash\ntabgan-generate \\\n    --input-csv train.csv \\\n    --target-col target \\\n    --generator gan \\\n    --gen-x-times 1.5 \\\n    --cat-cols year,gender \\\n    --output-csv synthetic_train.csv\n```\n\n## Pipeline Architecture\n\n![Experiment design and workflow](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_aff00f01663b.png)\n\n```\nInput (train_df, target, test_df)\n  |\n  v\n[Preprocess] --> Validate DataFrames, prepare columns\n  |\n  v\n[Generate]  --> CTGAN \u002F ForestDiffusion \u002F GReaT LLM \u002F 
Random sampling\n  |\n  v\n[Post-process] --> Quantile-based filtering against test distribution\n  |\n  v\n[Adversarial Filter] --> LightGBM classifier removes dissimilar samples\n  |\n  v\nOutput (synthetic_df, synthetic_target)\n```\n\n## Benchmark Results\n\nNormalized ROC AUC scores (higher is better):\n\n| Dataset | No augmentation | GAN | Sample Original |\n|---------|:-:|:-:|:-:|\n| credit | 0.997 | **0.998** | 0.997 |\n| employee | **0.986** | 0.966 | 0.972 |\n| mortgages | 0.984 | 0.964 | **0.988** |\n| poverty_A | 0.937 | **0.950** | 0.933 |\n| taxi | 0.966 | 0.938 | **0.987** |\n| adult | 0.995 | 0.967 | **0.998** |\n\n## Citation\n\n```bibtex\n@misc{ashrapov2020tabular,\n    title={Tabular GANs for uneven distribution},\n    author={Insaf Ashrapov},\n    year={2020},\n    eprint={2010.00638},\n    archivePrefix={arXiv},\n    primaryClass={cs.LG}\n}\n```\n\n## References\n\n1. Xu, L., & Veeramachaneni, K. (2018). *Synthesizing Tabular Data using Generative Adversarial Networks*. arXiv:1811.11264.\n2. Jolicoeur-Martineau, A., Fatras, K., & Kachman, T. (2023). *Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees*. SamsungSAILMontreal\u002FForestDiffusion.\n3. Xu, L., Skoularidou, M., Cuesta-Infante, A., & Veeramachaneni, K. (2019). *Modeling Tabular data using Conditional GAN*. NeurIPS.\n4. Borisov, V., Sessler, K., Leemann, T., Pawelczyk, M., & Kasneci, G. (2023). *Language Models are Realistic Tabular Data Generators*. 
ICLR.\n\n## License\n\nApache License 2.0 &mdash; see [LICENSE](LICENSE) for details.\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_150d961e5513.png\" height=\"120\" alt=\"TabGAN标志\">\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">TabGAN\u003C\u002Fh1>\n\u003Cp align=\"center\">\u003Cstrong>高质量合成表格数据生成\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftabgan\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Ftabgan.svg\" alt=\"PyPI版本\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Ftabgan\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftabgan?v=3.0.2\" alt=\"Python版本\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftabgan\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_3b433e9f0db0.png\" alt=\"下载量\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg\" alt=\"许可证\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg\" alt=\"代码风格：black\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fwww.codefactor.io\u002Frepository\u002Fgithub\u002Fdiyago\u002Ftabular-data-generation\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDiyago_Tabular-data-generation_readme_9ee0cb95ac54.png\" alt=\"CodeFactor\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdiyago\u002FTabular-data-generation\u002Factions\u002Fworkflows\u002Fcodeql.yml\">\u003Cimg 
src=\"https:\u002F\u002Fgithub.com\u002Fdiyago\u002FTabular-data-generation\u002Fworkflows\u002FCodeQL\u002Fbadge.svg\" alt=\"CodeQL\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Finsafq-tabgan.hf.space\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Spaces-TabGAN%20Demo-blue\" alt=\"HF Space\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FDiyago\u002FTabular-data-generation\u002Fblob\u002Fmaster\u002Fexamples\u002Ftabgan_examples.ipynb\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在Colab中打开\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n## 概述\n\nTabGAN 提供了一个统一的 Python 接口，用于使用多种最先进的生成方法生成合成表格数据：\n\n| 方法 | 后端 | 优势 |\n|----------|---------|-----------|\n| **GANs** | 条件表格 GAN (CTGAN) | 混合数据类型，复杂多变量分布 |\n| **扩散模型** | ForestDiffusion（基于树的梯度提升） | 针对结构化数据的高保真生成 |\n| **大型语言模型** | GReaT 框架 | 捕捉语义依赖关系，条件文本生成 |\n| **基准** | 带重置的随机采样 | 快速基准测试和比较 |\n\n所有生成器都共享一个通用流程：**生成 &rarr; 后处理 &rarr; 对抗性过滤**，确保合成数据与真实数据分布保持一致。\n\n*基于论文：[不均衡分布下的表格 GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00638) (arXiv:2010.00638)*\n\n## 核心特性\n\n- **统一 API** &mdash; 通过更改单个参数即可在 GAN、扩散模型和 LLM 之间切换\n- **对抗性过滤** &mdash; 内置基于 LightGBM 的验证机制，使合成样本分布一致\n- **混合数据类型** &mdash; 原生支持连续、分类和自由文本列\n- **条件生成** &mdash; 通过 LLM 提示生成基于分类属性的文本\n- **LLM API 支持** &mdash; 可与 LM Studio、OpenAI、Ollama 或任何兼容 OpenAI 的端点集成\n- **质量验证** &mdash; 通过一次函数调用即可比较原始和合成分布\n- **AutoSynth** &mdash; 自动运行所有生成器，比较质量和隐私，选择最佳方案\n- **HuggingFace 集成** &mdash; 一键合成任意 HF 数据集，并将结果推回 Hub\n- **[在线演示](https:\u002F\u002Finsafq-tabgan.hf.space)** &mdash; 在 HuggingFace Spaces 中浏览器体验\n\n## 安装\n\n```bash\npip install tabgan\n```\n\n## 快速入门\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom tabgan.sampler import GANGenerator\n\ntrain = pd.DataFrame(np.random.randint(-10, 150, size=(150, 4)), columns=list(\"ABCD\"))\ntarget = pd.DataFrame(np.random.randint(0, 2, size=(150, 1)), columns=list(\"Y\"))\ntest = 
## Available Generators

| Generator | Description | Best for |
|-----------|-------------|----------|
| `GANGenerator` | CTGAN-based generation | General tabular data with mixed types |
| `ForestDiffusionGenerator` | Tree-based diffusion model | Complex tabular structure |
| `BayesianGenerator` | Gaussian copula preserving marginal distributions | Fast, correlation-preserving generation |
| `LLMGenerator` | Large-language-model based | Semantic dependencies, text columns |
| `OriginalGenerator` | Baseline random sampling | Benchmarking and comparison |

## API Reference

### Common Parameters

All generators accept the following parameters:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `gen_x_times` | `float` | `1.1` | Number of synthetic samples as a multiple of the training-set size |
| `cat_cols` | `list` | `None` | Column names to treat as categorical |
| `bot_filter_quantile` | `float` | `0.001` | Lower quantile for post-process filtering |
| `top_filter_quantile` | `float` | `0.999` | Upper quantile for post-process filtering |
| `is_post_process` | `bool` | `True` | Enable quantile-based post-process filtering |
| `pregeneration_frac` | `float` | `2` | Oversampling factor applied before filtering |
| `only_generated_data` | `bool` | `False` | Return only synthetic rows (exclude original data) |
| `gen_params` | `dict` | see below | Generator-specific hyperparameters |

### Generator-Specific Parameters (`gen_params`)

**GANGenerator:**
```python
{"batch_size": 500, "patience": 25, "epochs": 500}
```

**LLMGenerator:**
```python
{"batch_size": 32, "epochs": 4, "llm": "distilgpt2", "max_length": 500}
```

### The `generate_data_pipe` Method

```python
new_train, new_target = generator.generate_data_pipe(
    train_df,           # pd.DataFrame - training features
    target,             # pd.DataFrame - target variable (or None)
    test_df,            # pd.DataFrame - test features, used for distribution alignment
    deep_copy=True,     # bool - copy the input DataFrames
    only_adversarial=False,  # bool - skip generation, run filtering only
    use_adversarial=True,    # bool - enable adversarial filtering
)
```

**Returns:** `Tuple[pd.DataFrame, pd.DataFrame]` of `(new_train, new_target)`

## Data Format

TabGAN accepts `pandas.DataFrame` inputs containing:

- **Continuous columns**: any real-valued numeric data
- **Categorical columns**: discrete columns with a finite set of values

> **Note:** TabGAN handles numeric values as floats internally. If you need integer output, round after generation.
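To make the note above concrete, here is a minimal sketch of restoring integer columns after generation (the frame below is a stand-in for generator output, not a real TabGAN call):

```python
import pandas as pd

# Stand-in for synthetic output: floats even though the source columns held integers.
synthetic = pd.DataFrame({"A": [1.7, 2.2, -3.9], "B": [10.1, 11.8, 12.4]})

# Round, then cast back to an integer dtype.
restored = synthetic.round().astype(int)
print(restored["A"].tolist())  # -> [2, 2, -4]
```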
## Examples

### Basic usage of every generator

```python
from tabgan.sampler import (
    OriginalGenerator, GANGenerator, ForestDiffusionGenerator,
    BayesianGenerator, LLMGenerator,
)
import pandas as pd
import numpy as np

train = pd.DataFrame(np.random.randint(-10, 150, size=(150, 4)), columns=list("ABCD"))
target = pd.DataFrame(np.random.randint(0, 2, size=(150, 1)), columns=list("Y"))
test = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list("ABCD"))

new_train1, new_target1 = OriginalGenerator().generate_data_pipe(train, target, test)
new_train2, new_target2 = GANGenerator(
    gen_params={"batch_size": 500, "epochs": 10, "patience": 5}
).generate_data_pipe(train, target, test)
new_train3, new_target3 = ForestDiffusionGenerator().generate_data_pipe(train, target, test)
new_train4, new_target4 = BayesianGenerator().generate_data_pipe(train, target, test)
new_train5, new_target5 = LLMGenerator(
    gen_params={"batch_size": 32, "epochs": 4, "llm": "distilgpt2", "max_length": 500}
).generate_data_pipe(train, target, test)
```

### Full parameter example

```python
new_train, new_target = GANGenerator(
    gen_x_times=1.1,
    cat_cols=None,
    bot_filter_quantile=0.001,
    top_filter_quantile=0.999,
    is_post_process=True,
    adversarial_model_params={
        "metrics": "AUC", "max_depth": 2, "max_bin": 100,
        "learning_rate": 0.02, "random_state": 42, "n_estimators": 100,
    },
    pregeneration_frac=2,
    only_generated_data=False,
    gen_params={"batch_size": 500, "patience": 25, "epochs": 500},
).generate_data_pipe(
    train, target, test,
    deep_copy=True,
    only_adversarial=False,
    use_adversarial=True,
)
```

### Conditional text generation with an LLM

Generate synthetic rows containing novel text values conditioned on categorical attributes:

```python
import pandas as pd
from tabgan.sampler import LLMGenerator

train = pd.DataFrame({
    "Name": ["Anna", "Maria", "Ivan", "Sergey", "Olga", "Boris"],
    "Gender": ["F", "F", "M", "M", "F", "M"],
    "Age": [25, 30, 35, 40, 28, 32],
    "Occupation": ["Engineer", "Doctor", "Artist", "Teacher", "Manager", "Pilot"],
})

new_train, _ = LLMGenerator(
    gen_x_times=1.5,
    text_generating_columns=["Name"],      # columns that should receive novel generated text
    conditional_columns=["Gender"],        # columns that condition the text generation
    gen_params={"batch_size": 32, "epochs": 4, "llm": "distilgpt2", "max_length": 500},
    is_post_process=False,
).generate_data_pipe(train, target=None, test_df=None, only_generated_data=True)
```
**How it works:**
1. Sample values from the empirical distribution of the conditional columns.
2. Impute the remaining non-text columns with the fitted GReaT model.
3. Generate novel text via prompt-based generation.
4. Ensure the generated text values differ from the original data.

### Text generation through an LLM API

Use an external LLM API (LM Studio, OpenAI, Ollama) instead of a local model:

```python
import pandas as pd
from tabgan.sampler import LLMGenerator
from tabgan.llm_config import LLMAPIConfig

train = pd.DataFrame({
    "Name": ["Anna", "Maria", "Ivan", "Sergey", "Olga", "Boris"],
    "Gender": ["F", "F", "M", "M", "F", "M"],
    "Age": [25, 30, 35, 40, 28, 32],
    "Occupation": ["Engineer", "Doctor", "Artist", "Teacher", "Manager", "Pilot"],
})

# LM Studio
api_config = LLMAPIConfig.from_lm_studio(
    base_url="http://localhost:1234",
    model="google/gemma-3-12b",
    timeout=90,
)

# Or OpenAI: LLMAPIConfig.from_openai(api_key="...", model="gpt-4")
# Or Ollama: LLMAPIConfig.from_ollama(model="llama3")

new_train, _ = LLMGenerator(
    gen_x_times=1.5,
    text_generating_columns=["Name"],
    conditional_columns=["Gender"],
    gen_params={"batch_size": 32, "epochs": 4, "llm": "distilgpt2", "max_length": 500},
    llm_api_config=api_config,
    is_post_process=False,
).generate_data_pipe(train, target=None, test_df=None, only_generated_data=True)
```
<details>
<summary><strong>LLM API configuration options</strong></summary>

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `base_url` | `str` | `"http://localhost:1234"` | Base URL of the API server |
| `model` | `str` | `"google/gemma-3-12b"` | Model identifier |
| `api_key` | `str` | `None` | API key for authentication |
| `timeout` | `int` | `90` | Request timeout in seconds |
| `max_tokens` | `int` | `256` | Maximum number of generated tokens |
| `temperature` | `float` | `0.7` | Sampling temperature |
| `system_prompt` | `str` | `None` | System prompt used for generation |

**Test the connection:**

```python
from tabgan.llm_config import LLMAPIConfig
from tabgan.llm_api_client import LLMAPIClient

config = LLMAPIConfig.from_lm_studio()
with LLMAPIClient(config) as client:
    print(f"API available: {client.check_connection()}")
    print(f"Generated: {client.generate('Generate a female name:')}")
```

</details>

### Improving model performance

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from tabgan.sampler import GANGenerator

def evaluate(clf, X_train, y_train, X_test, y_test):
    clf.fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

dataset = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=25, max_depth=6)
X_train, X_test, y_train, y_test = train_test_split(
    pd.DataFrame(dataset.data),
    pd.DataFrame(dataset.target, columns=["target"]),
    test_size=0.33, random_state=42,
)

print("Baseline:", evaluate(clf, X_train, y_train, X_test, y_test))

new_train, new_target = GANGenerator().generate_data_pipe(X_train, y_train, X_test)
print("With GAN:", evaluate(clf, new_train, new_target, X_test, y_test))
```

### Time-series data generation

```python
import pandas as pd
import numpy as np
from tabgan.utils import get_year_mnth_dt_from_date, collect_dates
from tabgan.sampler import GANGenerator

train = pd.DataFrame(np.random.randint(-10, 150, size=(100, 4)), columns=list("ABCD"))
min_date, max_date = pd.to_datetime("2019-01-01"), pd.to_datetime("2021-12-31")
d = (max_date - min_date).days + 1
train["Date"] = min_date + pd.to_timedelta(np.random.randint(d, size=100), unit="d")
train = get_year_mnth_dt_from_date(train, "Date")

new_train, _ = GANGenerator(
    gen_x_times=1.1, cat_cols=["year"],
    bot_filter_quantile=0.001, top_filter_quantile=0.999,
    is_post_process=True, pregeneration_frac=2,
).generate_data_pipe(train.drop("Date", axis=1), None, train.drop("Date", axis=1))

new_train = collect_dates(new_train)
```
## Quality Report

Generate a self-contained HTML report comparing original and synthetic data across several quality dimensions: column statistics, PSI, correlation heatmaps, distribution plots, and ML utility (TSTR vs. TRTR).

```python
from tabgan import QualityReport

report = QualityReport(
    original_df, synthetic_df,
    cat_cols=["gender"],
    target_col="target",      # enables the ML-utility evaluation
).compute()

# Export a single HTML file (plots embedded as base64)
report.to_html("quality_report.html")

# Or access the metrics programmatically
summary = report.summary()
print(f"Overall score: {summary['overall_score']}")
print(f"Mean PSI: {summary['psi']['mean']}")
print(f"ML utility ratio: {summary['ml_utility']['utility_ratio']}")
```

For a quick comparison without the full report:

```python
from tabgan.utils import compare_dataframes

score = compare_dataframes(original_df, generated_df)  # 0.0 (poor) to 1.0 (excellent)
```

## Constraints

Enforce business rules on generated data. Constraints are applied after generation; invalid rows are repaired or filtered out.

```python
from tabgan import GANGenerator, RangeConstraint, UniqueConstraint, FormulaConstraint, RegexConstraint

new_train, new_target = GANGenerator(gen_x_times=1.5).generate_data_pipe(
    train, target, test,
    constraints=[
        RangeConstraint("age", min_val=0, max_val=120),
        UniqueConstraint("email"),
        FormulaConstraint("end_date > start_date"),
        RegexConstraint("zip_code", r"\d{5}"),
    ],
)
```

**Available constraints:**

| Constraint | Description | Repair strategy |
|------------|-------------|-----------------|
| `RangeConstraint` | Numeric values within `[min, max]` | Clip values to the bounds |
| `UniqueConstraint` | No duplicate values in a column | Drop duplicate rows |
| `FormulaConstraint` | Boolean expression evaluated via `df.eval()` | Filter out violating rows |
| `RegexConstraint` | String values must match a regex | Filter out non-matching rows |
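For intuition, the repair strategies in the table above reduce to ordinary pandas operations. This is a simplified stand-in, not the library's actual constraint engine (the example frame and column names are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "age": [25, -3, 250, 40],
    "email": ["a@x.io", "b@x.io", "a@x.io", "c@x.io"],
    "zip_code": ["12345", "1234", "99999", "00001"],
})

# RangeConstraint("age", 0, 120) with a "fix"-style strategy: clip to the bounds.
df["age"] = df["age"].clip(lower=0, upper=120)

# UniqueConstraint("email"): drop duplicate rows, keeping the first occurrence.
df = df.drop_duplicates(subset="email")

# RegexConstraint("zip_code", r"\d{5}"): keep only fully matching rows.
df = df[df["zip_code"].str.fullmatch(r"\d{5}")]

print(df["age"].tolist())  # -> [25, 40]
```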
`ConstraintEngine` supports two strategies: `"fix"` (repair first, then filter) and `"filter"` (drop violating rows only):

```python
from tabgan import ConstraintEngine, RangeConstraint

engine = ConstraintEngine(
    constraints=[RangeConstraint("price", min_val=0)],
    strategy="fix",  # or "filter"
)
cleaned_df = engine.apply(generated_df)
```

## Privacy Metrics

Assess re-identification risk before sharing synthetic data. Includes distance to closest record (DCR), nearest-neighbor distance ratio (NNDR), and membership-inference risk.

```python
from tabgan import PrivacyMetrics

pm = PrivacyMetrics(original_df, synthetic_df, cat_cols=["gender"])
summary = pm.summary()

print(f"Overall privacy score: {summary['overall_privacy_score']}")  # 0 (risky) to 1 (strongly private)
print(f"Mean DCR: {summary['dcr']['mean']}")
print(f"Mean NNDR: {summary['nndr']['mean']}")
print(f"Membership inference AUC: {summary['membership_inference']['auc']}")  # closer to 0.5 is better
```

**Metric interpretation:**

| Metric | What it measures | Good value |
|--------|------------------|------------|
| **DCR** | Distance from each synthetic row to its closest real row | Higher is more private |
| **NNDR** | Ratio of first to second nearest-neighbor distance | Close to 1.0 |
| **MI AUC** | Can a classifier tell whether a record came from the training data? | Close to 0.5 |
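For reference, DCR and NNDR can be sketched in a few lines of NumPy. This is a simplified stand-in for `PrivacyMetrics` that handles numeric columns only, and the Gaussian toy data is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 3))        # toy "real" records
synthetic = rng.normal(size=(100, 3))   # toy "synthetic" records

# Distance from every synthetic row to every real row, sorted ascending per row.
d = np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=-1)
d.sort(axis=1)

dcr = d[:, 0]               # distance to the closest real record
nndr = d[:, 0] / d[:, 1]    # first / second nearest-neighbor ratio (<= 1)

print(f"mean DCR:  {dcr.mean():.3f}")
print(f"mean NNDR: {nndr.mean():.3f}")
```

A synthetic row with a DCR of zero is an exact copy of a real record, which is exactly the leak these metrics are meant to catch.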
## sklearn Pipeline Integration

Plug synthetic-data augmentation into an sklearn `Pipeline` with `TabGANTransformer`:

```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from tabgan import TabGANTransformer

pipe = Pipeline([
    ("augment", TabGANTransformer(gen_x_times=1.5, cat_cols=["gender"])),
    ("model", RandomForestClassifier()),
])

# fit() generates synthetic data and trains the model on the augmented set
pipe.fit(X_train, y_train)
```

Works with any generator and supports constraints:

```python
from tabgan import TabGANTransformer, GANGenerator, RangeConstraint

transformer = TabGANTransformer(
    generator_class=GANGenerator,
    gen_x_times=2.0,
    gen_params={"batch_size": 500, "epochs": 10, "patience": 5},
    constraints=[RangeConstraint("age", min_val=0, max_val=120)],
)

X_augmented = transformer.fit_transform(X_train, y_train)
y_augmented = transformer.get_augmented_target()
```

## AutoSynth

Not sure which generator suits your data? **AutoSynth** runs them all and picks the winner based on combined quality and privacy scores:

```python
from tabgan import AutoSynth

result = AutoSynth(df, target_col="label").run()

print(result.report)
#   Generator          Status  Score  Quality  Privacy  Rows  Time (s)
# 0 GAN (CTGAN)        OK      0.847  0.891    0.743    165   12.3
# 1 Forest Diffusion   OK      0.812  0.834    0.761    165   45.1
# 2 Random Baseline    OK      0.654  0.621    0.732    165   0.1

best_synthetic = result.best_data
print(f"Winner: {result.best_name}")
```

Customize the scoring weights:

```python
result = AutoSynth(
    df,
    target_col="label",
    quality_weight=0.5,   # equal weights
    privacy_weight=0.5,
).run()
```

## HuggingFace Hub Integration

Synthesize any tabular dataset from the HuggingFace Hub in a single call:

```python
from tabgan import synthesize_hf_dataset

# Load -> generate -> evaluate, automatically
result = synthesize_hf_dataset("scikit-learn/iris", target_col="target")
print(result.synthetic_df.head())
print(f"Quality: {result.quality_summary['overall_score']}")

# Push the synthetic data back to the Hub
result = synthesize_hf_dataset(
    "scikit-learn/iris",
    target_col="target",
    push_to_hub=True,
    hub_repo_id="your-username/iris-synthetic",
)
```

## Command-Line Interface

```bash
tabgan-generate \
    --input-csv train.csv \
    --target-col target \
    --generator gan \
    --gen-x-times 1.5 \
    --cat-cols year,gender \
    --output-csv synthetic_train.csv
```

## Pipeline Architecture

![Experiment design and workflow](https://oss.gittoolsai.com/images/Diyago_Tabular-data-generation_readme_aff00f01663b.png)

```
Input (train_df, target, test_df)
  |
  v
[Preprocess]  --> validate DataFrames, prepare columns
  |
  v
[Generate]    --> CTGAN / ForestDiffusion / GReaT LLM / random sampling
  |
  v
[Post-process] --> quantile-based filtering against the test distribution
  |
  v
[Adversarial filtering] --> LightGBM classifier removes dissimilar samples
  |
  v
Output (synthetic_df, synthetic_target)
```
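The adversarial-filtering stage in the diagram can be illustrated in miniature: fit a discriminator on real-vs-synthetic labels, then drop the synthetic rows it recognizes most confidently. This sketch uses scikit-learn's gradient boosting as a stand-in for TabGAN's LightGBM model, and the 25% drop fraction is an arbitrary choice for the illustration:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
real = pd.DataFrame(rng.normal(0.0, 1.0, size=(300, 4)), columns=list("ABCD"))
synthetic = pd.DataFrame(rng.normal(0.5, 1.2, size=(300, 4)), columns=list("ABCD"))

# Label rows by origin and fit a discriminator.
X = pd.concat([real, synthetic], ignore_index=True)
y = np.r_[np.zeros(len(real)), np.ones(len(synthetic))]
clf = GradientBoostingClassifier(max_depth=2, n_estimators=50).fit(X, y)

# High P(synthetic) means "obviously fake": keep the 75% hardest to tell apart.
scores = clf.predict_proba(synthetic)[:, 1]
order = np.argsort(scores)               # most real-looking rows first
keep_n = int(len(synthetic) * 0.75)
filtered = synthetic.iloc[order[:keep_n]]

print(len(filtered))  # -> 225
```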
## Benchmark Results

Normalized ROC AUC scores (higher is better):

| Dataset   | No augmentation | GAN       | Sampled original |
|-----------|-----------------|-----------|------------------|
| credit    | 0.997           | **0.998** | 0.997            |
| employee  | **0.986**       | 0.966     | 0.972            |
| mortgages | 0.984           | 0.964     | **0.988**        |
| poverty_A | 0.937           | **0.950** | 0.933            |
| taxi      | 0.966           | 0.938     | **0.987**        |
| adult     | 0.995           | 0.967     | **0.998**        |

## Citation

```bibtex
@misc{ashrapov2020tabular,
    title={Tabular GANs for uneven distribution},
    author={Insaf Ashrapov},
    year={2020},
    eprint={2010.00638},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

## References

1. Xu, L., Veeramachaneni, K. (2018). *Synthesizing Tabular Data using Generative Adversarial Networks*. arXiv:1811.11264.
2. Jolicoeur-Martineau, A., Fatras, K., Kachman, T. (2023). *Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees*. SamsungSAILMontreal/ForestDiffusion.
3. Xu, L., Skoularidou, M., Cuesta-Infante, A., Veeramachaneni, K. (2019). *Modeling Tabular Data using Conditional GAN*. NeurIPS.
4. Borisov, V., Seßler, K., Leemann, T., Pawelczyk, M., Kasneci, G. (2023). *Language Models are Realistic Tabular Data Generators*. ICLR.

## License

Apache License 2.0; see [LICENSE](LICENSE) for details.

---

# TabGAN Quick Start Guide

TabGAN is a Python library for generating high-quality synthetic tabular data, supporting GANs, diffusion models, and large language models (LLMs). It offers a unified API, handles mixed data types (continuous, categorical, free text), and ships with built-in adversarial filtering to keep the generated distribution aligned with the real one.

## Prerequisites

- **Operating system**: Linux, macOS, Windows
- **Python version**: 3.8 - 3.12
- **Core dependencies**:
  - `pandas`
  - `numpy`
  - `scikit-learn`
  - `torch` (PyTorch, for the GAN and diffusion models)
  - `transformers` (optional, for LLM generation)

> **Tip**: Make sure PyTorch is installed before use. A pip mirror can speed up downloads if your network connection to PyPI is slow.

## Installation

Install directly with pip:

```bash
pip install tabgan
```

**Using a mirror (e.g. Tsinghua) to speed up installation:**

```bash
pip install tabgan -i https://pypi.tuna.tsinghua.edu.cn/simple
```

To use the LLM-based generator (`LLMGenerator`), install the extra dependencies:

```bash
pip install transformers accelerate -i https://pypi.tuna.tsinghua.edu.cn/simple
```

## Basic Usage

The following example shows how to generate synthetic data quickly with the default `GANGenerator`.
### 1. Import libraries and prepare data

```python
import pandas as pd
import numpy as np
from tabgan.sampler import GANGenerator

# Example training data (features)
train = pd.DataFrame(np.random.randint(-10, 150, size=(150, 4)), columns=list("ABCD"))

# Example target variable
target = pd.DataFrame(np.random.randint(0, 2, size=(150, 1)), columns=list("Y"))

# Example test data (used for distribution alignment)
test = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list("ABCD"))
```

### 2. Generate synthetic data

A single call to `generate_data_pipe` runs the whole generate -> post-process -> adversarial-filter pipeline:

```python
# Initialize the generator and generate data
new_train, new_target = GANGenerator().generate_data_pipe(train, target, test)

# Inspect the result
print(f"Original shape: {train.shape}")
print(f"Synthetic shape: {new_train.shape}")
print(new_train.head())
```

### 3. Switching generators (optional)

TabGAN swaps the underlying algorithm by changing the class name; the API stays the same:

```python
from tabgan.sampler import ForestDiffusionGenerator, BayesianGenerator, LLMGenerator

# Tree-based diffusion model
new_train_diff, _ = ForestDiffusionGenerator().generate_data_pipe(train, target, test)

# Bayesian approach (fast, preserves correlations)
new_train_bayes, _ = BayesianGenerator().generate_data_pipe(train, target, test)

# Large language model (suited to data with text columns)
# new_train_llm, _ = LLMGenerator().generate_data_pipe(train, target, test)
```

> **Note**: Generated values are handled as floats internally. If the original data contains integers, round the results after generation.

---

# Use Case: Fraud Detection Under Privacy Constraints

A fintech risk team urgently needs to build an anti-fraud model, but privacy regulations prevent them from using real transaction data containing sensitive user information, and fraud cases are so rare in the available sample that training is difficult.

### Without Tabular-data-generation

- **Blocked data access**: compliance rules forbid using production data, even de-identified, for external development or sharing, stalling algorithm iteration.
- **Severe class imbalance**: fraud makes up less than 1% of real transactions; traditional oversampling (e.g. SMOTE) produces low-diversity samples and models overfit easily.
- **Tedious mixed-type handling**: the dataset mixes continuous amounts, discrete categories, and free-text memos, requiring lots of custom preprocessing and distribution-alignment code.
- **Costly validation**: without automated tools to check distributional consistency between synthetic and original data, the team falls back on slow, unreliable manual spot checks.

### With Tabular-data-generation

- **Privacy-compliant by construction**: CTGAN or ForestDiffusion generates high-fidelity synthetic data that preserves the original statistical properties without containing any real user records, passing compliance review.
- **Long tail resolved**: conditional generation expands the rare fraud class on demand, balancing positive and negative samples and improving detection of anomalous transactions.
- **One API, less glue code**: the unified interface handles numeric, categorical, and text columns automatically, eliminating hand-written cleaning logic.
- **Automated quality control**: the built-in LightGBM-based adversarial filter removes off-distribution synthetic samples and provides quantitative comparisons, ensuring the data stays usable.

By producing high-quality, compliant, well-balanced synthetic tabular data, Tabular-data-generation breaks the deadlock between data privacy and model performance, so risk-model training is no longer limited by scarce real data.
---

## Project Info

- **Author**: Insaf Ashrapov ([Diyago](https://github.com/Diyago), `IAshrapov`), Data Science Product Owner; iashrapov@gmail.com
- **Language**: Python (100%)
- **License**: Apache-2.0
- **Python**: 3.8+
- **Dependencies**: pandas, numpy, scikit-learn, lightgbm, torch, transformers, diffusers
- **Hardware**: not specified (pure-CPU operation is supported, e.g. random sampling and `BayesianGenerator`; GPU acceleration is recommended for the LLM and diffusion models but not required)
- **Notes**: the tool exposes a unified Python interface over several generation methods (GAN, diffusion, LLM). Using `LLMGenerator` against an external API (OpenAI, Ollama, LM Studio) requires network access and an API key. Some advanced features (e.g. ForestDiffusion) may rely on additional tree-model libraries. Install with `pip install tabgan`.
- **Topics**: tabular-data, gans, train-dataframe, adversarial-filtering, deep-learning, feature-engineering, gan, machine-learning, python

## FAQ

### How do I handle tabular data with categorical (string) variables? What preprocessing is needed?

Categorical columns (e.g. a string-typed "Income_Category") cannot be passed to the model directly, or you may hit the error "Input X contains NaNs". Encode the categorical columns first; the maintainer recommends ordinal or label encoding. Steps: 1. encode the categorical columns with sklearn or similar; 2. pass the encoded data to `GANGenerator`; 3. if the problem persists, make sure there are no hidden NaN values. ([issue #66](https://github.com/Diyago/Tabular-data-generation/issues/66))

### What if changing the GAN's batch_size parameter raises an error?

Some batch_size values are unsupported by the underlying model, and forcing them raises an error. The maintainer fixed this in a newer release; upgrade with `pip install tabgan==1.3.2` (or the current latest) to resolve the batch_size compatibility issue. ([issue #66](https://github.com/Diyago/Tabular-data-generation/issues/66))
### Large inputs (e.g. 200k rows) cause memory errors, or the generated data contains negative values. What now?

1. For memory errors: pass `deep_copy=True` when calling `generate_data_pipe`; this roughly halves memory usage and supports inputs in the millions of rows. 2. For negative values: this is usually a configuration or version issue; make sure you are on the latest release and check the filtering parameters (e.g. the quantile settings). Users have reported that after upgrading, positive integers are generated correctly even at large scale. ([issue #14](https://github.com/Diyago/Tabular-data-generation/issues/14))

### I already filtered NaNs with dropna(), so why do I still get "ValueError: Input X contains NaN"?

This is usually caused not by NaNs in the data but by mismatched DataFrame indices, especially after `train_test_split`, where the target's index may no longer match the train data. Fix: reset the indices before calling `generate_data_pipe`, e.g. `train.reset_index(drop=True)` and `target.reset_index(drop=True)`. You can also try passing `target=None` as a test. ([issue #68](https://github.com/Diyago/Tabular-data-generation/issues/68))

### How do I fix "ModuleNotFoundError: No module named 'be_great'" during install or import?

The error means the dependency `be_great` is missing; `tabgan` depends on external libraries such as GReaT. Install the missing package manually: `pip install be_great`. For similar missing-module errors (e.g. `_ctgan`), install the corresponding packages as reported, or check `requirements.txt` to make sure all dependencies are installed. ([issue #81](https://github.com/Diyago/Tabular-data-generation/issues/81))

### Installation fails on Python 3.7+ with a GLIBC version error or an uninstallable wheel. What should I do?

This usually means the pre-built wheel is incompatible with your system environment, particularly the GLIBC version on Linux. The maintainer suggests not relying entirely on pip's automatic wheel selection and installing dependencies manually instead. Steps: 1. check the project's `requirements.txt`; 2. manually install a pinned torch version (e.g. `pip install torch==1.6.0`); 3. try building from source, or use a Python version compatible with your environment. If GLIBC is too old, you may need to upgrade the OS or use a Docker container. ([issue #13](https://github.com/Diyago/Tabular-data-generation/issues/13))
## Releases

### v3.2.0 (2026-03-29)

**New features**

*BayesianGenerator (Gaussian copula)*
- New Gaussian-copula-based generator for fast, lightweight synthetic data generation
- No neural-network training required; works out of the box
- Example added to the Colab notebook

*AutoSynth and HuggingFace Hub integration (v3.1.0)*
- **AutoSynth**: automatically selects the best generator for your dataset
- **HuggingFace Hub**: push/pull synthetic datasets directly
- Release blog post with benchmarks and speed comparisons

*Other improvements*
- Execution-time and quality reporting for all generators
- HuggingFace Space demo (Gradio app)
- HuggingFace Space fix: SSR disabled, heavy dependencies made optional
- Updated PyPI description

**Full changelog**: https://github.com/Diyago/Tabular-data-generation/compare/v3.0.2...v3.2.0

### v3.0.1 (2026-03-28)

**New features**

*Quality report (HTML)*: generate a standalone HTML report comparing original and synthetic data, including column statistics, per-column PSI, correlation heatmaps, distribution plots, and ML-utility scores (TSTR vs. TRTR).

```python
from tabgan import QualityReport
report = QualityReport(original_df, synthetic_df, target_col="target").compute()
report.to_html("report.html")
```

*Constraint system*: enforce business rules on generated data with 4 constraint types (`RangeConstraint`, `UniqueConstraint`, `FormulaConstraint`, `RegexConstraint`), integrated directly into `generate_data_pipe()`.

```python
from tabgan import GANGenerator, RangeConstraint
new_train, _ = GANGenerator().generate_data_pipe(
    train, target, test,
    constraints=[RangeConstraint("age", min_val=0, max_val=120)]
)
```

*Privacy metrics*: assess re-identification risk with DCR (distance to closest record), NNDR (nearest-neighbor distance ratio), and membership-inference risk. Returns an overall 0-1 privacy score.

```python
from tabgan import PrivacyMetrics
pm = PrivacyMetrics(original_df, synthetic_df).summary()
print(pm["overall_privacy_score"])
```

*sklearn pipeline integration*: `TabGANTransformer`, a drop-in sklearn transformer for augmentation inside a `Pipeline`. Supports `get_params`/`set_params`, constraints, and all generator types.

```python
from sklearn.pipeline import Pipeline
from tabgan import TabGANTransformer
pipe = Pipeline([("augment", TabGANTransformer(gen_x_times=1.5)), ("model", clf)])
```
**Improvements**
- **Refactored codebase**: fixed mutable default arguments, nested test classes, the `Warning()` bug, the `make_two_digit()` bug, and deprecated `pkg_resources` usage.
- **DRY generator factory** via a `_BaseGenerator` base class.
- **Professionalized README**: centered badges, pipeline diagram, CLI documentation, and write-ups of the new features.
- **Added Python version classifiers** and bumped `python_requires` to `>= 3.9`.
- **Test coverage**: expanded from 39 to 115 tests.

**Dependencies**
- Added `matplotlib>=3.5` and `requests`.

**Full changelog**: https://github.com/Diyago/Tabular-data-generation/compare/v2.6.0...v3.0.1

### 2.0.0 (2023-09-30)

This release introduces the **ForestDiffusion** generator, based on the paper "Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees" ([GitHub repository](https://github.com/SamsungSAILMontreal/ForestDiffusion)).

Install: `pip install tabgan`
Generate data: `ForestDiffusionGenerator().generate_data_pipe(train, target, test)`

### 1.2.0 (2021-12-26)

**Full changelog**: https://github.com/Diyago/GAN-for-tabular-data/compare/1.0.1...1.2.0

Changes:

1. Added time-series data generation (TimeGAN).
2. Fixed version-dependency issues when running in Colab. #22 #23 #24
3. Improved robustness of the generated data. #12
4. Fixed excessive memory usage. #14
5. Added logging. #15

Install: `pip install tabgan` or `pip install tabgan==1.2.0`

### 1.0.3 (2021-02-18)

https://pypi.org/project/tabgan/1.0.3/

### research (2020-07-13)

No release notes provided.