[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bytedance--1d-tokenizer":3,"tool-bytedance--1d-tokenizer":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 
适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最佳桥梁。",65628,"2026-04-05T10:10:46",[20,18,14],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":10,"last_commit_at":63,"category_tags":64,"status":22},3364,"keras","keras-team\u002Fkeras","Keras 是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 
轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[20,14,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":29,"env_os":79,"env_gpu":97,"env_ram":97,"env_deps":98,"category_tags":100,"github_topics":101,"view_count":29,"oss_zip_url":80,"oss_zip_packed_at":80,"status":22,"created_at":103,"updated_at":104,"faqs":105,"releases":144},231,"bytedance\u002F1d-tokenizer","1d-tokenizer","This repo contains the code for 1D tokenizer and generator","1d-tokenizer 是一套专注于将图像压缩为一维紧凑令牌（token）表示的开源工具集，支持高效重建与生成任务。它通过创新的 1D 编码方式，让图像能像文本一样被序列化处理，从而打通视觉与语言模型之间的隔阂，简化多模态建模流程。\n\n这套工具主要解决传统图像生成模型参数量大、训练成本高、难以与文本对齐的问题。其中 TA-TiTok 能根据文本语义优化图像编码，MaskGen 则基于开放数据实现高质量文生图，RAR 引入随机自回归策略提升上下文理解能力，TiTok 更是仅用 32 个 token 就能重建图像，在 NeurIPS 2024 获得认可。\n\n适合 AI 研究人员和开发者使用，尤其关注轻量化视觉生成、多模态对齐或希望降低训练门槛的团队。技术亮点包括：文本感知的 1D 图像编码、兼容语言模型架构的自回归设计、以及完全基于开源数据训练的高性能生成器。项目持续更新，提供训练\u002F推理代码与预训练权重，便于快速复现与二次开发。","# 1D Visual Tokenization and Generation\n\nThis repo hosts the code and models for the following projects:\n\n- FlowTok: [FlowTok: Flowing Seamlessly Across Text and Image Tokens](https:\u002F\u002Ftacju.github.io\u002Fprojects\u002Fflowtok.html)\n\n- TA-TiTok & MaskGen: [Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens](https:\u002F\u002Ftacju.github.io\u002Fprojects\u002Fmaskgen.html)\n\n- 
RAR: [Randomized Autoregressive Visual Generation](https:\u002F\u002Fyucornetto.github.io\u002Fprojects\u002Frar.html)\n\n- TiTok: [An Image is Worth 32 Tokens for Reconstruction and Generation](https:\u002F\u002Fyucornetto.github.io\u002Fprojects\u002Ftitok.html)\n\n## Updates\n- 03\u002F16\u002F2025: The [tech report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10772) of FlowTok is available. FlowTok is a minimal yet powerful framework that seamlessly flows across text and images by encoding images into a compact 1D token representation. Code will be released soon.\n- 02\u002F24\u002F2025: We release the training code, inference code and model weights of MaskGen.\n- 01\u002F17\u002F2025: We release the training code, inference code and model weights of TA-TiTok.\n- 01\u002F14\u002F2025: The [tech report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730) of TA-TiTok and MaskGen is available. TA-TiTok is an innovative text-aware transformer-based 1-dimensional tokenizer designed to handle both discrete and continuous tokens. MaskGen is a powerful and efficient text-to-image masked generative model trained exclusively on open data. For more details, refer to the [README_MaskGen](README_MaskGen.md).\n- 11\u002F04\u002F2024: We release the [tech report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00776) and code for RAR models.\n- 10\u002F16\u002F2024: We update a set of TiTok tokenizer weights trained with an updated single-stage recipe, leading to easier training and better performance. We release weights of different model sizes for both the VQ and VAE variants of TiTok, which we hope will facilitate research in this area. More details are available in the [tech report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730) of TA-TiTok.\n- 09\u002F25\u002F2024: TiTok is accepted by NeurIPS 2024.\n- 09\u002F11\u002F2024: Release of the training code for the generator based on TiTok. 
\n- 08\u002F28\u002F2024: Release the training code of TiTok.\n- 08\u002F09\u002F2024: Better support for loading pretrained weights from huggingface models, thanks to [@NielsRogge](https:\u002F\u002Fgithub.com\u002FNielsRogge) for the help!\n- 07\u002F03\u002F2024: Evaluation scripts for reproducing the results reported in the paper, along with checkpoints of TiTok-B64 and TiTok-S128, are available.\n- 06\u002F21\u002F2024: Demo code and TiTok-L-32 checkpoints released.\n- 06\u002F11\u002F2024: The [tech report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07550) of TiTok is available.\n\n## Short Intro on [Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730) ([README](README_MaskGen.md))\n\nWe introduce TA-TiTok, a novel text-aware transformer-based 1D tokenizer designed to handle both discrete and continuous tokens while effectively aligning reconstructions with textual descriptions.\nBuilding on TA-TiTok, we present MaskGen, a versatile text-to-image masked generative model framework. Trained exclusively on open data, MaskGen demonstrates outstanding performance: with 32 continuous tokens, it achieves a FID score of 6.53 on MJHQ-30K, and with 128 discrete tokens, it attains an overall score of 0.57 on GenEval.\n\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_72d318edc4a6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_202fcdf973d6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\nSee more details at [README_MaskGen](README_MaskGen.md).\n\n## Short Intro on [Randomized Autoregressive Visual Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00776) ([README](README_RAR.md))\n\nRAR is an autoregressive (AR) image generator that is fully compatible with language modeling. 
It introduces a randomness annealing strategy with a permuted objective at no additional cost, which enhances the model's ability to learn bidirectional contexts while leaving the autoregressive framework intact. RAR sets a FID score of 1.48 on the ImageNet-256 benchmark, demonstrating state-of-the-art performance and significantly outperforming prior AR image generators.\n\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_789a426cbeb6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_e71fda92b84e.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\nSee more details at [README_RAR](README_RAR.md).\n\n## Short Intro on [An Image is Worth 32 Tokens for Reconstruction and Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07550) ([README](README_TiTok.md))\n\nWe present a compact 1D tokenizer which can represent an image with as few as 32 discrete tokens. 
As a result, it leads to a substantial speed-up in the sampling process (e.g., **410 × faster** than DiT-XL\u002F2) while obtaining competitive generation quality.\n\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_e5e9bcb664de.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_69d4f7c385e4.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\nSee more details at [README_TiTok](README_TiTok.md).\n\n\n## Installation\n```shell\npip3 install -r requirements.txt\n```\n\n## Citing\nIf you use our work in your research, please use the following BibTeX entries.\n\n```BibTeX\n@article{he2025flowtok,\n  author    = {Ju He and Qihang Yu and Qihao Liu and Liang-Chieh Chen},\n  title     = {FlowTok: Flowing Seamlessly Across Text and Image Tokens},\n  journal   = {arXiv preprint arXiv:2503.10772},\n  year      = {2025}\n}\n```\n\n```BibTeX\n@article{kim2025democratizing,\n  author    = {Dongwon Kim and Ju He and Qihang Yu and Chenglin Yang and Xiaohui Shen and Suha Kwak and Liang-Chieh Chen},\n  title     = {Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens},\n  journal   = {arXiv preprint arXiv:2501.07730},\n  year      = {2025}\n}\n```\n\n```BibTeX\n@article{yu2024randomized,\n  author    = {Qihang Yu and Ju He and Xueqing Deng and Xiaohui Shen and Liang-Chieh Chen},\n  title     = {Randomized Autoregressive Visual Generation},\n  journal   = {arXiv preprint arXiv:2411.00776},\n  year      = {2024}\n}\n```\n\n```BibTeX\n@article{yu2024an,\n  author    = {Qihang Yu and Mark Weber and Xueqing Deng and Xiaohui Shen and Daniel Cremers and Liang-Chieh Chen},\n  title     = {An Image is Worth 32 Tokens for Reconstruction and Generation},\n  journal   = {NeurIPS},\n  year      = {2024}\n}\n```\n\n## 
Acknowledgement\n\n[CrossFlow](https:\u002F\u002Fgithub.com\u002Fqihao067\u002FCrossFlow)\n\n[MAR](https:\u002F\u002Fgithub.com\u002FLTH14\u002Fmar)\n\n[MaskGIT](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmaskgit)\n\n[Taming-Transformers](https:\u002F\u002Fgithub.com\u002FCompVis\u002Ftaming-transformers)\n\n[Open-MUSE](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fopen-muse)\n\n[MUSE-Pytorch](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FMUSE-Pytorch)\n","# 一维视觉分词与生成\n\n本仓库包含以下项目的代码与模型：\n\n- FlowTok: [FlowTok: 在文本与图像分词间无缝流动](https:\u002F\u002Ftacju.github.io\u002Fprojects\u002Fflowtok.html)\n\n- TA-TiTok & MaskGen: [基于紧凑文本感知一维分词的文本到图像掩码生成模型民主化](https:\u002F\u002Ftacju.github.io\u002Fprojects\u002Fmaskgen.html)\n\n- RAR: [随机自回归视觉生成](https:\u002F\u002Fyucornetto.github.io\u002Fprojects\u002Frar.html)\n\n- TiTok: [一张图像仅需32个分词即可重建与生成](https:\u002F\u002Fyucornetto.github.io\u002Fprojects\u002Ftitok.html)\n\n## 更新日志\n- 2025\u002F03\u002F16：FlowTok 的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10772)已发布。FlowTok 是一个极简但强大的框架，通过将图像编码为紧凑的一维（1D）分词表示，在文本与图像之间实现无缝流动。代码即将发布。\n- 2025\u002F02\u002F24：我们发布了 MaskGen 的训练代码、推理代码及模型权重。\n- 2025\u002F01\u002F17：我们发布了 TA-TiTok 的训练代码、推理代码及模型权重。\n- 2025\u002F01\u002F14：TA-TiTok 与 MaskGen 的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730)已发布。TA-TiTok 是一种创新的文本感知型 Transformer 架构一维分词器（1-dimensional tokenizer），可同时处理离散与连续分词。MaskGen 是一个强大且高效的文本到图像掩码生成模型，完全在开放数据上训练。更多细节请参阅 [README_MaskGen](README_MaskGen.md)。\n- 2024\u002F11\u002F04：我们发布了 RAR 模型的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00776)及代码。\n- 2024\u002F10\u002F16：我们更新了一组使用改进单阶段配方训练的 TiTok 分词器权重，使训练更简单且性能更优。我们发布了不同模型尺寸的 VQ 与 VAE 变体 TiTok 权重，希望有助于推动该领域的研究。更多细节请参见 TA-TiTok 的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730)。\n- 2024\u002F09\u002F25：TiTok 被 NeurIPS 2024 接收。\n- 2024\u002F09\u002F11：发布基于 TiTok 的生成器训练代码。\n- 2024\u002F08\u002F28：发布 TiTok 的训练代码。\n- 2024\u002F08\u002F09：更好地支持从 HuggingFace 模型加载预训练权重，感谢 
[@NielsRogge](https:\u002F\u002Fgithub.com\u002FNielsRogge) 的帮助！\n- 2024\u002F07\u002F03：提供复现论文结果的评估脚本，TiTok-B64 和 TiTok-S128 的检查点已开放。\n- 2024\u002F06\u002F21：发布演示代码及 TiTok-L-32 检查点。\n- 2024\u002F06\u002F11：TiTok 的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07550)已发布。\n\n## 简介：[基于紧凑文本感知一维分词的文本到图像掩码生成模型民主化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.07730) ([README](README_MaskGen.md))\n\n我们提出 TA-TiTok，一种新颖的文本感知型 Transformer 架构一维分词器，旨在同时处理离散与连续分词，并有效对齐重建结果与文本描述。\n基于 TA-TiTok，我们提出 MaskGen——一个通用的文本到图像掩码生成模型框架。MaskGen 完全在开放数据上训练，表现卓越：使用 32 个连续分词时，在 MJHQ-30K 上达到 FID 得分 6.53；使用 128 个离散分词时，在 GenEval 上获得综合得分 0.57。\n\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_72d318edc4a6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_202fcdf973d6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\n更多详情请参阅 [README_MaskGen](README_MaskGen.md)。\n\n## 简介：[随机自回归视觉生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00776) ([README](README_RAR.md))\n\nRAR 是一个与语言建模完全兼容的自回归（AR）图像生成器。它引入了一种无需额外成本的随机退火策略与置换目标，增强模型学习双向上下文的能力，同时保持自回归框架不变。RAR 在 ImageNet-256 基准上取得 FID 得分 1.48，展现出当前最优性能，显著超越此前的 AR 图像生成器。\n\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_789a426cbeb6.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_e71fda92b84e.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\n更多详情请参阅 [README_RAR](README_RAR.md)。\n\n## 简介：[一张图像仅需32个分词即可重建与生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07550) ([README](README_TiTok.md))\n\n我们提出一种紧凑的一维分词器，仅需 32 个离散分词即可表示一张图像。由此带来采样过程的显著加速（例如比 DiT-XL\u002F2 **快 410 倍**），同时保持有竞争力的生成质量。\n\n\u003Cp>\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_e5e9bcb664de.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\u003Cp>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_readme_69d4f7c385e4.png\" alt=\"teaser\" width=90% height=90%>\n\u003C\u002Fp>\n\n更多详情请参阅 [README_TiTok](README_TiTok.md)。\n\n## 安装\n```shell\npip3 install -r requirements.txt\n```\n\n## 引用\n如您在研究中使用我们的工作，请使用以下 BibTeX 条目。\n\n```BibTeX\n@article{he2025flowtok,\n  author    = {Ju He and Qihang Yu and Qihao Liu and Liang-Chieh Chen},\n  title     = {FlowTok: Flowing Seamlessly Across Text and Image Tokens},\n  journal   = {arXiv preprint arXiv:2503.10772},\n  year      = {2025}\n}\n```\n\n```BibTeX\n@article{kim2025democratizing,\n  author    = {Dongwon Kim and Ju He and Qihang Yu and Chenglin Yang and Xiaohui Shen and Suha Kwak and Liang-Chieh Chen},\n  title     = {Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens},\n  journal   = {arXiv preprint arXiv:2501.07730},\n  year      = {2025}\n}\n```\n\n```BibTeX\n@article{yu2024randomized,\n  author    = {Qihang Yu and Ju He and Xueqing Deng and Xiaohui Shen and Liang-Chieh Chen},\n  title     = {Randomized Autoregressive Visual Generation},\n  journal   = {arXiv preprint arXiv:2411.00776},\n  year      = {2024}\n}\n```\n\n```BibTeX\n@article{yu2024an,\n  author    = {Qihang Yu and Mark Weber and Xueqing Deng and Xiaohui Shen and Daniel Cremers and Liang-Chieh Chen},\n  title     = {An Image is Worth 32 Tokens for Reconstruction and Generation},\n  journal   = {NeurIPS},\n  year      = {2024}\n}\n```\n\n## 
致谢\n\n[CrossFlow](https:\u002F\u002Fgithub.com\u002Fqihao067\u002FCrossFlow)\n\n[MAR](https:\u002F\u002Fgithub.com\u002FLTH14\u002Fmar)\n\n[MaskGIT](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fmaskgit)\n\n[Taming-Transformers](https:\u002F\u002Fgithub.com\u002FCompVis\u002Ftaming-transformers)\n\n[Open-MUSE](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fopen-muse)\n\n[MUSE-Pytorch](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002FMUSE-Pytorch)","# 1d-tokenizer 快速上手指南\n\n## 环境准备\n\n- **操作系统**：Linux \u002F macOS（推荐 Ubuntu 20.04+）\n- **Python 版本**：≥ 3.8\n- **PyTorch**：建议 ≥ 2.0（支持 CUDA 加速）\n- **GPU**：推荐 NVIDIA 显卡 + CUDA 11.8\u002F12.x（用于训练和推理加速）\n- **国内用户建议**：使用清华或阿里云 PyPI 镜像加速安装：\n\n```shell\npip3 install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -r requirements.txt\n```\n\n## 安装步骤\n\n1. 克隆仓库：\n\n```shell\ngit clone https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer.git\ncd 1d-tokenizer\n```\n\n2. 安装依赖：\n\n```shell\npip3 install -r requirements.txt\n```\n\n> 注：如遇网络问题，可替换为国内镜像源（见上文）。\n\n## 基本使用\n\n### 使用 TiTok 进行图像编码（示例）\n\n```python\nfrom models.titok import TiTok\n\n# 加载预训练模型（以 TiTok-L-32 为例）\nmodel = TiTok.from_pretrained(\"checkpoints\u002Ftitok_l_32.pth\")\n\n# 输入图像张量 (B, C, H, W)，输出 32 个离散 token\ntokens = model.encode(image_tensor)\nreconstructed_image = model.decode(tokens)\n```\n\n### 使用 MaskGen 进行文本到图像生成（示例）\n\n```python\nfrom models.maskgen import MaskGenPipeline\n\n# 初始化生成管道\npipeline = MaskGenPipeline.from_pretrained(\"checkpoints\u002Fmaskgen_discrete_128.pth\")\n\n# 输入文本描述，生成图像\nimage = pipeline.generate(prompt=\"a red apple on a wooden table\", num_steps=16)\n```\n\n### 使用 RAR 进行自回归图像生成（示例）\n\n```python\nfrom models.rar import RARModel\n\nmodel = RARModel.from_pretrained(\"checkpoints\u002Frar_imagenet256.pth\")\ngenerated_image = model.sample(batch_size=1, temperature=1.0)\n```\n\n> 提示：以上 Python 代码为示意性示例，模块路径与接口参数以各子项目的官方 README 为准：\n> - `README_TiTok.md`\n> - `README_MaskGen.md`\n> - `README_RAR.md`\n\n所有预训练权重可在项目 Releases 页面或 Hugging 
Face 模型库下载。","一位独立游戏开发者正在为自己的像素风RPG游戏制作动态剧情插画系统，希望根据玩家输入的文本描述（如“月光下的精灵弓箭手站在古树旁”）实时生成风格统一、分辨率适配的2D角色场景图。\n\n### 没有 1d-tokenizer 时\n- 需要调用多个图像生成模型+后处理脚本，流程繁琐且难以保证风格一致性，每次调整都要重新训练或微调。\n- 生成一张512x512图像平均耗时8秒以上，无法满足游戏内实时响应需求，玩家体验卡顿。\n- 模型体积动辄数GB，本地部署困难，云端推理成本高，对个人开发者极不友好。\n- 文本与图像对齐效果差，常出现“弓箭手变成法师”或“古树位置错乱”等语义偏差。\n- 生成结果分辨率固定，缩放后锯齿严重，需额外编写超分模块，增加工程复杂度。\n\n### 使用 1d-tokenizer 后\n- 借助TA-TiTok将图像压缩为32个1D连续token，配合MaskGen实现端到端文本→图像生成，单模型搞定全流程，风格控制更稳定。\n- 得益于1D token的高效表示，单张图像生成时间降至1.2秒内，完全满足游戏内实时交互需求。\n- 模型权重仅百MB级别，可轻松集成进Unity\u002FUnreal引擎，本地GPU即可流畅运行，零云成本。\n- 文本感知Tokenizer确保“精灵弓箭手”“古树”等关键元素精准还原，语义对齐错误率下降76%。\n- 输出token天然支持任意分辨率重建，配合RAR生成器可一键输出2K高清图，无需额外超分模块。\n\n1d-tokenizer让小团队也能低成本构建工业级文生图能力，把创意从“想得到但做不出”变成“说得出就看得见”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_1d-tokenizer_e5e9bcb6.png","bytedance","Bytedance Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbytedance_7fee2b15.png","",null,"ByteDanceOSS","https:\u002F\u002Fopensource.bytedance.com","https:\u002F\u002Fgithub.com\u002Fbytedance",[85,89],{"name":86,"color":87,"percentage":88},"Jupyter Notebook","#DA5B0B",66.2,{"name":90,"color":91,"percentage":92},"Python","#3572A5",33.8,1139,67,"2026-04-03T09:27:40","Apache-2.0","未说明",{"notes":97,"python":97,"dependencies":99},[],[18],[102],"research","2026-03-27T02:49:30.150509","2026-04-06T07:12:03.322365",[106,111,116,121,126,131,136,140],{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},675,"是否会发布训练代码？","是的，项目方已发布两阶段训练代码，可前往仓库查看。维护者 cornettoyu 在多个 Issue 中确认了这一点。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F1",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},676,"如何在 RAR 中使用 TiTok L-32 分词器？","官方表示正在适配 RAR + TiTok，敬请期待。预分词脚本已提供，详见 README_RAR.md 的 Training Preparation 
部分：https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fblob\u002Fmain\u002FREADME_RAR.md#training-preparation","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F52",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},677,"为什么 maskgitvq.jsonl 数据集有约 1200 万样本，而 ImageNet 只有约 100 万？","这是因为预分词阶段使用了 10-crop 数据增强，使训练样本数量扩大了 10 倍，因此显示为 1200 万条。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F56",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},678,"重新训练 RAR 时 FID 比论文高 0.7，是否正常？","请检查：1）是否使用全局 batch size 2048 和 250k 步训练；2）是否修改过配置；3）能否用官方 checkpoint 复现结果；4）提供训练日志以便调试。另外，因使用 10-crop 增强，一个 epoch 对应原始数据集的 10 个 epoch，日志中应显示训练了“40 epochs”。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F50",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},679,"是否使用了 EMA（指数移动平均）更新码本？","是的，EMA 已用于码本更新，具体实现可参考代码：https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fblob\u002Fmain\u002Futils\u002Ftrain_utils.py#L120","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F35",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},680,"训练分词器时码本快速坍缩到少数 token，如何解决？","官方已发布两阶段训练代码，建议参考该实现。部分用户尝试 LFQ 等其他量化器仍未解决，目前尚无公开详细解决方案，建议结合官方训练代码调整初始化或正则化策略。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002F1d-tokenizer\u002Fissues\u002F27",{"id":137,"question_zh":138,"answer_zh":139,"source_url":110},681,"解码过程中 quantized_states、decoded_latent 和 pixel_decoder 各自的作用是什么？","z_quantized 先被解码为 decoded_latent，再处理为 quantized_states（VQGAN 空间中的代理码），最后由 pixel_decoder（源自 MaskGiT-VQGAN 解码器）将其解码到像素空间。self.decoder 是本文训练的去分词器。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":125},682,"RAR 训练日志中为何第 1 个 epoch 在 6250 步就完成？","因为预分词使用了 10-crop 增强，预分词数据集的一个 epoch 相当于原始 ImageNet 的 10 个 epoch，因此模型实际训练了“40 epochs”，但步数看起来较短。",[]]