[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-tensorflow--compression":3,"tool-tensorflow--compression":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":10,"env_os":105,"env_gpu":106,"env_ram":107,"env_deps":108,"category_tags":115,"github_topics":116,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":124,"updated_at":125,"faqs":126,"releases":155},8841,"tensorflow\u002Fcompression","compression","Data compression in TensorFlow","TensorFlow Compression 是专为 TensorFlow 打造的数据压缩工具库，旨在帮助开发者构建端到端优化的机器学习模型。它核心解决了如何在仅牺牲极小模型性能的前提下，为图像、特征等数据找到高效存储表示的难题，特别适用于对存储空间敏感的应用场景。\n\n该工具主要面向 AI 研究人员和深度学习开发者，尤其是那些需要探索有损数据压缩或模型压缩技术的人群。其独特亮点在于提供了灵活的 C++ 范围编码（Range Coding）实现，支持处理包含所有整数的无限字母表；内置的熵模型类能自动设计编码表，在训练时充当似然模型，推理时则直接将浮点张量转换为优化后的比特流。此外，它还集成了广义除法归一化（GDN）等专为学习式压缩设计的 Keras 层。\n\n需要注意的是，自 2024 年 2 月起，TensorFlow Compression 已进入维护模式，功能不再新增且主要兼容 TensorFlow 2.14 版本。为了支持更新版本的 
TensorFlow，官方推出了仅包含 C++ 算子的独立包 tensorflow-compression-ops，但该支持预计将持续至 TF 2.18 版本。如果您正在寻找成熟的学习式压缩方案或希望复现相关研究，这是一个非常专业的选择。","TensorFlow Compression 是专为 TensorFlow 打造的数据压缩工具库，旨在帮助开发者构建端到端优化的机器学习模型。它核心解决了如何在仅牺牲极小模型性能的前提下，为图像、特征等数据找到高效存储表示的难题，特别适用于对存储空间敏感的应用场景。\n\n该工具主要面向 AI 研究人员和深度学习开发者，尤其是那些需要探索有损数据压缩或模型压缩技术的人群。其独特亮点在于提供了灵活的 C++ 范围编码（Range Coding）实现，支持处理包含所有整数的无限字母表；内置的熵模型类能自动设计编码表，在训练时充当似然模型，推理时则直接将浮点张量转换为优化后的比特流。此外，它还集成了广义除法归一化（GDN）等专为学习式压缩设计的 Keras 层。\n\n需要注意的是，自 2024 年 2 月起，TensorFlow Compression 已进入维护模式，功能不再新增且主要兼容 TensorFlow 2.14 版本。为了支持更新版本的 TensorFlow，官方推出了仅包含 C++ 算子的独立包 tensorflow-compression-ops，但该支持预计将持续至 TF 2.18 版本。如果您正在寻找成熟的学习式压缩方案或希望复现相关研究，这是一个非常专业的选择。","# TensorFlow Compression\n\nTensorFlow Compression (TFC) contains data compression tools for TensorFlow.\n\nYou can use this library to build your own ML models with end-to-end optimized\ndata compression built in. It's useful to find storage-efficient representations\nof your data (images, features, examples, etc.) while only sacrificing a small\nfraction of model performance. Take a look at the [lossy data compression\ntutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fgenerative\u002Fdata_compression) or\nthe [model compression\ntutorial](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Foptimization\u002Fcompression) to get\nstarted.\n\nFor a more in-depth introduction from a classical data compression perspective,\nconsider our [paper on nonlinear transform\ncoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.03034), or watch @jonycgn's [talk on\nlearned image compression](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=x_q7cZviXkY). For an\nintroduction to lossy data compression from a machine learning perspective, take\na look at @yiboyang's [review paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06533).\n\nThe library contains (see the [API\ndocs](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftfc) for details):\n\n- Range coding (a.k.a. 
arithmetic coding) implementations in the form of\n  flexible TF ops written in C++. These include an optional \"overflow\"\n  functionality that embeds an Elias gamma code into the range encoded bit\n  sequence, making it possible to encode alphabets containing the entire set of\n  signed integers rather than just a finite range.\n\n- Entropy model classes which simplify the process of designing rate–distortion\n  optimized codes. During training, they act like likelihood models. Once\n  training is completed, they encode floating point tensors into optimized bit\n  sequences by automating the design of range coding tables and calling the\n  range coder implementation behind the scenes.\n\n- Additional TensorFlow functions and Keras layers that are useful in the\n  context of learned data compression, such as methods to numerically find\n  quantiles of density functions, take expectations with respect to dithering\n  noise, convolution layers with more flexible padding options and support for\n  reparameterizing kernels and biases in the Fourier domain, and an\n  implementation of generalized divisive normalization (GDN).\n\n**Important update:** As of February 1, 2024, TensorFlow Compression is in\nmaintenance mode. This means concretely:\n\n- The full feature set of TFC is frozen. No new features will be developed, but\n  the repository will receive maintenance fixes.\n\n- Going forward, new TFC packages will only work with TensorFlow 2.14. This is\n  due to an incompatibility introduced in the Keras version shipped with TF\n  2.15, which would require a rewrite of our layer and entropy model classes.\n\n- To ensure existing models can still be run with TF 2.15 and later, we are\n  releasing a new package\n  [tensorflow-compression-ops](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Ftensorflow_compression_ops),\n  which only contains the C++ ops. 
These will be updated as long as possible for\n  newer TF versions.\n\n  - UPDATE: Due to technical challenges in maintaining C++ custom ops with newer\n    TensorFlow releases, TF 2.18 will be the last version supported by\n    `tensorflow-compression-ops`.\n\n- Binary packages are provided for both options on pypi.org:\n  [TFC](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow-compression\u002F) and\n  [TFC ops](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow-compression-ops\u002F).\n\n\n## Documentation & getting help\n\nRefer to [the API documentation](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftfc)\nfor a complete description of the classes and functions this package implements.\n\nPlease post all questions or comments on\n[Discussions](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fdiscussions). Only file\n[Issues](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues) for actual bugs or\nfeature requests. On Discussions, you may get a faster answer, and you help\nother people find the question or answer more easily later.\n\n## Installation\n\n***Note: Precompiled packages are currently only provided for Linux and\nDarwin\u002FMac OS. To use these packages on Windows, consider installing TensorFlow\nusing the [instructions for\nWSL2](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip#windows_1) or using a [TensorFlow\nDocker image](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fdocker), and then installing\nthe Linux package.***\n\nSet up an environment in which you can install precompiled binary Python\npackages using the `pip` command. Refer to the\n[TensorFlow installation instructions](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip)\nfor more information on how to set up such a Python environment.\n\nThe current version of TensorFlow Compression requires TensorFlow 2. 
For\nversions compatible with TensorFlow 1, see our [previous\nreleases](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Freleases).\n\n### pip\n\nTo install TFC via `pip`, run the following command:\n\n```bash\npython -m pip install tensorflow-compression\n```\n\nTo test that the installation works correctly, you can run the unit tests with:\n\n```bash\npython -m tensorflow_compression.all_tests\n```\n\nOnce the command finishes, you should see a message ```OK (skipped=29)``` or\nsimilar in the last line.\n\n### Colab\n\nYou can try out TFC live in a [Colab](https:\u002F\u002Fcolab.research.google.com\u002F). The\nfollowing command installs the latest version of TFC that is compatible with the\ninstalled TensorFlow version. Run it in a cell before executing your Python\ncode:\n\n```\n%pip install tensorflow-compression~=$(pip show tensorflow | perl -p -0777 -e 's\u002F.*Version: (\\d+\\.\\d+).*\u002F\\1.0\u002Fsg')\n```\n\nNote: The binary packages of TFC are tied to TF with the same minor version\n(e.g., TFC 2.9.1 requires TF 2.9.x), and Colab sometimes lags behind a few days\nin deploying the latest version of TensorFlow. As a result, using `pip install\ntensorflow-compression` naively might attempt to upgrade TF, which can create\nproblems.\n\n### Docker\n\nTo use a Docker container (e.g. on Windows), be sure to install Docker\n(e.g., [Docker Desktop](https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop)),\nuse a [TensorFlow Docker image](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fdocker),\nand then run the `pip install` command inside the Docker container, not on the\nhost. 
For instance, you can use a command line like this:\n\n```bash\ndocker run tensorflow\u002Ftensorflow:latest bash -c \\\n    \"python -m pip install tensorflow-compression &&\n     python -m tensorflow_compression.all_tests\"\n```\n\nThis will fetch the TensorFlow Docker image if it's not already cached, install\nthe pip package and then run the unit tests to confirm that it works.\n\n### Anaconda\n\nIt seems that [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdistribution\u002F) ships its own\nbinary version of TensorFlow which is incompatible with our pip package. To\nsolve this, always install TensorFlow via `pip` rather than `conda`. For\nexample, this creates an Anaconda environment with CUDA libraries, and then\ninstalls TensorFlow and TensorFlow Compression:\n\n```bash\nconda create --name ENV_NAME python cudatoolkit cudnn\nconda activate ENV_NAME\npython -m pip install tensorflow-compression\n```\n\nDepending on the requirements of the `tensorflow` pip package, you may need to\npin the CUDA libraries to specific versions. If you aren't using a GPU, CUDA is\nof course not necessary.\n\n## Usage\n\nWe recommend importing the library from your Python code as follows:\n\n```python\nimport tensorflow as tf\nimport tensorflow_compression as tfc\n```\n\n### Using a pre-trained model to compress an image\n\nIn the\n[models directory](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fmodels),\nyou'll find a python script `tfci.py`. Download the file and run:\n\n```bash\npython tfci.py -h\n```\n\nThis will give you a list of options. Briefly, the command\n\n```bash\npython tfci.py compress \u003Cmodel> \u003CPNG file>\n```\n\nwill compress an image using a pre-trained model and write a file ending in\n`.tfci`. Execute `python tfci.py models` to give you a list of supported\npre-trained models. The command\n\n```bash\npython tfci.py decompress \u003CTFCI file>\n```\n\nwill decompress a TFCI file and write a PNG file. 
By default, an output file\nwill be named like the input file, only with the appropriate file extension\nappended (any existing extensions will not be removed).\n\n### Training your own model\n\nThe\n[models directory](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fmodels)\ncontains several implementations of published image compression models to enable\neasy experimentation. Note that in order to reproduce published results, more\ntuning of the code and training dataset may be necessary. Use the `tfci.py`\nscript above to access published models.\n\nThe following instructions talk about a re-implementation of the model published\nin:\n\n> \"End-to-end optimized image compression\"\u003Cbr \u002F>\n> J. Ballé, V. Laparra, E. P. Simoncelli\u003Cbr \u002F>\n> https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01704\n\nNote that the models directory is not contained in the pip package. The models\nare meant to be downloaded individually. Download the file `bls2017.py` and run:\n\n```bash\npython bls2017.py -h\n```\n\nThis will list the available command line options for the implementation.\nTraining can be as simple as the following command:\n\n```bash\npython bls2017.py -V train\n```\n\nThis will use the default settings. Note that unless a custom training dataset\nis provided via `--train_glob`, the\n[CLIC dataset](https:\u002F\u002Fwww.tensorflow.org\u002Fdatasets\u002Fcatalog\u002Fclic) will be\ndownloaded using TensorFlow Datasets.\n\nThe most important training parameter is `--lambda`, which controls the\ntrade-off between bitrate and distortion that the model will be optimized for.\nThe number of channels per layer is important, too: models tuned for higher\nbitrates (or, equivalently, lower distortion) tend to require transforms with a\ngreater approximation capacity (i.e. 
more channels), so to optimize performance,\nyou want to make sure that the number of channels is large enough (or larger).\nThis is described in more detail in:\n\n> \"Efficient nonlinear transforms for lossy image compression\"\u003Cbr \u002F>\n> J. Ballé\u003Cbr \u002F>\n> https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.00847\n\nIf you wish, you can monitor progress with Tensorboard. To do this, create a\nTensorboard instance in the background before starting the training, then point\nyour web browser to [port 6006 on your machine](http:\u002F\u002Flocalhost:6006):\n\n```bash\ntensorboard --logdir=\u002Ftmp\u002Ftrain_bls2017 &\n```\n\nWhen training has finished, the Python script saves the trained model to the\ndirectory specified with `--model_path` (by default, `bls2017` in the current\ndirectory) in TensorFlow's `SavedModel` format. The script can then be used to\ncompress and decompress images as follows. The same saved model must be\naccessible to both commands.\n\n```bash\npython bls2017.py [options] compress original.png compressed.tfci\npython bls2017.py [options] decompress compressed.tfci reconstruction.png\n```\n\n## Building pip packages\n\nThis section describes the necessary steps to build your own pip packages of\nTensorFlow Compression. This may be necessary to install it on platforms for\nwhich we don't provide precompiled binaries (currently only Linux and Darwin).\n\nTo be compatible with the official TensorFlow pip package, the TFC pip package\nmust be linked against a matching version of the C libraries. For this reason,\nto build the official Linux pip packages, we use [these Docker\nimages](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorflow\u002Fbuild) and use the same toolchain\nthat TensorFlow uses.\n\nInside the Docker container, the following steps need to be taken:\n\n1. Clone the `tensorflow\u002Fcompression` repo from GitHub.\n2. 
Run `tools\u002Fbuild_pip_pkg.sh` inside the cloned repo.\n\nFor example:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression.git \u002Ftensorflow_compression\ndocker run -i --rm \\\n    -v \u002Ftmp\u002Ftensorflow_compression:\u002Ftmp\u002Ftensorflow_compression\\\n    -v \u002Ftensorflow_compression:\u002Ftensorflow_compression \\\n    -w \u002Ftensorflow_compression \\\n    -e \"BAZEL_OPT=--config=manylinux_2_24_x86_64\" \\\n    tensorflow\u002Fbuild:latest-python3.10 \\\n    bash tools\u002Fbuild_pip_pkg.sh \u002Ftmp\u002Ftensorflow_compression \u003Ccustom-version>\n```\n\nFor Darwin, the Docker image and specifying the toolchain is not necessary. We\njust build the package like this (note that you may want to create a clean\nPython virtual environment to do this):\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression.git \u002Ftensorflow_compression\ncd \u002Ftensorflow_compression\nBAZEL_OPT=\"--macos_minimum_os=10.14\" bash \\\n  tools\u002Fbuild_pip_pkg.sh \\\n  \u002Ftmp\u002Ftensorflow_compression \u003Ccustom-version>\n```\n\nIn both cases, the wheel file is created inside `\u002Ftmp\u002Ftensorflow_compression`.\n\nTo test the created package, first install the resulting wheel file:\n\n```bash\npython -m pip install \u002Ftmp\u002Ftensorflow_compression\u002Ftensorflow_compression-*.whl\n```\n\nThen run the unit tests (Do not run the tests in the workspace directory where\nthe `WORKSPACE` file lives. 
In that case, the Python interpreter would attempt\nto import `tensorflow_compression` packages from the source tree, rather than\nfrom the installed package system directory):\n\n```bash\npushd \u002Ftmp\npython -m tensorflow_compression.all_tests\npopd\n```\n\nWhen done, you can uninstall the pip package again:\n\n```bash\npython -m pip uninstall tensorflow-compression\n```\n\n## Evaluation\n\nWe provide evaluation results for several image compression methods in terms of\ndifferent metrics in different colorspaces. Please see the\n[results subdirectory](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fresults\u002Fimage_compression)\nfor more information.\n\n## Citation\n\nIf you use this library for research purposes, please cite:\n```\n@software{tfc_github,\n  author = \"Ballé, Jona and Hwang, Sung Jin and Agustsson, Eirikur\",\n  title = \"{T}ensor{F}low {C}ompression: Learned Data Compression\",\n  url = \"http:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\",\n  version = \"2.14.1\",\n  year = \"2024\",\n}\n```\nIn the above BibTeX entry, names are top contributors sorted by number of\ncommits. 
Please adjust version number and year according to the version that was\nactually used.\n\nNote that this is not an officially supported Google product.\n","# TensorFlow 压缩\n\nTensorFlow Compression (TFC) 是一个为 TensorFlow 提供的数据压缩工具库。\n\n您可以使用该库构建自己的机器学习模型，并内置端到端优化的数据压缩功能。这有助于在仅牺牲少量模型性能的情况下，找到数据（如图像、特征、样本等）的高效存储表示。请参阅 [有损数据压缩教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fgenerative\u002Fdata_compression) 或 [模型压缩教程](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Foptimization\u002Fcompression)，以开始您的实践。\n\n若想从经典数据压缩的角度深入了解，可以阅读我们的 [非线性变换编码论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.03034)，或观看 @jonycgn 的 [关于学习型图像压缩的演讲](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=x_q7cZviXkY)。而从机器学习视角介绍有损数据压缩的内容，则可参考 @yiboyang 的 [综述论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06533)。\n\n该库包含以下内容（详情请参阅 [API 文档](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftfc)）：\n\n- 范围编码（又称算术编码）实现，以 C++ 编写的灵活 TF 操作形式提供。这些实现包括可选的“溢出”功能，可在范围编码的比特序列中嵌入 Elias gamma 编码，从而支持对包含所有有符号整数而非有限范围的字符集进行编码。\n  \n- 熵模型类，简化了设计率–失真优化编码的过程。在训练期间，它们充当似然模型；训练完成后，通过自动设计范围编码表并调用底层范围编码实现，将浮点张量编码为优化的比特序列。\n\n- 其他在学习型数据压缩上下文中非常有用的 TensorFlow 函数和 Keras 层，例如用于数值计算密度函数分位数的方法、基于抖动噪声取期望值的操作、具有更灵活填充选项且支持在傅里叶域中重参数化卷积核和偏置的卷积层，以及广义除法归一化（GDN）的实现。\n\n**重要更新：** 自 2024 年 2 月 1 日起，TensorFlow Compression 已进入维护模式。具体而言：\n\n- TFC 的完整功能集已冻结。不会开发新功能，但仓库将继续接收维护性修复。\n  \n- 今后，新的 TFC 包将仅兼容 TensorFlow 2.14。这是由于 TensorFlow 2.15 携带的 Keras 版本引入了不兼容性，需要重写我们的层和熵模型类。\n\n- 为确保现有模型仍能在 TensorFlow 2.15 及更高版本上运行，我们发布了新的包 [tensorflow-compression-ops](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Ftensorflow_compression_ops)，其中仅包含 C++ 操作。这些操作将尽可能地更新以支持较新的 TensorFlow 版本。\n\n  - 更新：由于在维护 C++ 自定义操作时面临的技术挑战，TF 2.18 将是 `tensorflow-compression-ops` 支持的最后一个版本。\n\n- 在 pypi.org 上同时提供了两种选项的二进制包：[TFC](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow-compression\u002F) 和 [TFC 操作](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow-compression-ops\u002F)。\n\n## 
文档与帮助\n\n有关本包所实现的类和函数的完整说明，请参阅 [API 文档](https:\u002F\u002Fwww.tensorflow.org\u002Fapi_docs\u002Fpython\u002Ftfc)。\n\n请将所有问题或评论发布到 [Discussions](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fdiscussions)。仅针对实际的 bug 或功能请求提交 [Issues](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues)。在 Discussions 上，您可能会更快得到回复，并且有助于其他人日后更轻松地找到相关问题或答案。\n\n## 安装\n\n***注意：目前仅提供适用于 Linux 和 Darwin\u002FMac OS 的预编译包。若要在 Windows 上使用这些包，建议按照 [WSL2 安装指南](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip#windows_1) 安装 TensorFlow，或使用 [TensorFlow Docker 镜像](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fdocker)，然后再安装 Linux 版本的包。***\n\n设置一个可以使用 `pip` 命令安装预编译 Python 二进制包的环境。有关如何搭建此类 Python 环境的详细信息，请参阅 [TensorFlow 安装指南](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip)。\n\n当前版本的 TensorFlow Compression 需要 TensorFlow 2。对于兼容 TensorFlow 1 的版本，请参阅我们的 [历史版本](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Freleases)。\n\n### pip\n\n要通过 `pip` 安装 TFC，运行以下命令：\n\n```bash\npython -m pip install tensorflow-compression\n```\n\n为测试安装是否正确，可以运行单元测试：\n\n```bash\npython -m tensorflow_compression.all_tests\n```\n\n命令执行完毕后，最后一行应显示类似 ```OK (skipped=29)``` 的消息。\n\n### Colab\n\n您可以在 [Colab](https:\u002F\u002Fcolab.research.google.com\u002F) 中实时试用 TFC。以下命令会安装与当前安装的 TensorFlow 版本兼容的最新 TFC 版本。在执行 Python 代码之前，在单元格中运行它：\n\n```\n%pip install tensorflow-compression~=$(pip show tensorflow | perl -p -0777 -e 's\u002F.*Version: (\\d+\\.\\d+).*\u002F\\1.0\u002Fsg')\n```\n\n注意：TFC 的二进制包与 TensorFlow 的次版本号绑定（例如，TFC 2.9.1 需要 TensorFlow 2.9.x），而 Colab 在部署最新版 TensorFlow 方面有时会滞后几天。因此，如果直接使用 `pip install tensorflow-compression`，可能会尝试升级 TensorFlow，从而引发问题。\n\n### Docker\n\n若要在 Docker 容器中使用（例如在 Windows 上），请确保已安装 Docker（如 [Docker Desktop](https:\u002F\u002Fwww.docker.com\u002Fproducts\u002Fdocker-desktop)），并使用 [TensorFlow Docker 镜像](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fdocker)。然后在 Docker 容器内运行 `pip install` 
命令，而不是在主机上。例如，可以使用如下命令：\n\n```bash\ndocker run tensorflow\u002Ftensorflow:latest bash -c \\\n    \"python -m pip install tensorflow-compression &&\n     python -m tensorflow_compression.all_tests\"\n```\n\n此命令将拉取 TensorFlow Docker 镜像（如果尚未缓存），安装 pip 包，并运行单元测试以确认其正常工作。\n\n### Anaconda\n\n似乎 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdistribution\u002F) 自带的 TensorFlow 二进制版本与我们通过 pip 发布的包不兼容。为了解决这个问题，请始终使用 `pip` 而不是 `conda` 来安装 TensorFlow。例如，以下命令会创建一个包含 CUDA 库的 Anaconda 环境，然后安装 TensorFlow 和 TensorFlow Compression：\n\n```bash\nconda create --name ENV_NAME python cudatoolkit cudnn\nconda activate ENV_NAME\npython -m pip install tensorflow-compression\n```\n\n根据 `tensorflow` pip 包的具体要求，你可能需要将 CUDA 库固定到特定版本。如果你不使用 GPU，则当然不需要 CUDA。\n\n## 使用方法\n\n我们建议在 Python 代码中按如下方式导入该库：\n\n```python\nimport tensorflow as tf\nimport tensorflow_compression as tfc\n```\n\n### 使用预训练模型压缩图像\n\n在\n[models 目录](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fmodels)\n中，你会找到一个名为 `tfci.py` 的 Python 脚本。下载该文件并运行：\n\n```bash\npython tfci.py -h\n```\n\n这将列出所有可用选项。简而言之，使用以下命令：\n\n```bash\npython tfci.py compress \u003Cmodel> \u003CPNG 文件>\n```\n\n即可利用预训练模型对图像进行压缩，并生成一个以 `.tfci` 结尾的文件。执行 `python tfci.py models` 可查看支持的预训练模型列表。而以下命令：\n\n```bash\npython tfci.py decompress \u003CTFCI 文件>\n```\n\n则可以解压缩 TFCI 文件并生成一个 PNG 文件。默认情况下，输出文件名与输入文件相同，仅添加相应的文件扩展名（现有扩展名不会被移除）。\n\n### 训练自己的模型\n\n[models 目录](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fmodels) 中包含了多个已发表的图像压缩模型实现，方便用户进行实验。需要注意的是，要复现已发表的结果，可能还需要对代码和训练数据集进行更多调优。你可以使用上述 `tfci.py` 脚本来访问这些公开模型。\n\n以下说明介绍了一种对以下论文中提出的模型的重新实现：\n\n> “端到端优化的图像压缩”\u003Cbr \u002F>\n> J. Ballé, V. Laparra, E. P. 
Simoncelli\u003Cbr \u002F>\n> https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01704\n\n请注意，models 目录并不包含在 pip 包中。这些模型需要单独下载。下载 `bls2017.py` 文件并运行：\n\n```bash\npython bls2017.py -h\n```\n\n这将列出该实现的所有可用命令行选项。训练过程可以非常简单，例如：\n\n```bash\npython bls2017.py -V train\n```\n\n此命令将使用默认设置。请注意，除非通过 `--train_glob` 提供自定义训练数据集，否则将会使用 TensorFlow Datasets 下载\n[CLIC 数据集](https:\u002F\u002Fwww.tensorflow.org\u002Fdatasets\u002Fcatalog\u002Fclic)。\n\n最重要的训练参数是 `--lambda`，它控制着模型在比特率和失真之间的权衡。每层的通道数也很重要：针对较高比特率（或等价地，较低失真）优化的模型通常需要具有更高逼近能力的变换（即更多的通道）。因此，为了优化性能，你需要确保通道数足够多（或更多）。这一点在以下文献中也有详细描述：\n\n> “用于有损图像压缩的有效非线性变换”\u003Cbr \u002F>\n> J. Ballé\u003Cbr \u002F>\n> https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.00847\n\n如果需要，你可以使用 TensorBoard 监控训练进度。为此，在开始训练之前先在后台启动一个 TensorBoard 实例，然后在浏览器中访问你机器上的 [6006 端口](http:\u002F\u002Flocalhost:6006)：\n\n```bash\ntensorboard --logdir=\u002Ftmp\u002Ftrain_bls2017 &\n```\n\n训练完成后，Python 脚本会将训练好的模型保存到由 `--model_path` 指定的目录中（默认为当前目录下的 `bls2017`），格式为 TensorFlow 的 `SavedModel`。之后，你可以使用该脚本进行图像的压缩和解压缩操作。两个命令必须能够访问同一个保存的模型。\n\n```bash\npython bls2017.py [options] compress original.png compressed.tfci\npython bls2017.py [options] decompress compressed.tfci reconstruction.png\n```\n\n## 构建 pip 包\n\n本节介绍了构建你自己的 TensorFlow Compression pip 包所需的步骤。这可能对于在我们尚未提供预编译二进制文件的平台上安装该库的情况是必要的（目前仅支持 Linux 和 Darwin）。\n\n为了与官方 TensorFlow pip 包兼容，TFC 的 pip 包必须链接到匹配版本的 C 库。因此，在构建官方的 Linux 版本时，我们使用 [这些 Docker 镜像](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorflow\u002Fbuild)，并采用与 TensorFlow 相同的工具链。\n\n在 Docker 容器内，需要执行以下步骤：\n\n1. 从 GitHub 克隆 `tensorflow\u002Fcompression` 仓库。\n2. 
在克隆的仓库中运行 `tools\u002Fbuild_pip_pkg.sh`。\n\n例如：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression.git \u002Ftensorflow_compression\ndocker run -i --rm \\\n    -v \u002Ftmp\u002Ftensorflow_compression:\u002Ftmp\u002Ftensorflow_compression\\\n    -v \u002Ftensorflow_compression:\u002Ftensorflow_compression \\\n    -w \u002Ftensorflow_compression \\\n    -e \"BAZEL_OPT=--config=manylinux_2_24_x86_64\" \\\n    tensorflow\u002Fbuild:latest-python3.10 \\\n    bash tools\u002Fbuild_pip_pkg.sh \u002Ftmp\u002Ftensorflow_compression \u003Ccustom-version>\n```\n\n对于 Darwin 系统，无需使用 Docker 镜像或指定工具链。我们可以直接构建包，如下所示（建议先创建一个干净的 Python 虚拟环境来完成此操作）：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression.git \u002Ftensorflow_compression\ncd \u002Ftensorflow_compression\nBAZEL_OPT=\"--macos_minimum_os=10.14\" bash \\\n  tools\u002Fbuild_pip_pkg.sh \\\n  \u002Ftmp\u002Ftensorflow_compression \u003Ccustom-version>\n```\n\n在这两种情况下，wheel 文件都会在 `\u002Ftmp\u002Ftensorflow_compression` 目录下生成。\n\n要测试生成的包，首先安装生成的 wheel 文件：\n\n```bash\npython -m pip install \u002Ftmp\u002Ftensorflow_compression\u002Ftensorflow_compression-*.whl\n```\n\n然后运行单元测试（请勿在包含 `WORKSPACE` 文件的工作目录中运行测试。否则，Python 解释器会尝试从源代码树中导入 `tensorflow_compression` 包，而不是从已安装的包系统目录中导入）：\n\n```bash\npushd \u002Ftmp\npython -m tensorflow_compression.all_tests\npopd\n```\n\n测试完成后，你可以再次卸载该 pip 包：\n\n```bash\npython -m pip uninstall tensorflow-compression\n```\n\n## 评估\n\n我们提供了多种图像压缩方法在不同色彩空间下、基于不同指标的评估结果。更多信息请参阅\n[results 子目录](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Ftree\u002Fmaster\u002Fresults\u002Fimage_compression)。\n\n## 引用\n\n如果您将本库用于研究目的，请引用以下文献：\n```\n@software{tfc_github,\n  author = {Ballé, Jona and Hwang, Sung Jin and Agustsson, Eirikur},\n  title = {{T}ensor{F}low {C}ompression: Learned Data Compression},\n  url = {http:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression},\n  version = {2.14.1},\n  year = {2024},\n}\n```\n在上述 BibTeX 
条目中，作者按提交次数排序，列出了主要贡献者。请根据实际使用的版本号和年份进行相应调整。\n\n请注意，本项目并非 Google 官方支持的产品。","# TensorFlow Compression 快速上手指南\n\nTensorFlow Compression (TFC) 是一个用于构建端到端优化数据压缩模型的 TensorFlow 库。它可以帮助你在仅牺牲少量模型性能的前提下，找到数据（如图像、特征等）的高效存储表示。\n\n> **重要提示**：自 2024 年 2 月 1 日起，TFC 进入**维护模式**。功能已冻结，不再开发新特性。新版包仅支持 **TensorFlow 2.14**。若需使用 TF 2.15+，请单独安装 `tensorflow-compression-ops`（仅包含 C++ 算子），但该组件最高仅支持到 TF 2.18。\n\n## 环境准备\n\n*   **操作系统**：预编译包目前仅支持 **Linux** 和 **macOS (Darwin)**。\n    *   *Windows 用户建议*：使用 WSL2 或 TensorFlow Docker 镜像来运行 Linux 版本。\n*   **Python 环境**：需要配置好支持 `pip` 安装的 Python 环境。\n*   **核心依赖**：\n    *   **TensorFlow 2.14**（推荐，完整功能支持）。\n    *   若使用 TF 2.15 - 2.18，仅支持底层算子功能。\n*   **注意**：如果你使用 Anaconda，请务必通过 `pip` 安装 TensorFlow，不要使用 `conda install tensorflow`，以避免二进制兼容性问题。\n\n## 安装步骤\n\n### 方法一：使用 pip 安装（推荐）\n\n确保已安装 compatible 版本的 TensorFlow（建议 2.14），然后运行：\n\n```bash\npython -m pip install tensorflow-compression\n```\n\n**验证安装：**\n运行单元测试确认安装成功：\n\n```bash\npython -m tensorflow_compression.all_tests\n```\n若最后一行显示 `OK (skipped=...)` 或类似信息，则表示安装成功。\n\n### 方法二：在 Google Colab 中使用\n\n在 Colab 单元格中运行以下命令，它会自动匹配当前环境的 TensorFlow 版本安装对应的 TFC：\n\n```bash\n%pip install tensorflow-compression~=$(pip show tensorflow | perl -p -0777 -e 's\u002F.*Version: (\\d+\\.\\d+).*\u002F\\1.0\u002Fsg')\n```\n\n### 方法三：使用 Docker\n\n适用于 Windows 用户或需要隔离环境的场景：\n\n```bash\ndocker run tensorflow\u002Ftensorflow:latest bash -c \\\n    \"python -m pip install tensorflow-compression &&\n     python -m tensorflow_compression.all_tests\"\n```\n\n## 基本使用\n\n### 1. 导入库\n\n在 Python 代码中按以下方式导入：\n\n```python\nimport tensorflow as tf\nimport tensorflow_compression as tfc\n```\n\n### 2. 
使用预训练模型压缩图像\n\nTFC 提供了脚本 `tfci.py` 用于快速测试（需从 GitHub 仓库的 `models` 目录下载该文件）。\n\n**查看帮助与支持的模型：**\n```bash\npython tfci.py -h\npython tfci.py models\n```\n\n**压缩图像：**\n将 `\u003Cmodel>` 替换为支持的模型名称（如 `bmshj2018-factorized`），`\u003Cinput.png>` 替换为你的图片路径。\n```bash\npython tfci.py compress \u003Cmodel> \u003Cinput.png>\n```\n这将生成一个 `.tfci` 结尾的压缩文件。\n\n**解压图像：**\n```bash\npython tfci.py decompress \u003Coutput.tfci>\n```\n这将还原出一张 PNG 图片。\n\n### 3. 训练自定义模型\n\n你可以基于官方提供的复现模型（如 Ballé et al. 2017）进行训练。需先从 GitHub 下载对应的脚本（例如 `bls2017.py`）。\n\n**开始训练：**\n以下命令使用默认设置和 CLIC 数据集（自动下载）进行训练：\n```bash\npython bls2017.py -V train\n```\n\n**关键参数说明：**\n*   `--lambda`：控制码率（bitrate）与失真（distortion）之间的权衡。\n*   `--train_glob`：指定自定义训练数据集路径。\n*   `--model_path`：指定训练完成后模型保存的路径。\n\n**监控训练进度：**\n启动 TensorBoard 查看训练曲线：\n```bash\ntensorboard --logdir=\u002Ftmp\u002Ftrain_bls2017 &\n```\n然后在浏览器访问 `http:\u002F\u002Flocalhost:6006`。\n\n**使用训练好的模型：**\n训练结束后，使用生成的 SavedModel 进行压缩和解压：\n```bash\n# 压缩\npython bls2017.py [options] compress original.png compressed.tfci\n\n# 解压\npython bls2017.py [options] decompress compressed.tfci reconstruction.png\n```","某医疗影像初创团队正在构建基于 TensorFlow 的肺部 CT 扫描异常检测系统，面临海量高分辨率图像数据的存储与传输瓶颈。\n\n### 没有 compression 时\n- 原始浮点型张量直接落盘占用巨大存储空间，导致云存储成本随数据量激增而难以控制。\n- 在边缘设备与云端服务器间传输未压缩模型权重和特征图时，网络带宽成为训练和推理速度的主要瓶颈。\n- 若手动集成传统压缩算法（如 ZIP 或 JPEG），会破坏端到端的梯度传播，无法针对特定任务优化“码率 - 失真”平衡。\n- 缺乏自动化的熵编码支持，开发人员需耗费大量精力编写底层比特流处理逻辑，且难以支持有符号整数等复杂数据类型。\n\n### 使用 compression 后\n- 利用 entropy model classes 自动设计范围编码表，将浮点张量转换为优化比特序列，在仅牺牲微小检测精度的前提下大幅缩减存储体积。\n- 内置的 C++ Range coding 算子实现了端到端可微的压缩流程，让模型能自动学习如何在低码率下保留对病灶识别最关键的特征信息。\n- 借助广义除法归一化（GDN）等专用层，显著提升了非线性变换编码效率，使得边缘设备上传数据的速度提升数倍。\n- 无需重复造轮子即可处理包含全量有符号整数的复杂数据分布，通过可选的“溢出”功能轻松应对各种医学影像数据格式。\n\ncompression 的核心价值在于让开发者能在 TensorFlow 
生态内直接构建“感知优化”的压缩系统，以极小的性能代价换取存储与传输效率的数量级提升。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorflow_compression_95a7cfb9.png","tensorflow","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftensorflow_07ed5093.png","",null,"github-admin@tensorflow.org","http:\u002F\u002Fwww.tensorflow.org","https:\u002F\u002Fgithub.com\u002Ftensorflow",[81,85,89,93,97],{"name":82,"color":83,"percentage":84},"Python","#3572A5",59.5,{"name":86,"color":87,"percentage":88},"C++","#f34b7d",25.5,{"name":90,"color":91,"percentage":92},"Jupyter Notebook","#DA5B0B",12.7,{"name":94,"color":95,"percentage":96},"Starlark","#76d275",1.6,{"name":98,"color":99,"percentage":100},"Shell","#89e051",0.8,913,260,"2026-04-17T20:05:57","Apache-2.0","Linux, macOS","非必需。若使用 GPU 加速，需安装 CUDA 和 cuDNN（具体版本取决于 TensorFlow 要求），README 未指定具体显存大小或显卡型号。","未说明",{"notes":109,"python":110,"dependencies":111},"1. Windows 用户需通过 WSL2 或 Docker 运行 Linux 环境才能使用预编译包。\n2. 自 2024 年 2 月 1 日起，该库进入维护模式，完整功能包仅支持 TensorFlow 2.14。\n3. 若需使用 TensorFlow 2.15-2.18，需单独安装 'tensorflow-compression-ops' 包（仅含 C++ 算子）。\n4. 在 Anaconda 环境中，必须通过 pip 而非 conda 安装 TensorFlow 以避免兼容性问题。\n5. 
训练默认会使用 CLIC 数据集（自动下载）。","3.8+ (基于 Docker 镜像 tensorflow\u002Fbuild:latest-python3.10 推断，需配合 TensorFlow 2.x)",[112,113,114],"tensorflow>=2.14","numpy","scipy",[14,16],[64,117,73,118,119,120,121,122,123],"data-compression","machine-learning","python","deep-learning","deep-neural-networks","neural-network","ml","2026-03-27T02:49:30.150509","2026-04-18T14:24:33.467808",[127,132,136,141,145,150],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},39661,"如何在 TensorFlow 2.x 中解决熵模型（Entropy Models）在急切执行（Eager Execution）模式下的兼容性问题？","旧的 `entropy_models.py` 中的实现不适用于 TensorFlow 2.0 及更高版本。请使用位于 `tensorflow_compression\u002Fpython\u002Fentropy_models` 目录下的新实现。例如，可以参考 `models\u002Fms2020.py` 中的用法。此外，确保安装支持 TF 2.4+ 的 beta 版本（如 v2.0b2 或更高），旧版本的 `add_loss` 问题在新实现中已修复。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues\u002F9",{"id":133,"question_zh":134,"answer_zh":135,"source_url":131},39662,"针对不同版本的 TensorFlow，应该安装哪个版本的 tensorflow-compression？","请根据以下对应关系进行安装：\n1. 稳定版（TensorFlow 1.15）：运行 `pip install tensorflow==1.15 tensorflow-compression==1.3`\n2. 
Beta 版（TensorFlow 2.3\u002F2.4+）：运行 `pip install tensorflow==2.3 tensorflow-compression==2.0b1`（或更高版本）。\n后者支持 Python 3.6 到 3.8（MacOS 和 Linux）。",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},39663,"为什么在 Python 3.7 环境下使用 pip 安装 tensorflow-compression 会失败？","这通常是因为版本不匹配或环境问题。如果在使用 Conda 且 Python 为 3.7 时遇到“找不到满足要求的版本”错误，尝试将 Python 降级至 3.6 通常可以解决问题。此外，确保你的 TensorFlow 版本与所需的 tensorflow-compression 版本兼容（参见安装说明中的版本对应表）。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues\u002F28",{"id":142,"question_zh":143,"answer_zh":144,"source_url":140},39664,"tensorflow-compression 支持 Windows 操作系统吗？","目前官方提供的 pip 安装包仅支持 MacOS 和 Linux，暂不支持 Windows。Windows 用户可以尝试通过安装 Docker 或使用 Windows 子系统（WSL）来运行该库，但官方不提供针对这些环境的具体指导。",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},39665,"导入 tensorflow_compression 时出现 'NotFoundError: libtensorflow_compression.so' 错误或要求 TensorFlow 2.1 的错误怎么办？","这通常是因为你正在尝试导入源代码中的开发版本（master），而该版本是为较新的 TensorFlow（如 2.1+）开发的，且缺少编译好的共享库文件。\n解决方案：\n1. 推荐直接使用 pip 安装预编译的稳定包：`pip install tensorflow-compression==1.3`（配合 TF 1.15）。\n2. 
如果必须使用源码，请确保安装了匹配的 TensorFlow 版本（如 2.1），并自行编译库文件，但这通常比较复杂，不建议普通用户操作。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues\u002F30",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},39666,"运行测试脚本时出现 'AttributeError: _name_scope' 或 'TypeError: add_variable() got an unexpected keyword argument' 错误是什么原因？","这些错误通常是由于 TensorFlow 版本不兼容导致的（例如在 TF 1.8 上运行了需要更新版本 API 的代码）。维护者确认在正确配置的环境下（参考 README 中的安装说明并使用兼容的 TF 版本），瓶颈层（bottleneck layer）和测试脚本是可以正常工作的。请检查是否使用了推荐的 TensorFlow 版本，并确保按照 README 指示正确保存和加载图形（graph）。","https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fcompression\u002Fissues\u002F1",[156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246,251],{"id":157,"version":158,"summary_zh":159,"released_at":160},315612,"v2.11.0","版本 2.11.0 改进了 `estimate_tails` 的行为，并引入了新的游程编码操作，同时支持戈伦布–赖斯编码和埃利亚斯伽马编码。此外，还修复了若干问题。\n\n本版本要求 TensorFlow 2.11 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且 Python 版本为 3.7–3.10，请通过以下命令安装预编译的二进制包：\n```sh\npip install tensorflow-compression==2.11.0\n```","2022-11-23T02:02:13",{"id":162,"version":163,"summary_zh":164,"released_at":165},315613,"v2.10.0","版本 2.10.0 改进了 `run_length_gamma_encode`\u002F`decode` 操作的错误处理及其他方面。\n\n此版本要求 TensorFlow 2.10 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且 Python 版本为 3.7–3.10，请通过以下命令安装预编译的二进制包：\n```sh\npip install tensorflow-compression==2.10.0\n```","2022-09-07T15:27:20",{"id":167,"version":168,"summary_zh":169,"released_at":170},315614,"v2.9.2","版本 2.9.2 是一个维护版本，修复了在某些系统上 Linux wheel 文件存在的问题（问题 #148），同时还包含其他一些小的改进。\n\n此版本需要 TensorFlow 2.9 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）操作系统，并且 Python 版本为 3.7–3.10，请通过运行以下命令安装预编译的二进制包：\n```sh\npip install tensorflow-compression==2.9.2\n```","2022-08-13T06:18:43",{"id":172,"version":173,"summary_zh":174,"released_at":175},315615,"v2.9.1","Release 2.9.1 is a maintenance release that only updates documentation.\r\n\r\nThis release requires TensorFlow 2.9 and 
TensorFlow Probability 0.15.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.7–3.10, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.9.1\r\n```\r\n","2022-05-30T18:44:48",{"id":177,"version":178,"summary_zh":179,"released_at":180},315616,"v2.9.0","Release 2.9.0 adds a new `PowerLawEntropyModel` class to be used as a rate penalty for the run-length gamma coding ops.\r\n\r\nThis release requires TensorFlow 2.9 and TensorFlow Probability 0.15.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.7–3.10, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.9.0\r\n```\r\n","2022-05-16T22:44:20",{"id":182,"version":183,"summary_zh":184,"released_at":185},315617,"v2.8.1","Release 2.8.1 fixes a bug in mixed precision training, and adds a new entropy coding op that relies on Elias gamma and run-length coding.\r\n\r\nThis release requires TensorFlow 2.8 and TensorFlow Probability 0.15.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.7–3.10, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.8.1\r\n```\r\n","2022-03-02T19:12:18",{"id":187,"version":188,"summary_zh":189,"released_at":190},315618,"v2.8.0","Release 2.8.0 adds support for mixed precision training (for details, see commit c20abdbb0906cab39aeb5078c0e17789e37994ee).\r\n\r\nThis release requires TensorFlow 2.8 and TensorFlow Probability 0.15.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.7–3.10, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.8.0\r\n```\r\n","2022-02-09T15:42:33",{"id":192,"version":193,"summary_zh":194,"released_at":195},315605,"v2.17.0","版本 2.17.0 与版本 2.16.0 完全相同。\n\n请注意，随着 TensorFlow 2.17.0 的发布，TF 团队已停止对基于 x86（Intel）架构的 macOS\u002FDarwin 系统的支持。由于 TFC 从未移植到新的 ARM 架构，我们也将从本版本开始停止对 macOS\u002FDarwin 的支持。\n\n如果您使用的是 Linux 系统且 Python 版本为 3.9–3.12，请通过以下命令安装预编译的二进制包：\n\n```sh\npip install 
tensorflow-compression-ops==2.17.0\n```","2024-08-07T20:25:13",{"id":197,"version":198,"summary_zh":199,"released_at":200},315606,"v2.16.0","版本 2.16.0 与版本 2.15.0 完全相同。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且安装了 Python 3.9 至 3.12，请通过运行以下命令来安装预编译的二进制文件：\n\n```sh\npip install tensorflow-compression-ops==2.16.0\n```","2024-03-27T19:52:35",{"id":202,"version":203,"summary_zh":204,"released_at":205},315607,"v2.15.0","版本 2.15.0 与版本 2.14.1 完全相同。\n\n然而，从 TensorFlow 2.15 开始，TensorFlow Compression 的二进制包将仅包含低层级（C++）操作，几乎不包含任何 Python 代码。这是因为在 TensorFlow 2.15 中引入了 Keras 的一项不兼容变更，该变更需要我们对层和熵模型类进行彻底重写。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且安装了 Python 3.9 至 3.11，请通过以下命令安装预编译的二进制包：\n\n```sh\npip install tensorflow-compression-ops==2.15.0\n```","2024-02-02T01:53:52",{"id":207,"version":208,"summary_zh":209,"released_at":210},315608,"v2.14.1","版本 2.14.1 是一个维护版本，不包含任何新功能。\n\n它需要 TensorFlow 2.14 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且 Python 版本为 3.9–3.11，请通过运行以下命令安装预编译的二进制包：\n\n```sh\npip install tensorflow-compression==2.14.1\n```","2024-02-02T01:42:40",{"id":212,"version":213,"summary_zh":214,"released_at":215},315609,"v2.14.0","版本 2.14.0 允许 y4m 数据集读取标记了不同色度偏移的 C420 色彩空间格式。\n\n版本 2.14.0 已不再支持 Python 3.8，因为 TensorFlow 2.14 也不再提供对该版本 Python 的支持。\n\n该版本需要 TensorFlow 2.14 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且安装了 Python 3.9 至 3.11，请通过以下命令安装预编译的二进制包：\n\n```sh\npip install tensorflow-compression==2.14.0\n```","2023-10-13T19:56:17",{"id":217,"version":218,"summary_zh":219,"released_at":220},315610,"v2.13.0","版本 2.13.0 是一个维护版本，不包含任何新功能。\n\n它需要 TensorFlow 2.13 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 Darwin（Mac OS）系统，并且 Python 版本为 3.8–3.11，请通过运行以下命令安装预编译的二进制包：\n```sh\npip install tensorflow-compression==2.13.0\n```","2023-07-26T23:21:33",{"id":222,"version":223,"summary_zh":224,"released_at":225},315611,"v2.12.0","版本 2.12.0 新增了一个用于随机舍入的运算符。此外，还修复了若干问题。\n\n此版本要求 TensorFlow 2.12 和 TensorFlow Probability 0.15。\n\n如果您使用的是 Linux 或 
Darwin（Mac OS）系统，并且 Python 版本为 3.8–3.11，请通过以下命令安装预编译的二进制包：\n```sh\npip install tensorflow-compression==2.12.0\n```","2023-03-23T21:44:50",{"id":227,"version":228,"summary_zh":229,"released_at":230},315619,"v2.7.0","Release 2.7.0 is the first release using a new automated build system, which streamlines the process of building `pip` packages.\r\n\r\nAs part of that, we are also synchronizing the versioning of TFC with TensorFlow, so that it is clearer which TF package needs to be installed (TFC version `x.y.z` now requires TF version `x.y.*`).\r\n\r\nThe main technical updates to this release include:\r\n- Commit 61e7977a6e084fc60359cdb2e3f1005b475c7f1f introduces a new, more general range coder op. This is now used throughout the library. The old ops are still contained in the library, so that older pre-trained models can be run, but all development going forward should use the new implementation.\r\n- Commits edb8df57bcb0c522570a400e7eab7ca83d664d3b and 49fe704d83a3d9d9f6acb214b84732fb8d6e780a revise the handling of quantization offsets, and refactor the entropy model classes in the process. 
This removes quite a bit of complexity, and possibly solves improper handling of some edge cases.\r\n\r\nThis release requires TensorFlow 2.7 and TensorFlow Probability 0.15.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.7–3.9, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.7.0\r\n```\r\n","2022-01-26T17:13:57",{"id":232,"version":233,"summary_zh":234,"released_at":235},315620,"v2.2","Release 2.2 introduces a new `tf.Dataset` that can read YUV4MPEG (.y4m) files, adds support for stateless entropy models, reimplements RDFT kernel parameterization using `tf.signal`, and adds support for Python 3.9.\r\n\r\nThis release requires TensorFlow 2.5 and a compatible version of TensorFlow Probability.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.6–3.9, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.2\r\n```\r\n","2021-05-14T00:38:32",{"id":237,"version":238,"summary_zh":239,"released_at":240},315621,"v2.1","Release 2.1 supports TensorFlow 2, eager mode, and a redesigned and more flexible implementation of the entropy models.\r\n\r\nThis release requires a development version of TensorFlow 2.5 (`tf-nightly==2.5.0.dev*`) and a compatible version of TensorFlow Probability. 
It replaces release 2.0, which was found to have incompatibilities with TF 2.4.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.6–3.8, install the pre-compiled binary by running:\r\n```sh\r\npip install tf-nightly==2.5.0.dev20210312 tensorflow-compression==2.1\r\n```\r\n","2021-03-11T09:05:35",{"id":242,"version":243,"summary_zh":244,"released_at":245},315622,"v2.0","Release 2.0 was meant to support TensorFlow 2, eager mode, and a redesigned and more flexible implementation of the entropy models.\r\n\r\nThis release requires TensorFlow 2.4 and a compatible version of TensorFlow Probability.\r\n\r\nDue to incompatibilities between TFC and the TF 2.4 release (issue #71), we don't recommend using this release. It was replaced by 2.1.","2021-03-06T04:46:21",{"id":247,"version":248,"summary_zh":249,"released_at":250},315623,"v2.0b2","Release 2.0 will support TensorFlow 2, eager mode, and a redesigned and more flexible implementation of the entropy models.\r\n\r\nThis release requires TensorFlow 2.4 and a compatible version of TensorFlow Probability.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.6–3.8, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.0b2\r\n```\r\n\r\nThis is a pre-release. 
Please note that the code may fail in unexpected ways.\r\n\r\nOutstanding issues to be resolved for 2.0:\r\n- Adjustments in the interface of the new entropy model classes may still happen until the final release.\r\n- Not all example models in the models\u002F directory have been ported to the new code yet.","2021-01-06T01:50:06",{"id":252,"version":253,"summary_zh":254,"released_at":255},315624,"v2.0b1","Release 2.0 will support TensorFlow 2, eager mode, and a redesigned and more flexible implementation of the entropy models.\r\n\r\nThis release requires TensorFlow 2.3 and a compatible version of TensorFlow Probability.\r\n\r\nIf you're on Linux or Darwin (Mac OS) and Python 3.6–3.8, install the pre-compiled binary by running:\r\n```sh\r\npip install tensorflow-compression==2.0b1\r\n```\r\n\r\nThis is a pre-release. Please note that the code may fail in unexpected ways.\r\n\r\nOutstanding issues to be resolved for 2.0:\r\n- Adjustments in the interface of the new entropy model classes may still happen until the final release.\r\n- Example models in the models\u002F directory have not been ported to the new code yet.","2020-12-04T19:38:43"]
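上文的使用场景部分提到，熵模型（entropy model classes）通过"码率（bitrate）- 失真（distortion）"权衡来学习压缩表示，训练脚本中的 `--lambda` 参数正是控制这一权衡的系数。下面是一个仅依赖 Python 标准库的概念性示意（并非 tfc 的实际 API；`gaussian_pmf`、`rate_bits`、`rd_loss` 等名称均为本示例假设），演示如何用离散化为单位宽度区间的高斯分布估计整数符号的理想码长，并计算率失真拉格朗日量 R + λD：

```python
import math

def gaussian_pmf(i, scale):
    """整数区间 i 在零均值高斯分布下的概率质量（示例假设的辅助函数）：
    P(i) = Phi((i+0.5)/scale) - Phi((i-0.5)/scale)。"""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi((i + 0.5) / scale) - phi((i - 0.5) / scale)

def rate_bits(symbols, scale):
    """理想码长（比特数）：对每个符号累加 -log2 P(symbol)。
    范围编码器的实际输出长度会逼近这一理论下界。"""
    return sum(-math.log2(gaussian_pmf(s, scale)) for s in symbols)

def rd_loss(latents, scale, lam):
    """率失真拉格朗日量 R + lambda * D，量化方式为四舍五入到整数。
    lam 越大，越倾向于保真；lam 越小，越倾向于省比特。"""
    quantized = [round(y) for y in latents]
    rate = rate_bits(quantized, scale)
    distortion = sum((y - q) ** 2 for y, q in zip(latents, quantized)) / len(latents)
    return rate + lam * distortion

# 一组假设的连续潜变量：离中心越远的符号概率越低，编码代价越高
latents = [0.2, -1.4, 3.7, 0.0, -0.6]
print(rd_loss(latents, scale=2.0, lam=100.0))
```

在 tfc 中，这一角色由可微的熵模型类承担：模型在训练时直接对码率估计反向传播，从而端到端地学习变换与量化，而不是像本示例这样只做事后估算。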