[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-huggingface--exporters":3,"tool-huggingface--exporters":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",143909,2,"2026-04-07T11:33:18",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 
Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":72,"owner_website":77,"owner_url":78,"languages":79,"stars":84,"forks":85,"last_commit_at":86,"license":87,"difficulty_score":32,"env_os":88,"env_gpu":89,"env_ram":90,"env_deps":91,"category_tags":96,"github_topics":97,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":106,"updated_at":107,"faqs":108,"releases":137},5088,"huggingface\u002Fexporters","exporters","Export Hugging Face models to Core ML and TensorFlow Lite","exporters 是 Hugging Face 团队推出的一款开源工具，旨在帮助开发者将基于 PyTorch、TensorFlow 或 JAX 构建的 Transformers 模型轻松转换为苹果生态专用的 Core ML 格式。它主要解决了大语言模型和视觉模型从训练环境部署到 iOS、macOS 等移动端设备时的格式兼容难题，让算法工程师无需手动编写复杂的转换脚本，即可实现模型的端侧推理。\n\n这款工具特别适合需要在苹果设备上部署 AI 应用的移动开发者和机器学习研究人员。其核心亮点在于与 Hugging Face Transformers 库的深度集成，内置了针对 BERT、GPT-2、ViT、MobileBERT 等多种主流架构的预置配置，支持一键导出。此外，exporters 还提供了一个无需代码的在线转换空间，用户可先在线验证模型兼容性，成功后再下载权重，极大降低了尝试门槛。需要注意的是，由于 Transformer 模型通常较大，官方建议结合 Optimum 工具先对模型进行优化，以确保在移动设备上的运行效率。","\u003C!---\nCopyright 2022 The HuggingFace Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n-->\n\n# 🤗 Exporters\n\n👷 **WORK IN PROGRESS** 👷\n\nThis package lets you export 🤗 Transformers models to Core ML.\n\n> For converting models to TFLite, we recommend using [Optimum](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Foptimum\u002Fexporters\u002Ftflite\u002Fusage_guides\u002Fexport_a_model).\n\n## When to use 🤗 Exporters\n\n🤗 Transformers models are implemented in PyTorch, TensorFlow, or JAX. However, for deployment you might want to use a different framework such as Core ML. 
This library makes it easy to convert Transformers models to this format.\n\nThe aim of the Exporters package is to be more convenient than writing your own conversion script with *coremltools* and to be tightly integrated with the 🤗 Transformers library and the Hugging Face Hub.\n\nFor an even more convenient approach, `Exporters` powers a [no-code transformers to Core ML conversion Space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhuggingface-projects\u002Ftransformers-to-coreml). You can try it out without installing anything to check whether the model you are interested in can be converted. If conversion succeeds, the converted Core ML weights will be pushed to the Hub. For additional flexibility and details about the conversion process, please read on.\n\nNote: Keep in mind that Transformer models are usually quite large and are not always suitable for use on mobile devices. It might be a good idea to [optimize the model for inference](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Foptimum) first using 🤗 Optimum.\n\n## Installation\n\nClone this repo:\n\n```bash\n$ git clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters.git\n```\n\nInstall it as a Python package:\n\n```bash\n$ cd exporters\n$ pip install -e .\n```\n\nAll done!\n\nNote: The Core ML exporter can be used from Linux but macOS is recommended.\n\n## Core ML\n\n[Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002Fcore-ml\u002F) is Apple's software library for fast on-device model inference with neural networks and other types of machine learning models. It can be used on macOS, iOS, tvOS, and watchOS, and is optimized for using the CPU, GPU, and Apple Neural Engine. Although the Core ML framework is proprietary, the Core ML file format is an open format.\n\nThe Core ML exporter uses [coremltools](https:\u002F\u002Fcoremltools.readme.io\u002Fdocs) to perform the conversion from PyTorch or TensorFlow to Core ML.\n\nThe `exporters.coreml` package enables you to convert model checkpoints to a Core ML model by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures.\n\nReady-made configurations include the following architectures:\n\n- BEiT\n- BERT\n- ConvNeXT\n- CTRL\n- CvT\n- DistilBERT\n- DistilGPT2\n- GPT2\n- LeViT\n- MobileBERT\n- MobileViT\n- SegFormer\n- SqueezeBERT\n- Vision Transformer (ViT)\n- YOLOS\n\n\u003C!-- TODO: automatically generate this list -->\n\n[See here](MODELS.md) for a complete list of supported models.\n\n### Exporting a model to Core ML\n\n\u003C!--\nTo export a 🤗 Transformers model to Core ML, you'll first need to install some extra dependencies:\n\n``bash\npip install transformers[coreml]\n``\n\nThe `transformers.coreml` package can then be used as a Python module:\n-->\n\nThe `exporters.coreml` package can be used as a Python module from the command line. To export a checkpoint using a ready-made configuration, do the following:\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased exported\u002F\n```\n\nThis exports a Core ML version of the checkpoint defined by the `--model` argument. In this example it is `distilbert-base-uncased`, but it can be any checkpoint on the Hugging Face Hub or one that's stored locally.\n\nThe resulting Core ML file will be saved to the `exported` directory as `Model.mlpackage`. 
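\n\nFor a quick sanity check of the result without opening Xcode, the package can be loaded back with `coremltools`. This is a minimal sketch, not part of the exporter itself; it assumes `coremltools` is installed and uses the `exported\u002FModel.mlpackage` path from the command above:\n\n```python\nimport coremltools as ct\n\n# Load the exported package and print its inputs, outputs, and metadata.\n# As noted below, running predictions with it still requires macOS.\nmlmodel = ct.models.MLModel(\"exported\u002FModel.mlpackage\")\nprint(mlmodel)\n```\n\n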
Instead of a directory you can specify a filename, such as `DistilBERT.mlpackage`.\n\nIt's normal for the conversion process to output many warning messages and other logging information. You can safely ignore these. If all went well, the export should conclude with the following logs:\n\n```bash\nValidating Core ML model...\n\t-[✓] Core ML model output names match reference model ({'last_hidden_state'})\n\t- Validating Core ML model output \"last_hidden_state\":\n\t\t-[✓] (1, 128, 768) matches (1, 128, 768)\n\t\t-[✓] all values close (atol: 0.0001)\nAll good, model saved at: exported\u002FModel.mlpackage\n```\n\nNote: While it is possible to export models to Core ML on Linux, the validation step will only be performed on Mac, as it requires the Core ML framework to run the model.\n\nThe resulting file is `Model.mlpackage`. This file can be added to an Xcode project and be loaded into a macOS or iOS app.\n\nThe exported Core ML models use the **mlpackage** format with the **ML Program** model type. This format was introduced in 2021 and requires at least iOS 15, macOS 12.0, and Xcode 13. We prefer to use this format as it is the future of Core ML. The Core ML exporter can also make models in the older `.mlmodel` format, but this is not recommended.\n\nThe process is identical for TensorFlow checkpoints on the Hub. For example, you can export a pure TensorFlow checkpoint from the [Keras organization](https:\u002F\u002Fhuggingface.co\u002Fkeras-io) as follows:\n\n```bash\npython -m exporters.coreml --model=keras-io\u002Ftransformers-qa exported\u002F\n```\n\nTo export a model that's stored locally, you'll need to have the model's weights and tokenizer files stored in a directory. For example, we can load and save a checkpoint as follows:\n\n```python\n>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n>>> # Load tokenizer and PyTorch weights form the Hub\n>>> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n>>> pt_model = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased\")\n>>> # Save to disk\n>>> tokenizer.save_pretrained(\"local-pt-checkpoint\")\n>>> pt_model.save_pretrained(\"local-pt-checkpoint\")\n```\n\nOnce the checkpoint is saved, you can export it to Core ML by pointing the `--model` argument to the directory holding the checkpoint files:\n\n```bash\npython -m exporters.coreml --model=local-pt-checkpoint exported\u002F\n```\n\n\u003C!--\nTODO: also TFAutoModel example\n-->\n\n### Selecting features for different model topologies\n\nEach ready-made configuration comes with a set of _features_ that enable you to export models for different types of topologies or tasks. 
As shown in the table below, each feature is associated with a different auto class:\n\n| Feature                                      | Auto Class                           |\n| -------------------------------------------- | ------------------------------------ |\n| `default`, `default-with-past`               | `AutoModel`                          |\n| `causal-lm`, `causal-lm-with-past`           | `AutoModelForCausalLM`               |\n| `ctc`                                        | `AutoModelForCTC`                    |\n| `image-classification`                       | `AutoModelForImageClassification`    |\n| `masked-im`                                  | `AutoModelForMaskedImageModeling`    |\n| `masked-lm`                                  | `AutoModelForMaskedLM`               |\n| `multiple-choice`                            | `AutoModelForMultipleChoice`         |\n| `next-sentence-prediction`                   | `AutoModelForNextSentencePrediction` |\n| `object-detection`                           | `AutoModelForObjectDetection`        |\n| `question-answering`                         | `AutoModelForQuestionAnswering`      |\n| `semantic-segmentation`                      | `AutoModelForSemanticSegmentation`   |\n| `seq2seq-lm`, `seq2seq-lm-with-past`         | `AutoModelForSeq2SeqLM`              |\n| `sequence-classification`                    | `AutoModelForSequenceClassification` |\n| `speech-seq2seq`, `speech-seq2seq-with-past` | `AutoModelForSpeechSeq2Seq`          |\n| `token-classification`                       | `AutoModelForTokenClassification`    |\n\nFor each configuration, you can find the list of supported features via the `FeaturesManager`. For example, for DistilBERT we have:\n\n```python\n>>> from exporters.coreml.features import FeaturesManager\n\n>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type(\"distilbert\").keys())\n>>> print(distilbert_features)\n['default', 'masked-lm', 'multiple-choice', 'question-answering', 'sequence-classification', 'token-classification']\n```\n\nYou can then pass one of these features to the `--feature` argument in the `exporters.coreml` package. For example, to export a text-classification model we can pick a fine-tuned model from the Hub and run:\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased-finetuned-sst-2-english \\\n                           --feature=sequence-classification exported\u002F\n```\n\nwhich will display the following logs:\n\n```bash\nValidating Core ML model...\n\t- Core ML model is classifier, validating output\n\t\t-[✓] predicted class NEGATIVE matches NEGATIVE\n\t\t-[✓] number of classes 2 matches 2\n\t\t-[✓] all values close (atol: 0.0001)\nAll good, model saved at: exported\u002FModel.mlpackage\n```\n\nNotice that in this case, the exported model is a Core ML classifier, which predicts the highest scoring class name in addition to a dictionary of probabilities, instead of the `last_hidden_state` we saw with the `distilbert-base-uncased` checkpoint earlier. This is expected since the fine-tuned model has a sequence classification head.\n\n\u003CTip>\n\nThe features that have a `with-past` suffix (e.g. 
`causal-lm-with-past`) correspond to model topologies with precomputed hidden states (key and values in the attention blocks) that can be used for fast autoregressive decoding.\n\n\u003C\u002FTip>\n\n### Configuring the export options\n\nTo see the full list of possible options, run the following from the command line:\n\n```bash\npython -m exporters.coreml --help\n```\n\nExporting a model requires at least these arguments:\n\n- `-m \u003Cmodel>`: The model ID from the Hugging Face Hub, or a local path to load the model from.\n- `--feature \u003Ctask>`: The task the model should perform, for example `\"image-classification\"`. See the table above for possible task names.\n- `\u003Coutput>`: The path where to store the generated Core ML model.\n\nThe output path can be a folder, in which case the file will be named `Model.mlpackage`, or you can also specify the filename directly.\n\nAdditional arguments that can be provided:\n\n- `--preprocessor \u003Cvalue>`: Which type of preprocessor to use. `auto` tries to automatically detect it. Possible values are: `auto` (the default), `tokenizer`, `feature_extractor`, `processor`.\n- `--atol \u003Cnumber>`: The absolute difference tolerence used when validating the model. The default value is 1e-4.\n- `--quantize \u003Cvalue>`: Whether to quantize the model weights. The possible quantization options are: `float32` for no quantization (the default) or `float16` for 16-bit floating point.\n- `--compute_units \u003Cvalue>`: Whether to optimize the model for CPU, GPU, and\u002For Neural Engine. Possible values are: `all` (the default), `cpu_and_gpu`, `cpu_only`, `cpu_and_ne`.\n\n### Using the exported model\n\nUsing the exported model in an app is just like using any other Core ML model. After adding the model to Xcode, it will auto-generate a Swift class that lets you make predictions from within the app.\n\nDepending on the chosen export options, you may still need to preprocess or postprocess the input and output tensors.\n\nFor image inputs, there is no need to perform any preprocessing as the Core ML model will already normalize the pixels. For classifier models, the Core ML model will output the predictions as a dictionary of probabilities. For other models, you might need to do more work.\n\nCore ML does not have the concept of a tokenizer and so text models will still require manual tokenization of the input data. [Here is an example](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fswift-coreml-transformers) of how to perform tokenization in Swift.\n\n### Overriding default choices in the configuration object\n\nAn important goal of Core ML is to make it easy to use the models inside apps. Where possible, the Core ML exporter will add extra operations to the model, so that you do not have to do your own pre- and postprocessing.\n\nIn particular,\n\n- Image models will automatically perform pixel normalization as part of the model. You do not need to preprocess the image yourself, except potentially resizing or cropping it.\n\n- For classification models, a softmax layer is added and the labels are included in the model file. Core ML makes a distinction between classifier models and other types of neural networks. For a model that outputs a single classification prediction per input example, Core ML makes it so that the model predicts the winning class label and a dictionary of probabilities instead of a raw logits tensor. 
Where possible, the exporter uses this special classifier model type.\n\n- Other models predict logits but do not fit into Core ML's definition of a classifier, such as the `token-classificaton` task that outputs a prediction for each token in the sequence. Here, the exporter also adds a softmax to convert the logits into probabilities. The label names are added to the model's metadata. Core ML ignores these label names but they can be retrieved by writing a few lines of Swift code.\n\n- A `semantic-segmentation` model will upsample the output image to the original spatial dimensions and apply an argmax to obtain the predicted class label indices. It does not automatically apply a softmax.\n\nThe Core ML exporter makes these choices because they are the settings you're most likely to need. To override any of the above defaults, you must create a subclass of the configuration object, and then export the model to Core ML by writing a short Python program.\n\nExample: To prevent the MobileViT semantic segmentation model from upsampling the output image, you would create a subclass of `MobileViTCoreMLConfig` and override the `outputs` property to set `do_upsample` to False. Other options you can set for this output are `do_argmax` and `do_softmax`.\n\n```python\nfrom collections import OrderedDict\nfrom exporters.coreml.models import MobileViTCoreMLConfig\nfrom exporters.coreml.config import OutputDescription\n\nclass MyCoreMLConfig(MobileViTCoreMLConfig):\n    @property\n    def outputs(self) -> OrderedDict[str, OutputDescription]:\n        return OrderedDict(\n            [\n                (\n                    \"logits\",\n                    OutputDescription(\n                        \"classLabels\",\n                        \"Classification scores for each pixel\",\n                        do_softmax=True,\n                        do_upsample=False,\n                        do_argmax=False,\n                    )\n                ),\n            ]\n        )\n\nconfig = MyCoreMLConfig(model.config, \"semantic-segmentation\")\n```\n\nHere you can also change the name of the output from `classLabels` to something else, or fill in the output description (\"Classification scores for each pixel\").\n\nIt is also possible to change the properties of the model inputs. For example, for text models the default sequence length is between 1 and 128 tokens. To set the input sequence length on a DistilBERT model to a fixed length of 32 tokens, you could override the config object as follows:\n\n```python\nfrom collections import OrderedDict\nfrom exporters.coreml.models import DistilBertCoreMLConfig\nfrom exporters.coreml.config import InputDescription\n\nclass MyCoreMLConfig(DistilBertCoreMLConfig):\n    @property\n    def inputs(self) -> OrderedDict[str, InputDescription]:\n        input_descs = super().inputs\n        input_descs[\"input_ids\"].sequence_length = 32\n        return input_descs\n\nconfig = MyCoreMLConfig(model.config, \"text-classification\")\n```\n\nUsing a fixed sequence length generally outputs a simpler, and possibly faster, Core ML model. However, for many models the input needs to have a flexible length. In that case, specify a tuple for `sequence_length` to set the (min, max) lengths. Use (1, -1) to have no upper limit on the sequence length. 
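\n\nAs a sketch of the flexible variant, mirroring the fixed-length DistilBERT example above (the subclass name and the (1, 128) range here are only illustrative):\n\n```python\nfrom collections import OrderedDict\nfrom exporters.coreml.models import DistilBertCoreMLConfig\nfrom exporters.coreml.config import InputDescription\n\nclass FlexibleLengthConfig(DistilBertCoreMLConfig):\n    @property\n    def inputs(self) -> OrderedDict[str, InputDescription]:\n        input_descs = super().inputs\n        # Accept between 1 and 128 tokens; use (1, -1) for no upper bound\n        input_descs[\"input_ids\"].sequence_length = (1, 128)\n        return input_descs\n\nconfig = FlexibleLengthConfig(model.config, \"text-classification\")\n```\n\n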
(Note: if `sequence_length` is set to a fixed value, then the batch size is fixed to 1.)\n\nTo find out what input and output options are available for the model you're interested in, create its `CoreMLConfig` object and examine the `config.inputs` and `config.outputs` properties.\n\nNot all inputs or outputs are always required: For text models, you may remove the `attention_mask` input. Without this input, the attention mask is always assumed to be filled with ones (no padding). However, if the task requires a `token_type_ids` input, there must also be an `attention_mask` input.\n\nRemoving inputs and\u002For outputs is accomplished by making a subclass of `CoreMLConfig` and overriding the `inputs` and `outputs` properties.\n\nBy default, a model is generated in the ML Program format. By overriding the `use_legacy_format` property to return `True`, the older NeuralNetwork format will be used. This is not recommended and only exists as a workaround for models that fail to convert to the ML Program format.\n\nOnce you have the modified `config` instance, you can use it to export the model following the instructions from the section \"Exporting the model\" below.\n\nNot everything is described by the configuration objects. The behavior of the converted model is also determined by the model's tokenizer or feature extractor. For example, to use a different input image size, you'd create the feature extractor with different resizing or cropping settings and use that during the conversion instead of the default feature extractor.\n\n### Exporting a model for an unsupported architecture\n\nIf you wish to export a model whose architecture is not natively supported by the library, there are three main steps to follow:\n\n1. Implement a custom Core ML configuration.\n2. Export the model to Core ML.\n3. Validate the outputs of the PyTorch and exported models.\n\nIn this section, we'll look at how DistilBERT was implemented to show what's involved with each step.\n\n#### Implementing a custom Core ML configuration\n\nTODO: didn't write this section yet because the implementation is not done yet\n\nLet’s start with the configuration object. We provide an abstract class that you should inherit from, `CoreMLConfig`.\n\n```python\nfrom exporters.coreml import CoreMLConfig\n```\n\nTODO: stuff to cover here:\n\n- `modality` property\n- how to implement custom ops + link to coremltools documentation on this topic\n- decoder models (`use_past`) and encoder-decoder models (`seq2seq`)\n\n#### Exporting the model\n\nOnce you have implemented the Core ML configuration, the next step is to export the model. Here we can use the `export()` function provided by the `exporters.coreml` package. This function expects the Core ML configuration, along with the base model and tokenizer (for text models) or feature extractor (for vision models):\n\n```python\nfrom transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer\nfrom exporters.coreml import export\nfrom exporters.coreml.models import DistilBertCoreMLConfig\n\nmodel_ckpt = \"distilbert-base-uncased\"\nbase_model = AutoModelForSequenceClassification.from_pretrained(model_ckpt, torchscript=True)\npreprocessor = AutoTokenizer.from_pretrained(model_ckpt)\n\ncoreml_config = DistilBertCoreMLConfig(base_model.config, task=\"text-classification\")\nmlmodel = export(preprocessor, base_model, coreml_config)\n```\n\nNote: For the best results, pass the argument `torchscript=True` to `from_pretrained` when loading the model. 
This allows the model to configure itself for PyTorch tracing, which is needed for the Core ML conversion.\n\nAdditional options that can be passed into `export()`:\n\n- `quantize`: Use `\"float32\"` for no quantization (the default), `\"float16\"` to quantize the weights to 16-bit floats.\n- `compute_units`: Whether to optimize the model for CPU, GPU, and\u002For Neural Engine. Defaults to `coremltools.ComputeUnit.ALL`.\n\nTo export the model with precomputed hidden states (key and values in the attention blocks) for fast autoregressive decoding, pass the argument `use_past=True` when creating the `CoreMLConfig` object.\n\nIt is normal for the Core ML exporter to print out a lot of warning and information messages. In particular, you might see messages such as these:\n\n> TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n\nThose messages are to be expected and are a normal part of the conversion process. If there is a real problem, the converter will throw an error.\n\nIf the export succeeded, the return value from `export()` is a `coremltools.models.MLModel` object. Write `print(mlmodel)` to examine the Core ML model's inputs, outputs, and metadata.\n\nOptionally fill in the model's metadata:\n\n```python\nmlmodel.short_description = \"Your awesome model\"\nmlmodel.author = \"Your name\"\nmlmodel.license = \"Fill in the copyright information here\"\nmlmodel.version = \"1.0\"\n```\n\nFinally, save the model. You can open the resulting **mlpackage** file in Xcode and examine it there.\n\n```python\nmlmodel.save(\"DistilBert.mlpackage\")\n```\n\nNote: If the configuration object used returns `True` from `use_legacy_format`, the model can be saved as `ModelName.mlmodel` instead of `.mlpackage`.\n\n#### Exporting a decoder model\n\nDecoder-based models can use a `past_key_values` input that contains pre-computed hidden-states (key and values in the self-attention blocks), which allows for much faster sequential decoding. This feature is enabled by passing `use_cache=True` to the Transformer model.\n\nTo enable this feature with the Core ML exporter, set the `use_past=True` argument when creating the `CoreMLConfig` object:\n\n```python\ncoreml_config = CTRLCoreMLConfig(base_model.config, task=\"text-generation\", use_past=True)\n\n# or:\ncoreml_config = CTRLCoreMLConfig.with_past(base_model.config, task=\"text-generation\")\n```\n\nThis adds multiple new inputs and outputs to the model with names such as `past_key_values_0_key`, `past_key_values_0_value`, ... (inputs) and `present_key_values_0_key`, `present_key_values_0_value`, ... (outputs).\n\nEnabling this option makes the model less convenient to use, since you will have to keep track of many additional tensors, but it does make inference much faster on sequences.\n\nThe Transformers model must be loaded with `is_decoder=True`, for example:\n\n```python\nbase_model = BigBirdForCausalLM.from_pretrained(\"google\u002Fbigbird-roberta-base\", torchscript=True, is_decoder=True)\n```\n\nTODO: Example of how to use this in Core ML. The `past_key_values` tensors will grow larger over time. 
The `attention_mask` tensor must have the size of `past_key_values` plus new `input_ids`.\n\n#### Exporting an encoder-decoder model\n\nTODO: properly write this section\n\nYou'll need to export the model as two separate Core ML models: the encoder and the decoder.\n\nExport the model like so:\n\n```python\ncoreml_config = TODOCoreMLConfig(base_model.config, task=\"text2text-generation\", seq2seq=\"encoder\")\nencoder_mlmodel = export(preprocessor, base_model.get_encoder(), coreml_config)\n\ncoreml_config = TODOCoreMLConfig(base_model.config, task=\"text2text-generation\", seq2seq=\"decoder\")\ndecoder_mlmodel = export(preprocessor, base_model, coreml_config)\n```\n\nWhen the `seq2seq` option is used, the sequence length in the Core ML model is always unbounded. The `sequence_length` specified in the configuration object is ignored.\n\nThis can also be combined with `use_past=True`. TODO: explain how to use this.\n\n#### Validating the model outputs\n\nThe final step is to validate that the outputs from the base and exported model agree within some absolute tolerance. You can use the `validate_model_outputs()` function provided by the `exporters.coreml` package as follows.\n\nFirst enable logging:\n\n```python\nfrom exporters.utils import logging\nlogger = logging.get_logger(\"exporters.coreml\")\nlogger.setLevel(logging.INFO)\n```\n\nThen validate the model:\n\n```python\nfrom exporters.coreml import validate_model_outputs\n\nvalidate_model_outputs(\n    coreml_config, preprocessor, base_model, mlmodel, coreml_config.atol_for_validation\n)\n```\n\nNote: `validate_model_outputs` only works on Mac computers, as it depends on the Core ML framework to make predictions with the model.\n\nThis function uses the `CoreMLConfig.generate_dummy_inputs()` method to generate inputs for the base and exported model, and the absolute tolerance can be defined in the configuration. We generally find numerical agreement in the 1e-6 to 1e-4 range, although anything smaller than 1e-3 is likely to be OK.\n\nIf validation fails with an error such as the following, it doesn't necessarily mean the model is broken:\n\n> ValueError: Output values do not match between reference model and Core ML exported model: Got max absolute difference of: 0.12345\n\nThe comparison is done using an absolute difference value, which in this example is 0.12345. That is much larger than the default tolerance value of 1e-4, hence the reported error. However, the magnitude of the activations also matters. For a model whose activations are on the order of 1e+3, a maximum absolute difference of 0.12345 would usually be acceptable.\n\nIf validation fails with this error and you're not entirely sure if this is a true problem, call `mlmodel.predict()` on a dummy input tensor and look at the largest absolute magnitude in the output tensor.\n\n### Contributing a new configuration to 🤗 Transformers\n\nWe are looking to expand the set of ready-made configurations and welcome contributions from the community! If you would like to contribute your addition to the library, you will need to:\n\n* Implement the Core ML configuration in the `models.py` file\n* Include the model architecture and corresponding features in [`~coreml.features.FeatureManager`]\n* Add your model architecture to the tests in `test_coreml.py`\n\n### Troubleshooting: What if Core ML Exporters doesn't work for your model?\n\nIt's possible that the model you wish to export fails to convert using Core ML Exporters or even when you try to use `coremltools` directly. 
When running these automated conversion tools, it's quite possible the conversion bails out with an inscrutable error message. Or, the conversion may appear to succeed but the model does not work or produces incorrect outputs.\n\nThe most common reasons for conversion errors are:\n\n- You provided incorrect arguments to the converter. The `task` argument should match the chosen model architecture. For example, the `\"feature-extraction\"` task should only be used with models of type `AutoModel`, not `AutoModelForXYZ`. Additionally, the `seq2seq` argument is required to tell apart encoder-decoder type models from encoder-only or decoder-only models. Passing invalid choices for these arguments may give an error during the conversion process or it may create a model that works but does the wrong thing.\n\n- The model performs an operation that is not supported by Core ML or coremltools. It's also possible coremltools has a bug or can't handle particularly complex models.\n\nIf the Core ML export fails due to the latter, you have a couple of options:\n\n1. Implement the missing operator in the `CoreMLConfig`'s `patch_pytorch_ops()` function.\n\n2. Fix the original model. This requires a deep understanding of how the model works and is not trivial. However, sometimes the fix is to hardcode certain values rather than letting PyTorch or TensorFlow calculate them from the shapes of tensors.\n\n3. Fix coremltools. It is sometimes possible to hack coremltools so that it ignores the issue.\n\n4. Forget about automated conversion and [build the model from scratch using MIL](https:\u002F\u002Fcoremltools.readme.io\u002Fdocs\u002Fmodel-intermediate-language). This is the intermediate language that coremltools uses internally to represent models. It's similar in many ways to PyTorch.\n\n5. Submit an issue and we'll see what we can do. 😀\n\n### Known issues\n\nThe Core ML exporter writes models in the **mlpackage** format. Unfortunately, for some models the generated ML Program is incorrect, in which case it's recommended to convert the model to the older NeuralNetwork format by setting the configuration object's `use_legacy_format` property to `True`. On certain hardware, the older format may also run more efficiently. If you're not sure which one to use, export the model twice and compare the two versions.\n\nKnown models that need to be exported with `use_legacy_format=True` are: GPT2, DistilGPT2.\n\nUsing flexible input sequence length with GPT2 or GPT-Neo causes the converter to be extremely slow and allocate over 200 GB of RAM. This is clearly a bug in coremltools or the Core ML framework, as the allocated memory is never used (the computer won't start swapping). After many minutes, the conversion does succeed, but the model may not be 100% correct. Loading the model afterwards takes a very long time and makes similar memory allocations. Likewise for making predictions. While theoretically the conversion succeeds (if you have enough patience), the model is not really usable like this.\n\n## Pushing the model to the Hugging Face Hub\n\nThe [Hugging Face Hub](https:\u002F\u002Fhuggingface.co) can also host your Core ML models. 
You can use the [`huggingface_hub` package](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fhuggingface_hub\u002Fmain\u002Fen\u002Findex) to upload the converted model to the Hub from Python.\n\nFirst log in to your Hugging Face account account with the following command:\n\n```bash\nhuggingface-cli login\n```\n\nOnce you are logged in, save the **mlpackage** to the Hub as follows:\n\n```python\nfrom huggingface_hub import Repository\n\nwith Repository(\n        \"\u003Cmodel name>\", clone_from=\"https:\u002F\u002Fhuggingface.co\u002F\u003Cuser>\u002F\u003Cmodel name>\",\n        use_auth_token=True).commit(commit_message=\"add Core ML model\"):\n    mlmodel.save(\"\u003Cmodel name>.mlpackage\")\n```\n\nMake sure to replace `\u003Cmodel name>` with the name of the model and `\u003Cuser>` with your Hugging Face username.\n","\u003C!---\n版权所有 © 2022 HuggingFace 团队。保留所有权利。\n\n根据 Apache License, Version 2.0（“许可证”）授权；\n\n除非符合许可证的规定，否则不得使用此文件。\n您可以在以下网址获得许可证副本：\n\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\n\n除非适用法律要求或以书面形式同意，否则软件按“原样”分发，\n不提供任何形式的保证或条件，无论是明示的还是默示的。\n有关权限和限制的具体语言，请参阅许可证。\n-->\n\n# 🤗 Exporters\n\n👷 **开发中** 👷\n\n这个包允许你将 🤗 Transformers 模型导出为 Core ML 格式。\n\n> 对于将模型转换为 TFLite 格式，我们建议使用 [Optimum](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Foptimum\u002Fexporters\u002Ftflite\u002Fusage_guides\u002Fexport_a_model)。\n\n## 何时使用 🤗 Exporters\n\n🤗 Transformers 模型可以用 PyTorch、TensorFlow 或 JAX 实现。然而，在部署时，你可能希望使用不同的框架，例如 Core ML。这个库可以轻松地将 Transformers 模型转换为这种格式。\n\nExporters 包的目标是比使用 *coremltools* 编写自己的转换脚本更加方便，并且与 🤗 Transformers 库和 Hugging Face Hub 紧密集成。\n\n为了更便捷的方式，`Exporters` 提供了一个 [无需代码的 Transformers 到 Core ML 转换 Space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhuggingface-projects\u002Ftransformers-to-coreml)。你可以无需安装任何东西就试用它，以检查你感兴趣的模型是否可以被转换。如果转换成功，转换后的 Core ML 权重将会被推送到 Hub。如需更多灵活性和关于转换过程的详细信息，请继续阅读。\n\n注意：请记住，Transformer 模型通常非常大，并不总是适合在移动设备上使用。最好先使用 🤗 Optimum 对模型进行 [推理优化](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Foptimum)。\n\n## 安装\n\n克隆这个仓库：\n\n```bash\n$ git clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters.git\n```\n\n将其作为 Python 包安装：\n\n```bash\n$ cd exporters\n$ pip install -e .\n```\n\n完成！\n\n注意：Core ML 导出器可以在 Linux 上使用，但推荐使用 macOS。\n\n## Core ML\n\n[Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002Fcore-ml\u002F) 是 Apple 的软件库，用于在设备上快速运行神经网络和其他类型的机器学习模型。它可以在 macOS、iOS、tvOS 和 watchOS 上使用，并针对 CPU、GPU 和 Apple Neural Engine 进行了优化。尽管 Core ML 框架是专有的，但 Core ML 文件格式是一种开放格式。\n\nCore ML 导出器使用 [coremltools](https:\u002F\u002Fcoremltools.readme.io\u002Fdocs) 将 PyTorch 或 TensorFlow 模型转换为 Core ML 格式。\n\n`exporters.coreml` 包允许你通过配置对象将模型检查点转换为 Core ML 模型。这些配置对象已经为许多模型架构准备好了，并且设计为易于扩展到其他架构。\n\n预置的配置包括以下架构：\n\n- BEiT\n- BERT\n- ConvNeXT\n- CTRL\n- CvT\n- DistilBERT\n- DistilGPT2\n- GPT2\n- LeViT\n- MobileBERT\n- MobileViT\n- SegFormer\n- SqueezeBERT\n- Vision Transformer (ViT)\n- YOLOS\n\n\u003C!-- TODO: 自动生成此列表 -->\n\n[在此处查看](MODELS.md)以获取支持的完整模型列表。\n\n### 将模型导出为 Core ML\n\n\u003C!--\n要将 🤗 Transformers 模型导出为 Core ML，你首先需要安装一些额外的依赖项：\n\n```\npip install transformers[coreml]\n```\n\n然后可以将 `transformers.coreml` 包作为 Python 模块使用：\n-->\n\n`exporters.coreml` 包可以从命令行作为 Python 模块使用。要使用预置配置导出检查点，请执行以下操作：\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased exported\u002F\n```\n\n这会导出由 `--model` 参数定义的检查点的 Core ML 版本。在本例中是 `distilbert-base-uncased`，但它可以是 Hugging Face Hub 上的任何检查点，也可以是本地存储的检查点。\n\n生成的 Core ML 文件将保存到 `exported` 目录中，名为 `Model.mlpackage`。你也可以指定一个文件名，而不是目录，例如 
`DistilBERT.mlpackage`。\n\n转换过程中输出大量警告信息和其他日志信息是正常现象，可以安全地忽略。如果一切顺利，导出应该以以下日志结束：\n\n```bash\n正在验证 Core ML 模型...\n\t-[✓] Core ML 模型输出名称与参考模型匹配 ({'last_hidden_state'})\n\t- 验证 Core ML 模型输出 \"last_hidden_state\":\n\t\t-[✓] (1, 128, 768) 与 (1, 128, 768) 匹配\n\t\t-[✓] 所有值接近（atol: 0.0001）\n一切正常，模型已保存至：exported\u002FModel.mlpackage\n```\n\n注意：虽然在 Linux 上也可以将模型导出为 Core ML，但验证步骤只能在 Mac 上执行，因为它需要 Core ML 框架来运行模型。\n\n生成的文件是 `Model.mlpackage`。该文件可以添加到 Xcode 项目中，并加载到 macOS 或 iOS 应用程序中。\n\n导出的 Core ML 模型使用 **mlpackage** 格式，模型类型为 **ML Program**。这种格式于 2021 年推出，至少需要 iOS 15、macOS 12.0 和 Xcode 13。我们更倾向于使用这种格式，因为它代表了 Core ML 的未来。Core ML 导出器也可以生成旧的 `.mlmodel` 格式，但不推荐使用。\n\n对于 Hub 上的 TensorFlow 检查点，过程相同。例如，你可以从 [Keras 组织](https:\u002F\u002Fhuggingface.co\u002Fkeras-io)导出一个纯 TensorFlow 检查点，如下所示：\n\n```bash\npython -m exporters.coreml --model=keras-io\u002Ftransformers-qa exported\u002F\n```\n\n要导出本地存储的模型，你需要将模型的权重和分词器文件存储在一个目录中。例如，我们可以加载并保存一个检查点，如下所示：\n\n```python\n>>> from transformers import AutoTokenizer、AutoModelForSequenceClassification\n\n>>> # 从 Hub 加载分词器和 PyTorch 权重\n>>> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n>>> pt_model = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased\")\n>>> # 保存到磁盘\n>>> tokenizer.save_pretrained(\"local-pt-checkpoint\")\n>>> pt_model.save_pretrained(\"local-pt-checkpoint\")\n```\n\n一旦检查点被保存，你可以通过将 `--model` 参数指向包含检查点文件的目录来将其导出为 Core ML：\n\n```bash\npython -m exporters.coreml --model=local-pt-checkpoint exported\u002F\n```\n\n\u003C!--\nTODO: 也加入 TFAutoModel 示例\n-->\n\n### 为不同模型拓扑选择功能\n\n每个现成的配置都带有一组 _功能_，使您可以为不同类型的拓扑或任务导出模型。如下表所示，每种功能都与一个不同的自动类相关联：\n\n| 功能                                      | 自动类                           |\n| -------------------------------------------- | ------------------------------------ |\n| `default`, `default-with-past`               | `AutoModel`                          |\n| `causal-lm`, `causal-lm-with-past`           | `AutoModelForCausalLM`               |\n| `ctc`                                        | `AutoModelForCTC`                    |\n| `image-classification`                       | `AutoModelForImageClassification`    |\n| `masked-im`                                  | `AutoModelForMaskedImageModeling`    |\n| `masked-lm`                                  | `AutoModelForMaskedLM`               |\n| `multiple-choice`                            | `AutoModelForMultipleChoice`         |\n| `next-sentence-prediction`                   | `AutoModelForNextSentencePrediction` |\n| `object-detection`                           | `AutoModelForObjectDetection`        |\n| `question-answering`                         | `AutoModelForQuestionAnswering`      |\n| `semantic-segmentation`                      | `AutoModelForSemanticSegmentation`   |\n| `seq2seq-lm`, `seq2seq-lm-with-past`         | `AutoModelForSeq2SeqLM`              |\n| `sequence-classification`                    | `AutoModelForSequenceClassification` |\n| `speech-seq2seq`, `speech-seq2seq-with-past` | `AutoModelForSpeechSeq2Seq`          |\n| `token-classification`                       | `AutoModelForTokenClassification`    |\n\n对于每种配置，您可以通过 `FeaturesManager` 查看支持的功能列表。例如，对于 DistilBERT，我们可以这样做：\n\n```python\n>>> from exporters.coreml.features import FeaturesManager\n\n>>> distilbert_features = list(FeaturesManager.get_supported_features_for_model_type(\"distilbert\").keys())\n>>> print(distilbert_features)\n['default', 'masked-lm', 'multiple-choice', 'question-answering', 'sequence-classification', 
'token-classification']\n```\n\n然后，您可以将这些功能之一传递给 `exporters.coreml` 包中的 `--feature` 参数。例如，要导出一个文本分类模型，我们可以从 Hub 中选择一个微调过的模型并运行：\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased-finetuned-sst-2-english \\\n                           --feature=sequence-classification exported\u002F\n```\n\n这将显示以下日志：\n\n```bash\nValidating Core ML model...\n\t- Core ML model is classifier, validating output\n\t\t-[✓] predicted class NEGATIVE matches NEGATIVE\n\t\t-[✓] number of classes 2 matches 2\n\t\t-[✓] all values close (atol: 0.0001)\nAll good, model saved at: exported\u002FModel.mlpackage\n```\n\n请注意，在这种情况下，导出的模型是一个 Core ML 分类器，它除了输出概率字典外，还会预测得分最高的类别名称，而不是我们之前在 `distilbert-base-uncased` 检查点中看到的 `last_hidden_state`。这是预期的行为，因为该微调过的模型具有序列分类头。\n\n\u003CTip>\n\n带有 `with-past` 后缀的功能（例如 `causal-lm-with-past`）对应于具有预计算隐藏状态（注意力块中的键和值）的模型拓扑，这些状态可用于快速自回归解码。\n\n\u003C\u002FTip>\n\n### 配置导出选项\n\n要查看所有可能的选项，请在命令行中运行以下命令：\n\n```bash\npython -m exporters.coreml --help\n```\n\n导出模型至少需要以下参数：\n\n- `-m \u003Cmodel>`：来自 Hugging Face Hub 的模型 ID，或用于加载模型的本地路径。\n- `--feature \u003Ctask>`：模型应执行的任务，例如 `\"image-classification\"`。请参阅上表以了解可能的任务名称。\n- `\u003Coutput>`：存储生成的 Core ML 模型的路径。\n\n输出路径可以是一个文件夹，在这种情况下文件将被命名为 `Model.mlpackage`，或者您也可以直接指定文件名。\n\n还可以提供以下附加参数：\n\n- `--preprocessor \u003Cvalue>`：使用哪种类型的预处理器。`auto` 会尝试自动检测。可能的值有：`auto`（默认）、`tokenizer`、`feature_extractor`、`processor`。\n- `--atol \u003Cnumber>`：验证模型时使用的绝对差异容差。默认值为 1e-4。\n- `--quantize \u003Cvalue>`：是否对模型权重进行量化。可能的量化选项有：`float32` 表示不进行量化（默认），或 `float16` 表示 16 位浮点数。\n- `--compute_units \u003Cvalue>`：是否针对 CPU、GPU 和\u002F或神经引擎优化模型。可能的值有：`all`（默认）、`cpu_and_gpu`、`cpu_only`、`cpu_and_ne`。\n\n### 使用导出的模型\n\n在应用程序中使用导出的模型就像使用任何其他 Core ML 模型一样。将模型添加到 Xcode 后，它会自动生成一个 Swift 类，让您可以在应用程序内进行预测。\n\n根据所选的导出选项，您可能仍然需要对输入和输出张量进行预处理或后处理。\n\n对于图像输入，无需进行任何预处理，因为 Core ML 模型已经会对像素进行归一化处理。对于分类器模型，Core ML 模型会将预测结果输出为概率字典。而对于其他模型，您可能需要做更多的工作。\n\nCore ML 没有分词器的概念，因此文本模型仍然需要手动对输入数据进行分词。[这里有一个示例](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fswift-coreml-transformers)说明如何在 Swift 中进行分词。\n\n### 覆盖配置对象中的默认设置\n\nCore ML 的一个重要目标是让开发者能够轻松地在应用中使用模型。在可能的情况下，Core ML 导出器会向模型添加额外的操作，从而避免您自行进行预处理和后处理。\n\n具体来说：\n\n- 图像模型会自动在模型内部执行像素归一化操作，因此您无需再对图像进行预处理，除非需要调整大小或裁剪。\n\n- 对于分类模型，会添加一个 softmax 层，并将标签包含在模型文件中。Core ML 会区分分类模型与其他类型的神经网络。对于每个输入样本仅输出单个分类预测的模型，Core ML 会使其直接输出获胜的类别标签以及概率字典，而不是原始的 logits 张量。在可行的情况下，导出器会使用这种特殊的分类模型类型。\n\n- 其他模型虽然也会输出 logits，但并不符合 Core ML 对分类模型的定义，例如 `token-classification` 任务会为序列中的每个标记输出一个预测。在这种情况下，导出器同样会添加一个 softmax 层，将 logits 转换为概率。标签名称会被添加到模型的元数据中。Core ML 会忽略这些标签名称，但您可以通过编写几行 Swift 代码来获取它们。\n\n- `semantic-segmentation` 模型会将输出图像上采样回原始的空间尺寸，并应用 argmax 操作以获得预测的类别标签索引。它不会自动应用 softmax。\n\nCore ML 导出器之所以做出这些选择，是因为这些设置通常是您最常需要的。如果您想覆盖上述任何默认设置，就必须创建配置对象的子类，然后通过编写一段简短的 Python 程序将模型导出为 Core ML 格式。\n\n示例：为了防止 MobileViT 语义分割模型对输出图像进行上采样，您可以创建 `MobileViTCoreMLConfig` 的子类，并重写 `outputs` 属性，将 `do_upsample` 设置为 False。此外，您还可以为该输出设置其他选项，如 `do_argmax` 和 `do_softmax`。\n\n```python\nfrom collections import OrderedDict\nfrom exporters.coreml.models import MobileViTCoreMLConfig\nfrom exporters.coreml.config import OutputDescription\n\nclass MyCoreMLConfig(MobileViTCoreMLConfig):\n    @property\n    def outputs(self) -> OrderedDict[str, OutputDescription]:\n        return OrderedDict(\n            [\n                (\n                    \"logits\",\n                    OutputDescription(\n                        \"classLabels\",\n                        \"每个像素的分类得分\",\n                        do_softmax=True,\n                        do_upsample=False,\n                        
do_argmax=False,\n                    )\n                ),\n            ]\n        )\n\nconfig = MyCoreMLConfig(model.config, \"semantic-segmentation\")\n```\n\n在这里，您还可以将输出名称从 `classLabels` 更改为其他名称，或者修改输出描述（“每个像素的分类得分”）。\n\n同样，您也可以更改模型输入的属性。例如，对于文本模型，默认的序列长度介于 1 到 128 个标记之间。如果要将 DistilBERT 模型的输入序列长度固定为 32 个标记，可以按如下方式重写配置对象：\n\n```python\nfrom collections import OrderedDict\nfrom exporters.coreml.models import DistilBertCoreMLConfig\nfrom exporters.coreml.config import InputDescription\n\nclass MyCoreMLConfig(DistilBertCoreMLConfig):\n    @property\n    def inputs(self) -> OrderedDict[str, InputDescription]:\n        input_descs = super().inputs\n        input_descs[\"input_ids\"].sequence_length = 32\n        return input_descs\n\nconfig = MyCoreMLConfig(model.config, \"text-classification\")\n```\n\n使用固定的序列长度通常会生成更简单、也可能更快的 Core ML 模型。然而，对于许多模型而言，输入需要具备灵活的长度。在这种情况下，您可以为 `sequence_length` 指定一个元组来设置最小和最大长度。使用 (1, -1) 表示序列长度没有上限。（注意：如果将 `sequence_length` 设置为固定值，则批处理大小将被固定为 1。）\n\n要了解您感兴趣的模型有哪些可用的输入和输出选项，只需创建其 `CoreMLConfig` 对象，并检查 `config.inputs` 和 `config.outputs` 属性即可。\n\n并非所有输入或输出都是必需的：例如，在文本模型中，您可以移除 `attention_mask` 输入。如果没有这个输入，注意力掩码将始终假定为全 1（即无填充）。然而，如果任务需要 `token_type_ids` 输入，则必须同时提供 `attention_mask` 输入。\n\n移除输入和\u002F或输出的操作可以通过创建 `CoreMLConfig` 的子类并重写 `inputs` 和 `outputs` 属性来实现。\n\n默认情况下，模型会以 ML Program 格式生成。通过将 `use_legacy_format` 属性重写为返回 True，即可使用较旧的 NeuralNetwork 格式。不过，这种方法并不推荐，仅作为无法转换为 ML Program 格式的模型的临时解决方案。\n\n一旦您获得了修改后的 `config` 实例，就可以按照下文“导出模型”部分的说明将其用于模型导出。\n\n需要注意的是，配置对象并不能涵盖所有内容。转换后模型的行为还受到模型的分词器或特征提取器的影响。例如，若要使用不同的输入图像尺寸，您需要使用具有不同缩放或裁剪设置的特征提取器来进行转换，而不是使用默认的特征提取器。\n\n### 导出不支持架构的模型\n\n如果您希望导出一个其架构未被该库原生支持的模型，主要需遵循以下三个步骤：\n\n1. 实现自定义的 Core ML 配置。\n2. 将模型导出为 Core ML 格式。\n3. 验证 PyTorch 模型与导出模型的输出是否一致。\n\n在本节中，我们将以 DistilBERT 的实现为例，展示每个步骤的具体操作。\n\n#### 实现自定义 Core ML 配置\n\nTODO：尚未编写此部分，因为实现尚未完成。\n\n首先从配置对象开始。我们提供了一个抽象类 `CoreMLConfig`，您应从中继承。\n\n```python\nfrom exporters.coreml import CoreMLConfig\n```\n\nTODO：此处需要涵盖的内容：\n\n- `modality` 属性\n- 如何实现自定义算子，并附上关于此主题的 coremltools 文档链接\n- 解码器模型（`use_past`）和编码器-解码器模型（`seq2seq`）\n\n#### 导出模型\n\n一旦您实现了 Core ML 配置，下一步就是导出模型。我们可以使用 `exporters.coreml` 包提供的 `export()` 函数。该函数需要 Core ML 配置、基础模型以及文本模型的分词器或视觉模型的特征提取器作为输入：\n\n```python\nfrom transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer\nfrom exporters.coreml import export\nfrom exporters.coreml.models import DistilBertCoreMLConfig\n\nmodel_ckpt = \"distilbert-base-uncased\"\nbase_model = AutoModelForSequenceClassification.from_pretrained(model_ckpt, torchscript=True)\npreprocessor = AutoTokenizer.from_pretrained(model_ckpt)\n\ncoreml_config = DistilBertCoreMLConfig(base_model.config, task=\"text-classification\")\nmlmodel = export(preprocessor, base_model, coreml_config)\n```\n\n注意：为了获得最佳效果，在加载模型时请将 `torchscript=True` 参数传递给 `from_pretrained` 方法。这会使模型为 PyTorch 跟踪做好准备，而这是 Core ML 转换所必需的。\n\n`export()` 函数还可接受以下附加选项：\n\n- `quantize`：使用 `\"float32\"` 表示不进行量化（默认值），使用 `\"float16\"` 则将权重量化为 16 位浮点数。\n- `compute_units`：指定是否针对 CPU、GPU 和\u002F或神经网络引擎优化模型。默认值为 `coremltools.ComputeUnit.ALL`。\n\n若要导出带有预计算隐藏状态（注意力块中的键和值）的模型，以实现快速的自回归解码，可在创建 `CoreMLConfig` 对象时传入 `use_past=True` 参数。\n\nCore ML 导出工具通常会打印大量警告和信息消息，例如：\n\n> TracerWarning: 将张量转换为 Python 布尔值可能会导致跟踪结果不正确。我们无法记录 Python 值的数据流，因此在未来该值将被视为常量。这意味着跟踪可能无法推广到其他输入！\n\n这些消息是正常的，属于转换过程的一部分。如果确实存在问题，转换器会抛出错误。\n\n如果导出成功，`export()` 函数的返回值将是一个 `coremltools.models.MLModel` 对象。您可以运行 `print(mlmodel)` 来查看 Core ML 模型的输入、输出及元数据。\n\n您还可以选择填写模型的元数据：\n\n```python\nmlmodel.short_description = \"您的优秀模型\"\nmlmodel.author = 
\"您的姓名\"\nmlmodel.license = \"在此填写版权信息\"\nmlmodel.version = \"1.0\"\n```\n\n最后，保存模型。您可以在 Xcode 中打开生成的 `.mlpackage` 文件并进行检查。\n\n```python\nmlmodel.save(\"DistilBert.mlpackage\")\n```\n\n注意：如果使用的配置对象的 `use_legacy_format` 方法返回 `True`，则可以将模型保存为 `ModelName.mlmodel`，而非 `.mlpackage`。\n\n#### 导出解码器模型\n\n基于解码器的模型可以使用包含预计算隐藏状态（自注意力块中的键和值）的 `past_key_values` 输入，从而实现更快的序列解码。此功能可通过在 Transformer 模型中设置 `use_cache=True` 来启用。\n\n要在 Core ML 导出过程中启用此功能，请在创建 `CoreMLConfig` 对象时设置 `use_past=True` 参数：\n\n```python\ncoreml_config = CTRLCoreMLConfig(base_model.config，task=\"text-generation\"，use_past=True)\n\n# 或者：\ncoreml_config = CTRLCoreMLConfig.with_past(base_model.config, task=\"text-generation\")\n```\n\n这会为模型添加多个新的输入和输出，名称如 `past_key_values_0_key`、`past_key_values_0_value` 等（输入）以及 `present_key_values_0_key`、`present_key_values_0_value` 等（输出）。\n\n启用此选项会使模型使用起来不太方便，因为你需要跟踪许多额外的张量，但确实能显著加快序列推理速度。\n\n必须以 `is_decoder=True` 加载 Transformers 模型，例如：\n\n```python\nbase_model = BigBirdForCausalLM.from_pretrained(\"google\u002Fbigbird-roberta-base\", torchscript=True, is_decoder=True)\n```\n\nTODO：在 Core ML 中如何使用此功能的示例。`past_key_values` 张量会随着时间推移不断增大。`attention_mask` 张量的大小必须等于 `past_key_values` 的大小加上新的 `input_ids`。\n\n#### 导出编码器-解码器模型\n\nTODO：正确撰写本节内容\n\n你需要将模型导出为两个独立的 Core ML 模型：编码器和解码器。\n\n按如下方式导出模型：\n\n```python\ncoreml_config = TODOCoreMLConfig(base_model.config, task=\"text2text-generation\", seq2seq=\"encoder\")\nencoder_mlmodel = export(preprocessor, base_model.get_encoder(), coreml_config)\n\ncoreml_config = TODOCoreMLConfig(base_model.config, task=\"text2text-generation\", seq2seq=\"decoder\")\ndecoder_mlmodel = export(preprocessor, base_model, coreml_config)\n```\n\n当使用 `seq2seq` 选项时，Core ML 模型中的序列长度始终是无界的。配置对象中指定的 `sequence_length` 将被忽略。\n\n这也可以与 `use_past=True` 结合使用。TODO：说明如何使用。\n\n#### 验证模型输出\n\n最后一步是验证基础模型和导出模型的输出在一定绝对误差范围内是否一致。你可以使用 `exporters.coreml` 包提供的 `validate_model_outputs()` 函数，具体操作如下。\n\n首先启用日志记录：\n\n```python\nfrom exporters.utils import logging\nlogger = logging.get_logger(\"exporters.coreml\")\nlogger.setLevel(logging.INFO)\n```\n\n然后验证模型：\n\n```python\nfrom exporters.coreml import validate_model_outputs\n\nvalidate_model_outputs(\n    coreml_config, preprocessor, base_model, mlmodel, coreml_config.atol_for_validation\n)\n```\n\n注意：`validate_model_outputs` 只能在 Mac 计算机上运行，因为它依赖 Core ML 框架来对模型进行预测。\n\n该函数使用 `CoreMLConfig.generate_dummy_inputs()` 方法为基础模型和导出模型生成输入，并且可以在配置中定义绝对误差容差。我们通常发现数值一致性在 1e-6 到 1e-4 范围内，不过只要小于 1e-3 一般都可以接受。\n\n如果验证失败并出现类似以下错误，则并不一定意味着模型有问题：\n\n> ValueError: 参考模型和 Core ML 导出模型的输出值不匹配：最大绝对差异为：0.12345\n\n比较是基于绝对差异值进行的，在这个例子中为 0.12345，远大于默认容差值 1e-4，因此报告了错误。然而，激活值的量级也很重要。对于激活值在 1e+3 左右的模型，最大绝对差异 0.12345 通常是可接受的。\n\n如果验证因该错误而失败，且你不确定是否真的是问题，可以对一个随机输入张量调用 `mlmodel.predict()`，并查看输出张量中最大的绝对值。\n\n### 向 🤗 Transformers 贡献新配置\n\n我们正致力于扩展现成配置集，并欢迎社区贡献！如果你想为库贡献自己的新增内容，你需要：\n\n* 在 `models.py` 文件中实现 Core ML 配置\n* 将模型架构及相应特性纳入 [`~coreml.features.FeatureManager`]\n* 将你的模型架构添加到 `test_coreml.py` 中的测试用例中\n\n### 故障排除：如果 Core ML Exporters 无法用于你的模型怎么办？\n\n你希望导出的模型可能无法通过 Core ML Exporters 转换，甚至直接使用 `coremltools` 也无法成功。在运行这些自动化转换工具时，转换过程很可能因难以理解的错误信息而中断。或者，转换看似成功，但导出的模型却无法正常工作或产生错误的输出。\n\n导致转换错误的常见原因包括：\n\n- 你为转换器提供了错误的参数。`task` 参数应与所选模型架构匹配。例如，“feature-extraction”任务仅适用于 `AutoModel` 类型的模型，而不应用于 `AutoModelForXYZ`。此外，`seq2seq` 参数用于区分编码器-解码器模型与仅编码器或仅解码器模型。如果传递了无效的参数值，可能会在转换过程中引发错误，也可能生成一个虽然能运行但执行错误操作的模型。\n\n- 模型执行了 Core ML 或 `coremltools` 不支持的操作。也可能是 `coremltools` 存在 bug，或者无法处理特别复杂的模型。\n\n如果 Core ML 导出因后者失败，你有几种选择：\n\n1. 在 `CoreMLConfig` 的 `patch_pytorch_ops()` 函数中实现缺失的算子。\n\n2. 
修改原始模型。这需要对模型的工作原理有深入的理解，操作并不简单。不过有时只需将某些值硬编码，而不是让 PyTorch 或 TensorFlow 根据张量形状自动计算它们。\n\n3. 修复 `coremltools`。有时可以通过修改 `coremltools` 来忽略该问题。\n\n4. 放弃自动化转换，转而使用 MIL 从头构建模型（[MIL 文档](https:\u002F\u002Fcoremltools.readme.io\u002Fdocs\u002Fmodel-intermediate-language)）。MIL 是 `coremltools` 内部用来表示模型的中间语言，它在很多方面类似于 PyTorch。\n\n5. 提交一个问题，我们会尽力解决。😀\n\n### 已知问题\n\nCore ML 导出器会以 **mlpackage** 格式导出模型。遗憾的是，对于某些模型，生成的 ML 程序并不正确。在这种情况下，建议通过将配置对象的 `use_legacy_format` 属性设置为 `True`，将模型转换为较旧的 NeuralNetwork 格式。在某些硬件上，旧格式的运行效率也可能更高。如果您不确定该使用哪种格式，可以分别以两种格式导出模型，并对比两个版本。\n\n已知需要使用 `use_legacy_format=True` 导出的模型包括：GPT2、DistilGPT2。\n\n当使用 GPT2 或 GPT-Neo 时，如果启用灵活的输入序列长度功能，转换过程会变得极其缓慢，并且会分配超过 200 GB 的内存。这显然是 coremltools 或 Core ML 框架中的一个 bug，因为所分配的内存实际上并不会被使用（计算机不会开始进行交换）。尽管经过数分钟后转换最终会成功，但生成的模型可能并非 100% 正确。随后加载该模型同样需要很长时间，并会再次进行类似的内存分配；进行预测时也是如此。从理论上讲，只要您有足够的耐心，转换确实能够成功，但这样的模型实际上并不可用。\n\n## 将模型推送到 Hugging Face Hub\n\n[Hugging Face Hub](https:\u002F\u002Fhuggingface.co) 也可以托管您的 Core ML 模型。您可以使用 [`huggingface_hub` 包](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fhuggingface_hub\u002Fmain\u002Fen\u002Findex)从 Python 中将转换后的模型上传到 Hub。\n\n首先，使用以下命令登录您的 Hugging Face 账户：\n\n```bash\nhuggingface-cli login\n```\n\n登录完成后，按照如下方式将 **mlpackage** 保存到 Hub：\n\n```python\nfrom huggingface_hub import Repository\n\nwith Repository(\n        \"\u003Cmodel name>\", clone_from=\"https:\u002F\u002Fhuggingface.co\u002F\u003Cuser>\u002F\u003Cmodel name>\",\n        use_auth_token=True).commit(commit_message=\"add Core ML model\"):\n    mlmodel.save(\"\u003Cmodel name>.mlpackage\")\n```\n\n请确保将 `\u003Cmodel name>` 替换为您的模型名称，将 `\u003Cuser>` 替换为您的 Hugging Face 用户名。","# 🤗 Exporters 快速上手指南\n\n🤗 Exporters 是一个用于将 Hugging Face Transformers 模型导出为 Apple **Core ML** 格式的工具库，方便在 macOS、iOS、tvOS 和 watchOS 设备上进行高效的本地推理。\n\n## 环境准备\n\n*   **操作系统**：虽然可以在 Linux 上执行导出操作，但强烈推荐使用 **macOS**。因为模型导出后的验证步骤需要 Core ML 框架支持，该框架仅在 Apple 设备上可用。\n*   **系统版本要求**：导出的模型采用 `.mlpackage` 格式（ML Program 类型），运行环境需满足：\n    *   iOS 15+\n    *   macOS 12.0+\n    *   Xcode 13+\n*   **前置依赖**：\n    *   Python 3.x\n    *   Git\n    *   PyTorch 或 TensorFlow（取决于源模型）\n\n## 安装步骤\n\n通过克隆仓库并以可编辑模式安装来获取最新功能：\n\n```bash\n# 1. 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters.git\n\n# 2. 进入目录并安装\ncd exporters\npip install -e .\n```\n\n> **提示**：国内开发者若遇到网络问题，可配置 pip 使用清华或阿里镜像源加速安装：\n> `pip install -e . -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 基本使用\n\n### 1. 导出默认模型\n最简单的用法是将 Hugging Face Hub 上的模型直接导出为 Core ML 格式。以下示例将 `distilbert-base-uncased` 模型导出到 `exported\u002F` 目录：\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased exported\u002F\n```\n\n执行成功后，会在 `exported\u002F` 目录下生成 `Model.mlpackage` 文件。该文件可直接拖入 Xcode 项目中用于开发 macOS 或 iOS 应用。\n\n### 2. 指定任务特性（Feature）\n对于经过微调的模型（如文本分类），需要通过 `--feature` 参数指定具体的任务类型，以确保导出正确的输入输出接口。\n\n例如，导出一个英文情感分析模型：\n\n```bash\npython -m exporters.coreml --model=distilbert-base-uncased-finetuned-sst-2-english \\\n                           --feature=sequence-classification exported\u002F\n```\n\n### 3. 
导出本地模型\n如果模型权重已保存在本地目录（包含配置文件和权重文件），可以直接指向该路径进行导出：\n\n```bash\npython -m exporters.coreml --model=.\u002Flocal-pt-checkpoint exported\u002F\n```\n\n### 常用参数说明\n*   `--model`: Hugging Face 模型 ID 或本地路径。\n*   `--feature`: 模型任务类型（如 `image-classification`, `question-answering` 等）。\n*   `--quantize`: 量化选项，设为 `float16` 可减小模型体积（默认 `float32`）。\n*   `--compute_units`: 计算单元优化，可选 `all` (默认), `cpu_and_gpu`, `cpu_only`, `cpu_and_ne`。\n\n查看完整帮助信息：\n```bash\npython -m exporters.coreml --help\n```","某移动端开发团队计划将 Hugging Face 上的 DistilBERT 模型集成到 iOS 应用中，以实现离线的实时情感分析功能。\n\n### 没有 exporters 时\n- 开发者需手动编写复杂的 Python 脚本调用 `coremltools`，反复调试输入输出张量的形状与类型匹配。\n- 面对 Transformer 特有的动态轴（如变长序列），缺乏预设配置导致转换过程频繁报错，排查耗时极长。\n- 每次尝试新模型架构都需重新研究底层转换逻辑，无法直接复用社区已有的最佳实践。\n- 缺乏与 Hugging Face Hub 的无缝衔接，下载权重、预处理和格式转换步骤割裂，工作流繁琐且易出错。\n\n### 使用 exporters 后\n- 仅需一行命令行指令（如 `python -m exporters.coreml --model=distilbert-base-uncased`）即可自动完成高质量转换。\n- 内置针对 DistilBERT 等主流架构的专用配置文件，自动处理动态轴和算子映射，显著提升转换成功率。\n- 轻松切换不同模型进行实验，无需关心底层 Core ML 细节，让团队能专注于业务逻辑而非格式适配。\n- 深度集成 Hugging Face 生态，支持直接从 Hub 拉取模型并导出，甚至可通过无代码 Space 预先验证可行性。\n\nexporters 通过将复杂的模型转换流程标准化和自动化，让开发者能以最低成本将先进的 NLP 模型部署到苹果设备端。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhuggingface_exporters_074f8704.png","huggingface","Hugging Face","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhuggingface_90da21a4.png","The AI community building the future.",null,"https:\u002F\u002Fhuggingface.co\u002F","https:\u002F\u002Fgithub.com\u002Fhuggingface",[80],{"name":81,"color":82,"percentage":83},"Python","#3572A5",100,694,52,"2026-04-02T19:17:46","Apache-2.0","Linux, macOS","未说明（Core ML 模型针对 Apple Neural Engine、GPU 和 CPU 优化，但转换过程主要依赖 CPU；未提及 NVIDIA GPU 或 CUDA 需求）","未说明",{"notes":92,"python":90,"dependencies":93},"虽然可以在 Linux 上运行导出功能，但强烈建议使用 macOS，因为模型验证步骤需要 Core ML 框架（仅在 macOS 上可用）。导出的模型格式为 mlpackage，要求至少 iOS 15、macOS 12.0 和 Xcode 13。Transformer 模型通常较大，可能不适合移动设备，建议先使用 Optimum 进行优化。",[94,95],"transformers","coremltools",[35,14],[98,99,100,101,102,103,104,105,95],"coreml","deep-learning","machine-learning","model-converter","pytorch","tensorflow","tflite","transformer","2026-03-27T02:49:30.150509","2026-04-07T22:51:02.546574",[109,114,119,124,129,133],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},23122,"转换 Llama-2-7b 模型时因内存不足导致进程被杀死，如何解决？","该问题通常由内存不足引起。用户反馈在 32GB RAM 的 Windows 设备上转换失败，但在升级到 64GB RAM 后成功完成转换。如果硬件资源有限，可以尝试使用量化模型（如 GPTQ 版本），但需注意量化模型可能需要额外的代码调整或特定的支持配置。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters\u002Fissues\u002F61",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},23123,"是否支持导出 Starcoder (GPTBigCode) 模型到 CoreML？","目前该问题已通过 PR #45 修复。要成功导出，需要安装特定版本的依赖：`transformers` 需使用 main 分支版本，且 `coremltools` 需升级到 7.0b1 版本。此外，关于模型大小超过 2GB protobuf 限制的担忧，在较新的 macOS 版本中该限制已得到解决，大型语言模型可以正常转换和运行。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters\u002Fissues\u002F34",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},23124,"转换 EleutherAI\u002FPythia 模型时遇到权重未使用警告或追踪错误怎么办？","这通常是由于 `transformers` 库的版本兼容性问题导致的。用户反馈在使用 `transformers==4.28.1` 时失败，但降级到 `transformers==4.26.1` 或 `4.27.3` 后可以成功转换。建议尝试降级 transformers 版本，或者使用 Hugging Face 官方提供的在线转换 Space (huggingface.co\u002Fspaces\u002Fhuggingface-projects\u002Ftransformers-to-coreml)，该空间已预装了兼容的最新版本 exporters。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters\u002Fissues\u002F42",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},23125,"如何将整个 Stable Diffusion 管道转换为 CoreML 模型？","虽然理论上可以通过 `exporters` 手动逐个组件进行转换，但这将是一个非常漫长且艰难的过程。目前已有专门的 Apple 仓库（如 
ml-stable-diffusion）提供了针对 Stable Diffusion 及其他库的成熟转换方案和工具链。因此，不再建议通过此项目手动处理整个管道转换，直接使用专门的解决方案更为高效。","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fexporters\u002Fissues\u002F3",{"id":130,"question_zh":131,"answer_zh":132,"source_url":113},23126,"转换过程中出现 'TracerWarning: Converting a tensor to a Python boolean' 警告会影响结果吗？","这些警告表明在追踪过程中将张量转换为 Python 布尔值可能导致轨迹不正确，因为无法记录 Python 值的数据流，这些值在未来将被视为常量。这意味着生成的模型可能无法泛化到其他输入（例如不同的序列长度）。虽然转换过程可能不会直接报错中断，但为了保证模型在不同输入下的正确性，建议关注相关模型的更新或查看是否有针对该特定架构的补丁来修复动态形状支持。",{"id":134,"question_zh":135,"answer_zh":136,"source_url":113},23127,"Torch 版本不匹配警告（如 Torch 2.0.1 未测试）是否需要处理？","当看到类似 'Torch version 2.0.1 has not been tested with coremltools' 的警告时，表示当前使用的 PyTorch 版本尚未经过核心工具的完整测试，可能会遇到意外错误。最稳妥的方法是按照建议使用经过测试的版本（如提示中提到的 Torch 2.0.0），或者在遇到具体错误时再尝试升级或降级 PyTorch 版本以匹配 coremltools 的兼容性列表。",[]]