[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-krasserm--perceiver-io":3,"tool-krasserm--perceiver-io":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":23,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":111,"github_topics":112,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":119,"updated_at":120,"faqs":121,"releases":152},3173,"krasserm\u002Fperceiver-io","perceiver-io","A PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR with PyTorch Lightning scripts for distributed training","perceiver-io 是一个基于 PyTorch 的开源深度学习库，实现了 Perceiver、Perceiver IO 和 Perceiver AR 三种先进的神经网络架构。它旨在解决传统模型难以直接处理多模态（如图像、音频、文本）及超长上下文数据的问题。不同于依赖特定输入结构的常规模型，perceiver-io 利用迭代注意力机制，能够灵活地接收任意结构化输入并生成对应的结构化输出，从而在视觉、自然语言处理和音频任务中实现统一的通用感知与生成能力。\n\n该工具特别适合 AI 研究人员和深度学习开发者使用。它不仅提供了轻量级的后端模型实现，还无缝集成了 PyTorch Lightning 以支持高效的分布式训练，并结合 Hugging Face 接口简化了推理流程。其独特的技术亮点在于打破了模态壁垒，允许用户通过统一的架构处理字节级文本、视频光流等复杂数据，无需为不同任务设计专用的输入嵌入层。此外，项目配备了完善的命令行工具和预训练模型，帮助用户快速复现前沿论文成果或构建自己的多模态应用，是探索通用人工智能架构的理想起点。","# Perceiver, Perceiver IO and Perceiver AR\n\nThis repository is a PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR, with PyTorch 
Lightning\ninterfaces for model training and Hugging Face 🤗 interfaces for inference.\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>\n       \u003Cb>Perceiver\u003C\u002Fb>: General Perception with Iterative Attention\n       (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03206\">paper\u003C\u002Fa>,\n        \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=P_xeshTnPZg\">video\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_1aea37791df3.png\" alt=\"Perceiver\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\n      \u003Cb>Perceiver IO\u003C\u002Fb>: A General Architecture for Structured Inputs & Outputs\n      (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.14795\">paper\u003C\u002Fa>,\n       \u003Ca href=\"https:\u002F\u002Fwww.deepmind.com\u002Fblog\u002Fbuilding-architectures-that-can-handle-the-worlds-data\">blog post\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_b7bdfb6d24b0.png\" alt=\"Perceiver IO\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\n      General-purpose, long-context autoregressive modeling with \u003Cb>Perceiver AR\u003C\u002Fb>\n      (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07765\">paper\u003C\u002Fa>,\n       \u003Ca href=\"https:\u002F\u002Fwww.deepmind.com\u002Fblog\u002Fperceiver-ar-general-purpose-long-context-autoregressive-generation\">blog post\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_b22975b9f31b.png\" alt=\"Perceiver AR\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## Overview\n\nCore of the `perceiver-io` library are *backend models*, lightweight PyTorch implementations of Perceiver,\nPerceiver IO 
and Perceiver AR. They can be wrapped into [PyTorch Lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002F)\nmodules for training (*Lightning interface*) and 🤗 modules for inference (*Hugging Face interface*). See\n[library design](docs\u002Flibrary-design.md) for details.\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_11d48f79afc6.jpg\" alt=\"library-design\"\u002F>\n\u003C\u002Fp>\n\nThe command line interface for training is implemented with [Lightning CLI](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002Fcli\u002Flightning_cli.html).\nTraining datasets are 🤗 [datasets](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdatasets) wrapped into PyTorch Lightning data modules.\nFor NLP tasks, `perceiver-io` supports all 🤗 [fast tokenizers](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Ffast_tokenizers)\nand the 🤗 Perceiver UTF-8 bytes tokenizer.\n\n## Documentation\n\n- [Installation](#installation)\n- [Getting started](#getting-started)\n- [Library design](docs\u002Flibrary-design.md)\n- [Pretrained models](docs\u002Fpretrained-models.md)\n- [Training examples](docs\u002Ftraining-examples.md)\n- [Inference examples](examples\u002Finference.ipynb) [![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fexamples\u002Finference.ipynb)\n- [Model construction](docs\u002Fmodel-construction.md)\n- [Building blocks](docs\u002Fbuilding-blocks.md)\n\n## Installation\n\n### Via pip\n\n```shell\npip install perceiver-io[text,vision,audio]\n```\n\n### From sources\n\nInstallation from sources requires a [Miniconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html) and a\n[Poetry](https:\u002F\u002Fpython-poetry.org\u002Fdocs\u002F#installation) 
(1.2.0 or higher) installation.\n\nCreate and activate the `perceiver-io` conda environment:\n\n```shell\nconda env create -f environment.yml\nconda activate perceiver-io\n```\n\nInstall main and test dependencies, including all extras:\n\n```shell\n# Without dependencies required for examples\npoetry install --all-extras\n```\n\nIf you want to run the [examples](examples) locally, additionally use `--with examples`:\n\n```shell\npoetry install --all-extras --with examples\n```\n\n### Docker image\n\n```shell\ndocker pull ghcr.io\u002Fkrasserm\u002Fperceiver-io:latest\n```\n\nSee [Docker image](docs\u002Fdocker-image.md) for details.\n\n## Getting started\n\n### Inference\n\n#### Optical flow\n\nCompute the optical flow between consecutive frames of an input video and write the rendered results to an output\nvideo:\n\n```python\nfrom urllib.request import urlretrieve\nfrom transformers import pipeline\n\nfrom perceiver.data.vision import video_utils\nfrom perceiver.model.vision import optical_flow  # register auto-classes and pipeline\n\nurlretrieve(\n    url=\"https:\u002F\u002Fmartin-krasser.com\u002Fperceiver\u002Fflow\u002Fsintel_clip_cave_dragon_fight.mp4\",\n    filename=\"sintel_clip_cave_dragon_fight.mp4\",\n)\n\n# Create optical flow pipeline\noptical_flow_pipeline = pipeline(\"optical-flow\", model=\"krasserm\u002Fperceiver-io-optical-flow\", device=\"cuda:0\")\n\n# load consecutive video frame pairs\nframe_pairs = video_utils.read_video_frame_pairs(\"sintel_clip_cave_dragon_fight.mp4\")\n\n# create and render optical flow for all frame pairs\noptical_flows = optical_flow_pipeline(frame_pairs, render=True, device=\"cuda:0\")\n\n# create video with rendered optical flows\nvideo_utils.write_video(\"sintel_clip_cave_dragon_fight_output.mp4\", optical_flows, fps=24)\n```\n\nHere is a side-by-side comparison of the input and output video:\n\n\u003Cp align=\"center\">\n    \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_119c3114b5a7.gif\" alt=\"optical-flow-sbs\">\n\u003C\u002Fp>\n\n#### Symbolic audio generation\n\nCreate audio sequences by generating symbolic ([MIDI](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMIDI)) audio data and converting the\ngenerated audio symbols into WAV output using [fluidsynth](https:\u002F\u002Fwww.fluidsynth.org\u002F) (_Note:_ fluidsynth must be installed\nin order for the following example to work):  \n\n```python\nfrom transformers import pipeline\nfrom pretty_midi import PrettyMIDI\nfrom perceiver.model.audio import symbolic  # auto-class registration\n\nrepo_id = \"krasserm\u002Fperceiver-ar-sam-giant-midi\"\n\nprompt = PrettyMIDI(\"prompt.mid\")\naudio_generator = pipeline(\"symbolic-audio-generation\", model=repo_id)\n\noutput = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0, render=True)\n\nwith open(\"generated_audio.wav\", \"wb\") as f:\n    f.write(output[\"generated_audio_wav\"])\n```\n\nExamples of generated audio sequences are available on the 🤗 [hub](https:\u002F\u002Fhuggingface.co\u002Fkrasserm\u002Fperceiver-ar-sam-giant-midi#audio-samples).\n\nSee [inference examples](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fexamples\u002Finference.ipynb)\nfor more examples.\n\n### Training\n\nTrain a small Perceiver IO image classifier (907K parameters) on MNIST from the command line. 
The classifier\ncross-attends to individual pixels of input images with [repeated cross-attention](docs\u002Fbuilding-blocks.md).\nSee [image classification](docs\u002Ftraining-examples.md#image-classification) training example for more details.\n\n```shell\npython -m perceiver.scripts.vision.image_classifier fit \\\n  --model.num_latents=32 \\\n  --model.num_latent_channels=128 \\\n  --model.encoder.num_frequency_bands=32 \\\n  --model.encoder.num_cross_attention_layers=2 \\\n  --model.encoder.num_self_attention_blocks=3 \\\n  --model.encoder.num_self_attention_layers_per_block=3 \\\n  --model.encoder.first_self_attention_block_shared=false \\\n  --model.encoder.dropout=0.1 \\\n  --model.encoder.init_scale=0.1 \\\n  --model.decoder.num_output_query_channels=128 \\\n  --model.decoder.dropout=0.1 \\\n  --model.decoder.init_scale=0.1 \\\n  --data=MNISTDataModule \\\n  --data.batch_size=64 \\\n  --optimizer=AdamW \\\n  --optimizer.lr=1e-3 \\\n  --lr_scheduler.warmup_steps=500 \\\n  --trainer.accelerator=gpu \\\n  --trainer.devices=1 \\\n  --trainer.max_epochs=30 \\\n  --trainer.logger=TensorBoardLogger \\\n  --trainer.logger.save_dir=logs \\\n  --trainer.logger.name=logs\n```\n\n[Model construction](docs\u002Fmodel-construction.md) describes how to implement model-specific command line interfaces\nwith the Lightning CLI. Training checkpoints are written to the `logs\u002Fimg_clf\u002Fversion_0\u002Fcheckpoints` directory. 
Assuming\na checkpoint with filename `epoch=025-val_loss=0.065.ckpt` exists, it can be converted to a `perceiver-io` 🤗 model with\n\n```python\nfrom perceiver.model.vision.image_classifier import convert_mnist_classifier_checkpoint\n\nconvert_mnist_classifier_checkpoint(\n    save_dir=\"example\u002Fmnist-classifier\",\n    ckpt_url=\"logs\u002Fimg_clf\u002Fversion_0\u002Fcheckpoints\u002Fepoch=025-val_loss=0.065.ckpt\",\n)\n```\n\nso that it can be used in a 🤗 image classification pipeline\n\n```python\nfrom datasets import load_dataset\nfrom transformers import pipeline\n\nmnist_dataset = load_dataset(\"mnist\", split=\"test\")[:9]\n\nimages = mnist_dataset[\"image\"]\nlabels = mnist_dataset[\"label\"]\n\nclassifier = pipeline(\"image-classification\", model=\"example\u002Fmnist-classifier\")\npredictions = [pred[0][\"label\"] for pred in classifier(images)]\n\nprint(f\"Labels:      {labels}\")\nprint(f\"Predictions: {predictions}\")\n```\n```\nLabels:      [7, 2, 1, 0, 4, 1, 4, 9, 5]\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\n```\n\nor loaded directly:\n\n```python\nimport torch\nfrom transformers import AutoModelForImageClassification, AutoImageProcessor\n\nmodel = AutoModelForImageClassification.from_pretrained(\"example\u002Fmnist-classifier\")\nprocessor = AutoImageProcessor.from_pretrained(\"example\u002Fmnist-classifier\")\n\ninputs = processor(images, return_tensors=\"pt\")\n\nwith torch.no_grad():\n    # use perceiver-io Hugging Face model\n    output_1 = model(**inputs).logits\n\nwith torch.no_grad():\n    # or use perceiver-io backend model directly  \n    output_2 = model.backend_model(inputs.pixel_values)\n\nprint(f\"Predictions: {output_1.argmax(dim=-1).numpy().tolist()}\")\nprint(f\"Predictions: {output_2.argmax(dim=-1).numpy().tolist()}\")\n```\n```\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\n```\n\nSee [training examples](docs\u002Ftraining-examples.md) for more examples.\n\n## Articles\n\nArticles 
referencing this repository:\n\n- [Training compute-optimal Perceiver AR language models](https:\u002F\u002Fkrasserm.github.io\u002F2023\u002F01\u002F23\u002Fscaling-perceiver-ar\u002F)\n- [A gentle introduction to Rotary Position Embedding](https:\u002F\u002Fkrasserm.github.io\u002F2022\u002F12\u002F13\u002Frotary-position-embedding\u002F)\n\n## Other implementations\n\n- [Perceiver](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fperceiver-general-perception-with-iterative#code)\n- [Perceiver IO](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fperceiver-io-a-general-architecture-for#code)\n- [Perceiver AR](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fgeneral-purpose-long-context-autoregressive#code)\n","# Perceiver、Perceiver IO 和 Perceiver AR\n\n本仓库是 Perceiver、Perceiver IO 和 Perceiver AR 的 PyTorch 实现，并提供了用于模型训练的 PyTorch Lightning 接口以及用于推理的 Hugging Face 🤗 接口。\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>\n       \u003Cb>Perceiver\u003C\u002Fb>: 基于迭代注意力机制的通用感知模型\n       (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03206\">论文\u003C\u002Fa>,\n        \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=P_xeshTnPZg\">视频\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_1aea37791df3.png\" alt=\"Perceiver\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\n      \u003Cb>Perceiver IO\u003C\u002Fb>: 面向结构化输入与输出的通用架构\n      (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.14795\">论文\u003C\u002Fa>,\n       \u003Ca href=\"https:\u002F\u002Fwww.deepmind.com\u002Fblog\u002Fbuilding-architectures-that-can-handle-the-worlds-data\">博客文章\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_b7bdfb6d24b0.png\" alt=\"Perceiver IO\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\n      使用 
\u003Cb>Perceiver AR\u003C\u002Fb> 进行通用的长上下文自回归建模\n      (\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07765\">论文\u003C\u002Fa>,\n       \u003Ca href=\"https:\u002F\u002Fwww.deepmind.com\u002Fblog\u002Fperceiver-ar-general-purpose-long-context-autoregressive-generation\">博客文章\u003C\u002Fa>)\n    \u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_b22975b9f31b.png\" alt=\"Perceiver AR\"\u002F>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 概述\n\n`perceiver-io` 库的核心是 *后端模型*，即 Perceiver、Perceiver IO 和 Perceiver AR 的轻量级 PyTorch 实现。这些模型可以被封装为 [PyTorch Lightning](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002F) 模块以进行训练（*Lightning 接口*），也可以被封装为 🤗 模块以用于推理（*Hugging Face 接口*）。详细信息请参阅 [库设计](docs\u002Flibrary-design.md)。\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_11d48f79afc6.jpg\" alt=\"library-design\"\u002F>\n\u003C\u002Fp>\n\n训练的命令行接口使用 [Lightning CLI](https:\u002F\u002Fpytorch-lightning.readthedocs.io\u002Fen\u002Fstable\u002Fcli\u002Flightning_cli.html) 实现。训练数据集是 🤗 [datasets](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdatasets)，并被封装为 PyTorch Lightning 数据模块。对于 NLP 任务，`perceiver-io` 支持所有 🤗 [fast tokenizers](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Ffast_tokenizers) 以及 🤗 Perceiver UTF-8 字节分词器。\n\n## 文档\n\n- [安装](#installation)\n- [快速入门](#getting-started)\n- [库设计](docs\u002Flibrary-design.md)\n- [预训练模型](docs\u002Fpretrained-models.md)\n- [训练示例](docs\u002Ftraining-examples.md)\n- [推理示例](examples\u002Finference.ipynb) [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fexamples\u002Finference.ipynb)\n- [模型构建](docs\u002Fmodel-construction.md)\n- 
[基础组件](docs\u002Fbuilding-blocks.md)\n\n## 安装\n\n### 通过 pip\n\n```shell\npip install perceiver-io[text,vision,audio]\n```\n\n### 从源码安装\n\n从源码安装需要先安装 [Miniconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html) 和 [Poetry](https:\u002F\u002Fpython-poetry.org\u002Fdocs\u002F#installation)（版本 1.2.0 或更高）。\n\n创建并激活 `perceiver-io` 的 conda 环境：\n\n```shell\nconda env create -f environment.yml\nconda activate perceiver-io\n```\n\n安装主依赖和测试依赖，包括所有额外依赖：\n\n```shell\n# 不包含示例所需的依赖\npoetry install --all-extras\n```\n\n如果希望在本地运行 [示例](examples)，则需额外添加 `--with examples`：\n\n```shell\npoetry install --all-extras --with examples\n```\n\n### Docker 镜像\n\n```shell\ndocker pull ghcr.io\u002Fkrasserm\u002Fperceiver-io:latest\n```\n\n详细信息请参阅 [Docker 镜像](docs\u002Fdocker-image.md)。\n\n## 快速入门\n\n### 推理\n\n#### 光流\n\n计算输入视频中连续帧之间的光流，并将渲染结果写入输出视频：\n\n```python\nfrom urllib.request import urlretrieve\nfrom transformers import pipeline\n\nfrom perceiver.data.vision import video_utils\nfrom perceiver.model.vision import optical_flow  # 注册自动类和流水线\n\nurlretrieve(\n    url=\"https:\u002F\u002Fmartin-krasser.com\u002Fperceiver\u002Fflow\u002Fsintel_clip_cave_dragon_fight.mp4\",\n    filename=\"sintel_clip_cave_dragon_fight.mp4\",\n)\n\n# 创建光流流水线\noptical_flow_pipeline = pipeline(\"optical-flow\", model=\"krasserm\u002Fperceiver-io-optical-flow\", device=\"cuda:0\")\n\n# 加载连续的视频帧对\nframe_pairs = video_utils.read_video_frame_pairs(\"sintel_clip_cave_dragon_fight.mp4\")\n\n# 为所有帧对生成并渲染光流\noptical_flows = optical_flow_pipeline(frame_pairs, render=True, device=\"cuda:0\")\n\n# 创建包含渲染光流的视频\nvideo_utils.write_video(\"sintel_clip_cave_dragon_fight_output.mp4\", optical_flows, fps=24)\n```\n\n以下是输入视频与输出视频的并排对比：\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_readme_119c3114b5a7.gif\" alt=\"optical-flow-sbs\">\n\u003C\u002Fp>\n\n#### 
符号音频生成\n\n通过生成符号化（[MIDI](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FMIDI)）音频数据，并使用 [fluidsynth](https:\u002F\u002Fwww.fluidsynth.org\u002F) 将生成的音频符号转换为 WAV 输出，从而创建音频序列（_注意：_ 必须先安装 fluidsynth 才能使以下示例正常工作）：\n\n```python\nfrom transformers import pipeline\nfrom pretty_midi import PrettyMIDI\nfrom perceiver.model.audio import symbolic  # 自动类注册\n\nrepo_id = \"krasserm\u002Fperceiver-ar-sam-giant-midi\"\n\nprompt = PrettyMIDI(\"prompt.mid\")\naudio_generator = pipeline(\"symbolic-audio-generation\", model=repo_id)\n\noutput = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0, render=True)\n\nwith open(\"generated_audio.wav\", \"wb\") as f:\n    f.write(output[\"generated_audio_wav\"])\n```\n\n生成的音频序列示例可在 🤗 [hub](https:\u002F\u002Fhuggingface.co\u002Fkrasserm\u002Fperceiver-ar-sam-giant-midi#audio-samples) 上找到。\n\n更多示例请参阅 [推理示例](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fexamples\u002Finference.ipynb)。\n\n### 训练\n\n在命令行上训练一个小型 Perceiver IO 图像分类器（907K 参数），数据集为 MNIST。该分类器通过 [重复的交叉注意力](docs\u002Fbuilding-blocks.md) 对输入图像的单个像素进行交叉注意力操作。更多详细信息请参阅 [图像分类](docs\u002Ftraining-examples.md#image-classification) 训练示例。\n\n```shell\npython -m perceiver.scripts.vision.image_classifier fit \\\n  --model.num_latents=32 \\\n  --model.num_latent_channels=128 \\\n  --model.encoder.num_frequency_bands=32 \\\n  --model.encoder.num_cross_attention_layers=2 \\\n  --model.encoder.num_self_attention_blocks=3 \\\n  --model.encoder.num_self_attention_layers_per_block=3 \\\n  --model.encoder.first_self_attention_block_shared=false \\\n  --model.encoder.dropout=0.1 \\\n  --model.encoder.init_scale=0.1 \\\n  --model.decoder.num_output_query_channels=128 \\\n  --model.decoder.dropout=0.1 \\\n  --model.decoder.init_scale=0.1 \\\n  --data=MNISTDataModule \\\n  --data.batch_size=64 \\\n  --optimizer=AdamW \\\n  --optimizer.lr=1e-3 \\\n  --lr_scheduler.warmup_steps=500 \\\n  
--trainer.accelerator=gpu \\\n  --trainer.devices=1 \\\n  --trainer.max_epochs=30 \\\n  --trainer.logger=TensorBoardLogger \\\n  --trainer.logger.save_dir=logs \\\n  --trainer.logger.name=logs\n```\n\n[模型构建](docs\u002Fmodel-construction.md) 描述了如何使用 Lightning CLI 实现特定于模型的命令行界面。训练检查点将被写入 `logs\u002Fimg_clf\u002Fversion_0\u002Fcheckpoints` 目录。假设存在一个文件名为 `epoch=025-val_loss=0.065.ckpt` 的检查点，可以通过以下方式将其转换为 `perceiver-io` 🤗 模型：\n\n```python\nfrom perceiver.model.vision.image_classifier import convert_mnist_classifier_checkpoint\n\nconvert_mnist_classifier_checkpoint(\n    save_dir=\"example\u002Fmnist-classifier\",\n    ckpt_url=\"logs\u002Fimg_clf\u002Fversion_0\u002Fcheckpoints\u002Fepoch=025-val_loss=0.065.ckpt\",\n)\n```\n\n这样就可以将其用于 🤗 图像分类流水线中：\n\n```python\nfrom datasets import load_dataset\nfrom transformers import pipeline\n\nmnist_dataset = load_dataset(\"mnist\", split=\"test\")[:9]\n\nimages = mnist_dataset[\"image\"]\nlabels = mnist_dataset[\"label\"]\n\nclassifier = pipeline(\"image-classification\", model=\"example\u002Fmnist-classifier\")\npredictions = [pred[0][\"label\"] for pred in classifier(images)]\n\nprint(f\"Labels:      {labels}\")\nprint(f\"Predictions: {predictions}\")\n```\n```\nLabels:      [7, 2, 1, 0, 4, 1, 4, 9, 5]\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\n```\n\n或者直接加载：\n\n```python\nimport torch\nfrom transformers import AutoModelForImageClassification, AutoImageProcessor\n\nmodel = AutoModelForImageClassification.from_pretrained(\"example\u002Fmnist-classifier\")\nprocessor = AutoImageProcessor.from_pretrained(\"example\u002Fmnist-classifier\")\n\ninputs = processor(images, return_tensors=\"pt\")\n\nwith torch.no_grad():\n    # 使用 perceiver-io Hugging Face 模型\n    output_1 = model(**inputs).logits\n\nwith torch.no_grad():\n    # 或者直接使用 perceiver-io 后端模型\n    output_2 = model.backend_model(inputs.pixel_values)\n\nprint(f\"Predictions: {output_1.argmax(dim=-1).numpy().tolist()}\")\nprint(f\"Predictions: 
{output_2.argmax(dim=-1).numpy().tolist()}\")\n```\n```\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\nPredictions: [7, 2, 1, 0, 4, 1, 4, 9, 5]\n```\n\n更多示例请参阅 [训练示例](docs\u002Ftraining-examples.md)。\n\n## 文章\n\n引用本仓库的文章：\n\n- [训练计算最优的 Perceiver AR 语言模型](https:\u002F\u002Fkrasserm.github.io\u002F2023\u002F01\u002F23\u002Fscaling-perceiver-ar\u002F)\n- [旋转位置编码的入门介绍](https:\u002F\u002Fkrasserm.github.io\u002F2022\u002F12\u002F13\u002Frotary-position-embedding\u002F)\n\n## 其他实现\n\n- [Perceiver](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fperceiver-general-perception-with-iterative#code)\n- [Perceiver IO](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fperceiver-io-a-general-architecture-for#code)\n- [Perceiver AR](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Fgeneral-purpose-long-context-autoregressive#code)","# Perceiver IO 快速上手指南\n\nPerceiver IO 是一个基于 PyTorch 的通用架构库，实现了 Perceiver、Perceiver IO 和 Perceiver AR 模型。它支持处理结构化输入与输出（如图像、音频、视频等），并提供 PyTorch Lightning 训练接口和 Hugging Face 🤗 推理接口。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**: Linux, macOS, Windows (WSL 推荐)\n- **Python**: 3.8+\n- **GPU**: 可选，但推理和训练大型模型时强烈建议使用 CUDA 兼容的 NVIDIA GPU\n\n### 前置依赖\n若选择从源码安装，需预先安装以下工具：\n- [Miniconda](https:\u002F\u002Fdocs.conda.io\u002Fen\u002Flatest\u002Fminiconda.html)\n- [Poetry](https:\u002F\u002Fpython-poetry.org\u002Fdocs\u002F#installation) (版本 1.2.0 或更高)\n\n> **提示**：国内用户可使用清华或阿里镜像源加速 Conda 和 Pip 下载。\n> - Conda 配置：`conda config --add channels https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fanaconda\u002Fpkgs\u002Fmain\u002F`\n> - Pip 配置：`pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 安装步骤\n\n### 方式一：通过 Pip 安装（推荐）\n直接安装核心库及常用扩展（文本、视觉、音频）：\n\n```shell\npip install perceiver-io[text,vision,audio]\n```\n\n### 方式二：从源码安装\n适用于需要运行示例代码或进行开发的场景。\n\n1. 创建并激活 Conda 环境：\n```shell\nconda env create -f environment.yml\nconda activate perceiver-io\n```\n\n2. 安装依赖（包含所有额外功能）：\n```shell\npoetry install --all-extras\n```\n\n3. 
若需本地运行示例代码，额外执行：\n```shell\npoetry install --all-extras --with examples\n```\n\n### 方式三：Docker\n直接使用预构建的镜像：\n```shell\ndocker pull ghcr.io\u002Fkrasserm\u002Fperceiver-io:latest\n```\n\n## 基本使用\n\nPerceiver IO 的核心优势在于其统一的推理接口。以下展示两个最典型的用法：视觉任务（光流估计）和音频生成。\n\n### 1. 视觉任务：视频光流估计\n计算连续视频帧之间的光流并生成可视化视频。\n\n```python\nfrom urllib.request import urlretrieve\nfrom transformers import pipeline\n\nfrom perceiver.data.vision import video_utils\nfrom perceiver.model.vision import optical_flow  # register auto-classes and pipeline\n\n# 下载示例视频\nurlretrieve(\n    url=\"https:\u002F\u002Fmartin-krasser.com\u002Fperceiver\u002Fflow\u002Fsintel_clip_cave_dragon_fight.mp4\",\n    filename=\"sintel_clip_cave_dragon_fight.mp4\",\n)\n\n# 创建光流推理管道\noptical_flow_pipeline = pipeline(\"optical-flow\", model=\"krasserm\u002Fperceiver-io-optical-flow\", device=\"cuda:0\")\n\n# 读取连续视频帧对\nframe_pairs = video_utils.read_video_frame_pairs(\"sintel_clip_cave_dragon_fight.mp4\")\n\n# 生成并渲染光流\noptical_flows = optical_flow_pipeline(frame_pairs, render=True, device=\"cuda:0\")\n\n# 输出结果视频\nvideo_utils.write_video(\"sintel_clip_cave_dragon_fight_output.mp4\", optical_flows, fps=24)\n```\n\n### 2. 音频任务：符号化音频生成\n基于 MIDI 提示生成新的音频序列，并转换为 WAV 文件。\n*(注意：此功能需系统预先安装 `fluidsynth`)*\n\n```python\nfrom transformers import pipeline\nfrom pretty_midi import PrettyMIDI\nfrom perceiver.model.audio import symbolic  # auto-class registration\n\nrepo_id = \"krasserm\u002Fperceiver-ar-sam-giant-midi\"\n\n# 加载提示音\nprompt = PrettyMIDI(\"prompt.mid\")\naudio_generator = pipeline(\"symbolic-audio-generation\", model=repo_id)\n\n# 生成音频\noutput = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0, render=True)\n\n# 保存为 WAV 文件\nwith open(\"generated_audio.wav\", \"wb\") as f:\n    f.write(output[\"generated_audio_wav\"])\n```\n\n### 3. 
自定义模型推理\n如果您有自己训练或转换的模型（例如 MNIST 分类器），可以直接使用 Hugging Face 的标准接口：\n\n```python\nfrom transformers import pipeline\n\n# 加载本地或 Hub 上的模型\nclassifier = pipeline(\"image-classification\", model=\"example\u002Fmnist-classifier\")\n\n# 推理\npredictions = classifier(images)\n```","一家计算机视觉初创团队正在开发基于监控视频的智能行为分析系统，需要处理高分辨率、长时段的连续视频流以识别复杂动作模式。\n\n### 没有 perceiver-io 时\n- **输入模态受限**：传统 Transformer 架构难以直接处理原始像素或非网格化数据，团队需花费大量时间编写预处理代码将视频转换为固定格式的嵌入向量。\n- **上下文长度瓶颈**：面对长视频序列，标准注意力机制的计算复杂度呈平方级增长，导致显存迅速溢出，被迫将视频切割成碎片，丢失了跨片段的关键动作连贯性。\n- **多任务适配困难**：若要同时实现光流估计和行为分类，往往需要维护多套独立的模型架构，增加了训练管线管理和部署的复杂度。\n- **分布式训练门槛高**：缺乏原生支持的分布式训练脚本，团队需手动配置多卡环境，调试过程繁琐且容易出错。\n\n### 使用 perceiver-io 后\n- **通用感知架构**：利用 Perceiver IO 的迭代注意力机制，团队可直接输入原始视频帧甚至字节流，无需复杂的特定模态预处理，大幅简化数据管线。\n- **高效长程建模**：借助其解耦的潜在空间注意力设计，系统能轻松处理超长视频上下文，在保持线性计算复杂度的同时，精准捕捉跨越数分钟的动作依赖关系。\n- **灵活输出结构**：同一模型即可通过配置不同的解码器，同时输出像素级的光流图和句子级的行为描述，实现了真正的多任务统一架构。\n- **开箱即用的训练**：依托集成的 PyTorch Lightning 接口，团队仅需少量配置即可启动高效的分布式训练，显著缩短了从实验到落地的周期。\n\nperceiver-io 通过其通用的架构设计，让开发者能够以统一的模型高效处理多模态长序列数据，彻底打破了传统深度学习模型在输入格式与上下文长度上的双重枷锁。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkrasserm_perceiver-io_42da899f.png","krasserm","Martin Krasser","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkrasserm_7c2d3c95.png","Freelance ML and AI engineer","Gradion AI","Vienna, Austria",null,"https:\u002F\u002Fgradion.ai\u002F","https:\u002F\u002Fgithub.com\u002Fkrasserm",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.8,{"name":90,"color":91,"percentage":92},"Dockerfile","#384d54",0.2,523,46,"2026-04-03T14:14:23","Apache-2.0","Linux, macOS","训练时必需 (通过 --trainer.accelerator=gpu 指定)，推理可选 (支持 device='cuda:0')。具体型号和显存未说明，取决于模型大小和数据集；CUDA 版本未明确说明，需匹配安装的 PyTorch 版本。","未说明",{"notes":101,"python":102,"dependencies":103},"源码安装必须使用 Miniconda 和 Poetry (1.2.0+) 管理依赖。若需运行符号音频生成示例，系统必须单独安装 fluidsynth。提供官方 Docker 镜像以简化环境配置。支持通过 pip 直接安装预编译包。","未说明 (需通过 Miniconda 创建环境)",[104,105,106,107,108,109,110],"PyTorch","PyTorch Lightning","Transformers (Hugging 
Face)","Datasets (Hugging Face)","Poetry (>=1.2.0)","Miniconda","Fluidsynth (for audio rendering)",[13],[113,114,115,116,117,67,118],"perceiver","deep-learning","machine-learning","pytorch","pytorch-lightning","perceiver-ar","2026-03-27T02:49:30.150509","2026-04-06T08:18:29.240817",[122,127,132,137,142,147],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},14618,"What should I do about the error 'AttributeError: module 'numpy' has no attribute '_no_nep50_warning'' when importing perceiver-io?","This is usually caused by a Google Colab environment upgrade that is incompatible with the latest perceiver-io release. There are two remedies:\n1. Wait for, or confirm, that the notebook has been updated to the latest version (the maintainer has fixed this).\n2. If pip installation still fails, install from source: installing with conda and poetry ensures compatibility.\nFor reference: in Colab, try reinstalling or building from source.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F35",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},14619,"How do I handle genomic sequences or similar one-dimensional inputs? How should position encodings be set up?","For position-independent data such as genomic profiles (continuous expression levels), position encodings may not apply. If you do need positional information or are processing 1D sequences:\n1. Keep the input data as-is and concatenate Fourier position encodings. See the `ImageInputAdapter` implementation, adapted to 1D position encodings.\n2. For learned position encodings, see the `TextInputAdapter` implementation as an example.\n3. Alternatively, concatenate learned position vectors to the input data (rather than adding them), depending on the characteristics of your data.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F14",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},14620,"How do I resolve the error that 'image' is undefined when integrating the CIFAR10 or CIFAR100 dataset?","The CIFAR datasets loaded via Hugging Face datasets name the image column 'img' by default, while the code expects 'image'. Rename the column when loading the dataset.\nExample fix:\n```python\ndef load_dataset(self, split: Optional[str] = None):\n    return load_dataset(\"cifar10\", split=split, cache_dir=self.hparams.dataset_dir).rename_column(\"img\", \"image\")\n```\nMake sure to apply this rename in your `cifar10.py` or the relevant data module file.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F54",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},14621,"How can I implement the multimodal autoencoder mentioned in the paper?","There is currently no complete off-the-shelf implementation in the repository, but you can build one as follows:\n1. 
Start from a backend model implementation, using the [masked language model](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fperceiver\u002Fmodel\u002Ftext\u002Fmlm\u002Fbackend.py) or [image classifier](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fperceiver\u002Fmodel\u002Fvision\u002Fimage_classifier\u002Fbackend.py) code as a template.\n2. Convert the pretrained DeepMind multimodal Perceiver weights (model: deepmind\u002Fmultimodal-perceiver).\n3. Study the MLM weight-conversion code and the related test files for the conversion logic.\nIf you do not need the Hugging Face wrappers, you can focus on just the backend model and the Lightning wrapper.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F52",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},14622,"How do I cite the perceiver-io project in an academic paper?","On the GitHub project page, the right-hand sidebar contains a dropdown labeled \"Cite this repository\". Click it to get standard citation information (including BibTeX) ready to use in academic papers.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F21",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},14623,"What should I do about the 'ValueError: Could not load model...' error when running the optical flow example?","This error is usually related to environment configuration; the maintainer could not reproduce it in a standard environment. Check the following:\n1. Confirm that `perceiver-io` and its dependencies are installed correctly.\n2. Check that the modules needed to register the auto-classes and pipeline are imported, e.g. `from perceiver.model.vision import optical_flow`.\n3. 
Make sure the installed transformers version is compatible with perceiver-io.\nIf the problem persists, provide detailed environment information (e.g. Python version, list of package versions) for further troubleshooting.","https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F53",[153,158,163,168,173,178,183,188,193,198,203,208],{"id":154,"version":155,"summary_zh":156,"released_at":157},81537,"0.11.1","## What's Changed\n* Fix activation checkpointing, by @cstub in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F57\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fcompare\u002F0.11.0...0.11.1","2024-01-02T14:49:33",{"id":159,"version":160,"summary_zh":161,"released_at":162},81538,"0.11.0","## What's Changed\n* ci: replace flake8 with ruff, by Borda in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F48 (includes #46 and #47)\n* Refactor redundancies in the causal sequence modeling code, by krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F49\n* Add key-value cache and support contrastive search, by krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F51\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fcompare\u002F0.10.0...0.11.0","2023-06-12T11:26:43",{"id":164,"version":165,"summary_zh":166,"released_at":167},81539,"0.10.0","## What's Changed\n* Symbolic audio generation with Perceiver AR, by @cstub in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F45.\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fcompare\u002F0.9.0...0.10.0","2023-05-08T10:00:38",{"id":169,"version":170,"summary_zh":171,"released_at":172},81540,"0.9.0","## Highlights\n\n* All [official models](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fdocs\u002Fpretrained-models.md#official-models) and [training checkpoints](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fdocs\u002Fpretrained-models.md#training-checkpoints) are now [available](https:\u002F\u002Fhuggingface.co\u002Fmodels?search=krasserm\u002Fperceiver) on the 🤗 Hub.\n* 
The [inference notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fexamples\u002Finference.ipynb) has been updated to use models from the 🤗 Hub.\n\n## What's Changed\n* Upgrade PyTorch to 2.0 and PyTorch Lightning to 2.0, by @krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F43.\n* Add Hugging Face interfaces for inference, by @krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F44.\n\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fcompare\u002F0.8.2...0.9.0","2023-04-23T14:31:56",{"id":174,"version":175,"summary_zh":176,"released_at":177},81541,"0.8.2","- [Replace example-level random truncation with batch-level random truncation](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F39)\n- [Add an optional bias term to `TiedTextOutputAdapter`](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F40)\n- [Add optional absolute position embeddings to Perceiver AR](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues\u002F41)\n","2023-03-31T14:37:26",{"id":179,"version":180,"summary_zh":181,"released_at":182},81542,"0.8.1","- Support tokenizers without a configured `pad_token` (#37)\n- Make `top_k` a direct argument of the `generate` method, instead of `threshold` (#38)","2023-02-24T14:58:12",{"id":184,"version":185,"summary_zh":186,"released_at":187},81543,"0.8.0","## What's Changed\n\n* Perceiver AR enhancements, by @krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F36\n* Key masking support for Perceiver AR, by @krasserm in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F25\n* Streaming C4 dataset, by @cstub in https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fpull\u002F29\n\nChangelog: https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fcompare\u002F0.7.0...0.8.0\n\nSee also [milestone 0.8.0](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fissues?q=milestone%3A0.8.0+is%3Aclosed) for the full list of pull requests and closed issues.","2023-02-21T07:26:21",{"id":189,"version":190,"summary_zh":191,"released_at":192},81544,"0.7.0","This release adds a new Perceiver IO model for predicting the optical flow between two images, along with utilities for generating an optical flow video from an input video (see the [inference notebook](https:\u002F\u002Ft.co\u002FhJMSM9qNBm) for a demo). Thanks to @cstub for this great contribution. See [milestone 0.7.0](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fmilestone\u002F3?closed=1) for the list of closed issues.","2022-12-04T10:27:46",{"id":194,"version":195,"summary_zh":196,"released_at":197},81545,"0.7b1","Data preprocessing and documentation improvements, major refactorings\n\nEnhancements:\n\n- Support static word masking in addition to dynamic word masking.\n- Support single-token masking in addition to whole-word masking.\n- Task-specific data preprocessing for all supported text datasets.\n- Use a constant learning-rate scheduler with warmup by default.\n\nDocumentation improvements:\n\n- All training examples are now available as command lines and Python scripts.\n- Clearer overview of official models and example training checkpoints.\n- Example training checkpoints can now be downloaded individually.\n- Minor improvements across all other documentation sections.\n\nRefactorings and breaking changes:\n\n- Rename the `image` package to `vision`.\n- The `TextDataModule` base class now implements the full preprocessing logic.\n- `TextDataModule` subclasses are only responsible for converting source datasets into a unified structure.\n- Introduce an abstraction for cross-attention query generation (`QueryProvider`).\n- Decouple the `OutputAdapter` interface from trainable cross-attention queries.\n- Implement learned position encodings as `nn.Embedding`.\n- Move adapters into a separate `perceiver.model.core.adapter` module.\n- Rename `PerceiverConfig` to `PerceiverIOConfig`.\n- Rename the `LitModel` base class to `LitPerceiverIO`.\n- `LitClassifier.forward` now behaves like the wrapped model's `forward`.\n- Object-oriented design for conversion from Hugging Face Perceiver models.\n- Major refactoring of `PerceiverAR` and `CausalLanguageModel`.\n- Move `FourierPositionEncoding` to the `perceiver.model.core.position` module.","2022-11-20T16:03:53",{"id":199,"version":200,"summary_zh":201,"released_at":202},81546,"0.6.0","Implements [Perceiver AR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07765), including training and inference examples (#20).","2022-09-25T11:07:37",{"id":204,"version":205,"summary_zh":206,"released_at":207},81547,"0.5.1","- Upgrade to PyTorch Lightning 1.7.3 and PyTorch 1.12.1. 
\r\n- See [milestone 0.5.1](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fmilestone\u002F1?closed=1) for a complete list of closed tickets.","2022-08-31T10:08:26",{"id":209,"version":210,"summary_zh":211,"released_at":212},81548,"0.5.0","Highlights of the [0.5.0](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Ftree\u002F0.5.0) release:\r\n\r\n- Import [pretrained models](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io#pretrained-models) from Huggingface Hub\r\n- New [training examples](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io#training-examples)\r\n- New [inference examples](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002Fmain\u002Fnotebooks\u002Finference_examples.ipynb) [![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fkrasserm\u002Fperceiver-io\u002Fblob\u002F0.5.0\u002Fnotebooks\u002Finference_examples.ipynb)\r\n- UTF-8 bytes tokenization","2022-08-22T04:35:23"]