[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Xilinx--finn":3,"tool-Xilinx--finn":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":109,"forks":110,"last_commit_at":111,"license":112,"difficulty_score":113,"env_os":114,"env_gpu":115,"env_ram":116,"env_deps":117,"category_tags":125,"github_topics":126,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":163},2202,"Xilinx\u002Ffinn","finn","Dataflow compiler for QNN inference on FPGAs","FINN 是一个由 AMD 研究院开发的开源框架，专为在 FPGA 芯片上高效运行量化神经网络（QNN）而设计。它核心解决了深度学习模型在边缘设备部署时，如何兼顾高吞吐量与低延迟的难题。传统方法往往难以充分发挥 FPGA 的并行计算潜力，而 FINN 通过独特的数据流编译技术，能够针对每一个特定的神经网络结构，自动生成高度定制化的硬件加速器架构。这种“量体裁衣”的方式，使得生成的 FPGA 方案在执行推理任务时极为高效。\n\nFINN 特别适合人工智能研究人员、嵌入式系统开发者以及希望探索软硬件协同设计的工程师使用。如果你需要在资源受限的边缘端部署高性能 AI 模型，或者想深入研究从算法到硬件底层的完整优化链路，FINN 提供了极佳的实验平台。其最大的技术亮点在于完全开源且基于 Docker 运行，不仅支持灵活的研究扩展，还内置了丰富的教程与预构建示例，帮助用户快速上手。作为连接软件算法与硬件实现的桥梁，FINN 让复杂的 FPGA 加速开发变得更加触手可及，是探索下一代高效能边缘 AI 的理想工具。","## \u003Cimg src=https:\u002F\u002Fraw.githubusercontent.com\u002FXilinx\u002Ffinn\u002Fgithub-pages\u002Fdocs\u002Fimg\u002Ffinn-logo.png width=128\u002F> Fast, Scalable Quantized Neural Network Inference on FPGAs\n\n\n\n\u003Cimg align=\"left\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FXilinx_finn_readme_6b7f23be8483.png\" alt=\"drawing\" style=\"margin-right: 20px\" width=\"250\"\u002F>\n\n[![GitHub Discussions](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdiscussions-join-green)](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions)\n[![ReadTheDocs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FXilinx_finn_readme_2e20df9dc73f.png)](http:\u002F\u002Ffinn.readthedocs.io\u002F)\n\nFINN is an experimental framework from Integrated Communications and AI Lab of AMD Research & Advanced Development to explore deep neural network inference on FPGAs.\nIt specifically targets \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmaltanar\u002Fqnn-inference-examples\" target=\"_blank\">quantized neural\nnetworks\u003C\u002Fa>, with emphasis on\ngenerating dataflow-style architectures customized for each network.\nThe resulting FPGA accelerators are highly efficient and can yield high throughput and low latency.\nThe framework is fully open-source in order to give a higher degree of flexibility, and is intended to enable neural network research spanning several layers of the software\u002Fhardware abstraction stack.\n\nWe have a separate repository [finn-examples](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn-examples) that houses pre-built examples for several neural networks.\nFor more general information about FINN, please visit the [project page](https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F) and check out the [publications](https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002Fpublications).\n\n## Getting Started\n\nPlease see the [Getting Started](https:\u002F\u002Ffinn.readthedocs.io\u002Fen\u002Flatest\u002Fgetting_started.html) page for more information on requirements, installation, and how to run FINN in different modes. 
Due to the complex nature of the dependencies of the project, **we only support Docker-based execution of the FINN compiler at this time**.\n\n## What's New in FINN?\n\n* Please find all news under [GitHub discussions Announcements](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions\u002Fcategories\u002Fannouncements).\n\n## Documentation\n\nYou can view the documentation on [readthedocs](https:\u002F\u002Ffinn.readthedocs.io). Additionally, there is a series of [Jupyter notebook tutorials](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Ftree\u002Fmain\u002Fnotebooks), which we recommend running from inside Docker for a better experience.\n\n## Community\n\nWe have [GitHub discussions](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions) where you can ask questions. You can use the GitHub issue tracker to report bugs, but please don't file issues to ask questions as this is better handled in GitHub discussions.\n\nWe also heartily welcome contributions to the project, please check out the [contribution guidelines](CONTRIBUTING.md) and the [list of open issues](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues). Don't hesitate to get in touch over [GitHub discussions](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions) to discuss your ideas.\n\nIn the past, we also had a [Gitter channel](https:\u002F\u002Fgitter.im\u002Fxilinx-finn\u002Fcommunity). Please be aware that this is no longer maintained by us but can still be used to search for questions previous users had.\n\n\n## Citation\n\nThe current implementation of the framework is based on the following publications. 
Please consider citing them if you find FINN useful.\n\n    @article{blott2018finn,\n      title={FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks},\n      author={Blott, Michaela and Preu{\\ss}er, Thomas B and Fraser, Nicholas J and Gambardella, Giulio and O’brien, Kenneth and Umuroglu, Yaman and Leeser, Miriam and Vissers, Kees},\n      journal={ACM Transactions on Reconfigurable Technology and Systems (TRETS)},\n      volume={11},\n      number={3},\n      pages={1--23},\n      year={2018},\n      publisher={ACM New York, NY, USA}\n    }\n\n    @inproceedings{finn,\n    author = {Umuroglu, Yaman and Fraser, Nicholas J. and Gambardella, Giulio and Blott, Michaela and Leong, Philip and Jahre, Magnus and Vissers, Kees},\n    title = {FINN: A Framework for Fast, Scalable Binarized Neural Network Inference},\n    booktitle = {Proceedings of the 2017 ACM\u002FSIGDA International Symposium on Field-Programmable Gate Arrays},\n    series = {FPGA '17},\n    year = {2017},\n    pages = {65--74},\n    publisher = {ACM}\n    }\n\n## Old version\n\nWe previously released an early-stage prototype of a toolflow that took in Caffe-HWGQ binarized network descriptions and produced dataflow architectures. 
You can find it in the [v0.1](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Ftree\u002Fv0.1) branch in this repository.\nPlease be aware that this version is deprecated and unsupported, and the main branch does not share history with that branch so it should be treated as a separate repository for all purposes.\n","## \u003Cimg src=https:\u002F\u002Fraw.githubusercontent.com\u002FXilinx\u002Ffinn\u002Fgithub-pages\u002Fdocs\u002Fimg\u002Ffinn-logo.png width=128\u002F> 基于FPGA的快速、可扩展量化神经网络推理\n\n\n\n\u003Cimg align=\"left\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FXilinx_finn_readme_6b7f23be8483.png\" alt=\"drawing\" style=\"margin-right: 20px\" width=\"250\"\u002F>\n\n[![GitHub Discussions](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdiscussions-join-green)](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions)\n[![ReadTheDocs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FXilinx_finn_readme_2e20df9dc73f.png)](http:\u002F\u002Ffinn.readthedocs.io\u002F)\n\nFINN是AMD研究与高级开发部门集成通信与人工智能实验室推出的一个实验性框架，用于探索在FPGA上进行深度神经网络推理。该框架专门针对\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmaltanar\u002Fqnn-inference-examples\" target=\"_blank\">量化神经网络\u003C\u002Fa>,重点在于为每个网络生成定制的数据流式架构。由此产生的FPGA加速器效率极高，能够实现高吞吐量和低延迟。该框架完全开源，以提供更高的灵活性，并旨在支持跨越软件\u002F硬件抽象层的神经网络研究。\n\n我们还有一个独立的仓库[finn-examples](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn-examples)，其中包含了多个预构建的神经网络示例。有关FINN的更多信息，请访问[项目页面](https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F)并查看[出版物](https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002Fpublications)。\n\n## 入门指南\n\n请参阅[入门指南](https:\u002F\u002Ffinn.readthedocs.io\u002Fen\u002Flatest\u002Fgetting_started.html)，了解所需条件、安装方法以及如何以不同模式运行FINN。由于该项目依赖关系较为复杂，**目前我们仅支持基于Docker的FINN编译器执行方式**。\n\n## FINN最新动态\n\n* 所有最新消息请参见[GitHub讨论区公告](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions\u002Fcategories\u002Fannouncements)。\n\n## 
文档\n\n您可以在[readthedocs](https:\u002F\u002Ffinn.readthedocs.io)上查看文档。此外，我们还提供一系列[Jupyter笔记本教程](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Ftree\u002Fmain\u002Fnotebooks)，建议在Docker环境中运行，以获得更好的体验。\n\n## 社区\n\n我们设有[GitHub讨论区](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions)，您可以在那里提问。如需报告Bug，可以使用GitHub问题跟踪器，但请勿通过提交 issue 的方式提问，此类问题更适合在GitHub讨论区处理。\n\n我们也热烈欢迎对本项目做出贡献，请查阅[贡献指南](CONTRIBUTING.md)和[开放问题列表](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues)。如果您有任何想法，欢迎随时通过[GitHub讨论区](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions)与我们联系。\n\n过去，我们还有一个[Gitter频道](https:\u002F\u002Fgitter.im\u002Fxilinx-finn\u002Fcommunity)。请注意，该频道已不再由我们维护，但仍可用于搜索之前用户提出的问题。\n\n\n## 引用\n\n当前框架的实现基于以下出版物。如果您认为FINN有用，请考虑引用它们。\n\n    @article{blott2018finn,\n      title={FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks},\n      author={Blott, Michaela and Preu{\\ss}er, Thomas B and Fraser, Nicholas J and Gambardella, Giulio and O’brien, Kenneth and Umuroglu, Yaman and Leeser, Miriam and Vissers, Kees},\n      journal={ACM Transactions on Reconfigurable Technology and Systems (TRETS)},\n      volume={11},\n      number={3},\n      pages={1--23},\n      year={2018},\n      publisher={ACM New York, NY, USA}\n    }\n\n    @inproceedings{finn,\n    author = {Umuroglu, Yaman and Fraser, Nicholas J. 
and Gambardella, Giulio and Blott, Michaela and Leong, Philip and Jahre, Magnus and Vissers, Kees},\n    title = {FINN: A Framework for Fast, Scalable Binarized Neural Network Inference},\n    booktitle = {Proceedings of the 2017 ACM\u002FSIGDA International Symposium on Field-Programmable Gate Arrays},\n    series = {FPGA '17},\n    year = {2017},\n    pages = {65--74},\n    publisher = {ACM}\n    }\n\n## 旧版本\n\n我们此前发布过一个早期工具流程原型，该原型接受Caffe-HWGQ二值化网络描述并生成数据流架构。您可以在本仓库的[v0.1](https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Ftree\u002Fv0.1)分支中找到它。请注意，此版本已被弃用且不再受支持，主分支与该分支没有历史记录共享，因此应将其视为一个独立的仓库。","# FINN 快速上手指南\n\nFINN 是 AMD 研究实验室推出的开源框架，专为在 FPGA 上高效执行**量化神经网络**（Quantized Neural Networks）而设计。它能将神经网络自动转换为定制的数据流架构，实现高吞吐量和低延迟推理。\n\n> **注意**：由于项目依赖复杂，目前官方**仅支持通过 Docker 容器运行** FINN 编译器。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：推荐 Linux (Ubuntu 18.04\u002F20.04)；Windows 用户需使用 WSL2（macOS 未获官方支持）。\n*   **核心依赖**：\n    *   **Docker**：必须安装并配置好 Docker Engine。\n    *   **Git**：用于克隆代码仓库。\n*   **硬件建议**：虽然编译过程可在 CPU 上进行，但若要生成比特流并在 FPGA 板上验证，需要安装 Xilinx Vivado 工具链（通常需在宿主机或特定 Docker 配置中挂载）。对于初学者体验编译器功能，仅需 Docker 即可。\n\n## 安装步骤\n\nFINN 通过仓库自带的脚本在 Docker 中构建并运行编译环境，以避免繁琐的手动依赖配置。\n\n1.  **克隆仓库**\n    获取 FINN 源代码：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn.git\n    cd finn\n    ```\n\n2.  **配置 Xilinx 工具链路径（可选）**\n    如需生成比特流并上板验证，请通过环境变量指向宿主机上的 Xilinx 工具安装位置（仅体验编译器功能时可跳过此步）：\n    ```bash\n    export FINN_XILINX_PATH=\u002Ftools\u002FXilinx   # 按实际安装路径修改\n    export FINN_XILINX_VERSION=2022.2        # 按实际安装版本修改\n    ```\n\n3.  **启动容器**\n    FINN 官方不提供公共的预构建镜像，而是由脚本自动构建镜像并启动容器，同时挂载代码目录以便保存生成的文件：\n    ```bash\n    .\u002Frun-docker.sh\n    ```\n    首次运行会构建 Docker 镜像，耗时较长；执行后，您将进入容器的交互式命令行环境，后续操作均在此环境中进行。\n\n## 基本使用\n\nFINN 的核心工作流是通过 Python 脚本定义神经网络模型，然后调用编译器将其转换为 FPGA 可执行的指令或硬件描述。\n\n以下是一个最简单的流程示例，展示如何加载一个预定义的量化模型并进行编译转换：\n\n1.  
**进入教程目录**\n    容器内已包含丰富的 Jupyter Notebook 教程，位于 `\u002Fworkspace\u002Fnotebooks`。您可以直接运行 Python 脚本测试：\n\n2.  **运行简单示例脚本**\n    创建一个名为 `test_finn.py` 的文件，输入以下内容（以处理一个简单的全连接网络为例）：\n\n    ```python\n    # 注：较新版本的 FINN 中，ModelWrapper 等基础组件由依赖包 qonnx 提供\n    # （旧版本请改从 finn.core.modelwrapper 与 finn.transformation.infer_shapes 导入）\n    from qonnx.core.modelwrapper import ModelWrapper\n    from qonnx.transformation.infer_shapes import InferShapes\n\n    # 1. 加载一个已有的 ONNX 量化模型 (此处以仓库内的示例模型路径为例)\n    # 实际使用时，请替换为您自己导出的量化 ONNX 模型路径\n    model_path = \"\u002Fworkspace\u002Fnotebooks\u002Fmodels\u002Fmnist_fc_quant.onnx\"\n\n    try:\n        model = ModelWrapper(model_path)\n\n        # 2. 打印模型基本信息\n        print(f\"Model input shape: {model.get_tensor_shape(model.graph.input[0].name)}\")\n        print(f\"Model output shape: {model.get_tensor_shape(model.graph.output[0].name)}\")\n\n        # 3. 执行一次基础图变换（示例：形状推断）\n        # 注意：完整的编译流程包含多个转换步骤，详见 notebooks 中的完整教程\n        model = model.transform(InferShapes())\n\n        print(\"Model loaded and basic transformation applied successfully!\")\n\n    except FileNotFoundError:\n        print(\"示例模型文件未找到，请先查看 notebooks 目录获取示例模型或导出自己的 ONNX 模型。\")\n    ```\n\n3.  
**执行脚本**\n    在 Docker 容器终端中运行：\n    ```bash\n    python test_finn.py\n    ```\n\n**进阶建议**：\n为了获得最佳体验，强烈建议以 Notebook 模式启动容器，并在本地浏览器中打开 **Jupyter Notebook** 界面，按照官方提供的系列教程逐步操作：\n```bash\n# 在宿主机的 finn 仓库目录下执行，以 Jupyter 模式启动容器（默认监听 8888 端口）\n.\u002Frun-docker.sh notebook\n```\n随后访问 `http:\u002F\u002Flocalhost:8888` 即可浏览 notebooks 目录下的详细教程。","某边缘计算团队正致力于将轻量级量化神经网络部署到工业质检产线的 FPGA 设备上，以实现毫秒级缺陷检测。\n\n### 没有 finn 时\n- 开发人员需手动编写复杂的 Verilog\u002FVHDL 代码来构建数据流架构，耗时数周且极易出错。\n- 难以针对特定的量化网络结构定制硬件，导致 FPGA 资源利用率低，推理延迟无法满足实时性要求。\n- 从模型训练到硬件比特流生成的流程割裂，每次调整网络参数都意味着漫长的重新设计与验证周期。\n- 缺乏自动化的数据流编译能力，无法充分发挥 FPGA 并行计算优势，吞吐量远低于理论峰值。\n\n### 使用 finn 后\n- 直接导入量化模型即可自动生成定制化的数据流架构硬件设计，将开发周期从数周缩短至数小时。\n- FINN 专为量化神经网络优化，生成的加速器高度匹配网络特性，显著降低延迟并提升资源效率。\n- 提供端到端的编译流程，支持快速迭代网络结构，研究人员能灵活探索软硬件协同设计方案。\n- 自动构建高吞吐、低延迟的数据流引擎，在同等硬件条件下大幅提升每秒处理帧数，满足产线高速检测需求。\n\nFINN 通过自动化生成定制化数据流架构，彻底打通了从量化模型到高效 FPGA 加速器的最后一公里。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FXilinx_finn_f92f2fa7.png","Xilinx","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FXilinx_f160a5e5.png","GitHub.Com\u002FXilinx\u002F",null,"www.amd.com","https:\u002F\u002Fgithub.com\u002FXilinx",[82,86,90,94,98,102,105],{"name":83,"color":84,"percentage":85},"Python","#3572A5",73.7,{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",11.3,{"name":91,"color":92,"percentage":93},"SystemVerilog","#DAE1C2",9.5,{"name":95,"color":96,"percentage":97},"Verilog","#b2b7f8",3.5,{"name":99,"color":100,"percentage":101},"C++","#f34b7d",0.8,{"name":103,"color":104,"percentage":101},"Shell","#89e051",{"name":106,"color":107,"percentage":108},"Tcl","#e4cc98",0.4,967,292,"2026-04-03T02:54:26","BSD-3-Clause",4,"未说明 (仅支持 Docker)","未说明 (基于 FPGA 的推理框架，编译过程在 Docker 中运行，未明确提及宿主 GPU 需求)","未说明",{"notes":118,"python":119,"dependencies":120},"由于项目依赖关系复杂，目前仅支持通过 Docker 运行 FINN 编译器。该工具专为在 FPGA 上部署量化神经网络设计，生成数据流架构。用户需自行安装 Xilinx Vivado 
工具链以完成硬件生成流程。详细安装和运行模式请参考官方文档的'Getting Started'页面。","未说明 (封装于 Docker 镜像中)",[121,122,123,124],"Docker","Xilinx Vivado (用于综合与实现)","finn-hlslib (隐含依赖)","ONNX (隐含依赖)",[13,51],[127,128,129,130,131],"dataflow","quantization","fpga","compiler","neural-network","2026-03-27T02:49:30.150509","2026-04-06T08:09:04.466303",[135,140,145,150,155,159],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},10146,"FINN v0.10 版本在 step_hw_ipgen 步骤因阈值 HLS 层报错失败，提示位宽超过限制怎么办？","该问题通常是由于输入数据类型与阈值数据类型不匹配导致的。在旧版实现中要求两者必须一致，但新版已支持独立设置。请检查您的模型中阈值节点的输入数据类型和阈值数据类型设置。此外，建议尝试使用最新的开发分支（dev branch），因为相关修复可能已合并。如果问题依旧，可以考虑使用 RTL 阈值化方案作为替代。","https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues\u002F1060",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},10147,"运行 FINN 综合示例时出现 'cannot stat file' 错误且找不到 .bit 文件，如何解决？","这通常是因为布局布线器（Placer）失败导致未生成比特流文件。主要原因是硬件成本模型与较新版本的 Vivado 工具输出不匹配。解决方法是降低配置中的 BRAM 使用比例：打开 `FINN\u002Fcore\u002Fconfig.py` 文件，将 `BRAM_PROPORTION = 1` 修改为更低的值（如 0.9 或 0.8），然后重新运行。注意：v0.1 版本已废弃，建议使用新版本。","https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues\u002F32",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},10148,"更新 FINN 仓库后仿真输出全为零且资源利用率异常下降，可能是什么原因？","如果内部 AXI Stream 信号全为零且资源利用率异常变化，首先应排除 FINN 或 Vivado 本身的问题。常见原因是输入的 ONNX 模型文件有误或不匹配。请仔细检查传递给 FINN 流程的 ONNX 文件是否正确，确认权重文件（.npy）和数据文件（.dat）是否加载了正常数值而非零值。很多时候问题出在模型预处理或导出环节，而非 FINN 工具链本身。","https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues\u002F1402",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},10149,"在 Ubuntu 上安装 FINN 并运行 quicktest 时遇到依赖项错误怎么办？","主分支（main）可能存在依赖项兼容性问题。建议切换到开发分支（dev branch）进行安装，该分支通常修复了最新的依赖问题。使用方法是将克隆命令改为：`git clone -b dev \u003C远程仓库 URL>`。例如：`git clone -b dev https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn.git`。克隆完成后按常规步骤安装即可解决该依赖报错。","https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fissues\u002F870",{"id":156,"question_zh":157,"answer_zh":158,"source_url":139},10150,"在 FINN 执行流程中如何避免因张量名称假设错误导致的运行失败？","FINN 
的执行函数基于张量名称工作。常见的错误是代码中硬编码了特定的张量名称（如 \"global_in\"），而实际模型中的张量名称可能不同。为避免此问题，不要使用固定字符串作为键名，而应动态获取名称。例如，在构建 input_dict 时，使用 `model.graph.input[0].name` 来获取实际的输入张量名称，而不是手动输入字符串。",{"id":160,"question_zh":161,"answer_zh":162,"source_url":139},10151,"如何在 ONNX 模型中实现张量拆分（Split）以便在 FINN 中并行处理不同通道？","FINN 目前对 Split 节点的支持有限，直接引入 Split 节点可能导致转换失败。虽然 ONNX 标准中有 Split 操作符，但在 FINN 流程中可能需要特定的处理方式或变通方案。如果遇到无法转换的情况，建议检查是否可以使用其他等效结构，或者参考社区讨论看是否有针对多通道分离处理的自定义节点实现。目前官方文档中主要提及 Concat 层，对于 Split 操作需格外小心验证兼容性。",[164,169,174,179,184,189,194,199,204,209,214],{"id":165,"version":166,"summary_zh":167,"released_at":168},107397,"v0.10.1","Release blog post: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions\u002F1128\r\nList of merged PRs: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpulls?q=is%3Apr+is%3Aclosed+merged%3A2024-04-03..2024-07-08\r\n_____________________\r\n**_Known issues_**\r\nWe are seeing some unexpected behaviour for external mem mode, especially with the new RTL components. 
Please note that you might run into issues when working with external weights.\r\n\r\n\r\n","2024-07-08T11:05:40",{"id":170,"version":171,"summary_zh":172,"released_at":173},107398,"v0.10","Release blog post: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions\u002F1026\r\nList of merged PRs: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpulls?q=is%3Apr+is%3Aclosed+merged%3A2023-02-11..2024-04-02","2024-04-02T10:57:08",{"id":175,"version":176,"summary_zh":177,"released_at":178},107399,"v0.9","Release blog post: #761 \r\n\r\nList of merged PRs: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpulls?q=is%3Apr+is%3Aclosed+merged%3A2022-07-15..2023-02-10\r\n\r\n## Known issues\r\nThere was a wrong model file referenced in the tfc and cnv end2end example notebook and the build_dataflow example needed to have an additional build argument (vcd tracing does only work with python rtlsim)\r\nBoth is fixed on main, see merged PR: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpull\u002F762","2023-02-10T12:29:42",{"id":180,"version":181,"summary_zh":182,"released_at":183},107400,"v0.8.1","Release blog post: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fdiscussions\u002F638\r\n\r\nList of merged PRs: https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpulls?q=is%3Apr+is%3Aclosed+merged%3A2021-11-05..2022-07-14","2022-07-14T09:17:53",{"id":185,"version":186,"summary_zh":187,"released_at":188},107401,"v0.8","Please do not use this release as it was broken due to a faulty merge, use v0.8.1 instead.","2022-07-13T17:03:23",{"id":190,"version":191,"summary_zh":192,"released_at":193},107402,"v0.7","Release blog post: https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F\u002F2021\u002F11\u002F05\u002Ffinn-v07-is-released.html\r\n\r\nList of merged PRs: 
https:\u002F\u002Fgithub.com\u002FXilinx\u002Ffinn\u002Fpulls?q=is%3Apr+is%3Aclosed+merged%3A2021-06-15..2021-11-05+","2021-11-05T14:53:43",{"id":195,"version":196,"summary_zh":197,"released_at":198},107403,"v0.6","Release blog post: https:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F\u002F2021\u002F06\u002F15\u002Ffinn-v06-is-released.html\r\n","2021-06-15T10:09:01",{"id":200,"version":201,"summary_zh":202,"released_at":203},107404,"v0.5b","Release blog post:\r\nhttps:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F\u002F2020\u002F12\u002F17\u002Ffinn-v05b-beta-is-released.html","2020-12-17T23:05:57",{"id":205,"version":206,"summary_zh":207,"released_at":208},107405,"v0.4b","Easier build transformations including Alveo, fully accelerated end-to-end network examples, Brevitas co-debug, accumulator minimization, cycle estimation, many new hlslib layers, plus a whole lot more.\r\n\r\nRelease blog post:\r\nhttps:\u002F\u002Fxilinx.github.io\u002Ffinn\u002F\u002F2020\u002F09\u002F21\u002Ffinn-v04b-beta-is-released.html","2020-09-22T07:52:08",{"id":210,"version":211,"summary_zh":212,"released_at":213},107406,"v0.3b","Initial support for convolutions including new end-to-end notebook, parallel transformations, more flexible memory allocation for MVAUs, throughput testing and many other smaller improvements and bugfixes.","2020-05-09T01:25:41",{"id":215,"version":216,"summary_zh":217,"released_at":218},107407,"v0.2.1b","Use fixed commit versions for dependency repos, otherwise identical to 0.2b.","2020-04-15T13:15:54"]