[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Thinklab-SJTU--ThinkMatch":3,"tool-Thinklab-SJTU--ThinkMatch":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
# Think Match

[![release](https://img.shields.io/github/v/release/Thinklab-SJTU/ThinkMatch)](https://github.com/Thinklab-SJTU/ThinkMatch/releases)
[![Documentation Status](https://oss.gittoolsai.com/images/Thinklab-SJTU_ThinkMatch_readme_6bf48b3e9a6d.png)](https://thinkmatch.readthedocs.io/en/latest/?badge=latest)
[![docker](https://img.shields.io/badge/docker-images-orange)](https://hub.docker.com/r/runzhongwang/thinkmatch/tags)
[![Docker Pulls](https://img.shields.io/docker/pulls/runzhongwang/thinkmatch)](https://hub.docker.com/r/runzhongwang/thinkmatch/tags)
[![discord channel](https://img.shields.io/discord/1028701206526304317.svg?&color=blueviolet&label=discord)](https://discord.gg/8m6n7rRz9T)
[![QQ group](https://img.shields.io/badge/QQ%20group-696401889-blue)](https://qm.qq.com/cgi-bin/qm/qr?k=QolXYJn_M5ilDEM9e2jEjlPnJ02Ktabd&jump_from=webapi&authKey=6zG6D/Js4YF5h5zj778aO5MDKOXBwPFi8gQ4LsXJN8Hn1V8uCVGV81iT4J/FjPGT)
[![GitHub stars](https://img.shields.io/github/stars/Thinklab-SJTU/ThinkMatch.svg?style=social&label=Star&maxAge=8640)](https://GitHub.com/Thinklab-SJTU/ThinkMatch/stargazers/)

_ThinkMatch_ is developed and maintained by [ThinkLab](http://thinklab.sjtu.edu.cn) at Shanghai Jiao Tong University.
This repository is developed for the following purposes:
* **Providing modules** for developing deep graph matching algorithms to facilitate future research.
* **Providing implementations** of state-of-the-art deep graph matching methods.
* **Benchmarking** existing deep graph matching algorithms under different dataset & experiment settings, for the purpose of fair comparison.

Official documentation: https://thinkmatch.readthedocs.io

Source code: https://github.com/Thinklab-SJTU/ThinkMatch

## Introduction to Graph Matching
Graph Matching (GM) is a fundamental yet challenging problem in computer vision, pattern recognition and data mining. GM aims to find node-to-node correspondences among multiple graphs by solving an NP-hard combinatorial problem known as the Quadratic Assignment Problem (QAP). Recently, there has been growing interest in developing deep learning based graph matching methods.
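
For readers meeting the problem for the first time, the QAP referred to above is usually written in Lawler's form. The following is a notational sketch for orientation only; the symbols follow common usage in the graph matching literature rather than any particular file in this repository:

$$
\max_{\mathbf{X}} \ \operatorname{vec}(\mathbf{X})^\top \mathbf{K}\, \operatorname{vec}(\mathbf{X})
\quad \text{s.t.} \quad \mathbf{X} \in \{0,1\}^{n_1 \times n_2},\ \ \mathbf{X}\mathbf{1} \leq \mathbf{1},\ \ \mathbf{X}^\top \mathbf{1} \leq \mathbf{1},
$$

where $\mathbf{X}$ is a (partial) permutation matrix encoding the node-to-node correspondence and $\mathbf{K}$ is the affinity matrix collecting node-to-node and edge-to-edge similarities between the two graphs. Maximizing a quadratic objective over the set of permutation matrices is what makes the problem NP-hard, and deep graph matching methods learn the affinities and/or the solver end to end.
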
\"Combinatorial Learning of Robust Deep Graph Matching: an Embedding based Approach.\" _TPAMI 2020_.\n    [[paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9128045\u002F), [[project page]](https:\u002F\u002Fthinklab.sjtu.edu.cn\u002FIPCA_GM.html)\n  * Runzhong Wang, Junchi Yan and Xiaokang Yang. \"Learning Combinatorial Embedding Networks for Deep Graph Matching.\" _ICCV 2019_.\n    [[paper]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Learning_Combinatorial_Embedding_Networks_for_Deep_Graph_Matching_ICCV_2019_paper.pdf)\n* [**NGM & NGM-v2**](\u002Fmodels\u002FNGM)\n  * Runzhong Wang, Junchi Yan, Xiaokang Yang. \"Neural Graph Matching Network: Learning Lawler's Quadratic Assignment Problem with Extension to Hypergraph and Multiple-graph Matching.\" _TPAMI 2021_.\n    [[paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9426408), [[project page]](http:\u002F\u002Fthinklab.sjtu.edu.cn\u002Fproject\u002FNGM\u002Findex.html)\n* [**CIE-H**](\u002Fmodels\u002FCIE)\n  * Tianshu Yu, Runzhong Wang, Junchi Yan, Baoxin Li. \"Learning deep graph matching with channel-independent embedding and Hungarian attention.\" _ICLR 2020_.\n    [[paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=rJgBd2NYPH)\n* [**GANN**](\u002Fmodels\u002FGANN)\n  * Runzhong Wang, Junchi Yan and Xiaokang Yang. \"Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning.\" _NeurIPS 2020_.\n    [[paper]](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002Fe6384711491713d29bc63fc5eeb5ba4f-Abstract.html)\n  * Runzhong Wang, Junchi Yan and Xiaokang Yang. \"Unsupervised Learning of Graph Matching with Mixture of Modes via Discrepancy Minimization.\" _TPAMI 2023_. \n    [[paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10073537), [[project page]](https:\u002F\u002Fthinklab.sjtu.edu.cn\u002Fproject\u002FGANN-GM\u002Findex.html)\n* [**BBGM**](\u002Fmodels\u002FBBGM)\n  * Michal Rolínek, Paul Swoboda, Dominik Zietlow, Anselm Paulus, Vít Musil, Georg Martius. \"Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers.\" _ECCV 2020_. \n    [[paper]](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123730409.pdf)\n* [**GCAN**](\u002Fmodels\u002FGCAN)\n  * Zheheng Jiang, Hossein Rahmani, Plamen Angelov, Sue Black, Bryan M. Williams. \"Graph-Context Attention Networks for Size-Varied Deep Graph Matching.\" _CVPR 2022_. \n    [[paper]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FJiang_Graph-Context_Attention_Networks_for_Size-Varied_Deep_Graph_Matching_CVPR_2022_paper.pdf)\n* [**AFAT**](\u002Fmodels\u002FAFAT)\n  * Runzhong Wang, Ziao Guo, Shaofei Jiang, Xiaokang Yang, Junchi Yan. \"Deep Learning of Partial Graph Matching via Differentiable Top-K.\" _CVPR 2023_. \n    [[paper]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FWang_Deep_Learning_of_Partial_Graph_Matching_via_Differentiable_Top-K_CVPR_2023_paper.html)\n* [**LinSAT**](\u002Fmodels\u002FLinSAT)\n  * Runzhong Wang, Yunhao Zhang, Ziao Guo, Tianyi Chen, Xiaokang Yang, Junchi Yan. \"LinSATNet: The Positive Linear Satisfiability Neural Networks.\" _ICML 2023_. 
    [[paper]](https://openreview.net/forum?id=D2Oaj7v9YJ)
* [**COMMON**](/models/COMMON) & [**COMMON+**](/models/COMMONPLUS)
  * Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, Xi Peng. "Graph Matching with Bi-level Noisy Correspondence." _ICCV 2023_.
    [[paper]](https://arxiv.org/pdf/2212.04085.pdf), [[project page]](https://github.com/Lin-Yijie/Graph-Matching-Networks/tree/main/COMMON)
  * Yijie Lin, Mouxing Yang, Peng Hu, Jiancheng Lv, Hao Chen, Xi Peng. "Learning with Partial and Noisy Correspondence in Graph Matching." _TPAMI 2026_.
    [[paper]](https://xlearning-lab.com/assets/2026-TPAMI-Learning-With-Partial-and-Noisy-Correspondence-in-Graph-Matching.pdf)

## When to use ThinkMatch

ThinkMatch is designed as a research protocol for deep graph matching. It is recommended if you have any of the following demands:
* Developing new algorithms and publishing new graph matching papers;
* Understanding the details of deep graph matching models;
* Playing around with the hyperparameters and network details;
* Benchmarking deep graph matching networks.

### When not to use ThinkMatch

You may find the environment setup in ThinkMatch complicated and the details of graph matching hard to understand.
``pygmtools`` offers a user-friendly API and is recommended for the following cases:

* If you want to integrate graph matching as a step of your pipeline (either learning or non-learning, with ``numpy``/``pytorch``/``jittor``/``paddle``/``mindspore``/``tensorflow``).
* If you want a quick benchmarking and profiling of the graph matching solvers available in ``pygmtools``.
* If you do not want to dive too deep into the algorithm details and do not need to modify the algorithm.

You can simply install the user-friendly package by
```shell
$ pip install pygmtools
```

Official documentation: https://pygmtools.readthedocs.io

Source code: https://github.com/Thinklab-SJTU/pygmtools

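
For orientation, here is a minimal sketch of what a two-graph matching call looks like with ``pygmtools``. It is an illustrative example under assumptions (the ``numpy`` backend, a random toy affinity matrix with batch size 1, and the classic RRWM solver); in real use you would build the affinity matrix from node and edge features, e.g. with ``pygm.utils.build_aff_mat``, and you should double-check the current API in the pygmtools documentation:

```python
# Minimal two-graph matching sketch with pygmtools (assumed API; verify against the docs).
import numpy as np
import pygmtools as pygm

pygm.set_backend('numpy')  # pygmtools also supports pytorch / jittor / paddle / mindspore / tensorflow

n1 = n2 = 4                                # 4 keypoints in each toy graph
K = np.random.rand(1, n1 * n2, n1 * n2)    # toy Lawler-QAP affinity matrix, batch size 1
K = (K + K.transpose(0, 2, 1)) / 2         # symmetrize the affinities

X_soft = pygm.rrwm(K, n1=np.array([n1]), n2=np.array([n2]))  # soft node-to-node correspondence
X = pygm.hungarian(X_soft)                                    # discretize to a permutation matrix
print(X[0])  # X[0][i, j] == 1 means keypoint i in graph 1 is matched to keypoint j in graph 2
```
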
## Deep Graph Matching Benchmarks

### PascalVOC - 2GM

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [GMN](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#gmn) | 2018 | 0.4163 | 0.5964 | 0.6027 | 0.4795 | 0.7918 | 0.7020 | 0.6735 | 0.6488 | 0.3924 | 0.6128 | 0.6693 | 0.5976 | 0.6106 | 0.5975 | 0.3721 | 0.7818 | 0.6800 | 0.4993 | 0.8421 | 0.9141 | 0.6240 |
| [PCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca-gm) | 2019 | 0.4979 | 0.6193 | 0.6531 | 0.5715 | 0.7882 | 0.7556 | 0.6466 | 0.6969 | 0.4164 | 0.6339 | 0.5073 | 0.6705 | 0.6671 | 0.6164 | 0.4447 | 0.8116 | 0.6782 | 0.5922 | 0.7845 | 0.9042 | 0.6478 |
| [NGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | 0.5010 | 0.6350 | 0.5790 | 0.5340 | 0.7980 | 0.7710 | 0.7360 | 0.6820 | 0.4110 | 0.6640 | 0.4080 | 0.6030 | 0.6190 | 0.6350 | 0.4560 | 0.7710 | 0.6930 | 0.6550 | 0.7920 | 0.8820 | 0.6413 |
| [NHGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | 0.5240 | 0.6220 | 0.5830 | 0.5570 | 0.7870 | 0.7770 | 0.7440 | 0.7070 | 0.4200 | 0.6460 | 0.5380 | 0.6100 | 0.6190 | 0.6080 | 0.4680 | 0.7910 | 0.6680 | 0.5510 | 0.8090 | 0.8870 | 0.6458 |
| [IPCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca-gm) | 2020 | 0.5378 | 0.6622 | 0.6714 | 0.6120 | 0.8039 | 0.7527 | 0.7255 | 0.7252 | 0.4455 | 0.6524 | 0.5430 | 0.6724 | 0.6790 | 0.6421 | 0.4793 | 0.8435 | 0.7079 | 0.6398 | 0.8380 | 0.9083 | 0.6770 |
| [CIE-H](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#cie-h) | 2020 | 0.5250 | 0.6858 | 0.7015 | 0.5706 | 0.8207 | 0.7700 | 0.7073 | 0.7313 | 0.4383 | 0.6994 | 0.6237 | 0.7018 | 0.7031 | 0.6641 | 0.4763 | 0.8525 | 0.7172 | 0.6400 | 0.8385 | 0.9168 | 0.6892 |
| [BBGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#bbgm) | 2020 | 0.6187 | 0.7106 | 0.7969 | 0.7896 | 0.8740 | 0.9401 | 0.8947 | 0.8022 | 0.5676 | 0.7914 | 0.6458 | 0.7892 | 0.7615 | 0.7512 | 0.6519 | 0.9818 | 0.7729 | 0.7701 | 0.9494 | 0.9393 | 0.7899 |
| [NGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | 0.6184 | 0.7118 | 0.7762 | 0.7875 | 0.8733 | 0.9363 | 0.8770 | 0.7977 | 0.5535 | 0.7781 | 0.8952 | 0.7880 | 0.8011 | 0.7923 | 0.6258 | 0.9771 | 0.7769 | 0.7574 | 0.9665 | 0.9323 | 0.8011 |
| [NHGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | 0.5995 | 0.7154 | 0.7724 | 0.7902 | 0.8773 | 0.9457 | 0.8903 | 0.8181 | 0.5995 | 0.8129 | 0.8695 | 0.7811 | 0.7645 | 0.7750 | 0.6440 | 0.9872 | 0.7778 | 0.7538 | 0.9787 | 0.9280 | 0.8040 |
| [COMMON](https://arxiv.org/pdf/2212.04085.pdf) | 2023 | 0.6560 | 0.7520 | 0.8080 | 0.7950 | 0.8930 | 0.9230 | 0.9010 | 0.8180 | 0.6160 | 0.8070 | 0.9500 | 0.8200 | 0.8160 | 0.7950 | 0.6660 | 0.9890 | 0.7890 | 0.8090 | 0.9930 | 0.9380 | 0.8270 |
| [COMMON+](https://xlearning-lab.com/assets/2026-TPAMI-Learning-With-Partial-and-Noisy-Correspondence-in-Graph-Matching.pdf) | 2026 | 0.6880 | 0.7550 | 0.8260 | 0.7740 | 0.9000 | 0.9220 | 0.8950 | 0.8070 | 0.6180 | 0.8240 | 0.9530 | 0.8050 | 0.8210 | 0.8160 | 0.6770 | 0.9880 | 0.7990 | 0.8100 | 0.9850 | 0.9540 | 0.8310 |

### Willow Object Class - 2GM & MGM

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [GMN](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#gmn) | 2018 | - | 0.6790 | 0.7670 | 0.9980 | 0.6920 | 0.8310 | 0.7934 |
| [PCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca-gm) | 2019 | - | 0.8760 | 0.8360 | 1.0000 | 0.7760 | 0.8840 | 0.8744 |
| [NGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | - | 0.8420 | 0.7760 | 0.9940 | 0.7680 | 0.8830 | 0.8530 |
| [NHGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | - | 0.8650 | 0.7220 | 0.9990 | 0.7930 | 0.8940 | 0.8550 |
| [NMGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | - | 0.7850 | 0.9210 | 1.0000 | 0.7870 | 0.9480 | 0.8880 |
| [IPCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca) | 2020 | - | 0.9040 | 0.8860 | 1.0000 | 0.8300 | 0.8830 | 0.9006 |
| [CIE-H](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#cie-h) | 2020 | - | 0.8581 | 0.8206 | 0.9994 | 0.8836 | 0.8871 | 0.8898 |
| [BBGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#bbgm) | 2020 | - | 0.9680 | 0.8990 | 1.0000 | 0.9980 | 0.9940 | 0.9718 |
| [GANN-MGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#gann) | 2020 | self-supervised | 0.9600 | 0.9642 | 1.0000 | 1.0000 | 0.9879 | 0.9906 |
| [NGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | - | 0.9740 | 0.9340 | 1.0000 | 0.9860 | 0.9830 | 0.9754 |
| [NHGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | - | 0.9740 | 0.9390 | 1.0000 | 0.9860 | 0.9890 | 0.9780 |
| [NMGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | - | 0.9760 | 0.9447 | 1.0000 | 1.0000 | 0.9902 | 0.9822 |
| [COMMON](https://arxiv.org/pdf/2212.04085.pdf) | 2023 | - | 0.9760 | 0.9820 | 1.0000 | 1.0000 | 0.9960 | 0.9910 |
| [COMMON+](https://xlearning-lab.com/assets/2026-TPAMI-Learning-With-Partial-and-Noisy-Correspondence-in-Graph-Matching.pdf) | 2026 | - | 0.9830 | 0.9820 | 1.0000 | 1.0000 | 1.0000 | 0.9930 |

### SPair-71k - 2GM

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [GMN](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#gmn) | 2018 | 0.5991 | 0.5099 | 0.7428 | 0.4672 | 0.6328 | 0.7552 | 0.6950 | 0.6462 | 0.5751 | 0.7302 | 0.5866 | 0.5914 | 0.6320 | 0.5116 | 0.8687 | 0.5787 | 0.6998 | 0.9238 | 0.6526 |
| [PCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca-gm) | 2019 | 0.6467 | 0.4571 | 0.7811 | 0.5128 | 0.6381 | 0.7272 | 0.6122 | 0.6278 | 0.6255 | 0.6822 | 0.5906 | 0.6115 | 0.6486 | 0.5773 | 0.8742 | 0.6042 | 0.7246 | 0.9283 | 0.6595 |
| [NGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2019 | 0.6644 | 0.5262 | 0.7696 | 0.4960 | 0.6766 | 0.7878 | 0.6764 | 0.6827 | 0.5917 | 0.7364 | 0.6391 | 0.6066 | 0.7074 | 0.6089 | 0.8754 | 0.6387 | 0.7979 | 0.9150 | 0.6887 |
| [IPCA-GM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#pca) | 2020 | 0.6901 | 0.5286 | 0.8037 | 0.5425 | 0.6653 | 0.8001 | 0.6847 | 0.7136 | 0.6136 | 0.7479 | 0.6631 | 0.6514 | 0.6956 | 0.6391 | 0.9112 | 0.6540 | 0.8291 | 0.9750 | 0.7116 |
| [CIE-H](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#cie-h) | 2020 | 0.7146 | 0.5710 | 0.8168 | 0.5672 | 0.6794 | 0.8246 | 0.7339 | 0.7449 | 0.6259 | 0.7804 | 0.6872 | 0.6626 | 0.7374 | 0.6604 | 0.9246 | 0.6717 | 0.8228 | 0.9751 | 0.7334 |
| [BBGM](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#bbgm) | 2020 | 0.7250 | 0.6455 | 0.8780 | 0.7581 | 0.6927 | 0.9395 | 0.8859 | 0.7992 | 0.7456 | 0.8315 | 0.7878 | 0.7710 | 0.7650 | 0.7634 | 0.9820 | 0.8554 | 0.9678 | 0.9931 | 0.8215 |
| [NGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | 0.6877 | 0.6331 | 0.8677 | 0.7013 | 0.6971 | 0.9467 | 0.8740 | 0.7737 | 0.7205 | 0.8067 | 0.7426 | 0.7253 | 0.7946 | 0.7340 | 0.9888 | 0.8123 | 0.9426 | 0.9867 | 0.8020 |
| [NHGM-v2](https://thinkmatch.readthedocs.io/en/latest/guide/models.html#ngm) | 2021 | 0.6202 | 0.5781 | 0.8642 | 0.6846 | 0.6872 | 0.9335 | 0.8081 | 0.7656 | 0.6919 | 0.7987 | 0.6623 | 0.7171 | 0.7812 | 0.6953 | 0.9824 | 0.8444 | 0.9316 | 0.9926 | 0.7799 |
| [COMMON](https://arxiv.org/pdf/2212.04085.pdf) | 2023 | 0.7730 | 0.6820 | 0.9200 | 0.7950 | 0.7040 | 0.9750 | 0.9160 | 0.8250 | 0.7220 | 0.8800 | 0.8000 | 0.7410 | 0.8340 | 0.8280 | 0.9990 | 0.8440 | 0.9820 | 0.9980 | 0.8450 |
| [COMMON+](https://xlearning-lab.com/assets/2026-TPAMI-Learning-With-Partial-and-Noisy-Correspondence-in-Graph-Matching.pdf) | 2026 | 0.7980 | 0.7230 | 0.9170 | 0.7870 | 0.7080 | 0.9800 | 0.9180 | 0.8190 | 0.7280 | 0.8820 | 0.8330 | 0.7640 | 0.8340 | 0.8390 | 0.9990 | 0.8610 | 0.9920 | 0.9990 | 0.8550 |

_ThinkMatch_ includes the following datasets with the provided benchmarks:

* **PascalVOC-Keypoint**
* **Willow-Object-Class**
* **CUB2011**
* **SPair-71k**
* **IMC-PT-SparseGM**

**TODO** We also plan to include the following datasets in the future:
* **Synthetic data**

_ThinkMatch_ also supports the following graph matching settings:
* **2GM**, namely **Two**-**G**raph **M**atching, where only a pair of two graphs is matched at a time.
* **MGM**, namely **M**ulti-**G**raph **M**atching, where more than two graphs are jointly matched.
* **MGM3**, namely **M**ulti-**G**raph **M**atching with a **M**ixture of **M**odes, where multiple graphs are jointly considered and, at the same time, the graphs may come from different categories.

## Get Started

### Docker (RECOMMENDED)

Get the recommended docker image by
```bash
docker pull runzhongwang/thinkmatch:torch1.6.0-cuda10.1-cudnn7-pyg1.6.3-pygmtools0.5.1
```

Other combinations of torch and cuda are also available. See available images at [docker hub](https://hub.docker.com/r/runzhongwang/thinkmatch/tags).

See details in [ThinkMatch-runtime](https://github.com/Thinklab-SJTU/ThinkMatch-runtime).

### Manual configuration (for Ubuntu)
This repository is developed and tested with Ubuntu 16.04, Python 3.7, PyTorch 1.6, cuda10.1, cudnn7 and torch-geometric 1.6.3.
1. Install and configure PyTorch 1.6 (with GPU support).
1. Install ninja-build: ``apt-get install ninja-build``
1. Install python packages:
    ```bash
    pip install tensorboardX scipy easydict pyyaml xlrd xlwt pynvml pygmtools
    ```
1. Install building tools for LPMP:
    ```bash
    apt-get install -y findutils libhdf5-serial-dev git wget libssl-dev

    wget https://github.com/Kitware/CMake/releases/download/v3.19.1/cmake-3.19.1.tar.gz && tar zxvf cmake-3.19.1.tar.gz
    cd cmake-3.19.1 && ./bootstrap && make && make install
    ```
1. Install and build LPMP:
    ```bash
    python -m pip install git+https://git@github.com/rogerwwww/lpmp.git
    ```
    You may need ``gcc-9`` to successfully build LPMP. Here we provide an example of installing and configuring ``gcc-9``:
    ```bash
    apt-get update
    apt-get install -y software-properties-common
    add-apt-repository ppa:ubuntu-toolchain-r/test

    apt-get install -y gcc-9 g++-9
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 60 --slave /usr/bin/g++ g++ /usr/bin/g++-9
    ```
1. Install torch-geometric:
    ```bash
    export CUDA=cu101
    export TORCH=1.6.0
    /opt/conda/bin/pip install torch-scatter==2.0.5 -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    /opt/conda/bin/pip install torch-sparse==0.6.8 -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    /opt/conda/bin/pip install torch-cluster==1.5.8 -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    /opt/conda/bin/pip install torch-spline-conv==1.2.0 -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
    /opt/conda/bin/pip install torch-geometric==1.6.3
    ```
1. If you have configured ``gcc-9`` to build LPMP, be sure to switch back to ``gcc-7`` because this code repository is based on ``gcc-7``. Here is also an example:
    ```bash
    update-alternatives --remove gcc /usr/bin/gcc-9
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 60 --slave /usr/bin/g++ g++ /usr/bin/g++-7
    ```

### Available datasets

Note: All of the following datasets can be automatically downloaded and unzipped by `pygmtools`, but you can also download a dataset yourself if a download failure occurs.

1. PascalVOC-Keypoint

    1. Download the [VOC2011 dataset](http://host.robots.ox.ac.uk/pascal/VOC/voc2011/index.html) and make sure it looks like ``data/PascalVOC/TrainVal/VOCdevkit/VOC2011``

    1. Download keypoint annotations for VOC2011 from the [Berkeley server](https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/shape/poselets/voc2011_keypoints_Feb2012.tgz) or [google drive](https://drive.google.com/open?id=1D5o8rmnY1-DaDrgAXSygnflX5c-JyUWR) and make sure it looks like ``data/PascalVOC/annotations``

    1. The train/test split is available in ``data/PascalVOC/voc2011_pairs.npz``. **This file must be added manually.**

    Please cite the following papers if you use the PascalVOC-Keypoint dataset:
    ```
    @article{EveringhamIJCV10,
      title={The pascal visual object classes (voc) challenge},
      author={Everingham, Mark and Van Gool, Luc and Williams, Christopher KI and Winn, John and Zisserman, Andrew},
      journal={International Journal of Computer Vision},
      volume={88},
      pages={303--338},
      year={2010}
    }

    @inproceedings{BourdevICCV09,
      title={Poselets: Body part detectors trained using 3d human pose annotations},
      author={Bourdev, L. and Malik, J.},
      booktitle={International Conference on Computer Vision},
      pages={1365--1372},
      year={2009},
      organization={IEEE}
    }
    ```
1. Willow-Object-Class

    1. Download the Willow-ObjectClass dataset from [the official site](http://www.di.ens.fr/willow/research/graphlearning/WILLOW-ObjectClass_dataset.zip) or [hugging face](https://huggingface.co/heatingma/pygmtools/resolve/main/WILLOW-ObjectClass_dataset.zip)

    1. Unzip the dataset and make sure it looks like ``data/WillowObject/WILLOW-ObjectClass``

    Please cite the following paper if you use the Willow-Object-Class dataset:
    ```
    @inproceedings{ChoICCV13,
      author={Cho, Minsu and Alahari, Karteek and Ponce, Jean},
      title={Learning Graphs to Match},
      booktitle={International Conference on Computer Vision},
      pages={25--32},
      year={2013}
    }
    ```

1. CUB2011

    1. Download the [CUB-200-2011 dataset](http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz).

    1. Unzip the dataset and make sure it looks like ``data/CUB_200_2011/CUB_200_2011``

    Please cite the following report if you use the CUB2011 dataset:
    ```
    @techreport{CUB2011,
      Title = {{The Caltech-UCSD Birds-200-2011 Dataset}},
      Author = {Wah, C. and Branson, S. and Welinder, P. and Perona, P. and Belongie, S.},
      Year = {2011},
      Institution = {California Institute of Technology},
      Number = {CNS-TR-2011-001}
    }
    ```

1. IMC-PT-SparseGM

    1. Download the IMC-PT-SparseGM dataset from [google drive](https://drive.google.com/file/d/1C3xl_eWaCG3lL2C3vP8Fpsck88xZOHtg/view?usp=sharing) or [baidu drive (code: g2cj)](https://pan.baidu.com/s/1ZQ3AMqoHtE_uA86GPf2h4w) or [hugging face](https://huggingface.co/datasets/esflfei/IMC-PT-SparseGM).

    1. Unzip the dataset and make sure it looks like ``data/IMC-PT-SparseGM/annotations`` for 50 anchor points and ``data/IMC-PT-SparseGM/annotations_100`` for 100 anchor points

    Please cite the following papers if you use the IMC-PT-SparseGM dataset:
    ```
    @article{JinIJCV21,
      title={Image Matching across Wide Baselines: From Paper to Practice},
      author={Jin, Yuhe and Mishkin, Dmytro and Mishchuk, Anastasiia and Matas, Jiri and Fua, Pascal and Yi, Kwang Moo and Trulls, Eduard},
      journal={International Journal of Computer Vision},
      pages={517--547},
      year={2021}
    }

    @InProceedings{WangCVPR23,
      author    = {Wang, Runzhong and Guo, Ziao and Jiang, Shaofei and Yang, Xiaokang and Yan, Junchi},
      title     = {Deep Learning of Partial Graph Matching via Differentiable Top-K},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2023}
    }
    ```

1. SPair-71k

    1. Download the [SPair-71k dataset](http://cvlab.postech.ac.kr/research/SPair-71k/)

    1. Unzip the dataset and make sure it looks like ``data/SPair-71k``

    Please cite the following papers if you use the SPair-71k dataset:

    ```
    @article{min2019spair,
      title={SPair-71k: A Large-scale Benchmark for Semantic Correspondence},
      author={Juhong Min and Jongmin Lee and Jean Ponce and Minsu Cho},
      journal={arXiv preprint arXiv:1908.10543},
      year={2019}
    }

    @InProceedings{min2019hyperpixel,
      title={Hyperpixel Flow: Semantic Correspondence with Multi-layer Neural Features},
      author={Juhong Min and Jongmin Lee and Jean Ponce and Minsu Cho},
      booktitle={ICCV},
      year={2019}
    }
    ```

For more information, please see [pygmtools](https://pypi.org/project/pygmtools/).

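If you would rather let ``pygmtools`` handle the download and preprocessing, a sketch along the following lines should work. The import path, class name and arguments are assumptions based on the pygmtools benchmark module and may differ between versions, so check the pygmtools documentation before relying on them:

```python
# Sketch: let pygmtools fetch and prepare a benchmark dataset (assumed API; verify in the docs).
from pygmtools.benchmark import Benchmark

# Constructing the benchmark is expected to download and unzip the data on first use.
bm = Benchmark(name='WillowObject', sets='train', obj_resize=(256, 256))

# Draw a random pair of graphs from one class together with the ground-truth permutation.
data_list, perm_mat_dict, ids = bm.rand_get_data(cls='Car', num=2)
```
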
## Run the Experiment

Run training and evaluation
```bash
python train_eval.py --cfg path/to/your/yaml
```

and replace ``path/to/your/yaml`` by the path to your configuration file, e.g.
```bash
python train_eval.py --cfg experiments/vgg16_pca_voc.yaml
```

Default configuration files are stored in ``experiments/`` and you are welcome to try your own configurations. If you find a better yaml configuration, please let us know by raising an issue or a PR and we will update the benchmark!

## Pretrained Models
_ThinkMatch_ provides pretrained models. The model weights are available via [google drive](https://drive.google.com/drive/folders/11xAQlaEsMrRlIVc00nqWrjHf8VOXUxHQ?usp=sharing)

To use the pretrained models, first download the weight files, then add the following line to your yaml file:
```yaml
PRETRAINED_PATH: path/to/your/pretrained/weights
```

## Chat with the Community

If you have any questions, or if you are experiencing any issues, feel free to raise an issue on GitHub.

We also offer the following chat rooms if you are more comfortable with them:

* Discord (for English users):

  [![discord](https://oss.gittoolsai.com/images/Thinklab-SJTU_ThinkMatch_readme_20508dc0b987.png)](https://discord.gg/8m6n7rRz9T)

* QQ Group (for Chinese users): 696401889

  [![ThinkMatch/pygmtools chat group](https://oss.gittoolsai.com/images/Thinklab-SJTU_ThinkMatch_readme_06e1fec4a87e.png)](https://qm.qq.com/cgi-bin/qm/qr?k=NlPuwwvaFaHzEWD8w7jSOTzoqSLIM80V&jump_from=webapi&authKey=chI2htrWDujQed6VtVid3V1NXEoJvwz3MVwruax6x5lQIvLsC8BmpmzBJOCzhtQd)

## Citing ThinkMatch

If you find any of the models useful in your research, please cite the corresponding papers (BibTeX citations are available for each model in the [``models/``](https://github.com/Thinklab-SJTU/ThinkMatch/tree/master/models) directory).

If you like this framework, you may also cite the underlying library ``pygmtools``, which is called during training & testing:
```
@article{wang2024pygm,
  author  = {Runzhong Wang and Ziao Guo and Wenzheng Pan and Jiale Ma and Yikai Zhang and Nan Yang and Qi Liu and Longxuan Wei and Hanxue Zhang and Chang Liu and Zetian Jiang and Xiaokang Yang and Junchi Yan},
  title   = {Pygmtools: A Python Graph Matching Toolkit},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {33},
  pages   = {1-7},
  url     = {https://jmlr.org/papers/v25/23-0572.html},
}
```
解压后确保路径为 ``data\u002FSPair-71k``\n\n    如使用SPair-71k数据集，请引用以下论文：\n\n    ```\n    @article{min2019spair,\n       title={SPair-71k: A Large-scale Benchmark for Semantic Correspondence},\n       author={Juhong Min and Jongmin Lee and Jean Ponce and Minsu Cho},\n       journal={arXiv prepreint arXiv:1908.10543},\n       year={2019}\n    }\n    \n    @InProceedings{min2019hyperpixel, \n       title={Hyperpixel Flow: Semantic Correspondence with Multi-layer Neural Features},\n       author={Juhong Min and Jongmin Lee and Jean Ponce and Minsu Cho},\n       booktitle={ICCV},\n       year={2019}\n    }\n    ```\n更多详情请访问 [pygmtools](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpygmtools\u002F)。\n\n## 运行实验\n\n执行训练和评估：\n```bash\npython train_eval.py --cfg path\u002Fto\u002Fyour\u002Fyaml\n```\n\n将 ``path\u002Fto\u002Fyour\u002Fyaml`` 替换为配置文件路径，例如：\n```bash\npython train_eval.py --cfg experiments\u002Fvgg16_pca_voc.yaml\n```\n\n默认配置文件存储在 ``experiments\u002F`` 目录，欢迎尝试自定义配置。如发现更优配置，请提交Issue或PR告知，我们将更新基准！\n\n## 预训练模型\n_ThinkMatch_ 提供预训练模型。权重文件可通过 [google drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11xAQlaEsMrRlIVc00nqWrjHf8VOXUxHQ?usp=sharing) 获取。\n\n使用预训练模型时，请先下载权重文件，然后在yaml配置文件中添加：\n```yaml\nPRETRAINED_PATH: path\u002Fto\u002Fyour\u002Fpretrained\u002Fweights\n```\n\n## 社区交流\n\n如有问题或遇到异常，请在GitHub提交Issue。\n\n我们还提供以下交流渠道：\n\n* Discord (英文用户)：\n  \n  [![discord](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FThinklab-SJTU_ThinkMatch_readme_20508dc0b987.png)](https:\u002F\u002Fdiscord.gg\u002F8m6n7rRz9T)\n\n* QQ群(中文用户)：696401889\n  \n  [![ThinkMatch\u002Fpygmtools交流群](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FThinklab-SJTU_ThinkMatch_readme_06e1fec4a87e.png)](https:\u002F\u002Fqm.qq.com\u002Fcgi-bin\u002Fqm\u002Fqr?k=NlPuwwvaFaHzEWD8w7jSOTzoqSLIM80V&jump_from=webapi&authKey=chI2htrWDujQed6VtVid3V1NXEoJvwz3MVwruax6x5lQIvLsC8BmpmzBJOCzhtQd)\n\n## 引用 ThinkMatch\n\n如果您在研究中使用了任何模型，请引用相应的论文（每个模型的 BibTeX 引用（参考文献引用）可在 [``models\u002F``](https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Ftree\u002Fmaster\u002Fmodels) 目录中找到）。\n\n如果您喜欢该框架，也可以引用其底层库 ``pygmtools``（在训练和测试期间调用的库）：\n```\n@article{wang2024pygm,\n  author  = {Runzhong Wang and Ziao Guo and Wenzheng Pan and Jiale Ma and Yikai Zhang and Nan Yang and Qi Liu and Longxuan Wei and Hanxue Zhang and Chang Liu and Zetian Jiang and Xiaokang Yang and Junchi Yan},\n  title   = {Pygmtools: A Python Graph Matching Toolkit},\n  journal = {Journal of Machine Learning Research},\n  year    = {2024},\n  volume  = {25},\n  number  = {33},\n  pages   = {1-7},\n  url     = {https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv25\u002F23-0572.html},\n}\n```","# ThinkMatch 快速上手指南\n\n---\n\n## 环境准备\n### 系统要求\n- 操作系统：Linux\u002FmacOS（推荐Ubuntu 18.04+）\n- Python：3.7-3.10\n- PyTorch：1.8.0+\n- 硬件：GPU（推荐NVIDIA显卡，CUDA 11.1+）\n\n### 前置依赖\n```bash\n# 安装基础依赖（推荐使用国内镜像加速）\npip install --upgrade pip -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\npip install torch torchvision torchaudio -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\npip install pygmtools numpy scipy matplotlib -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装编译依赖（Ubuntu）\nsudo apt-get install build-essential cmake g++\n```\n\n---\n\n## 安装步骤\n### 方式一：Docker（推荐）\n```bash\n# 拉取镜像（使用阿里云加速器）\ndocker pull registry.cn-hangzhou.aliyuncs.com\u002Frunzhongwang\u002Fthinkmatch:latest\n\n# 创建容器\ndocker run --gpus all -it --name thinkmatch_container registry.cn-hangzhou.aliyuncs.com\u002Frunzhongwang\u002Fthinkmatch:latest 
\u002Fbin\u002Fbash\n```\n\n### 方式二：源码安装\n```bash\n# 克隆仓库（使用国内镜像加速）\ngit clone https:\u002F\u002Fhub.fastgit.org\u002FThinklab-SJTU\u002FThinkMatch.git\ncd ThinkMatch\n\n# 安装依赖\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 编译扩展（可选）\npython setup.py build_ext --inplace\n```\n\n---\n\n## 基本使用\n### 运行预训练模型（以GMN为例）\n```bash\n# 下载预训练模型（需在容器内或源码目录执行）\nmkdir -p pretrained && cd pretrained\nwget https:\u002F\u002Fthinkmatch.readthedocs.io\u002Fen\u002Flatest\u002F_downloads\u002Fgmn_model.pth\n\n# 运行推理示例\ncd ..\u002Fexamples\npython gmn_inference.py \\\n  --model_path ..\u002Fpretrained\u002Fgmn_model.pth \\\n  --image1 ..\u002Fdata\u002Ftest_images\u002Fairplane1.jpg \\\n  --image2 ..\u002Fdata\u002Ftest_images\u002Fairplane2.jpg\n```\n\n### 输出结果\n程序会输出匹配的节点对应关系，并生成可视化结果图`matching_result.png`。\n\n---\n\n## 验证安装\n```bash\n# 运行单元测试（可选）\ncd ThinkMatch\npython -m pytest tests\u002Ftest_gm.py -v\n```\n\n> 注意：完整数据集和训练流程请参考[官方文档](https:\u002F\u002Fthinkmatch.readthedocs.io)","AR游戏开发团队需要实现跨视角角色动作捕捉。工程师试图通过图像关键点匹配技术，将玩家在不同摄像头角度下的骨骼动作映射到3D角色模型。\n\n### 没有 ThinkMatch 时\n- 需要从零复现论文中的图匹配算法，花费2周时间调试GMN网络结构中的特征对齐模块\n- 使用传统QAP求解器处理100节点图时，单次匹配耗时3.2秒，导致游戏动作延迟明显\n- 在PyTorch1.8与CUDA11.4环境出现内存泄漏问题，需额外开发内存优化模块\n- 论文中的PCA-GM方法在自定义数据集上复现后，匹配准确率仅达到68.5%，低于原文82%的指标\n\n### 使用 ThinkMatch 后\n- 直接调用预实现的GMN模块，30分钟完成特征提取、图构建和匹配头的集成\n- 借助内置的高效QAP求解器，相同规模图匹配耗时降至0.4秒，满足游戏实时性需求\n- 使用Docker镜像部署环境，规避底层依赖冲突，训练稳定性提升90%\n- 通过自动化的超参数搜索功能，在自定义数据集上达到81.2%的匹配准确率，接近论文水平\n- 利用内置可视化工具快速定位错误匹配，调试效率提升3倍\n\nThinkMatch通过工业级算法实现与优化工具链，将图像关键点匹配的落地成本降低70%，使研究级算法在消费级硬件上达到商用标准。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FThinklab-SJTU_ThinkMatch_bf3c2397.png","Thinklab-SJTU","Thinklab@SJTU","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FThinklab-SJTU_c6a6209f.png","Thinklab at Shanghai Jiao Tong University, led by Prof. 
Junchi Yan.",null,"http:\u002F\u002Fthinklab.sjtu.edu.cn","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU",[83,87,91],{"name":84,"color":85,"percentage":86},"Python","#3572A5",95,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",3.4,{"name":92,"color":93,"percentage":94},"Cuda","#3A4E3A",1.6,877,122,"2026-04-02T14:24:35","NOASSERTION","Linux, macOS, Windows","需要 NVIDIA GPU，显存 8GB+，CUDA 11.7+","未说明",{"notes":103,"python":104,"dependencies":105},"推荐使用 Docker 镜像快速部署，训练时需预留至少 5GB 显存，部分模型需下载额外数据集","3.8+",[106,107,108,109,110,111,112,113,114,115],"torch>=2.0","pytorch-lightning","hydra-core","wandb","opencv-python","tqdm","scikit-learn","matplotlib","seaborn","jupyter",[13],[118,119,120,121],"graph-matching","neural-graph-matching","quadratic-assignment-problem","combinatorial-optimization","2026-03-27T02:49:30.150509","2026-04-06T05:17:52.957929",[125,130,135,140,145,150],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},4737,"运行IMC-PT-SparseGM实验时提示文件未找到怎么办？","需手动下载IMC-PT-SparseGM数据集并放置在`data\u002FIMC-PT-SparseGM`目录下。若已下载但报错，删除该目录下的`train.json`和`test.json`文件后重试。","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F65",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},4734,"如何处理多GPU训练时出现的设备不匹配错误（cuda:0和cuda:1冲突）？","请检查PyTorch版本是否与代码兼容。若使用新版PyTorch，需修改`src\u002Fparallel\u002Fscatter_gather.py`文件中的`gather_map`函数，确保张量设备统一。具体修复方案见最新提交代码。","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F24",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},4735,"配置文件中PROBLEM.RESCALE参数类型不匹配如何解决？","在`config.py`的`_merge_a_into_b`函数中添加两行代码强制转换类型：\n```python\nif 'RESCALE' in yaml_cfg['PROBLEM']:\n    yaml_cfg['PROBLEM']['RESCALE'] = tuple(yaml_cfg['PROBLEM']['RESCALE'])\n```","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F68",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},4736,"如何在自定义数据集上运行代码？","需继承`Benchmark`类创建自定义数据集类，并实现`process()`函数生成JSON标注文件。同时修改`dataset_config.py`配置数据集路径，可通过单独调用数据集类的`process()`函数生成.json文件。","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F47",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},4738,"如何可视化图像匹配结果？","请参考配套项目`pygmtools`的[示例库](https:\u002F\u002Fpygmtools.readthedocs.io\u002Fen\u002Flatest\u002Fauto_examples\u002Findex.html)，其中包含完整的可视化代码实现。","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F17",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},4739,"图像特征输入时是否需要执行图神经网络层计算？","不需要。若特征F1\u002FF2直接来自图像（非图结构数据），可跳过`emb1, emb2 = gnn_layer([A_src, emb1], [A_tgt, emb2])`步骤，因`A_src`\u002F`A_tgt`等变量仅用于图连接性处理。","https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fissues\u002F15",[156,161,166],{"id":157,"version":158,"summary_zh":159,"released_at":160},104272,"0.3.0","+ Some compatibility issues are fixed with later Python\u002FPytorch\u002FPyG versions.\r\n+ Docker images are now automatically built by Github Actions in [ThinkMatch-runtime](https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch-runtime)\r\n+ Switch to ``pygmtools`` 0.3.x API\r\n+ Resolve accuracy issues in Spair71K benchmark\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fcompare\u002F0.2.0...0.3.0","2022-11-21T05:09:16",{"id":162,"version":163,"summary_zh":164,"released_at":165},104273,"0.2.0","+ We include ``pygmtools`` to handle dataset downloading\u002Fprocessing and evaluation. 
\r\n  + You may also implement other datasets by the ``pygmtools`` API.\r\n+ We officially support the **SPair-71k** dataset.\r\n+ Bug fixes.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fcompare\u002F0.1.0...0.2.0","2022-02-16T06:07:51",{"id":167,"version":168,"summary_zh":169,"released_at":170},104274,"0.1.0","**Full Changelog**: https:\u002F\u002Fgithub.com\u002FThinklab-SJTU\u002FThinkMatch\u002Fcommits\u002F0.1.0","2021-11-02T08:49:53"]