[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-caiyuanhao1998--MST":3,"tool-caiyuanhao1998--MST":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":112,"github_topics":113,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":175},958,"caiyuanhao1998\u002FMST","MST","A toolbox for spectral compressive imaging reconstruction including MST (CVPR 2022), CST (ECCV 2022), DAUHST (NeurIPS 2022), BiSCI (NeurIPS 2023), HDNet (CVPR 2022), MST++ (CVPRW 2022), etc.","MST 是一套专注于**光谱压缩成像重建**的开源算法工具箱，由清华大学等团队开发，集成了 MST、CST、DAUHST、BiSCI、HDNet、MST++ 等 15 余种前沿算法。这些模型均发表于 CVPR、NeurIPS、ECCV 等顶级会议，其中 MST++ 更是荣获 NTIRE 2022 光谱重建挑战赛冠军。\n\n光谱压缩成像是一种\"用更少数据捕捉更丰富光谱信息\"的成像技术，但原始数据往往是严重压缩的二维测量结果，需要复杂的算法才能还原为高分辨率三维光谱图像。MST 正是解决这一**从压缩测量中高精度重建高光谱图像**的难题，让普通 RGB 相机也能\"看\"到远超人眼的光谱细节。\n\n这套工具主要面向**计算机视觉研究人员、光学成像工程师和遥感图像处理开发者**。无论是研究新型重建网络架构，还是需要为光谱相机配套后端算法，MST 都提供了即插即用的基准实现和预训练模型。技术亮点包括：基于 Transformer 的谱间注意力机制、从粗到精的多阶段重建策略，以及针对硬件退化特性的自适应展开网络设计，兼顾重建精度与计算效率。","\n\n# A Toolbox for Spectral Compressive 
Imaging\n[![winner](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMST++-Winner_of_NTIRE_2022_Spectral_Reconstruction_Challenge-179bd3)](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus\u002F)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-MST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F501101943)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-CST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F544979161)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-DAUHST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F576280023)\n\n#### Authors\nYuanhao Cai*, Jing Lin*, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool\n\n#### Papers\n- [Binarized Spectral Compressive Imaging (NeurIPS 2023)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299)\n- [Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction (CVPR 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)\n- [Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction (ECCV 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)\n- [Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging (NeurIPS 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)\n- [MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction (CVPRW 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)\n- [HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging (CVPR 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149)\n\n\n\n#### Awards\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_17e4f7897fc4.png\"  height=240> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_aee90a40e1e8.png\"  height=240>\n\n\n#### Introduction\nThis is a baseline and toolbox for spectral 
compressive imaging reconstruction. This repo supports **over 15** algorithms. Our method [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) won the NTIRE 2022 Challenge on spectral recovery from RGB images. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.\n\n\n#### News\n- **2025.02.10 :** [NTIRE 2025 Low-light Image Enhancement Challenge](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F21636) has started. Welcome to use our [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) and MST to participate in this challenge. 😄\n- **2024.04.09 :** We release the results of the three traditional model-based methods, i.e., [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846), [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf), and [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf) for your convenience to conduct research. Feel free to use them. 😄\n- **2024.03.21 :** Our methods [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) and [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) (NTIRE 2022 Spectral Reconstruction Challenge Winner) ranked top-2 in the [NTIRE 2024 Challenge on Low Light Enhancement](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F17640). Code, pre-trained models, training logs, and enhancement results will be released in [the repo of Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer). Stay tuned! 🚀\n- **2024.02.15 :** [NTIRE 2024 Challenge on Low Light Enhancement](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F17640) begins. 
Welcome to use our [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) or [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) (NTIRE 2022 Spectral Reconstruction Challenge Winner) to participate in this challenge! :trophy:\n- **2023.12.02 :** Codes for real experiments have been updated. Welcome to check and use them. 🥳\n- **2023.11.24 :** Code, models, and results of [BiSRNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10299.pdf) (NeurIPS 2023) are released at this repo. We also develop a toolbox [BiSCI](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FBiSCI) for binarized SCI reconstruction. Feel free to check and use them. 🌟\n- **2023.11.02 :** MST, MST++, CST, and DAUHST are added to the [Awesome-Transformer-Attention](https:\u002F\u002Fgithub.com\u002Fcmhungsteve\u002FAwesome-Transformer-Attention\u002Fblob\u002Fmain\u002FREADME_2.md#image-restoration) collection. 💫\n- **2023.09.21 :** Our new work [BiSRNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10299.pdf) is accepted by NeurIPS 23. Code will be released at this repo and [BiSCI](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FBiSCI)\n- **2023.02.26 :** We release the RGB images of [five real scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1VTMgEbfX9MVpGo98XVVFKaANtQfgApAg?usp=sharing) and [ten simulation scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EkJsOxYKgyehZdOgKUeY75dU19GHZgE-?usp=sharing). Please feel free to check and use them. 🌟\n- **2022.11.02 :** We have provided more visual results of state-of-the-art methods and the function to evaluate the parameters and computational complexity of models. Please feel free to check and use them. :high_brightness:\n- **2022.10.23 :** Code, models, and reconstructed HSI results of [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) have been released. 
🔥\n- **2022.09.15 :** Our [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) has been accepted by NeurIPS 2022; code and models are coming soon. :rocket:\n- **2022.07.20 :** Code, models, and reconstructed HSI results of [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) have been released. 🔥\n- **2022.07.04 :** Our paper [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) has been accepted by ECCV 2022; code and models are coming soon. :rocket:\n- **2022.06.14 :** Code and models of [MST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) and [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) have been released. This repo supports 12 learning-based methods to serve as a toolbox for Spectral Compressive Imaging. The model zoo will be enlarged. 🔥\n- **2022.05.20 :** Our work [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) is on arXiv. :dizzy:\n- **2022.04.02 :** Further work [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus\u002F) has won the NTIRE 2022 Spectral Reconstruction Challenge. :trophy: \n- **2022.03.09 :** Our work [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) is on arXiv. :dizzy:\n- **2022.03.02 :** Our paper MST has been accepted by CVPR 2022; code and models are coming soon. 
:rocket: \n\n|                          *Scene 2*                           |                          *Scene 3*                           |                          *Scene 4*                           |                          *Scene 7*                           |\n| :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_3e38422f92ab.gif\"  height=170 width=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_43d7bc9dddb5.gif\" width=170 height=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_535d210f3bdc.gif\" width=170 height=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_202f0d4bdc64.gif\" width=170 height=170> |\n\n\n&nbsp;\n\n\n## 1. 
Comparison with State-of-the-art Methods\n12 learning-based algorithms and 3 model-based methods are supported.\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>Supported algorithms:\u003C\u002Fb>\u003C\u002Fsummary>\n\n* [x] [MST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) (CVPR 2022)\n* [x] [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) (ECCV 2022)\n* [x] [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) (NeurIPS 2022)\n* [x] [BiSRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299) (NeurIPS 2023)\n* [x] [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) (CVPRW 2022)\n* [x] [HDNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149) (CVPR 2022)\n* [x] [BIRNAT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9741335\u002F) (TPAMI 2022)\n* [x] [DGSMP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07152) (CVPR 2021)\n* [x] [GAP-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08364) (Arxiv 2020)\n* [x] [TSA-Net](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58592-1_12) (ECCV 2020)\n* [x] [ADMM-Net](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMa_Deep_Tensor_ADMM-Net_for_Snapshot_Compressive_Imaging_ICCV_2019_paper.html) (ICCV 2019)\n* [x] [λ-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9010044) (ICCV 2019)\n* [x] [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846) (TIP 2007)\n* [x] [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf) (ICIP 2015)\n* [x] [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf) (TPAMI 2019)\n\n\n\u003C\u002Fdetails>\n\nWe are going to enlarge our model zoo in the future.\n\n|                   MST vs. SOTA                   |                 CST vs. 
MST                 |\n| :----------------------------------------------: | :-----------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_e8d1485c53e7.png\"  height=320> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_b195fe4e907d.png\" height=320> |\n|                  MST++ vs. SOTA                  |               DAUHST vs. SOTA               |\n|   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_2da46fe1e327.png\"  height=320>    | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_0840e3735fe4.png\" height=320>  |\n\n|            BiSRNet vs. SOTA BNNs            |\n| :-----------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_25d63c9d511a.png\"  width=812> |\n\n\n\n\n### Quantitative Comparison on Simulation Dataset\n\n|                            Method                            | Params (M) | FLOPS (G) | PSNR  | SSIM  |                          Model Zoo                           |                      Simulation  Result                      |                         Real  Result                         |\n| :----------------------------------------------------------: | :--------: | :-------: | :---: | :---: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n|                            [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846)                             |     -      |     -     | 23.12 | 0.669 |                              -                               | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1J2H7DoYblfjd9FI4kjqEuP9u8nGwrOWI?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-W8UtEGvWA7qhO_W36c5FQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1pmxDIBGGimdsM82GXbTsjWHJGCwNSg8v?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1aF_FC6kXcPMv_IDtDPSuJA?pwd=mst1) |\n|        [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf)        |     -      |     -     | 24.36 | 0.669 |                              -                               | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F17VA2RKwf2u9SHXAcn42-xcpL7mQM9SAm?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1TbFK6sJyM-S-qT5PNG4L1Q?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fL3Lh1s54jB7Tu8cFIkVGAHjKvlZZADt?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ngKdDwYF34OKd71mwy-0Dg?pwd=mst1) |\n|        [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf)         |     -      |     -     | 25.27 | 0.721 |                              -                               | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1funq1tLjNFyKDTEzKvf3OjqxQheH5_Hw?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Ztzh7TQi52avqF9rLvNbjQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14P9oioxqAgRmY57yLEI29mLz-i7R0clu?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZNUxhElif4HGRlor6CccDg?pwd=mst1) |\n|    [λ-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9010044)     |   62.64    |  117.98   | 28.53 | 0.841 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11DwTFdgtG7sRnBwvkxxfN9rcOICsEdpC?usp=sharing) \u002F [Baidu 
Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xXkL2p4_mCLeTGa68wEbNQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1csOZ2Kfto_tWIiSD0hc2nzKjR4Ze7ftA?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-0LjvHnkINW8YYaBiA4-VA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1vVjlVLUm7Gb5zkDxH-mBwM-KnjXbgsjp?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Ah9NxFyBhKuzQaAL9WsmRw?pwd=mst1) |\n| [TSA-Net](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58592-1_12) |   44.25    |  110.06   | 31.46 | 0.894 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1f29eS8WqXu31310nD-7mRR81XfLBYKBd?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1riGZ83AXXkcjHiGVNrNeYg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1BOXBIu2Ze-L__XuLRUu4y9-lq0fw2FJd?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1icqxYnsD27zrDQ95STfRvw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1OPZF0PpThWhC7aqNqhPEh3dX63OwGTC4?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1T3HX_Z7IqmC2imNLfgyBpg?pwd=mst1) |\n|          [DGSMP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07152)           |    3.76    |  646.65   | 32.63 | 0.917 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1j1k8mYKWh8FVe77Cz8hj69nI2lK2D5QC?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1v-uqYJZ5mQxOupLc6E_C1g?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1PjnTYEEDfWlTpe0jzCmImxxXLfy5Viva?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Y1AJICGUqUJEV-74Eg2FEg?pwd=mst1) | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Sw5FPMCOrkF5a9IltYWA-QaYXkpKRw5-?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BLb2w-yT25OTeeFwGFyMkQ?pwd=mst1) |\n|         [GAP-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08364)          |    4.27    |   78.58   | 33.26 | 0.917 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AF3P42DZtBzKpWvjTVKYoLmGHsL2f_SL?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hraGGd_HEsfCkSGv5QyOaw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F16_kqEj4nlE_KlHzu8Q6zQCizTTBII82F?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F12azFFSEOic7iNTFGmC7wMw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1u7MoHnZraM4NL2uAxC7U04sBhy4AoY_S?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1YDqnQ9BlqWmvdfQKx5vMQw?pwd=mst1) |\n| [ADMM-Net](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMa_Deep_Tensor_ADMM-Net_for_Snapshot_Compressive_Imaging_ICCV_2019_paper.html) |    4.27    |   78.58   | 33.58 | 0.918 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1I9JqdyikulUVjXcdciHaJxfAceqfaF2G?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ddkA9TazTq0rZReFYgGHMg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1WT0QYfC5dbigl9znD_JFNHpH0k_rTDc-?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1b2sRrJaS3PKYqqQErmtIJg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1MJKLjAo7Yzq_eF1JQK89-r40ItLUq4c8?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1aESUHFHjeMSL7shES3YmSw?pwd=mst1) |\n| 
[BIRNAT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9741335\u002F) |    4.40    |  2122.66  | 37.58 | 0.960 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1bwhy0csM6GSNY0Qe9RS7_U3Bm9JJjHoc?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1HDMQu3jWvQ9X1yQA5dprGg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fp4-D1-RGx9bICSmzpIiKBRAEmDOE3gD?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pHfEKypguCLydNsmCGojpw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1BUmusDHR-E533NdlHDLCYm_CwSCBJ0IF?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1l77HQkVbH7R6vbWNwwksTA?pwd=mst1) |\n|          [HDNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149)           |    2.37    |  154.76   | 34.97 | 0.943 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1F41BlUQulzPCf5yNo-q6V6Mdtr7bPV6-?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mCoGHT22cw7ElVSaXgU5Lw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1oUrgFUWIKR-96zAT2Y42UlAM6l5NVJsf?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Nb5IE02iAClMBC4OHcnePQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1vhzRWzIbhYxeZL-N2ZxaW2WqeT80Jogm?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BXyMsecmENSoU9mCUKU2xA?pwd=mst1) |\n|          [MST-S](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    0.93    |   12.96   | 34.26 | 0.935 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1afMG9PndlvjDTtl7UJZoElmDe5FtpCoW?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xzJRVjnI-7A54Rj_zi3crA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST-M](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    1.50    |   18.07   | 34.94 | 0.943 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1hnEuwYO9luwLmPeT98cUaik_zCPK6z30?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OUuozfd3zLqzBjHnf6evCQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST-L](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    2.03    |   28.15   | 35.18 | 0.948 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F18ZF5wC1LRmqOh6VDeXD4eB8mP0Dvv6jb?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1imed0w1CWqx7IOlSpVh7qw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [Baidu 
Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    1.33    |   19.42   | 35.99 | 0.951 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rbV8LYD5k1RVR4usMORoXxY2szlFsr_9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1QUx_MpYCBSU4Zas5gpao2g?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14sz-y99fEAJDQAN1itE5K-BlMHC1Tt3z?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1s3btC7QQrasW1NqFzOm8fQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14CgUhfUp4BFalyigL4RDnTYyGiN5ojRK?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1fYsAvAjXTLLWpzlt_S2ppQ?pwd=mst1) |\n|          [CST-S](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    1.20    |   11.67   | 34.71 | 0.940 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AdcUGiiPTfdt366NcBV8Texuu1Qw7_DI?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Vl6xufpmLWXSmhaVCsKkzQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|          [CST-M](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    1.36    |   16.91   | 35.31 | 0.947 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [Baidu 
Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FpMTtKSIN-t_natQIX2gKcu8MfPQLCG1?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1u3IjojML3H7AwSMe0CxV_Q?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|          [CST-L](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    3.00    |   27.81   | 35.85 | 0.954 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1MRFeMoi4JzhFrf_346USCFq98kdds9HY?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1UXwTyr-xZtDR68wzmeaCEA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|        [CST-L-Plus](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)        |    3.00    |   40.10   | 36.12 | 0.957 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1sGHrkbYKjN3XqsduQL2mesXqO1XMeqGI?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1PsoJwVfZ7qYi6mnq_q_gDA?pwd=mst1) | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|       [DAUHST-2stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    1.40    |   18.44   | 36.34 | 0.952 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1qOrnp1crkk1z5ha56UoyOqDMFGfWlLC7?usp=sharing) \u002F[Baidu Disk]( https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1_RxqZQpCcYH50nxhSWeb0w?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-3stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    2.08    |   27.17   | 37.21 | 0.959 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uwXh5JrD4rnh_xYBpF4K4wI4lcTD1j4p?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1iYtxPuf1rkFWut5UdEYqtg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-5stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    3.44    |   44.61   | 37.75 | 0.962 | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1snTVZSUsbtzjJ5lxbPbaKhpTJX28Byuh?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1k1q0Y8QPgMZhThBEfzGKzQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-9stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    6.15    |   79.50   | 38.36 | 0.967 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rzZG1L-s2rYmR-wHXg9KnnGPbOIT5GaP?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10vGcOirPk2L8sQg6uJoJkg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|         [BiSRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299)          |   0.036    |   1.18    | 29.76 | 0.837 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1MIsuIHuAaETZIRosjnKh2cvVgVDh9ZHv\u002Fview?usp=drive_link) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1wrHExqzl07niPS0fdMCAhg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QpZV6MzkijtwFI9MJp87bow4XxAok50m\u002Fview?usp=sharing) \u002F [Baidu 
Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11ifwb4tUDAVk7oTlFBKfUg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Hgdq43kbmHm1HG9SdLGryiIakBsBWuZp?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1I8051aIYsQEG8ybfSdPF0g?pwd=mst1) |\n\nThe performance is reported on the 10 scenes of the KAIST dataset. FLOPs are measured on a test size of 256 × 256.\n\nWe also provide the RGB images of [five real scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1VTMgEbfX9MVpGo98XVVFKaANtQfgApAg?usp=sharing) and [ten simulation scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EkJsOxYKgyehZdOgKUeY75dU19GHZgE-?usp=sharing) for your convenience in drawing figures.\n\nNote: the access code for `Baidu Disk` is `mst1`\n\n\n&nbsp;\n\n\n## 2. Create Environment:\n\n- Python 3 (we recommend [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002F#linux))\n\n- NVIDIA GPU + [CUDA](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n\n- Python packages:\n\n```shell\npip install -r requirements.txt\n```\n\n\n&nbsp;\n\n\n## 3. 
Prepare Dataset:\nDownload cave_1024_28 ([Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1X_uXxgyO-mslnCTn4ioyNQ), code: `fo0q` | [One Drive](https:\u002F\u002Fbupteducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fmengziyi_bupt_edu_cn\u002FEmNAsycFKNNNgHfV9Kib4osB7OD4OSu-Gu6Qnyy5PweG0A?e=5NrM6S)), CAVE_512_28 ([Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ue26weBAbn61a7hyT9CDkg), code: `ixoe` | [One Drive](https:\u002F\u002Fmailstsinghuaeducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Flin-j21_mails_tsinghua_edu_cn\u002FEjhS1U_F7I1PjjjtjKNtUF8BJdsqZ6BSMag_grUfzsTABA?e=sOpwm4)), KAIST_CVPR2021 ([Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1LfPqGe0R_tuQjCXC_fALZA), code: `5mmn` | [One Drive](https:\u002F\u002Fmailstsinghuaeducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Flin-j21_mails_tsinghua_edu_cn\u002FEkA4B4GU8AdDu0ZkKXdewPwBd64adYGsMPB8PNCuYnpGlA?e=VFb3xP)), TSA_simu_data ([Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1LI9tMaSprtxT8PiAG1oETA), code: `efu8` | [One Drive](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!Au_cHqZBKiu2gYFDwE-7z1fzeWCRDA?e=ofvwrD)), and TSA_real_data ([Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1RoOb1CKsUPFu0r01tRi5Bg), code: `eaqe` | [One Drive](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!Au_cHqZBKiu2gYFTpCwLdTi_eSw6ww?e=uiEToT)), then put them into the corresponding folders under `datasets\u002F` and organize them as follows:\n\n```shell\n|--MST\n    |--real\n    \t|-- test_code\n    \t|-- train_code\n    |--simulation\n    \t|-- test_code\n    \t|-- train_code\n    |--visualization\n    |--datasets\n        |--cave_1024_28\n            |--scene1.mat\n            |--scene2.mat\n            ：  \n            |--scene205.mat\n        |--CAVE_512_28\n            |--scene1.mat\n            |--scene2.mat\n            ：  \n            |--scene30.mat\n        |--KAIST_CVPR2021  \n            |--1.mat\n            |--2.mat\n            ： \n            |--30.mat\n 
       |--TSA_simu_data  \n            |--mask.mat   \n            |--Truth\n                |--scene01.mat\n                |--scene02.mat\n                ： \n                |--scene10.mat\n        |--TSA_real_data  \n            |--mask.mat   \n            |--Measurements\n                |--scene1.mat\n                |--scene2.mat\n                ： \n                |--scene5.mat\n```\n\nFollowing TSA-Net and DGSMP, we use the CAVE dataset (cave_1024_28) as the simulation training set. Both the CAVE (CAVE_512_28) and KAIST (KAIST_CVPR2021) datasets are used as the real training set.\n\n\n&nbsp;\n\n\n## 4. Simulation Experiment:\n\n### 4.1　Training\n\n```shell\ncd MST\u002Fsimulation\u002Ftrain_code\u002F\n\n# MST_S\npython train.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s\n\n# MST_M\npython train.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m\n\n# MST_L\npython train.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l\n\n# CST_S\npython train.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s\n\n# CST_M\npython train.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m\n\n# CST_L\npython train.py --template cst_l --outf .\u002Fexp\u002Fcst_l\u002F --method cst_l\n\n# CST_L_Plus\npython train.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus\n\n# GAP-Net\npython train.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net\n\n# ADMM-Net\npython train.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net\n\n# TSA-Net\npython train.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net\n\n# HDNet\npython train.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet\n\n# DGSMP\npython train.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp\n\n# BIRNAT\npython train.py --template birnat --outf 
.\u002Fexp\u002Fbirnat\u002F --method birnat\n\n# MST_Plus_Plus\npython train.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus\n\n# λ-Net\npython train.py --template lambda_net --outf .\u002Fexp\u002Flambda_net\u002F --method lambda_net\n\n# DAUHST-2stg\npython train.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg\n\n# DAUHST-3stg\npython train.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F --method dauhst_3stg\n\n# DAUHST-5stg\npython train.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg\n\n# DAUHST-9stg\npython train.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg\n\n# BiSRNet\npython train.py --template bisrnet --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet\n```\n\n- The training log, trained model, and reconstructed HSIs will be available in `MST\u002Fsimulation\u002Ftrain_code\u002Fexp\u002F`\n\n\n### 4.2　Testing\n\nDownload the pretrained model zoo from ([Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zgB7jHqTzY1bjCSzdX4lKQEGyK3bpWIx?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1CH4uq_NZPpo5ra2tFzAdfQ?pwd=mst1), code: `mst1`) and place them in `MST\u002Fsimulation\u002Ftest_code\u002Fmodel_zoo\u002F`\n\nRun the following commands to test the models on the simulation dataset.\n\n```shell\ncd MST\u002Fsimulation\u002Ftest_code\u002F\n\n# MST_S\npython test.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_s.pth\n\n# MST_M\npython test.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_m.pth\n\n# MST_L\npython test.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_l.pth\n\n# 
CST_S\npython test.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_s.pth\n\n# CST_M\npython test.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_m.pth\n\n# CST_L\npython test.py --template cst_l --outf .\u002Fexp\u002Fcst_l\u002F --method cst_l --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l.pth\n\n# CST_L_Plus\npython test.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l_plus.pth\n\n# GAP_Net\npython test.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net --pretrained_model_path .\u002Fmodel_zoo\u002Fgap_net\u002Fgap_net.pth\n\n# ADMM_Net\npython test.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net --pretrained_model_path .\u002Fmodel_zoo\u002Fadmm_net\u002Fadmm_net.pth\n\n# TSA_Net\npython test.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net --pretrained_model_path .\u002Fmodel_zoo\u002Ftsa_net\u002Ftsa_net.pth\n\n# HDNet\npython test.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet --pretrained_model_path .\u002Fmodel_zoo\u002Fhdnet\u002Fhdnet.pth\n\n# DGSMP\npython test.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp --pretrained_model_path .\u002Fmodel_zoo\u002Fdgsmp\u002Fdgsmp.pth\n\n# BIRNAT\npython test.py --template birnat --outf .\u002Fexp\u002Fbirnat\u002F --method birnat --pretrained_model_path .\u002Fmodel_zoo\u002Fbirnat\u002Fbirnat.pth\n\n# MST_Plus_Plus\npython test.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus --pretrained_model_path .\u002Fmodel_zoo\u002Fmst_plus_plus\u002Fmst_plus_plus.pth\n\n# λ-Net\npython test.py --template lambda_net --outf .\u002Fexp\u002Flambda_net\u002F --method lambda_net 
--pretrained_model_path .\u002Fmodel_zoo\u002Flambda_net\u002Flambda_net.pth\n\n# DAUHST-2stg\npython test.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_2stg\u002Fdauhst_2stg.pth\n\n# DAUHST-3stg\npython test.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F --method dauhst_3stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_3stg\u002Fdauhst_3stg.pth\n\n# DAUHST-5stg\npython test.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_5stg\u002Fdauhst_5stg.pth\n\n# DAUHST-9stg\npython test.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_9stg\u002Fdauhst_9stg.pth\n\n# BiSRNet\npython test.py --template bisrnet --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet --pretrained_model_path .\u002Fmodel_zoo\u002Fbisrnet\u002Fbisrnet.pth\n```\n\n- The reconstructed HSIs will be output into `MST\u002Fsimulation\u002Ftest_code\u002Fexp\u002F`. Then place the reconstructed results into `MST\u002Fsimulation\u002Ftest_code\u002FQuality_Metrics\u002Fresults` and run the following MATLAB command to calculate the PSNR and SSIM of the reconstructed HSIs.\n\n```shell\nRun cal_quality_assessment.m\n```\n\n\n\n- #### Evaluating the Params and FLOPs of models\n\n  We provide two functions `my_summary()` and `my_summary_bnn()` in `simulation\u002Ftest_code\u002Futils.py`. 
Use them to evaluate the parameters and FLOPs of the full-precision and binarized models.\n\n```python\nfrom utils import my_summary, my_summary_bnn\nmy_summary(MST(), 256, 256, 28, 1)\nmy_summary_bnn(BiSRNet(), 256, 256, 28, 1)\n```\n\n### 4.3　Visualization\n\n- Put the reconstructed HSI in `MST\u002Fvisualization\u002Fsimulation_results\u002Fresults` and rename it as `method.mat`, e.g., `mst_s.mat`.\n\n- Generate the RGB images of the reconstructed HSIs\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_simulation.m\n```\n\n- Draw the spectral density lines\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_line.m\n```\n\n\n&nbsp;\n\n\n## 5. Real Experiment:\n\n### 5.1　Training\n\n```shell\ncd MST\u002Freal\u002Ftrain_code\u002F\n\n# MST_S\npython train.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s\n\n# MST_M\npython train.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m\n\n# MST_L\npython train.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l\n\n# CST_S\npython train.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s\n\n# CST_M\npython train.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m\n\n# CST_L\npython train.py --template cst_l --outf .\u002Fexp\u002Fcst_l\u002F --method cst_l\n\n# CST_L_Plus\npython train.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus\n\n# GAP-Net\npython train.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net\n\n# ADMM-Net\npython train.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net\n\n# TSA-Net\npython train.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net\n\n# HDNet\npython train.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet\n\n# DGSMP\npython train.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp\n\n# BIRNAT\npython train.py --template birnat --outf 
.\u002Fexp\u002Fbirnat\u002F --method birnat\n\n# MST_Plus_Plus\npython train.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus\n\n# λ-Net\npython train.py --template lambda_net --outf .\u002Fexp\u002Flambda_net\u002F --method lambda_net\n\n# DAUHST-2stg\npython train.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg\n\n# DAUHST-3stg\npython train.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F --method dauhst_3stg\n\n# DAUHST-5stg\npython train.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg\n\n# DAUHST-9stg\npython train.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg\n\n# BiSRNet\npython train_s.py --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet\n```\n\n- If you do not have a large-memory GPU, add `--size 128` to use a smaller patch size.\n\n- The training log, trained model, and reconstructed HSIs will be available in `MST\u002Freal\u002Ftrain_code\u002Fexp\u002F`\n\n- Note: you can use `train_s.py` for methods other than BiSRNet if you cannot access the mask data or have limited GPU resources. 
In this case, you need to replace the `--method` parameter in the above commands and make some modifications.\n\n\n### 5.2　Testing\n\nThe pretrained model of BiSRNet can be downloaded from ([Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zQ7PFuiaEgIpulBl8TA7S_8Am93nAKPb\u002Fview?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hiPbuUEBnIGQP6Ks9agfWQ?pwd=mst1), code: `mst1`); place it in `MST\u002Freal\u002Ftest_code\u002Fmodel_zoo\u002F`\n\n```shell\ncd MST\u002Freal\u002Ftest_code\u002F\n\n# MST_S\npython test.py --outf .\u002Fexp\u002Fmst_s\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_s.pth\n\n# MST_M\npython test.py --outf .\u002Fexp\u002Fmst_m\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_m.pth\n\n# MST_L\npython test.py --outf .\u002Fexp\u002Fmst_l\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_l.pth\n\n# CST_S\npython test.py --outf .\u002Fexp\u002Fcst_s\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_s.pth\n\n# CST_M\npython test.py --outf .\u002Fexp\u002Fcst_m\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_m.pth\n\n# CST_L\npython test.py --outf .\u002Fexp\u002Fcst_l\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l.pth\n\n# CST_L_Plus\npython test.py --outf .\u002Fexp\u002Fcst_l_plus\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l_plus.pth\n\n# GAP_Net\npython test.py --outf .\u002Fexp\u002Fgap_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fgap_net\u002Fgap_net.pth\n\n# ADMM_Net\npython test.py --outf .\u002Fexp\u002Fadmm_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fadmm_net\u002Fadmm_net.pth\n\n# TSA_Net\npython test.py --outf .\u002Fexp\u002Ftsa_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Ftsa_net\u002Ftsa_net.pth\n\n# HDNet\npython test.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet --pretrained_model_path 
.\u002Fmodel_zoo\u002Fhdnet\u002Fhdnet.pth\n\n# DGSMP\npython test.py --outf .\u002Fexp\u002Fdgsmp\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdgsmp\u002Fdgsmp.pth\n\n# BIRNAT\npython test.py --outf .\u002Fexp\u002Fbirnat\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fbirnat\u002Fbirnat.pth\n\n# MST_Plus_Plus\npython test.py --outf .\u002Fexp\u002Fmst_plus_plus\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst_plus_plus\u002Fmst_plus_plus.pth\n\n# λ-Net\npython test.py --outf .\u002Fexp\u002Flambda_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Flambda_net\u002Flambda_net.pth\n\n# DAUHST_2stg\npython test.py --outf .\u002Fexp\u002Fdauhst_2stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_2stg.pth\n\n# DAUHST_3stg\npython test.py --outf .\u002Fexp\u002Fdauhst_3stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_3stg.pth\n\n# DAUHST_5stg\npython test.py --outf .\u002Fexp\u002Fdauhst_5stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_5stg.pth\n\n# DAUHST_9stg\npython test.py --outf .\u002Fexp\u002Fdauhst_9stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_9stg.pth\n\n# BiSRNet\npython test.py --outf .\u002Fexp\u002Fbisrnet --pretrained_model_path .\u002Fmodel_zoo\u002Fbisrnet\u002Fbisrnet.pth --method bisrnet\n```\n\n- The reconstructed HSIs will be output into `MST\u002Freal\u002Ftest_code\u002Fexp\u002F`\n\n### 5.3　Visualization\n\n- Put the reconstructed HSI in `MST\u002Fvisualization\u002Freal_results\u002Fresults` and rename it as `method.mat`, e.g., `mst_plus_plus.mat`.\n\n- Generate the RGB images of the reconstructed HSI\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_real.m\n```\n\n\n&nbsp;\n\n\n## 6. 
Citation\nIf this repo helps you, please consider citing our works:\n\n```bibtex\n# MST\n@inproceedings{mst,\n  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={CVPR},\n  year={2022}\n}\n\n# CST\n@inproceedings{cst,\n  title={Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={ECCV},\n  year={2022}\n}\n\n# DAUHST\n@inproceedings{dauhst,\n  title={Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging},\n  author={Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Henghui Ding and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={NeurIPS},\n  year={2022}\n}\n\n# BiSCI\n@inproceedings{bisci,\n  title={Binarized Spectral Compressive Imaging},\n  author={Yuanhao Cai and Yuxin Zheng and Jing Lin and Xin Yuan and Yulun Zhang and Haoqian Wang},\n  booktitle={NeurIPS},\n  year={2023}\n}\n\n# MST++\n@inproceedings{mst_pp,\n  title={MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Zudi Lin and Haoqian Wang and Yulun Zhang and Hanspeter Pfister and Radu Timofte and Luc Van Gool},\n  booktitle={CVPRW},\n  year={2022}\n}\n\n# HDNet\n@inproceedings{hdnet,\n  title={HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging},\n  author={Xiaowan Hu and Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={CVPR},\n  year={2022}\n}\n```\n","# 
A Toolbox for Spectral Compressive Imaging\n[![winner](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMST++-Winner_of_NTIRE_2022_Spectral_Reconstruction_Challenge-179bd3)](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus\u002F)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-MST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F501101943)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-CST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F544979161)\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F知乎解读-DAUHST-179bd3)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F576280023)\n\n#### Authors\nYuanhao Cai*, Jing Lin*, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool\n\n#### Papers\n- [Binarized Spectral Compressive Imaging (NeurIPS 2023)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299)\n- [Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction (CVPR 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)\n- [Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction (ECCV 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)\n- [Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging (NeurIPS 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)\n- [MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction (CVPRW 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)\n- [HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging (CVPR 2022)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149)\n\n\n\n#### Awards\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_17e4f7897fc4.png\"  height=240> \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_aee90a40e1e8.png\"  height=240>\n\n\n#### Introduction\nThis is a baseline and toolbox for spectral compressive imaging reconstruction. This repo supports **over 15** algorithms. Our method [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) won the NTIRE 2022 Challenge on Spectral Recovery from RGB Images. If you find this repo useful, please give it a star ⭐ and consider citing our papers in your research. Thank you.\n\n\n#### News\n- **2025.02.10:** The [NTIRE 2025 Low-Light Image Enhancement Challenge](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F21636) has been launched. Welcome to use our [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) and MST to participate in this challenge. 😄\n- **2024.04.09:** We release the results of three classic model-based methods, i.e., [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846), [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf), and [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf), to facilitate your research. Feel free to use them. 😄\n- **2024.03.21:** Our methods [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) and [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) (winner of the NTIRE 2022 Spectral Reconstruction Challenge) ranked in the top two of the [NTIRE 2024 Low-Light Enhancement Challenge](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F17640). Code, pre-trained models, training logs, and enhanced results will be released in the [Retinexformer repo](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer). Stay tuned! 🚀\n- **2024.02.15:** The [NTIRE 2024 Low-Light Enhancement Challenge](https:\u002F\u002Fcodalab.lisn.upsaclay.fr\u002Fcompetitions\u002F17640) begins. Welcome to use our [Retinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer) or [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus) (winner of the NTIRE 2022 Spectral Reconstruction Challenge) to participate in this challenge! :trophy:\n- **2023.12.02:** The code for the real experiments has been updated. Feel free to check and use it. 🥳\n- **2023.11.24:** The code, models, and results of [BiSRNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10299.pdf) (NeurIPS 2023) have been released in this repo. We also develop a toolbox [BiSCI](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FBiSCI) for binarized SCI reconstruction. Feel free to check and use it. 🌟\n- **2023.11.02:** MST, MST++, CST, and DAUHST are added to the [Awesome-Transformer-Attention](https:\u002F\u002Fgithub.com\u002Fcmhungsteve\u002FAwesome-Transformer-Attention\u002Fblob\u002Fmain\u002FREADME_2.md#image-restoration) collection. 💫\n- **2023.09.21:** Our new work [BiSRNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10299.pdf) is accepted by NeurIPS 23. The code will be released in this repo and [BiSCI](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FBiSCI).\n- **2023.02.26:** We release the RGB images of [five real scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1VTMgEbfX9MVpGo98XVVFKaANtQfgApAg?usp=sharing) and [ten simulation scenes](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EkJsOxYKgyehZdOgKUeY75dU19GHZgE-?usp=sharing). Feel free to check and use them. 🌟\n- **2022.11.02:** We provide the visual results of more state-of-the-art methods and the functions for evaluating model parameters and computational complexity. Feel free to check and use them. :high_brightness:\n- **2022.10.23:** The code, models, and reconstructed HSI (hyperspectral image) results of [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) have been released. 🔥\n- **2022.09.15:** Our [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) is accepted by NeurIPS 2022. The code and models will be released soon. :rocket:\n- **2022.07.20:** The code, models, and reconstructed HSI results of [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) have been released. 🔥\n- **2022.07.04:** Our paper [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) is accepted by ECCV 2022. The code and models will be released soon. :rocket:\n- **2022.06.14:** The code and models of [MST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) and [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) have been released. This repo supports 12 learning-based methods and serves as a toolbox for spectral compressive imaging. The model zoo will be enlarged. 🔥\n- **2022.05.20:** Our work [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) is on arxiv. :dizzy:\n- **2022.04.02:** The follow-up work [MST++](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus\u002F) won the NTIRE 2022 Spectral Reconstruction Challenge. :trophy:\n- **2022.03.09:** Our work [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) is on arxiv. :dizzy:\n- **2022.03.02:** Our paper MST is accepted by CVPR 2022. The code and models will be released soon. :rocket:\n\n|                          *Scene 2*                           |                          *Scene 3*                           |                          *Scene 4*                 
          |                          *Scene 7*                           |\n| :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_3e38422f92ab.gif\"  height=170 width=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_43d7bc9dddb5.gif\" width=170 height=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_535d210f3bdc.gif\" width=170 height=170> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_202f0d4bdc64.gif\" width=170 height=170> |\n\n\n&nbsp;\n\n## 1. Comparison with State-of-the-Art Methods\nThis repo supports 12 learning-based algorithms and 3 model-based methods.\n\n\u003Cdetails open>\n\u003Csummary>\u003Cb>Supported algorithms:\u003C\u002Fb>\u003C\u002Fsummary>\n\n* [x] [MST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) (CVPR 2022)\n* [x] [CST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845) (ECCV 2022)\n* [x] [DAUHST](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102) (NeurIPS 2022)\n* [x] [BiSRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299) (NeurIPS 2023)\n* [x] [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910) (CVPRW 2022)\n* [x] [HDNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149) (CVPR 2022)\n* [x] [BIRNAT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9741335\u002F) (TPAMI 2022)\n* [x] [DGSMP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07152) (CVPR 2021)\n* [x] [GAP-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08364) (Arxiv 2020)\n* [x] [TSA-Net](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58592-1_12) (ECCV 2020)\n* [x] 
[ADMM-Net](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMa_Deep_Tensor_ADMM-Net_for_Snapshot_Compressive_Imaging_ICCV_2019_paper.html) (ICCV 2019)\n* [x] [λ-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9010044) (ICCV 2019)\n* [x] [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846) (TIP 2007)\n* [x] [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf) (ICIP 2015)\n* [x] [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf) (TPAMI 2019)\n\n\n\u003C\u002Fdetails>\n\nWe plan to enlarge the model zoo in the future.\n\n|                   MST vs. SOTA methods                   |                 CST vs. MST                 |\n| :--------------------------------------------------: | :---------------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_e8d1485c53e7.png\"  height=320> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_b195fe4e907d.png\" height=320> |\n|                  MST++ vs. SOTA methods                  |               DAUHST vs. SOTA methods               |\n|   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_2da46fe1e327.png\"  height=320>    | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_0840e3735fe4.png\" height=320>  |\n\n|            BiSRNet vs. SOTA BNNs            |\n| :---------------------------------------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_readme_25d63c9d511a.png\"  width=812> |\n\n\n\n\n### Quantitative Comparison on the Simulation Dataset\n\n|                            Method                            | Params (M) | FLOPs (G) | PSNR  | SSIM  |                          Model Zoo                           |                      Simulation Result                      |                         Real Result               
          |\n| :----------------------------------------------------------: | :--------: | :-------: | :---: | :---: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n|                            [TwIST](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F4358846)                             |     -      |     -     | 23.12 | 0.669 |                              -                               | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1J2H7DoYblfjd9FI4kjqEuP9u8nGwrOWI?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-W8UtEGvWA7qhO_W36c5FQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1pmxDIBGGimdsM82GXbTsjWHJGCwNSg8v?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1aF_FC6kXcPMv_IDtDPSuJA?pwd=mst1) |\n|        [GAP-TV](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.03890.pdf)        |     -      |     -     | 24.36 | 0.669 |                              -                               | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F17VA2RKwf2u9SHXAcn42-xcpL7mQM9SAm?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1TbFK6sJyM-S-qT5PNG4L1Q?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fL3Lh1s54jB7Tu8cFIkVGAHjKvlZZADt?usp=sharing) \u002F [Baidu Disk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ngKdDwYF34OKd71mwy-0Dg?pwd=mst1) |\n|        [DeSCI](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.07837.pdf)         |     -      |     -     | 25.27 | 0.721 |                              -                               | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1funq1tLjNFyKDTEzKvf3OjqxQheH5_Hw?usp=sharing) \u002F 
[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Ztzh7TQi52avqF9rLvNbjQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14P9oioxqAgRmY57yLEI29mLz-i7R0clu?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZNUxhElif4HGRlor6CccDg?pwd=mst1) |\n|    [λ-Net](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9010044)     |   62.64    |  117.98   | 28.53 | 0.841 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11DwTFdgtG7sRnBwvkxxfN9rcOICsEdpC?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xXkL2p4_mCLeTGa68wEbNQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1csOZ2Kfto_tWIiSD0hc2nzKjR4Ze7ftA?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-0LjvHnkINW8YYaBiA4-VA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1vVjlVLUm7Gb5zkDxH-mBwM-KnjXbgsjp?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Ah9NxFyBhKuzQaAL9WsmRw?pwd=mst1) |\n| [TSA-Net](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58592-1_12) |   44.25    |  110.06   | 31.46 | 0.894 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1f29eS8WqXu31310nD-7mRR81XfLBYKBd?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1riGZ83AXXkcjHiGVNrNeYg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1BOXBIu2Ze-L__XuLRUu4y9-lq0fw2FJd?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1icqxYnsD27zrDQ95STfRvw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1OPZF0PpThWhC7aqNqhPEh3dX63OwGTC4?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1T3HX_Z7IqmC2imNLfgyBpg?pwd=mst1) |\n|          [DGSMP](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07152)           |    3.76    |  
646.65   | 32.63 | 0.917 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1j1k8mYKWh8FVe77Cz8hj69nI2lK2D5QC?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1v-uqYJZ5mQxOupLc6E_C1g?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1PjnTYEEDfWlTpe0jzCmImxxXLfy5Viva?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Y1AJICGUqUJEV-74Eg2FEg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Sw5FPMCOrkF5a9IltYWA-QaYXkpKRw5-?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BLb2w-yT25OTeeFwGFyMkQ?pwd=mst1) |\n|         [GAP-Net](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.08364)          |    4.27    |   78.58   | 33.26 | 0.917 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AF3P42DZtBzKpWvjTVKYoLmGHsL2f_SL?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hraGGd_HEsfCkSGv5QyOaw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F16_kqEj4nlE_KlHzu8Q6zQCizTTBII82F?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F12azFFSEOic7iNTFGmC7wMw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1u7MoHnZraM4NL2uAxC7U04sBhy4AoY_S?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1YDqnQ9BlqWmvdfQKx5vMQw?pwd=mst1) |\n| [ADMM-Net](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fhtml\u002FMa_Deep_Tensor_ADMM-Net_for_Snapshot_Compressive_Imaging_ICCV_2019_paper.html) |    4.27    |   78.58   | 33.58 | 0.918 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1I9JqdyikulUVjXcdciHaJxfAceqfaF2G?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ddkA9TazTq0rZReFYgGHMg?pwd=mst1) | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1WT0QYfC5dbigl9znD_JFNHpH0k_rTDc-?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1b2sRrJaS3PKYqqQErmtIJg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1MJKLjAo7Yzq_eF1JQK89-r40ItLUq4c8?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1aESUHFHjeMSL7shES3YmSw?pwd=mst1) |\n| [BIRNAT](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9741335\u002F) |    4.40    |  2122.66  | 37.58 | 0.960 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1bwhy0csM6GSNY0Qe9RS7_U3Bm9JJjHoc?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1HDMQu3jWvQ9X1yQA5dprGg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fp4-D1-RGx9bICSmzpIiKBRAEmDOE3gD?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pHfEKypguCLydNsmCGojpw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1BUmusDHR-E533NdlHDLCYm_CwSCBJ0IF?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1l77HQkVbH7R6vbWNwwksTA?pwd=mst1) |\n|          [HDNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02149)           |    2.37    |  154.76   | 34.97 | 0.943 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1F41BlUQulzPCf5yNo-q6V6Mdtr7bPV6-?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mCoGHT22cw7ElVSaXgU5Lw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1oUrgFUWIKR-96zAT2Y42UlAM6l5NVJsf?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Nb5IE02iAClMBC4OHcnePQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1vhzRWzIbhYxeZL-N2ZxaW2WqeT80Jogm?usp=sharing) \u002F 
[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BXyMsecmENSoU9mCUKU2xA?pwd=mst1) |\n|          [MST-S](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    0.93    |   12.96   | 34.26 | 0.935 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1afMG9PndlvjDTtl7UJZoElmDe5FtpCoW?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xzJRVjnI-7A54Rj_zi3crA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST-M](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    1.50    |   18.07   | 34.94 | 0.943 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1hnEuwYO9luwLmPeT98cUaik_zCPK6z30?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OUuozfd3zLqzBjHnf6evCQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST-L](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    2.03    |   28.15   | 35.18 | 0.948 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F176f_PammL0ZrIg3lVaQwd6Vr6Ui8FANs?usp=sharing) \u002F 
[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZQ08_Ec3a_-8YYAa5ms5PQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F18ZF5wC1LRmqOh6VDeXD4eB8mP0Dvv6jb?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1imed0w1CWqx7IOlSpVh7qw?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FSwDZKAC03B8XhyAkaL9t1MWFz6q985y?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1H1pT1oKfK_o4EDXkts8VMQ?pwd=mst1) |\n|          [MST++](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07910)           |    1.33    |   19.42   | 35.99 | 0.951 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rbV8LYD5k1RVR4usMORoXxY2szlFsr_9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1QUx_MpYCBSU4Zas5gpao2g?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14sz-y99fEAJDQAN1itE5K-BlMHC1Tt3z?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1s3btC7QQrasW1NqFzOm8fQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14CgUhfUp4BFalyigL4RDnTYyGiN5ojRK?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1fYsAvAjXTLLWpzlt_S2ppQ?pwd=mst1) |\n|          [CST-S](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    1.20    |   11.67   | 34.71 | 0.940 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1AdcUGiiPTfdt366NcBV8Texuu1Qw7_DI?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Vl6xufpmLWXSmhaVCsKkzQ?pwd=mst1) | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|          [CST-M](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    1.36    |   16.91   | 35.31 | 0.947 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1FpMTtKSIN-t_natQIX2gKcu8MfPQLCG1?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1u3IjojML3H7AwSMe0CxV_Q?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|          [CST-L](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)           |    3.00    |   27.81   | 35.85 | 0.954 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1MRFeMoi4JzhFrf_346USCFq98kdds9HY?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1UXwTyr-xZtDR68wzmeaCEA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|        [CST-L-Plus](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.04845)        |    3.00    |   40.10   | 36.12 | 0.957 | [Google 
Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-SZDH0PuUyjLlvKfON-LL02dkr2LBuL9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xq_YV6yO0zN6AULwU9ZPyg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1sGHrkbYKjN3XqsduQL2mesXqO1XMeqGI?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1PsoJwVfZ7qYi6mnq_q_gDA?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F11RNDmA3VrPWdbj4H-mInGfZa60W86YEc?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1m3abRwjuaneFf1cCE85yMw?pwd=mst1) |\n|       [DAUHST-2stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    1.40    |   18.44   | 36.34 | 0.952 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1qOrnp1crkk1z5ha56UoyOqDMFGfWlLC7?usp=sharing) \u002F[百度网盘]( https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1_RxqZQpCcYH50nxhSWeb0w?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-3stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    2.08    |   27.17   | 37.21 | 0.959 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uwXh5JrD4rnh_xYBpF4K4wI4lcTD1j4p?usp=sharing) \u002F 
[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1iYtxPuf1rkFWut5UdEYqtg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-5stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    3.44    |   44.61   | 37.75 | 0.962 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1snTVZSUsbtzjJ5lxbPbaKhpTJX28Byuh?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1k1q0Y8QPgMZhThBEfzGKzQ?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|       [DAUHST-9stg](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10102)        |    6.15    |   79.50   | 38.36 | 0.967 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zhYRhFP8ee4YHk3-M0Nrl6KE_-n0gDLr?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1O2bxz-wEMF0mnrnOXHpC3A?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rzZG1L-s2rYmR-wHXg9KnnGPbOIT5GaP?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10vGcOirPk2L8sQg6uJoJkg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SgQhXXPYn6mYGSRMz5Ntsnab26XdjOc9?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1S2MKaSKdU2v53_CZnuYkpQ?pwd=mst1) |\n|         [BiSRNet](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10299)          |   0.036    |   1.18    | 29.76 | 
0.837 | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1MIsuIHuAaETZIRosjnKh2cvVgVDh9ZHv\u002Fview?usp=drive_link) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1wrHExqzl07niPS0fdMCAhg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QpZV6MzkijtwFI9MJp87bow4XxAok50m\u002Fview?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11ifwb4tUDAVk7oTlFBKfUg?pwd=mst1) | [Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Hgdq43kbmHm1HG9SdLGryiIakBsBWuZp?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1I8051aIYsQEG8ybfSdPF0g?pwd=mst1) |\n\n性能报告基于 KAIST 数据集的 10 个场景。FLOPS 的测试尺寸为 256 x 256。\n\n我们还提供了 [五个真实场景](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1VTMgEbfX9MVpGo98XVVFKaANtQfgApAg?usp=sharing) 和 [十个模拟场景](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1EkJsOxYKgyehZdOgKUeY75dU19GHZgE-?usp=sharing) 的 RGB 图像，方便您绘制图表。\n\n注意：`百度网盘` 的提取码为 `mst1`\n\n\n&nbsp;\n\n\n\n\n## 2. 创建环境：\n\n- Python 3（推荐使用 [Anaconda](https:\u002F\u002Fwww.anaconda.com\u002Fdownload\u002F#linux)）\n\n- NVIDIA GPU + [CUDA](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-downloads)\n\n- Python 包：\n\n```shell\n  pip install -r requirements.txt\n```\n\n\n&nbsp;\n\n\n## 3. 
准备数据集：\n下载 cave_1024_28（[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1X_uXxgyO-mslnCTn4ioyNQ)，提取码：`fo0q` | [One Drive](https:\u002F\u002Fbupteducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fmengziyi_bupt_edu_cn\u002FEmNAsycFKNNNgHfV9Kib4osB7OD4OSu-Gu6Qnyy5PweG0A?e=5NrM6S)）、CAVE_512_28（[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ue26weBAbn61a7hyT9CDkg)，提取码：`ixoe` | [One Drive](https:\u002F\u002Fmailstsinghuaeducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Flin-j21_mails_tsinghua_edu_cn\u002FEjhS1U_F7I1PjjjtjKNtUF8BJdsqZ6BSMag_grUfzsTABA?e=sOpwm4)）、KAIST_CVPR2021（[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1LfPqGe0R_tuQjCXC_fALZA)，提取码：`5mmn` | [One Drive](https:\u002F\u002Fmailstsinghuaeducn-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Flin-j21_mails_tsinghua_edu_cn\u002FEkA4B4GU8AdDu0ZkKXdewPwBd64adYGsMPB8PNCuYnpGlA?e=VFb3xP)）、TSA_simu_data（[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1LI9tMaSprtxT8PiAG1oETA)，提取码：`efu8` | [One Drive](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!Au_cHqZBKiu2gYFDwE-7z1fzeWCRDA?e=ofvwrD)）、TSA_real_data（[百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1RoOb1CKsUPFu0r01tRi5Bg)，提取码：`eaqe` | [One Drive](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!Au_cHqZBKiu2gYFTpCwLdTi_eSw6ww?e=uiEToT)），然后将其放入 `datasets\u002F` 的相应文件夹中，并按以下形式重新整理：\n\n```shell\n|--MST\n    |--real\n    \t|-- test_code\n    \t|-- train_code\n    |--simulation\n    \t|-- test_code\n    \t|-- train_code\n    |--visualization\n    |--datasets\n        |--cave_1024_28\n            |--scene1.mat\n            |--scene2.mat\n            ：  \n            |--scene205.mat\n        |--CAVE_512_28\n            |--scene1.mat\n            |--scene2.mat\n            ：  \n            |--scene30.mat\n        |--KAIST_CVPR2021  \n            |--1.mat\n            |--2.mat\n            ： \n            |--30.mat\n        |--TSA_simu_data  \n            |--mask.mat   \n            |--Truth\n                |--scene01.mat\n                
|--scene02.mat\n                ： \n                |--scene10.mat\n        |--TSA_real_data  \n            |--mask.mat   \n            |--Measurements\n                |--scene1.mat\n                |--scene2.mat\n                ： \n                |--scene5.mat\n```\n\n遵循 TSA-Net 和 DGSMP 的方法，我们使用 CAVE 数据集（cave_1024_28）作为模拟训练集。CAVE（CAVE_512_28）和 KAIST（KAIST_CVPR2021）数据集均用作真实训练集。\n\n\n&nbsp;\n\n\n## 4. 模拟实验：\n\n### 4.1　训练\n\n```shell\ncd MST\u002Fsimulation\u002Ftrain_code\u002F\n\n# MST_S\npython train.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s \n\n# MST_M\npython train.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m  \n\n# MST_L\npython train.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l \n\n# CST_S\npython train.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s \n\n# CST_M\npython train.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m  \n\n# CST_L\npython train.py --template cst_l --outf .\u002Fexp\u002Fcst_l\u002F --method cst_l\n\n# CST_L_Plus\npython train.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus\n\n# GAP-Net\npython train.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net \n\n# ADMM-Net\npython train.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net \n\n# TSA-Net\npython train.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net \n\n# HDNet\npython train.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet \n\n# DGSMP\npython train.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp \n\n# BIRNAT\npython train.py --template birnat --outf .\u002Fexp\u002Fbirnat\u002F --method birnat \n\n# MST_Plus_Plus\npython train.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus \n\n# λ-Net\npython train.py --template lambda_net --outf 
.\u002Fexp\u002Flambda_net\u002F --method lambda_net\n\n# DAUHST-2stg\npython train.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg\n\n# DAUHST-3stg\npython train.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F --method dauhst_3stg\n\n# DAUHST-5stg\npython train.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg\n\n# DAUHST-9stg\npython train.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg\n\n# BiSRNet\npython train.py --template bisrnet --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet\n```\n\n- 训练日志、训练好的模型和重建的高光谱图像（HSI, Hyperspectral Image）将保存在 `MST\u002Fsimulation\u002Ftrain_code\u002Fexp\u002F` 中\n\n\n### 4.2　测试\n\n从（[Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1zgB7jHqTzY1bjCSzdX4lKQEGyK3bpWIx?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1CH4uq_NZPpo5ra2tFzAdfQ?pwd=mst1)，提取码：`mst1`）下载预训练模型库，并将其放置到 `MST\u002Fsimulation\u002Ftest_code\u002Fmodel_zoo\u002F`\n\n运行以下命令在模拟数据集上测试模型。\n\n```shell\ncd MST\u002Fsimulation\u002Ftest_code\u002F\n\n# MST_S\npython test.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_s.pth\n\n# MST_M\npython test.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_m.pth\n\n# MST_L\npython test.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_l.pth\n\n# CST_S\npython test.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_s.pth\n\n# CST_M\npython test.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_m.pth\n\n# CST_L\npython test.py --template cst_l --outf 
.\u002Fexp\u002Fcst_l\u002F --method cst_l --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l.pth\n\n# CST_L_Plus\npython test.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l_plus.pth\n\n# GAP_Net\npython test.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net --pretrained_model_path .\u002Fmodel_zoo\u002Fgap_net\u002Fgap_net.pth\n\n# ADMM_Net\npython test.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net --pretrained_model_path .\u002Fmodel_zoo\u002Fadmm_net\u002Fadmm_net.pth\n\n# TSA_Net\npython test.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net --pretrained_model_path .\u002Fmodel_zoo\u002Ftsa_net\u002Ftsa_net.pth\n\n# HDNet\npython test.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet --pretrained_model_path .\u002Fmodel_zoo\u002Fhdnet\u002Fhdnet.pth\n\n# DGSMP\npython test.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp --pretrained_model_path .\u002Fmodel_zoo\u002Fdgsmp\u002Fdgsmp.pth\n\n# BIRNAT\npython test.py --template birnat --outf .\u002Fexp\u002Fbirnat\u002F --method birnat --pretrained_model_path .\u002Fmodel_zoo\u002Fbirnat\u002Fbirnat.pth\n\n# MST_Plus_Plus\npython test.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus --pretrained_model_path .\u002Fmodel_zoo\u002Fmst_plus_plus\u002Fmst_plus_plus.pth\n\n# λ-Net\npython test.py --template lambda_net --outf .\u002Fexp\u002Flambda_net\u002F --method lambda_net --pretrained_model_path .\u002Fmodel_zoo\u002Flambda_net\u002Flambda_net.pth\n\n# DAUHST-2stg\npython test.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_2stg\u002Fdauhst_2stg.pth\n\n# DAUHST-3stg\npython test.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F 
--method dauhst_3stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_3stg\u002Fdauhst_3stg.pth\n\n# DAUHST-5stg\npython test.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_5stg\u002Fdauhst_5stg.pth\n\n# DAUHST-9stg\npython test.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst_9stg\u002Fdauhst_9stg.pth\n\n# BiSRNet\npython test.py --template bisrnet --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet --pretrained_model_path .\u002Fmodel_zoo\u002Fbisrnet\u002Fbisrnet.pth\n```\n\n- 重建的高光谱图像（HSI, Hyperspectral Image）将输出到 `MST\u002Fsimulation\u002Ftest_code\u002Fexp\u002F`。然后将重建结果放入 `MST\u002Fsimulation\u002Ftest_code\u002FQuality_Metrics\u002Fresults`，并在 MATLAB 中运行以下脚本来计算重建高光谱图像的 PSNR（峰值信噪比）和 SSIM（结构相似性指数）。\n\n```shell\nRun cal_quality_assessment.m\n```\n\n\n\n- #### 评估模型的参数量（Params）和浮点运算量（FLOPS）\n\n  我们在 `simulation\u002Ftest_code\u002Futils.py` 中提供了两个函数 `my_summary()` 和 `my_summary_bnn()`，分别用于评估全精度模型和二值化模型的参数量与 FLOPS。\n\n```python\nfrom utils import my_summary, my_summary_bnn\nmy_summary(MST(), 256, 256, 28, 1)\nmy_summary_bnn(BiSRNet(), 256, 256, 28, 1)\n```\n\n### 4.3　可视化\n\n- 将重建的高光谱图像放入 `MST\u002Fvisualization\u002Fsimulation_results\u002Fresults`，并将其重命名为 method.mat，例如 mst_s.mat。\n\n- 生成重建高光谱图像的 RGB 图像\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_simulation.m\n```\n\n- 绘制光谱密度曲线\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_line.m\n```\n\n\n&nbsp;\n\n\n## 5. 
真实实验：\n\n### 5.1　训练\n\n```shell\ncd MST\u002Freal\u002Ftrain_code\u002F\n\n# MST_S\npython train.py --template mst_s --outf .\u002Fexp\u002Fmst_s\u002F --method mst_s \n\n# MST_M\npython train.py --template mst_m --outf .\u002Fexp\u002Fmst_m\u002F --method mst_m  \n\n# MST_L\npython train.py --template mst_l --outf .\u002Fexp\u002Fmst_l\u002F --method mst_l \n\n# CST_S\npython train.py --template cst_s --outf .\u002Fexp\u002Fcst_s\u002F --method cst_s \n\n# CST_M\npython train.py --template cst_m --outf .\u002Fexp\u002Fcst_m\u002F --method cst_m  \n\n# CST_L\npython train.py --template cst_l --outf .\u002Fexp\u002Fcst_l\u002F --method cst_l\n\n# CST_L_Plus\npython train.py --template cst_l_plus --outf .\u002Fexp\u002Fcst_l_plus\u002F --method cst_l_plus\n\n# GAP-Net\npython train.py --template gap_net --outf .\u002Fexp\u002Fgap_net\u002F --method gap_net \n\n# ADMM-Net\npython train.py --template admm_net --outf .\u002Fexp\u002Fadmm_net\u002F --method admm_net \n\n# TSA-Net\npython train.py --template tsa_net --outf .\u002Fexp\u002Ftsa_net\u002F --method tsa_net \n\n# HDNet\npython train.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet \n\n# DGSMP\npython train.py --template dgsmp --outf .\u002Fexp\u002Fdgsmp\u002F --method dgsmp \n\n# BIRNAT\npython train.py --template birnat --outf .\u002Fexp\u002Fbirnat\u002F --method birnat \n\n# MST_Plus_Plus\npython train.py --template mst_plus_plus --outf .\u002Fexp\u002Fmst_plus_plus\u002F --method mst_plus_plus \n\n# λ-Net\npython train.py --template lambda_net --outf .\u002Fexp\u002Flambda_net\u002F --method lambda_net\n\n# DAUHST-2stg\npython train.py --template dauhst_2stg --outf .\u002Fexp\u002Fdauhst_2stg\u002F --method dauhst_2stg\n\n# DAUHST-3stg\npython train.py --template dauhst_3stg --outf .\u002Fexp\u002Fdauhst_3stg\u002F --method dauhst_3stg\n\n# DAUHST-5stg\npython train.py --template dauhst_5stg --outf .\u002Fexp\u002Fdauhst_5stg\u002F --method dauhst_5stg\n\n# DAUHST-9stg\npython 
train.py --template dauhst_9stg --outf .\u002Fexp\u002Fdauhst_9stg\u002F --method dauhst_9stg\n\n# BiSRNet\npython train_s.py --outf .\u002Fexp\u002Fbisrnet\u002F --method bisrnet\n```\n\n- 如果没有大显存 GPU，请添加 `--size 128` 以使用较小的图像块尺寸。\n\n- 训练日志、训练好的模型和重建的高光谱图像将保存在 `MST\u002Freal\u002Ftrain_code\u002Fexp\u002F`\n\n- 注意：如果无法获取掩码数据或 GPU 资源有限，除 BiSRNet 外的其他方法也可以使用 `train_s.py` 进行训练，此时需要相应替换上述命令中的 `--method` 参数，并做少量修改。\n\n\n### 5.2　测试\n\nBiSRNet 的预训练模型可以从 ([Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zQ7PFuiaEgIpulBl8TA7S_8Am93nAKPb\u002Fview?usp=sharing) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hiPbuUEBnIGQP6Ks9agfWQ?pwd=mst1)，提取码：`mst1`) 下载，并将其放置到 `MST\u002Freal\u002Ftest_code\u002Fmodel_zoo\u002F`\n\n```shell\ncd MST\u002Freal\u002Ftest_code\u002F\n\n# MST_S\npython test.py --outf .\u002Fexp\u002Fmst_s\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_s.pth\n\n# MST_M\npython test.py --outf .\u002Fexp\u002Fmst_m\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_m.pth\n\n# MST_L\npython test.py --outf .\u002Fexp\u002Fmst_l\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst\u002Fmst_l.pth\n\n# CST_S\npython test.py --outf .\u002Fexp\u002Fcst_s\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_s.pth\n\n# CST_M\npython test.py --outf .\u002Fexp\u002Fcst_m\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_m.pth\n\n# CST_L\npython test.py --outf .\u002Fexp\u002Fcst_l\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l.pth\n\n# CST_L_Plus\npython test.py --outf .\u002Fexp\u002Fcst_l_plus\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fcst\u002Fcst_l_plus.pth\n\n# GAP_Net\npython test.py --outf .\u002Fexp\u002Fgap_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fgap_net\u002Fgap_net.pth\n\n# ADMM_Net\npython test.py --outf .\u002Fexp\u002Fadmm_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fadmm_net\u002Fadmm_net.pth\n\n# 
TSA_Net\npython test.py --outf .\u002Fexp\u002Ftsa_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Ftsa_net\u002Ftsa_net.pth\n\n# HDNet\npython test.py --template hdnet --outf .\u002Fexp\u002Fhdnet\u002F --method hdnet --pretrained_model_path .\u002Fmodel_zoo\u002Fhdnet\u002Fhdnet.pth\n\n# DGSMP\npython test.py --outf .\u002Fexp\u002Fdgsmp\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdgsmp\u002Fdgsmp.pth\n\n# BIRNAT\npython test.py --outf .\u002Fexp\u002Fbirnat\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fbirnat\u002Fbirnat.pth\n\n# MST_Plus_Plus\npython test.py --outf .\u002Fexp\u002Fmst_plus_plus\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fmst_plus_plus\u002Fmst_plus_plus.pth\n\n# λ-Net\npython test.py --outf .\u002Fexp\u002Flambda_net\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Flambda_net\u002Flambda_net.pth\n\n# DAUHST_2stg\npython test.py --outf .\u002Fexp\u002Fdauhst_2stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_2stg.pth\n\n# DAUHST_3stg\npython test.py --outf .\u002Fexp\u002Fdauhst_3stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_3stg.pth\n\n# DAUHST_5stg\npython test.py --outf .\u002Fexp\u002Fdauhst_5stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_5stg.pth\n\n# DAUHST_9stg\npython test.py --outf .\u002Fexp\u002Fdauhst_9stg\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fdauhst\u002Fdauhst_9stg.pth\n\n# BiSRNet\npython test.py --outf .\u002Fexp\u002Fbisrnet\u002F --pretrained_model_path .\u002Fmodel_zoo\u002Fbisrnet\u002Fbisrnet.pth --method bisrnet\n```\n\n- 重建的高光谱图像（HSI, Hyperspectral Image）将输出到 `MST\u002Freal\u002Ftest_code\u002Fexp\u002F`\n\n### 5.3　可视化\n\n- 将重建的 HSI 放入 `MST\u002Fvisualization\u002Freal_results\u002Fresults` 并重命名为 method.mat，例如 mst_plus_plus.mat。\n\n- 生成重建 HSI 的 RGB 图像\n\n```shell\ncd MST\u002Fvisualization\u002F\nRun show_real.m\n```\n\n\n&nbsp;\n\n\n## 6. 
引用\n如果本仓库对您有帮助，请考虑引用我们的工作：\n\n\n```bibtex\n# MST\n@inproceedings{mst,\n  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={CVPR},\n  year={2022}\n}\n\n# CST\n@inproceedings{cst,\n  title={Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={ECCV},\n  year={2022}\n}\n\n# DAUHST\n@inproceedings{dauhst,\n  title={Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging},\n  author={Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Henghui Ding and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={NeurIPS},\n  year={2022}\n}\n\n# BiSCI\n@inproceedings{bisci,\n  title={Binarized Spectral Compressive Imaging},\n  author={Yuanhao Cai and Yuxin Zheng and Jing Lin and Xin Yuan and Yulun Zhang and Haoqian Wang},\n  booktitle={NeurIPS},\n  year={2023}\n}\n\n# MST++\n@inproceedings{mst_pp,\n  title={MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction},\n  author={Yuanhao Cai and Jing Lin and Zudi Lin and Haoqian Wang and Yulun Zhang and Hanspeter Pfister and Radu Timofte and Luc Van Gool},\n  booktitle={CVPRW},\n  year={2022}\n}\n\n# HDNet\n@inproceedings{hdnet,\n  title={HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging},\n  author={Xiaowan Hu and Yuanhao Cai and Jing Lin and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},\n  booktitle={CVPR},\n  year={2022}\n}\n```","# MST 快速上手指南\n\nMST 是一个光谱压缩成像重建工具箱，支持 15+ 种算法，包括 MST、MST++、CST、DAUHST、BiSRNet 等 CVPR\u002FNeurIPS\u002FECCV 顶会方法。\n\n---\n\n## 环境准备\n\n| 项目 | 要求 |\n|:---|:---|\n| 操作系统 | Linux (推荐 Ubuntu 18.04+) |\n| 
Python | 3.8+ |\n| PyTorch | 1.10+ |\n| CUDA | 11.3+ (GPU 训练\u002F推理必需) |\n| 显存 | ≥ 8GB (推荐 24GB 用于训练) |\n\n**核心依赖包：**\n- torch ≥ 1.10.0\n- torchvision\n- numpy\n- scipy\n- h5py\n- scikit-image\n- opencv-python\n- einops\n- timm\n- thop (计算 FLOPs)\n\n---\n\n## 安装步骤\n\n### 1. 克隆仓库\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST.git\ncd MST\n```\n\n> 国内用户若 GitHub 访问慢，可使用镜像或先下载 ZIP 解压。\n\n### 2. 创建虚拟环境（推荐）\n\n```bash\nconda create -n mst python=3.8 -y\nconda activate mst\n```\n\n### 3. 安装 PyTorch\n\n```bash\n# CUDA 11.3 版本（推荐）\npip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu113\n\n# 或 CUDA 11.8 版本\npip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n> 国内用户可添加清华源加速：`pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 4. 安装其他依赖\n\n```bash\npip install -r requirements.txt\n```\n\n---\n\n## 基本使用\n\n### 快速测试（推理）\n\n#### 1. 下载预训练模型\n\n从 [Model Zoo](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST#quantitative-comparison-on-simulation-dataset) 下载所需模型，放入 `.\u002Fmodel\u002F` 目录。\n\n以 MST 为例：\n```bash\nmkdir -p model\n# 下载 MST 模型后放置于此\n```\n\n#### 2. 准备测试数据\n\n下载仿真数据集或真实场景数据：\n- 仿真数据：[Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F...) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F...?pwd=mst1)\n- 真实数据：[Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F...) \u002F [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F...?pwd=mst1)\n\n数据目录结构：\n```\n.\u002Fdata\u002F\n├── simulation\u002F\n│   ├── test_truth\u002F     # 高光谱真值\n│   └── test_meas\u002F      # 压缩测量\n└── real\u002F\n    └── test_meas\u002F      # 真实场景测量\n```\n\n#### 3. 
运行测试\n\n```bash\n# 测试 MST 模型（仿真数据）\npython test.py --method mst --model_path .\u002Fmodel\u002Fmst.pth --data_path .\u002Fdata\u002Fsimulation\u002F --save_path .\u002Fresult\u002F\n\n# 测试 MST++ 模型\npython test.py --method mst_plus_plus --model_path .\u002Fmodel\u002Fmst_plus_plus.pth --data_path .\u002Fdata\u002Fsimulation\u002F --save_path .\u002Fresult\u002F\n\n# 测试真实场景\npython test_real.py --method mst --model_path .\u002Fmodel\u002Fmst.pth --data_path .\u002Fdata\u002Freal\u002Ftest_meas\u002F --save_path .\u002Fresult_real\u002F\n```\n\n#### 4. 评估结果\n\n```bash\npython evaluate.py --result_path .\u002Fresult\u002F --gt_path .\u002Fdata\u002Fsimulation\u002Ftest_truth\u002F\n```\n\n### 快速训练\n\n```bash\n# 单卡训练 MST\npython train.py --method mst --batch_size 4 --epochs 300 --lr 4e-4 --save_path .\u002Fcheckpoint\u002F\n\n# 多卡训练（推荐）\npython -m torch.distributed.launch --nproc_per_node=4 train.py --method mst --batch_size 4 --epochs 300 --lr 4e-4 --save_path .\u002Fcheckpoint\u002F\n\n# 训练 MST++\npython train.py --method mst_plus_plus --batch_size 8 --epochs 300 --lr 2e-4 --save_path .\u002Fcheckpoint\u002F\n```\n\n**关键参数说明：**\n| 参数 | 说明 |\n|:---|:---|\n| `--method` | 算法名称：`mst`\u002F`mst_plus_plus`\u002F`cst`\u002F`dauhst`\u002F`bisrnet`\u002F`hdnet` 等 |\n| `--batch_size` | 根据显存调整，MST 建议 4，MST++ 建议 8 |\n| `--lr` | 学习率，MST 默认 4e-4，MST++ 默认 2e-4 |\n\n---\n\n## 支持的算法速查\n\n| 算法 | 会议 | 命令中的 `--method` |\n|:---|:---|:---|\n| MST | CVPR 2022 | `mst` |\n| MST++ | CVPRW 2022 | `mst_plus_plus` |\n| CST | ECCV 2022 | `cst` |\n| DAUHST | NeurIPS 2022 | `dauhst` |\n| BiSRNet | NeurIPS 2023 | `bisrnet` |\n| HDNet | CVPR 2022 | `hdnet` |\n| BIRNAT | TPAMI 2022 | `birnat` |\n| DGSMP | CVPR 2021 | `dgsmp` |\n| GAP-Net | Arxiv 2020 | `gap_net` |\n| TSA-Net | ECCV 2020 | `tsa_net` |\n| ADMM-Net | ICCV 2019 | `admm_net` |\n| λ-Net | ICCV 2019 | `lambda_net` |\n\n---\n\n## 相关资源\n\n- **MST++ 独立仓库**（NTIRE 2022 冠军）：https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST-plus-plus\n- 
**知乎技术解读**：[MST](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F501101943) | [CST](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F544979161) | [DAUHST](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F576280023)","某农业科技公司正在开发一套**无人机高光谱作物健康监测系统**，用于精准识别农田中的病虫害早期迹象。团队需要从无人机搭载的压缩高光谱相机采集的原始数据中，快速重建出完整的高光谱图像，以便后续分析作物叶绿素含量、水分胁迫等指标。\n\n### 没有 MST 时\n\n- **算法选型困难**：团队成员需要逐一调研 CVPR\u002FNeurIPS 等顶会论文，手动复现 MST、CST、DAUHST 等 10 余种算法，每个算法代码风格迥异，环境配置冲突频发，两周过去还没跑通 baseline\n- **性能对比混乱**：不同论文的实验设置不统一，有的用仿真数据、有的用真实数据，团队无法公平评估哪个算法最适合自己的无人机场景（需要轻量化+实时性）\n- **部署成本高昂**：好不容易选定的模型在边缘设备上推理速度不达标，但缺乏针对二值化压缩感知的优化方案，被迫牺牲精度换速度，监测准确率大幅下降\n- **复现结果存疑**：某篇论文的开源代码存在 bug，团队调参一个月后发现指标对不上原文，项目进度严重滞后，错过春耕监测窗口期\n\n### 使用 MST 后\n\n- **开箱即用的算法库**：MST 将 15+ 种 SOTA 算法统一封装，一行命令切换 MST++\u002FCST\u002FDAUHST 等模型，标准化数据接口让团队 3 天内完成全部算法初筛\n- **公平高效的基准测试**：工具箱内置统一的仿真\u002F真实数据集划分和评估指标，团队快速验证发现 BiSCI 的二值化设计在保持 38dB PSNR 的同时，推理速度提升 4 倍，完美匹配无人机边缘计算需求\n- **即插即用的优化方案**：直接调用 MST 中的二值化重建模块（BiSCI），模型体积压缩至 1MB 以内，Jetson Nano 上实现 15fps 实时处理，无需牺牲精度\n- **可复现的可靠结果**：NTIRE 2022 冠军方案 MST++ 的预训练模型和训练日志全公开，团队复现指标与论文误差 \u003C0.1dB，两周内完成模型选型并进入部署阶段\n\n**核心价值**：MST 将分散的学术研究转化为工程可用的标准化工具，让团队从\"重复造轮子\"转向\"专注业务创新\"，显著缩短高光谱成像技术从论文到农田的落地周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fcaiyuanhao1998_MST_0925ce44.png","caiyuanhao1998","Yuanhao Cai","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fcaiyuanhao1998_d16fc657.jpg","Johns Hopkins University\r\ncaiyuanhao1998@gmail.com\r\nycai51@jh.edu","Johns Hopkins University \u003C- Tsinghua","Baltimore, United States",null,"https:\u002F\u002Fcaiyuanhao1998.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",82.5,{"name":90,"color":91,"percentage":92},"MATLAB","#e16737",17.5,1124,88,"2026-04-04T08:51:49","MIT","Linux, Windows","需要 NVIDIA GPU，显存未明确说明，CUDA 11.0+","未说明",{"notes":101,"python":102,"dependencies":103},"建议使用 conda 创建虚拟环境，安装命令：conda create -n MST python=3.8；支持 15+ 种算法包括 
MST、MST++、CST、DAUHST、BiSRNet 等；提供预训练模型和测试数据下载；包含传统方法 TwIST、GAP-TV、DeSCI 的实现；支持模拟数据和真实实验数据","3.8",[104,105,106,107,108,109,110,111],"torch>=1.11.0","torchvision>=0.12.0","scikit-image","opencv-python","h5py","einops","timm","thop",[26,14],[114,115,116,117,118,119,120,121,122],"image-restoration","hyperspectral-images","snapshot-compressive-imaging","spectral-reconstruction","binarized-neural-networks","bnn","qnn","transformer","ntire","2026-03-27T02:49:30.150509","2026-04-06T07:13:39.289377",[126,131,136,141,146,151,156,161,166,171],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},4226,"为什么自己训练的模型效果比作者开源的预训练模型更好？","这是一个已知现象。可能的原因包括：\n1. 数据增强策略不同：某些方法（如DGSMP）在仿真实验阶段仅使用30个训练样本，数据增强方式可能不同\n2. PyTorch和相关包版本更新：作者提到在2022年7月更新PyTorch版本和改进code base后，指标会比原来更高\n3. 建议对比原文代码和本仓库代码，检查是否有实现差异\n\n参考DGSMP的训练数据列表：https:\u002F\u002Fgithub.com\u002FTaoHuang95\u002FDGSMP\u002Fblob\u002Fmain\u002FSimulation\u002FData\u002FTraining_data\u002Ftrain_list.txt","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F59",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},4227,"真实数据集测试代码运行报错，提示无法识别的参数？","真实数据测试代码存在问题，README中5.2节的测试命令可能有误。问题包括：\n1. real\u002Ftest_code\u002Foption.py与train_code\u002Foption.py内容相同，但真实数据格式与仿真数据不同\n2. 测试时需要确保数据格式正确加载，检查数据文件的字典结构和输入模型的形状\n3. 
真实测试不需要裁剪，直接使用660×660尺寸测试\n\n建议调试步骤：检查数据加载部分，确认measurement的keys和shape是否正确。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F40",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},4228,"真实数据集测试时应该使用哪些数据和mask？","README提到使用CAVE (CAVE_512_28)和KAIST (KAIST_CVPR2021)作为真实训练集，但测试代码中的路径需要：\n- --data_path: 测试数据路径（如.\u002FData\u002FTesting_data\u002F）\n- --mask_path: mask文件路径（如.\u002FData\u002Fmask.mat）\n\n注意：TSA_real_data下载链接中的measurement格式可能不正确，如果报错请检查数据文件的keys。真实数据格式与仿真数据不同，需要确认数据预处理正确。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F36",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},4229,"真实数据训练时CAVE数据集尺寸不够660×660怎么办？","真实数据训练和测试的处理方式不同：\n- 训练时：CAVE使用512×512版本，KAIST使用对应尺寸\n- 测试时：不需要裁剪，直接使用660×660尺寸测试\n\nREADME中说明：模拟数据集使用1024版本，真实数据集使用512版本CAVE和KAIST。测试代码已适配660×660的输入尺寸。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F14",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},4230,"HSI可视化时出现\"Index in position 3 exceeds array bounds\"错误？","使用MST++生成.mat文件后用MST的show_real.m可视化时，需要修改channel参数。错误原因是数组维度不匹配。\n\n解决方法：尝试将channel参数改为2或4（而不是1），具体取决于数据存储的维度顺序。需要同时调整lam28或lam31数组的数量以匹配channel数量。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F12",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},4231,"如何绘制论文中的光谱相关性矩阵可视化图（如图6）？","可以使用Python或PyTorch的相关库来实现，搜索关键词如\"correlation matrix visualization\"、\"heatmap\"等。常用库包括：\n- matplotlib的imshow或matshow\n- seaborn的heatmap\n- 
直接计算光谱通道间的相关性系数后可视化\n\n注意：论文中fig6的相关性矩阵对应场景5，但展示的RGB图是场景4，可能是索引从0开始未加1导致的。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F30",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},4232,"训练时utils.py中的场景数量205是否受RAM限制？","simulation\u002Ftrain_code\u002Futils.py中的205是经验设置的数值，确实与机器RAM限制有关。CAVE数据集下载后的场景数大于205，可以根据自己机器的内存容量调整这个数值。\n\n如果内存充足，可以使用更多场景；如果内存不足，可以适当减少。这不是固定值，需要根据硬件条件灵活调整。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F41",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},4233,"能否提供训练日志（training log）？","作者提供了MST的训练日志，但说明：\n- 早期版本未训到34.55dB\n- 2022年7月更新PyTorch和相关包版本、改进code base后，指标比原来更高\n- 最终模型可以达到比论文报告更好的性能\n\n对于DAUHST等其他模型，需要单独询问。建议关注PyTorch版本和依赖包版本对最终性能的影响。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F7",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},4234,"仿真和真实实验的mask和测量值如何生成？","仿真数据（simulation）和真实数据（real）的生成方式不同：\n- 仿真数据：使用代码生成mask和模拟测量值\n- 真实数据：需要物理采集系统获取真实mask和测量值\n\n对于仿真实验，可以参考TSA-Net等基础方法的代码生成训练数据。真实实验需要对应的硬件系统采集数据。","https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FMST\u002Fissues\u002F26",{"id":172,"question_zh":173,"answer_zh":174,"source_url":170},4235,"训练时如何选择预训练模型进行微调？","在仿真训练代码中，可以通过以下方式加载预训练模型进行微调：\n1. 在option.py或训练脚本中设置pretrained_model_path参数\n2. 使用torch.load()加载预训练权重\n3. 可以选择冻结部分层或全部微调\n\n具体实现参考simulation\u002Ftrain_code中的模型加载逻辑，根据实验需求调整学习率和训练策略。",[]]
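上文 5.3 节通过 MATLAB 脚本 show_real.m 生成重建 HSI 的 RGB 图像。若想在 Python 侧快速预览结果，也可以按波长就近挑选三个通道合成伪彩色图。下面是一个最小示意：其中波长数组与 R/G/B 目标波长均为假设值（实际波长以 CASSI 系统标定为准），随机立方体仅用于演示，真实数据可用 scipy.io.loadmat 从 .mat 文件读取。

```python
import numpy as np

# 假设 28 个通道大致覆盖 450–650 nm（仅为演示，实际以系统标定为准）
WAVELENGTHS = np.linspace(450, 650, 28)

def hsi_to_rgb(hsi, wavelengths=WAVELENGTHS):
    """从 (H, W, C) 的 HSI 立方体中挑选最接近 R/G/B 目标波长的三个通道，
    全局归一化后合成一张 uint8 伪彩色图。"""
    targets = (620, 550, 470)  # R / G / B 的近似中心波长（假设值）
    idx = [int(np.argmin(np.abs(wavelengths - t))) for t in targets]
    rgb = hsi[..., idx].astype(np.float64)
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-12)  # 归一化到 [0, 1]
    return (rgb * 255.0).astype(np.uint8)

# 真实场景测量重建后的尺寸为 660×660，这里用随机数据演示
cube = np.random.default_rng(1).random((660, 660, 28))
rgb = hsi_to_rgb(cube)
print(rgb.shape, rgb.dtype)  # (660, 660, 3) uint8
```

得到的数组可以直接交给 matplotlib 的 imshow 显示，或用 OpenCV 保存成图片（注意 OpenCV 使用 BGR 通道顺序）。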
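文档多处以 PSNR（如 34.55dB、38dB）衡量重建质量。作为参考，下面用 NumPy 给出 PSNR 的定义性实现（这只是指标本身的示意，并非仓库内评估脚本的原始代码）：

```python
import numpy as np

def psnr(ref, rec, data_range=1.0):
    """峰值信噪比（dB）：PSNR = 10 * log10(MAX^2 / MSE)。"""
    ref = np.asarray(ref, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # 两幅图完全一致
    return 10.0 * np.log10(data_range ** 2 / mse)

gt  = np.full((256, 256, 28), 0.5)  # 假想的高光谱真值，像素范围 [0, 1]
rec = gt + 0.1                      # 每个像素偏差 0.1，MSE = 0.01
print(round(psnr(gt, rec), 2))      # 20.0
```

衡量高光谱重建时通常还会配合 SSIM、SAM 等指标一起报告，PSNR 只反映逐像素误差。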
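常见问题中提到，绘制论文图 6 那样的光谱相关性矩阵，可以先计算各光谱通道间的相关系数再做热力图。下面是一个基于 np.corrcoef 的最小示意（随机立方体仅作演示，真实使用时把 cube 换成重建出的 HSI 数组即可）：

```python
import numpy as np

def spectral_correlation(hsi):
    """计算 (H, W, C) 立方体中 C 个光谱通道两两之间的皮尔逊相关系数，
    返回 (C, C) 的相关性矩阵。"""
    c = hsi.shape[-1]
    flat = hsi.reshape(-1, c).T  # (C, H*W)，每一行是展平后的一个通道
    return np.corrcoef(flat)

# 构造通道间高度相关的演示数据（共享同一基底 + 少量噪声）
rng = np.random.default_rng(0)
base = rng.random((64, 64, 1))
cube = base + 0.1 * rng.random((64, 64, 28))
corr = spectral_correlation(cube)
print(corr.shape)  # (28, 28)

# 可视化可用 matplotlib：
# import matplotlib.pyplot as plt
# plt.imshow(corr, cmap="viridis"); plt.colorbar(); plt.savefig("corr.png")
```

相关性矩阵是对称矩阵且对角线恒为 1，用 imshow 或 seaborn 的 heatmap 画出来即是论文中的效果。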
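常见问题中还提到 simulation/train_code/utils.py 里的场景数 205 受机器内存限制，可按需调整。调整前可以先粗算一下全部载入所需的内存。下面按每个场景 512×512×28、float32 存储来估算（数据的实际尺寸和精度请以本地文件为准）：

```python
def scenes_ram_gb(n_scenes, h=512, w=512, bands=28, bytes_per_elem=4):
    """估算 n_scenes 个 HSI 场景全部载入内存所需的 GiB 数。"""
    return n_scenes * h * w * bands * bytes_per_elem / 1024 ** 3

per_scene_mb = scenes_ram_gb(1) * 1024
print(round(per_scene_mb, 1))        # 28.0 -> 每个场景约 28 MiB
print(round(scenes_ram_gb(205), 2))  # 5.61 -> 205 个场景约 5.6 GiB
```

按此估算，内存充足时可以调大该值以载入更多 CAVE 场景，内存紧张时则相应调小。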